
How-To Tutorials - Programming


Flex 101 with Flash Builder 4: Part 1

Packt
16 Oct 2009
11 min read
This article is intended for developers who have never used Flex before and would like to work through a "Hello World"-style tutorial. It does not aim to cover Flex and FB4 in detail; rather, it focuses on the mechanics of FB4 and on getting an application running with minimal effort. For developers familiar with Flex and the predecessors of Flash Builder 4 (Flex Builder 2 or 3), it offers an introduction to FB4 and to some differences in the way you build Flex applications with it. Even if you have not programmed before and want to understand how to start developing applications, this will serve as a good starting point.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages, and a deployment runtime that together provide an end-to-end framework for designing, developing, and deploying RIAs. All of these are branded as part of the Flash platform. The latest release, Flex 4, puts special effort into the designer-to-developer workflow: graphic designers address layout, skinning, effects, and the general look and feel of the application, and developers then take over to address the application logic, events, and so on. To understand this at a high level, take a look at the diagram below. It is a very simplified diagram, intended to give a 10,000 ft view of the development, compilation, and execution process.

Let us walk through the diagram. The developer typically works in Flash Builder, the Integrated Development Environment (IDE) that provides an environment for coding, compiling, and running/debugging Flex-based applications. A Flex application typically consists of MXML and ActionScript code. ActionScript is an ECMAScript-compatible object-oriented language, whereas MXML is an XML-based markup language. Using MXML you define and lay out visual components such as buttons, combo boxes, and data grids.
Your application logic will typically be coded in ActionScript classes and methods. While coding your Flex application, you will use the Flex framework classes, which provide most of the core functionality. Additional libraries, such as the Flex Charting libraries and third-party components, can be used in your application too. Flash Builder compiles all of this into byte code that executes inside the Flash Player, the runtime host for your application. This is a high-level introduction to the ecosystem; as we work through the samples later in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4, currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to navigate your way quite easily. Like Flex Builder 3 before it, Flash Builder 4 is a commercial product and you need to purchase a development license; the public beta is available as a 30-day evaluation. Through the rest of the article, we will use FB4 exclusively to build and run the sample applications. Let us now look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

1. Install Flash Player 10 on your system. We will be developing with the Flex 4 SDK that ships with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from http://www.adobe.com/products/flashplayer/.
2. Download Flash Builder 4 Public Beta from http://labs.adobe.com/technologies/flashbuilder4/. The page is shown below.
3. After the download, run the installer program and proceed with the rest of the installation.
4. Launch the Adobe Flash Builder Beta.
It will first prompt with a message that it is a trial version, as shown below. To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE.

Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the "Hello World" application.

Hello World using Flash Builder 4

In this section, we will develop a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE.

Launch the Flash Builder IDE. We will create a Flex Project; Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click File → New → Flex Project as shown below. This brings up a dialog in which you specify more details about the Flex Project you plan to develop. The dialog is shown below. You will need to provide at least the following information:

Project Name: The name of your project. Enter any name you want; in our case, we have named the project MyFirstFB4App.

Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. A web application runs inside a web browser, executing within the Flash Player plug-in; a desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features. We will go with the Web option here and skip the desktop option for now.

We will leave the other options as they are: we will use the Flex 4.0 SDK, and since we are not integrating with any server-side layer, we leave that option as None/Other. Click Finish to create your Flex Project. This creates a main application file called MyFirstFB4App.mxml, as shown below. We will come back to coding a little later, but first let us familiarize ourselves with the Flash Builder IDE.
Let us first look at the Package Explorer to understand the files created for the Flex Project. The screenshot is shown below.

The project consists of the main source file MyFirstFB4App.mxml. This is the main application file or, in other words, the bootstrap. All your source files (MXML and ActionScript code, along with assets such as images) should go under the src folder; they can optionally be placed in packages too.

The Flex 4.0 framework consists of several libraries that you compile your code against. You will end up using its framework code, components (visual and non-visual), and other classes. These classes are packaged in library files with the extension .swc. A list of library files is shown above; you typically do not need to do anything with them. Optionally, you can also use third-party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .swc files too, and they can be placed in the libs folder shown in the previous screenshot.

The typical workflow is to write and compile your code, that is, build your project. If the build is successful, the object code is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents of this folder; we will come to that a little later. The html-template folder contains boilerplate code for the container HTML page from which your object code is referenced. It is possible to customize this, but we will not discuss that for now.

Double-click the MyFirstFB4App.mxml file. This is our main application file. The code listing is given below:

<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
</s:Application>

As discussed before, you will typically write one or more MXML files containing mostly your visual components (although there can be non-visual components as well).
By visual components, we mean controls such as buttons, combo boxes, lists, trees, and others. An MXML file may also contain layout components and containers that help you arrange the UI according to the application screen design. To see which components you can place on the main application canvas, select the Design View as shown below.

Look at the lower half of the left pane. You will see the Components tab, shown below, which covers most needs of your application's visual design. Click on the Controls tree node. You will see several controls you can use; for this application we will use the Button control. Simply select the Button control and drag it onto the Design View canvas. This drops an instance of the Button control on the Design View, as shown below.

Select the Button to see its Properties panel. The Properties panel is where you can set several attributes of the control at design time. If the Properties panel is not visible, you can open it by selecting Window → Properties from the main menu.

In the Properties panel, we can change several key attributes. All controls can be uniquely identified and addressed in your code via the ID attribute; this is a unique name that you need to provide. Go ahead and give it a meaningful name; in our case, we name it btnSayHello. Next, change the label so that instead of Button it displays a message, for example Say Hello. Finally, we want to wire up some code so that when the button is clicked we perform an action, such as displaying a message box saying Hello World. To do that, click the icon next to the On click edit field as shown below. It presents two options; select Generate Event Handler. This generates the code and switches to the Source view. The code is listed below for your reference.
<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
    <fx:Script>
        <![CDATA[
            protected function btnSayHello_clickHandler(event:MouseEvent):void
            {
                // TODO Auto-generated method stub
            }
        ]]>
    </fx:Script>
    <s:Button x="17" y="14" label="Button" id="btnSayHello" click="btnSayHello_clickHandler(event)"/>
</s:Application>

There are a few things to note here. As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag. You can place ActionScript methods here for use by the rest of the application. When we clicked Generate Event Handler, Flash Builder generated the event handler code; this code is in ActionScript and was placed inside the <fx:Script> block for us. Looking at the code, you can see it has added a function, btnSayHello_clickHandler, that is invoked when the click event is fired on the button. Notice that the method body is empty, that is, it has no implementation.

Let us run the application to see what it looks like. To run it, click the Run icon in the main toolbar of Flash Builder. This launches the web application as shown below. Clicking the Say Hello button does nothing at this point, since there is no code inside the handler, as we saw above. To display the message box, we add the code shown below (only the Script section is shown):

<fx:Script>
    <![CDATA[
        import mx.controls.Alert;

        protected function btnSayHello_clickHandler(event:MouseEvent):void
        {
            Alert.show("Hello World");
        }
    ]]>
</fx:Script>

We use one of the classes (called Alert) from the Flex framework. As in any other language, we need to specify which package the class comes from so that the compiler can resolve it.
The Alert class belongs to the mx.controls package and has a static method called show(), which takes a single parameter of type String: the message to be displayed, in our case "Hello World". To run this, press Ctrl+S to save the file (or choose File → Save from the main menu) and click the Run icon in the main toolbar. This launches the application, and on clicking the Say Hello button, you will see the Hello World alert window as shown below.


A Quick Start Guide to Scratch 2.0

Packt
10 Apr 2014
6 min read
(For more resources related to this topic, see here.)

The anticipation of learning a new programming language can sometimes leave us frozen on the starting line, not knowing what to expect or where to start. Together, we'll take our first steps into programming with Scratch, and block by block, we'll create our first animation. Our work in this article will focus on getting comfortable with some fundamental concepts before we create projects in the rest of the book.

Joining the Scratch community

If you're planning to work with the online project editor on the Scratch website, I highly recommend you set up an account on scratch.mit.edu so that you can save your projects. If you're going to work with the offline editor, there is no need to create an account on the Scratch website to save your work; however, you will be required to create an account to share a project or participate in the community forums. Let's take a moment to set up an account and point out some features of the main account. That way, you can decide whether creating an online account is right for you or your children at this time.

Time for action – creating an account on the Scratch website

Let's walk through the account creation process so we can see what information is generally required to create a Scratch account. Open a web browser, go to http://scratch.mit.edu, and click on the link titled Join Scratch. At the time of writing this book, you will be prompted to pick a username and a password, as shown in the following screenshot. If the name is taken, you'll be prompted to enter a new username. Make sure you don't use your real name. After you enter a username and password, click on Next. You'll then be prompted for some general demographic information, including date of birth, gender, country, and e-mail address, as shown in the following screenshot. All fields need to be filled in.
After entering all the information, click on Next. The account is now created, and you receive a confirmation screen as shown in the following screenshot. Click on the OK Let's Go! button to log in to Scratch and go to your home page.

What just happened?

Creating an account on the Scratch website generally does not require a lot of detailed information. The Scratch team has made an effort to maximize privacy: they strongly discourage the use of real names in usernames, and for children, this is probably a wise decision. The birthday information is not publicized and is used as an account verification step while resetting passwords. The e-mail address is also not publicized and is used to reset passwords. The country and gender information is not publicly displayed either and is generally just used by Scratch to identify its users. For more information on Scratch and privacy, visit http://scratch.mit.edu/help/faq/#privacy.

Time for action – understanding the key features of your account

When we log in to the Scratch website, we see our home page, as shown in the following screenshot.

All the projects we create online will be saved to My Stuff. You can go to this location by clicking on the folder icon with the S on it, next to the account avatar, at the top of the page. The following screenshot shows my projects.

Next to the My Stuff icon in the navigation pane is Messages, represented by a letter icon. This is where you'll find notifications of comments and activity on your shared projects; clicking the icon displays a list of messages. The next primary community feature available to subscribed users is the Discuss page, which shows a list of forums and topics that can be viewed by anyone; however, an account is required to post in the forums or topics.

What just happened?
A Scratch account provides users with four primary features when they view the website: saving projects, sharing projects, receiving notifications, and participating in community discussions. When we view our saved projects on the My Stuff page, as seen in the previous screenshot, we have the ability to See inside each project to edit it, share it, or delete it.

Abiding by the terms of use

It's important that we take a few moments to read the terms of use policy so that we know what the community expects from us. Taken directly from Scratch's terms of use, the major points are:

Be respectful
Offer constructive comments
Share and give credit
Keep your personal information private
Help keep the site friendly

Creating projects under Creative Commons licenses

Every work published on the Scratch website is shared under the Attribution-ShareAlike license. That doesn't mean you can surf the web and use copyrighted images in your work. Rather, Creative Commons licensing supports the collaboration objective of Scratch by making it easy for anyone to build upon what you do. When you look inside an existing project and begin to change it, the project keeps a remix tree crediting the original sources of the work. A shout-out to the original author in your projects would also be a nice way to give credit. For more information about the Creative Commons Attribution-ShareAlike license, visit http://creativecommons.org/licenses/by-sa/3.0/.

Closely related to the licensing of Scratch projects is the understanding that you, as a web user, cannot simply browse the web, find media files, incorporate them into your project, and then share the project with everyone. Respect the copyrights of other people. To this end, the Scratch team enforces the Digital Millennium Copyright Act (DMCA), which protects the intellectual property rights and copyrights of others. More information is available at http://scratch.mit.edu/DMCA.
Finding free media online

As we'll see throughout the book, Scratch provides libraries of media, including sounds and images, that are freely available for use in our Scratch projects. However, we may find instances where we want to incorporate a broader range of media into our projects. A great search page for finding free media files is http://search.creativecommons.org.

Taking our first steps in Scratch

From this point forward, we're going to be project-editor agnostic, meaning you may choose either the online project editor or the offline editor to work through the projects. When we encounter unfamiliar software, it's common to wonder, "Where do I begin?" The Scratch interface looks friendly enough, but the blank page can be a daunting thing to overcome. The rest of this article will be spent building some introductory projects to get us comfortable with the project editor. If you're not already on the Scratch site, go to http://scratch.mit.edu and let's get started.


Debugging Sikuli scripts

Packt
30 Jul 2013
3 min read
The last topic in test automation is the debugging of scripts. A sizeable portion of script development time is spent running scripts and debugging problems to get them to run reliably, and once you have a collection of scripts that run on a regular basis without supervision, identifying the causes of errors can become much more difficult. There are two main techniques for debugging Sikuli scripts when running them in the test harness presented here.

The first is to look at the logs. If you look back over the test runner script, you can see that it logs a complete record of the console output to a file. These files end in .final.log. You can open them in your text editor to see what your script did and get feedback about the errors. The errors in the logs will tell you what happened. For example, you might get something like this:

This one is telling us that Sikuli couldn't find the requested image on the screen. Or, you might see errors in your Python code. In situations like this, it's handy to know that a Sikuli script is just a collection of files in a directory: you can actually open it up and look at the images and Python code within it.

Another handy technique is to record videos of your test runs. This allows you to review what happened during a test (passing or failing) to see what went wrong, or to analyze the execution for possible improvements to execution speed. On Mac OS X, this can be done using QuickTime Player, which is included with the OS. For Windows or Linux, you will need to investigate a similar solution (the examples prepared for this book contain a working example for Windows), but the general technique should still apply.

Let's see how this works in practice. First, we need to create two additional scripts, one to start recording and another to stop it. The script is broken into two parts so they can be executed independently.
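The two capture scripts themselves appear only as screenshots in the original. As a rough sketch of their general shape (Sikuli scripts are plain Python underneath; the recorder binary name below is a hypothetical placeholder for whatever screen-recording tool you actually use, not the book's bundled example), they amount to little more than starting and stopping an external process:

```python
import subprocess

# Hypothetical recorder binary; substitute the actual screen-recording
# tool available on your platform.
RECORDER = "screen-recorder"

def build_record_command(video_path):
    """Command line that records the screen to video_path."""
    return [RECORDER, "--output", video_path]

def start_capture(video_path):
    """startcapture.sikuli: launch the recorder in the background."""
    return subprocess.Popen(build_record_command(video_path))

def stop_capture(process):
    """stopcapture.sikuli: stop the recorder so the video file is finalized."""
    process.terminate()
    process.wait()
```

The test runner would call start_capture() before a run and stop_capture() afterwards; incidentally, the Sikuli wait adjustment mentioned later for library.sikuli is done with Sikuli's setAutoWaitTimeout() function.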
Here's the startup script (see startcapture.sikuli):

And here's the script to stop recording (see stopcapture.sikuli):

These are then pretty easy to integrate with our test runner scripts (see runtests_withrecording.sikuli):

Depending on your machine, you may also encounter some performance degradation when recording video alongside your tests. To compensate, you can adjust the default amount of time that Sikuli will wait to find something from 5 seconds to 10 seconds or more (you may need to experiment) by adding the following line to the end of your library.sikuli script:

Summary

This article helped you debug your Sikuli scripts, either by looking at the logs or by recording videos of your test runs.

Useful Links:
Visual Studio 2008 Test Types
Android Application Testing: Getting Started
Python Testing: Installing the Robot Framework


Architecting and coding high performance .NET applications

Packt
09 Jul 2015
15 min read
In this article by Antonio Esposito, author of Learning .NET High Performance Programming, we will learn about low-pass audio filtering implemented using .NET, and also learn about MVVM and XAML.

Model-View-ViewModel and XAML

The MVVM pattern is another descendant of the MVC pattern. Born from an extensive update to the MVP pattern, it is at the base of all eXtensible Application Markup Language (XAML)-based frameworks, such as Windows Presentation Foundation (WPF), Silverlight, Windows Phone applications, and Store Apps (formerly known as Metro-style apps). MVVM differs from MVC, which Microsoft uses in its main web development framework, in that it is used for desktop or device class applications.

The first and still the most powerful Microsoft application framework using MVVM is WPF, a desktop class framework that can use the full .NET 4.5.3 environment; future versions within Visual Studio 2015 will support the built-in .NET 4.6. All other Microsoft frameworks that use the XAML language and support the MVVM pattern, on the other hand, are based on a smaller edition of .NET. This is the case for Silverlight, Windows Store Apps, Universal Apps, and Windows Phone Apps, and it is why Microsoft created the Portable Library project within Visual Studio, which allows us to create shared code bases compatible with all of these frameworks.

While a Controller in the MVC pattern is a sort of router that catches every request and parses input/output Models, a ViewModel in MVVM lies behind a View with full two-way data binding, always linked to the View's controls and to the Model's properties. In fact, multiple ViewModels may drive the same View, and many Views can use the same single or multiple instances of a given ViewModel.
A simple MVC/MVVM design comparative

We could say that the experience offered by MVVM is like a film, while the experience offered by MVC is like photography: while a Controller in MVC always performs one-shot elaborations of the user's requests, in MVVM the ViewModel definitely is the view. Not only does a ViewModel lie behind a View; we could also say that if the ViewModel is a body, then the View is its dress. While the concrete View is the graphical representation, the ViewModel is the virtual, un-concrete view, but still the View.

In MVC, the View holds the user state (the values of all items shown in the UI) until a GET/POST invocation is sent to the web server; once sent, the View simply binds one way, reading data from a Model. In MVVM, behaviors, interaction logic, and user state actually live within the ViewModel. Moreover, it is again through the ViewModel that any access to the underlying Model, domain, and any persistence provider flows.

Between a ViewModel and a View, a data connection called data binding is established. This is a declarative association between a source and a target property, such as Person.Name with TextBox.Text. Although it is possible to configure data binding with imperative code (declarative means decorating or setting the property association in XAML), in WPF and other XAML-based frameworks this is usually avoided, because the declarative choice produces a more decoupled result.

The most powerful feature provided by any XAML-based language is data binding itself, well beyond the simpler binding that was available in Windows Forms. XAML allows one-way binding (which can also be reverted to the source) and two-way binding, and such data binding supports any source or target, whether a property of a Model or ViewModel or any other control's dependency property.
This binding subsystem is so powerful in XAML-based languages that events are handled in specific objects named Commands, which can be data-bound to specific controls, such as buttons. In the .NET framework, an event is an implementation of the Observer pattern that lies within a delegate object, allowing a 1-N association between the single source of the event (its owner) and the observers that handle the event with specific code; the only object that can raise the event is the owner itself. In XAML-based languages, a Command is an object that represents a specific event (in the sense of something that can happen), can be bound to different controls/classes, and lets all of those register handlers or trigger the signaling of all handlers.

An MVVM performance map analysis

Performance concerns

Regarding performance, MVVM behaves very well in several scenarios, in terms of both data retrieval (latency-driven) and data entry (throughput- and scalability-driven). The ability to have an impressive abstraction of the view in the ViewModel, without having to rely on the pipelines of MVC (the actions), makes programming very pleasurable and gives the developer the choice of different designs and optimization techniques. Data binding itself is done by implementing specific .NET interfaces that can be easily centralized.

Talking about latency, the meaning here is slightly different from the previous examples based on web request-response times, which do not exist in MVVM. Theoretically speaking, in the MVVM design pattern there is no latency at all. In a concrete implementation within XAML-based languages, latency can refer to two different kinds of timing. During data binding, latency is the time between when a ViewModel makes new data available and when the View actually renders it. During command execution, latency is the time between when a command is invoked and when all of its handlers complete their execution. We use the first definition unless otherwise specified.
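The event-versus-Command distinction described at the start of this section (an event has a single owner raising it to many serialized handlers, while a Command is a bindable action object that several controls can share) can be sketched in a few framework-free lines. This is an illustrative Python analogy only; none of these names are .NET or XAML APIs:

```python
class Event:
    """Observer pattern: one owner, many handlers run serially (like a .NET event)."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def raise_event(self, *args):
        # Only the owner is supposed to call this; handlers run one by one,
        # which is why a hot notification path can become a bottleneck.
        for handler in self._handlers:
            handler(*args)


class Command:
    """A bindable action object (loosely like an ICommand in XAML frameworks)."""
    def __init__(self, execute, can_execute=lambda: True):
        self._execute = execute
        self._can_execute = can_execute

    def invoke(self):
        if self._can_execute():
            self._execute()


# Two different "controls" (a toolbar button and a menu item, say)
# can share and invoke the very same command instance.
log = []
save_command = Command(lambda: log.append("saved"))
save_command.invoke()  # toolbar button
save_command.invoke()  # menu item
```

The key design point the sketch shows is ownership: only the event's owner raises it, whereas a Command can be handed to any number of controls, each of which may invoke it.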
Although the nominal latency is near zero (a few milliseconds, because of the dictionary-based configuration of data binding), specific implementation concerns about latency do exist. In any Model or ViewModel, an updated-data notification is made by triggering the View through the INotifyPropertyChanged interface. This .NET interface causes the View to read the underlying data again. Because all notifications are made through a single .NET event, this can easily become a bottleneck, owing to the serialized approach used by delegate and event handlers in the .NET world.

Conversely, when dealing with data that flows from the View to the Model, the inverse binding is usually configured declaratively with the {Binding ...} keyword, which supports specifying the binding direction and trigger timing (on the control's lost-focus CLR event, or any time the property value changes). The XAML data binding does not add any measurable time of its own. That said, such binding may link multiple properties or controls' dependency properties together, and this interaction logic can increase latency heavily, adding annoying delays at the View level. One example above all is the latency added by validation logic, which is even worse when the validation is more than formal, such as validating an ID or code against a database value.

Talking about scalability, the MVVM pattern itself gives us little to analyze, though we can make some concrete observations about the XAML implementation. It is easy to say that scaling out is impossible, because MVVM is a desktop class layered architecture that cannot scale. We can say, however, that in a multiuser scenario with multiple client systems connected in a 2-tier or 3-tier system architecture, simple MVVM and XAML-based frameworks will never act as bottlenecks.
The ability to use the full .NET stack in WPF gives us the chance to use all available synchronization techniques, whether working against a directly connected DBMS or a middleware tier. Rather than scaling up by moving the application to a system with a higher CPU clock, a XAML-based application benefits more from an increased CPU core count; obviously, to profit from many cores, mastering parallel techniques is mandatory.

Regarding resource usage, MVVM-powered architectures require only simple POCO classes as the Model and ViewModel. The only additional requirement is the implementation of the INotifyPropertyChanged interface, which costs next to nothing. Talking about the pattern, unlike MVC with its specific elaboration workflow, MVVM does not impose one: multiple commands with multiple logic can each process their respective work (including asynchronous invocation) against the local ViewModel data, or go down to the persistence layer to grab missing information. We have all the choices here.

Although MVVM itself costs nothing in terms of graphical rendering, XAML-based frameworks make massive use of hardware-accelerated user controls. As an extreme comparison, Windows Forms with Graphics Device Interface (GDI)-based rendering requires far fewer resources and can deliver a higher frame rate on rapidly updating data. Thus, if a very high FPS is needed, the option of rendering a WPF area in GDI is available; for other XAML languages, such a choice is not so easy to obtain. Obviously, this does not mean that XAML, with its DirectX-based engine, is slow at rendering. Simply consider that WPF animations need a good Graphics Processing Unit (GPU), while a basic GDI animation will execute on any system, obsolete as it may be.

Talking about availability, MVVM-based architectures usually lead programmers to good programming. Like MVC, MVVM designs are testable thanks to their great modularity.
While a Controller uses a pipelined workflow to process requests, a ViewModel is more flexible and can be tested under multiple initialization conditions. This makes it more powerful, but also less predictable, than a Controller, and hence trickier to use. In design terms, the Controller acts as a transaction script, while the ViewModel takes a more realistic, object-oriented approach. Finally, yet importantly, throughput and efficiency are simply unaffected by MVVM-based architectures as such. Because of the flexibility the solution gives to the developer, however, any interaction and business-logic design may be used inside a ViewModel and its underlying Models; any success or failure in those performance aspects is therefore usually down to the programmer's work. In XAML frameworks, throughput is achieved through intensive use of asynchronous and parallel programming, assisted by a built-in thread-synchronization subsystem based on the Dispatcher class, which deals with UI updates.

Low-pass filtering for audio

Low-pass filtering has been available in native .NET code since 2008. NAudio is a powerful library that helps any CLR programmer create, manipulate, or analyze audio data in any format. Available through the NuGet Package Manager, NAudio offers a simple, .NET-like programming framework, with specific classes and stream readers for audio data files. Let's see how to apply a low-pass digital filter to a real uncompressed audio file in WAVE format. For this test, we will use the default Windows start-up sound file.
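The listings that follow call a LowPass helper that the article does not show. As a rough idea of what such a helper might do — this is an assumption: a minimal single-pole (RC) low-pass filter with an invented method name, not NAudio's actual implementation — consider this Java sketch:

```java
// Hypothetical stand-in for the article's LowPass helper: the classic
// single-pole recurrence y[i] = y[i-1] + alpha * (x[i] - y[i-1]),
// where alpha is derived from the cutoff frequency and sample rate.
public class LowPassDemo {
    static float[] lowPass(float[] input, float cutoffHz, float sampleRate) {
        float dt = 1f / sampleRate;
        float rc = (float) (1.0 / (2 * Math.PI * cutoffHz));
        float alpha = dt / (rc + dt);
        float[] out = new float[input.length];
        float prev = 0f;
        for (int i = 0; i < input.length; i++) {
            // each output sample depends on the previous one, which is
            // why a single channel cannot be split across threads
            prev = prev + alpha * (input[i] - prev);
            out[i] = prev;
        }
        return out;
    }

    public static void main(String[] args) {
        // a rapidly alternating (high-frequency) signal should be strongly attenuated
        float[] noisy = new float[1000];
        for (int i = 0; i < noisy.length; i++) noisy[i] = (i % 2 == 0) ? 1f : -1f;
        float[] smooth = lowPass(noisy, 200f, 22050f);
        System.out.println(Math.abs(smooth[999]) < 0.2f); // high frequencies damped
    }
}
```

Because each output sample depends on the previous one, the per-channel recurrence is inherently sequential — the same constraint the article notes for its FFT-based filter, and the reason the only parallelism available is running the left and right channels concurrently.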
The chart is still made in a legacy Windows Forms application with an empty Form1 file, as shown in the previous example: private async void Form1_Load(object sender, EventArgs e) {    //stereo wave file channels    var channels = await Task.Factory.StartNew(() =>        {            //the wave stream-like reader            using (var reader = new WaveFileReader("startup.wav"))            {                var leftChannel = new List<float>();              var rightChannel = new List<float>();                  //let's read all frames as normalized floats                while (reader.Position < reader.Length)                {                    var frame = reader.ReadNextSampleFrame();                   leftChannel.Add(frame[0]);                    rightChannel.Add(frame[1]);                }                  return new                {                    Left = leftChannel.ToArray(),                    Right = rightChannel.ToArray(),                };            }        });      //make a low-pass digital filter on floating point data    //at 200hz    var leftLowpassTask = Task.Factory.StartNew(() => LowPass(channels.Left, 200).ToArray());    var rightLowpassTask = Task.Factory.StartNew(() => LowPass(channels.Right, 200).ToArray());      //this let the two tasks work together in task-parallelism    var leftChannelLP = await leftLowpassTask;    var rightChannelLP = await rightLowpassTask;      //create and databind a chart    var chart1 = CreateChart();      chart1.DataSource = Enumerable.Range(0, channels.Left.Length).Select(i => new        {            Index = i,            Left = channels.Left[i],            Right = channels.Right[i],            LeftLP = leftChannelLP[i],            RightLP = rightChannelLP[i],        }).ToArray();      chart1.DataBind();      //add the chart to the form    this.Controls.Add(chart1); }   private static Chart CreateChart() {    //creates a chart    //namespace System.Windows.Forms.DataVisualization.Charting      var chart1 = new 
Chart();      //shows chart in fullscreen    chart1.Dock = DockStyle.Fill;      //create a default area    chart1.ChartAreas.Add(new ChartArea());      //left and right channel series    chart1.Series.Add(new Series    {        XValueMember = "Index",        XValueType = ChartValueType.Auto,        YValueMembers = "Left",        ChartType = SeriesChartType.Line,    });    chart1.Series.Add(new Series    {        XValueMember = "Index",        XValueType = ChartValueType.Auto,        YValueMembers = "Right",        ChartType = SeriesChartType.Line,    });      //left and right channel low-pass (bass) series    chart1.Series.Add(new Series    {        XValueMember = "Index",        XValueType = ChartValueType.Auto,        YValueMembers = "LeftLP",        ChartType = SeriesChartType.Line,        BorderWidth = 2,    });    chart1.Series.Add(new Series    {        XValueMember = "Index",        XValueType = ChartValueType.Auto,        YValueMembers = "RightLP",        ChartType = SeriesChartType.Line,        BorderWidth = 2,    });      return chart1; } Let's see the graphical result: The Windows start-up sound waveform; in bold, the bass waveform produced by the low-pass filter at 200 Hz. Parallelism is mandatory in elaborations such as this. Audio elaboration is a canonical example of engineering data computation because it works on a huge dataset of floating-point values. A simple file such as the preceding one, containing less than 2 seconds of audio sampled at (only) 22,050 Hz, produces an array of more than 40,000 floating-point values per channel (stereo = 2 channels). To get an idea of how demanding audio processing is, note that an uncompressed CD-quality song of 4 minutes, sampled at 44,100 samples per second × 60 seconds × 4 minutes, creates an array of more than 10 million floating-point items per channel. Because of the intrinsic logic of the FFT, any low-pass filtering run must execute in a single thread.
This means that the only optimization we can apply when running FFT-based low-pass filtering is parallelizing on a per-channel basis. In most cases, this choice can bring at most a 2X throughput improvement, regardless of the processor count of the underlying system.

Summary

In this article, we were introduced to applications of high-performance .NET programming. We learned how MVVM and XAML play their roles in .NET to create applications for various platforms, and we examined their performance characteristics. We then saw how high-performance .NET applies to engineering work through a practical example of low-pass audio filtering, which showed how readily high-performance programming can be applied to specific engineering applications. Resources for Article: Further resources on this subject: Windows Phone 8 Applications [article] Core .NET Recipes [article] Parallel Programming Patterns [article]

Working with XML in Flex 3 and Java - part 1

Packt
28 Oct 2009
10 min read
In today's world, many server-side applications make use of XML to structure data because XML is a standard way of representing structured information. It is easy to work with, and people can easily read, write, and understand XML without the need for any specialized skills. The XML standard is widely accepted and used in server communications such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language. The XML standard specification is available at http://www.w3.org/XML/. Adobe Flex provides a standardized ECMAScript-based set of API classes and functionality for working with XML data. This collection of classes and functionality provided by Flex is known as E4X. You can use these classes to build sophisticated Rich Internet Applications using XML data.

XML basics

XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate its data. A typical XML document looks like this: <book>    <title>Flex 3 with Java</title>    <author>Satish Kore</author>    <publisher>Packt Publishing</publisher>    <pages>300</pages> </book> Generally, XML data is known as an XML document, and it is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element known as the root element. Each element is distinguished by a pair of tags known as the opening tag and the closing tag. In the previous XML document, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty element (also called a self-closing tag). For example, <book/> is as good as writing <book></book>.
XML documents can also be more complex with nested tags and attributes, as shown in the following example: <book ISBN="978-1-847195-34-0">   <title>Flex 3 with Java</title>   <author country="India" numberOfBooks="1">    <firstName>Satish</firstName>    <lastName>Kore</lastName> </author>   <publisher country="United Kingdom">Packt Publishing</publisher>   <pages>300</pages> </book> Notice that the above XML document contains nested tags such as <firstName> and <lastName> under the <author> tag. ISBN, country, and numberOfBooks, which you can see inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools' XML Tutorial at http://w3schools.com/xml/. Understanding E4X Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standards in order to work with XML data. The E4X approach provides a simple and straightforward way to work with XML structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. The E4X provides an alternative to DOM (Document Object Model) interface that uses a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X. The key features of E4X include: It is based on standard scripting language specifications known as ECMAScript for XML. Flex implements these specifications in the form of API classes and functionality for simplifying the XML data processing. It provides easy and well-known operators, such as the dot (.) and @, to work with XML objects. The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on. The E4X functionality is much easier and more intuitive than working with the DOM documents to access XML data. ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace. 
These classes are designed to simplify XML data processing in Flex applications. Let's see one quick example: Define a variable of type XML and create a sample XML document. In this example, we will assign it as a literal. However, in the real world, your application might load XML data from external sources, such as a web service or an RSS feed. private var myBooks:XML =   <books publisher="Packt Pub">    <book title="Book1" price="99.99">    <author>Author1</author>    </book>    <book title="Book2" price="59.99">    <author>Author2</author>    </book>    <book title="Book3" price="49.99">    <author>Author3</author>    </book> </books>; Now, we will see some of the E4X approaches to read and parse the above XML in our application. E4X uses many operators, such as dot (.) and the attribute identifier (@), to simplify access to XML nodes and attributes. private function traceXML():void {    trace(myBooks.book.(@price < 50.99).@title); //Output: Book3    trace(myBooks.book[1].author); //Output: Author2    trace(myBooks.@publisher); //Output: Packt Pub    //Following for loop outputs prices of all books    for each(var price in myBooks..@price) {    trace(price);    } } In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) whose price is below $50.99. If we had to do this manually, imagine how much code would have been needed to parse the XML. In the second trace, we access a book node by index and print its author node's value. In the third trace, we simply print the root node's publisher attribute value, and finally, we use a for loop to traverse the prices of all the books and print each price.

The following is a list of XML operators (each entry gives the operator, its name, and its description):

@ (attribute identifier): Identifies attributes of an XML or XMLList object.
{ } (braces (XML)): Evaluates an expression that is used in an XML or XMLList initializer.
[ ] (brackets (XML)): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].
+ (concatenation (XMLList)): Concatenates (combines) XML or XMLList values into an XMLList object.
+= (concatenation assignment (XMLList)): Assigns expression1

The XML object

An XML class represents an XML element, attribute, comment, processing instruction, or a text element. We have used the XML class in our example above to initialize the myBooks variable with an XML literal. The XML class is part of the ActionScript 3.0 core classes, so you don't need to import a package to use it. The XML class provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to existing XML documents. Methods such as toString() and toXMLString() allow you to convert XML to a string. An example of an XML object: private var myBooks:XML = <books publisher="Packt Pub"> <book title="Book1" price="99.99"> <author>Author1</author> </book> <book title="Book2" price="120.00"> <author>Author2</author> </book> </books>; In the above example, we have created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example: private var str:String = "<books publisher=\"Packt Pub\"> <book title=\"Book1\" price=\"99.99\"> <author>Author1</author> </book> <book title=\"Book2\" price=\"59.99\"> <author>Author2</author> </book> </books>"; private var myBooks:XML = new XML(str); trace(myBooks.toXMLString()); //outputs formatted xml as string If the XML data in the string is not well-formed (for example, a closing tag is missing), then you will see a runtime error.
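To appreciate how much ceremony E4X removes, here is the price query from the earlier traceXML() example — a one-liner in E4X — expressed against the W3C DOM with XPath, the style of API E4X was designed to replace. (A Java sketch for comparison only; the Flex/ActionScript code stays as shown above.)

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// The E4X expression myBooks.book.(@price < 50.99).@title, spelled out
// with DOM parsing plus an XPath query.
public class XPathDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<books publisher='Packt Pub'>"
                + "<book title='Book1' price='99.99'><author>Author1</author></book>"
                + "<book title='Book2' price='59.99'><author>Author2</author></book>"
                + "<book title='Book3' price='49.99'><author>Author3</author></book>"
                + "</books>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        // XPath 1.0 coerces the price attribute to a number for the comparison
        String title = (String) XPathFactory.newInstance().newXPath()
                .evaluate("/books/book[@price < 50.99]/@title", doc, XPathConstants.STRING);
        System.out.println(title); // Book3
    }
}
```

Even with XPath doing the heavy lifting, the parse-then-query ceremony is a far cry from E4X's direct dot-and-@ navigation.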
You can also use binding expressions in the XML text to extract contents from variable data. For example, you could bind a node's title attribute to a variable value, as in the following lines: private var title:String = "Book1"; var aBook:XML = <book title="{title}">; To read more about XML class methods and properties, go through the Flex 3 LiveDocs at http://livedocs.adobe.com/flex/3/langref/XML.html.

The XMLList object

As the class name indicates, XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects from an XMLList. To access these objects from the XMLList collection, iterate over it using a for each… statement. The XMLList provides you with the following methods to work with its objects:

child(): Returns a specified child of every XML object
children(): Returns specified children of every XML object
descendants(): Returns all descendants of an XML object
elements(): Calls the elements() method of each XML object in the XMLList. Returns all elements of the XML object
parent(): Returns the parent of the XMLList object if all items in the XMLList object have the same parent
attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results. The results match the given attributeName parameter
attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object
contains(): Checks if the specified XML object is present in the XMLList
copy(): Returns a copy of the given XMLList object
length(): Returns the number of properties in the XMLList object
valueOf(): Returns the XMLList object

For details on these methods, see the ActionScript 3.0 Language Reference.
Let's return to the example of the XMLList: var xmlList:XMLList = myBooks.book.(@price == 99.99); var item:XML; for each(item in xmlList) { trace("item:"+item.toXMLString()); } Output: item:<book title="Book1" price="99.99"> <author>Author1</author> </book> In the example above, we have used XMLList to store the result of the myBooks.book.(@price == 99.99); statement. This statement returns an XMLList containing XML node(s) whose price is 99.99$. Working with XML objects The XML class provides many useful methods to work with XML objects, such as the appendChild() and prependChild() methods to add an XML element to the beginning or end of an XML object, as shown in the following example: var node1:XML = <middleInitial>B</middleInitial> var node2:XML = <lastName>Kore</lastName> var root:XML = <personalInfo></personalInfo> root = root.appendChild(node1); root = root.appendChild(node2); root = root.prependChild(<firstName>Satish</firstName>); The output is as follows: <personalInfo> <firstName>Satish</firstName> <middleInitial>B</middleInitial> <lastName>Kore</lastName> </personalInfo> You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example: var x:XML = <count> <one>1</one> <three>3</three> <four>4</four> </count>; x = x.insertChildBefore(x.three, "<two>2</two>"); x = x.insertChildAfter(x.four, "<five>5</five>"); trace(x.toXMLString()); The output of the above code is as follows: <count> <one>1</one> <two>2</two> <three>3</three> <four>4</four> <five>5</five> </count>
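For contrast, building the same <count> document with the W3C DOM API takes noticeably more code, and the DOM offers insertBefore but no direct counterpart to E4X's insertChildAfter. A Java sketch, for comparison only:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// The appendChild/insertChildBefore calls above, expressed with the W3C DOM.
public class DomBuildDemo {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element count = doc.createElement("count");
        doc.appendChild(count);

        Element one = doc.createElement("one");     one.setTextContent("1");
        Element three = doc.createElement("three"); three.setTextContent("3");
        count.appendChild(one);
        count.appendChild(three);

        Element two = doc.createElement("two");     two.setTextContent("2");
        count.insertBefore(two, three); // DOM's equivalent of insertChildBefore

        // collect child element names in document order
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < count.getChildNodes().getLength(); i++)
            sb.append(count.getChildNodes().item(i).getNodeName()).append(' ');
        System.out.println(sb.toString().trim()); // one two three
    }
}
```

An "insert after" in the DOM must be phrased as insertBefore(newNode, refNode.getNextSibling()) — one more place where E4X's dedicated methods read more naturally.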

Modernizing our Spring Boot app

Packt
26 Nov 2014
15 min read
In this article by Greg L. Turnquist, the author of the book, Learning Spring Boot, we will discuss modernizing our Spring Boot app with JavaScript and adding production-ready support features. (For more resources related to this topic, see here.)

Modernizing our app with JavaScript

We just saw that, with a single @Grab statement, Spring Boot automatically configured the Thymeleaf template engine and some specialized view resolvers. We took advantage of Spring MVC's ability to pass attributes to the template through ModelAndView. Instead of figuring out the details of view resolvers, we instead channeled our efforts into building a handy template to render data fetched from the server. We didn't have to dig through reference docs, Google, and Stack Overflow to figure out how to configure and integrate Spring MVC with Thymeleaf. We let Spring Boot do the heavy lifting. But that's not enough, right? Any real application is going to also have some JavaScript. Love it or hate it, JavaScript is the engine for frontend web development. See how the following code lets us make things more modern by creating modern.groovy:

@Grab("org.webjars:jquery:2.1.1")
@Grab("thymeleaf-spring4")
@Controller
class ModernApp {
    def chapters = ["Quick Start With Groovy",
                    "Quick Start With Java",
                    "Debugging and Managing Your App",
                    "Data Access with Spring Boot",
                    "Securing Your App"]

    @RequestMapping("/")
    def home(@RequestParam(value="name", defaultValue="World") String n) {
        new ModelAndView("modern")
            .addObject("name", n)
            .addObject("chapters", chapters)
    }
}

A single @Grab statement pulls in jQuery 2.1.1. The rest of our server-side Groovy code is the same as before. There are multiple ways to use JavaScript libraries. For Java developers, it's especially convenient to use the WebJars project (http://webjars.org), where lots of handy JavaScript libraries are wrapped up with Maven coordinates. Every library is found on the /webjars/<library>/<version>/<module> path.
To top it off, Spring Boot comes with prebuilt support. Perhaps you noticed this buried in earlier console outputs:

...
2014-05-20 08:33:09.062 ... : Mapped URL path [/webjars/**] onto handler of [...
...

With jQuery added to our application, we can amp up our template (templates/modern.html) like this:

<html>
<head>
<title>Learning Spring Boot - Chapter 1</title>
<script src="webjars/jquery/2.1.1/jquery.min.js"></script>
<script>
$(document).ready(function() {
    $('p').animate({
        fontSize: '48px',
    }, "slow");
});
</script>
</head>
<body>
<p th:text="'Hello, ' + ${name}"></p>
<ol>
    <li th:each="chapter : ${chapters}" th:text="${chapter}"></li>
</ol>
</body>
</html>

What's different between this template and the previous one? It has a couple of extra <script> tags in the head section: The first one loads jQuery from /webjars/jquery/2.1.1/jquery.min.js (implying that we can also grab jquery.js if we want to debug jQuery). The second script looks for the <p> element containing our Hello, world! message and then performs an animation that increases the font size to 48 pixels after the DOM is fully loaded into the browser. If we run spring run modern.groovy and visit http://localhost:8080, then we can see this simple but stylish animation. It shows us that all of jQuery is available for us to work with on our application.

Using Bower instead of WebJars

WebJars isn't the only option when it comes to adding JavaScript to our app. More sophisticated UI developers might use Bower (http://bower.io), a popular JavaScript library management tool. WebJars are useful for Java developers, but not every library has been bundled as a WebJar. There is also a huge community of frontend developers more familiar with Bower and NodeJS that will probably prefer using their standard tool chain to do their jobs. We'll see how to plug that into our app. First, it's important to know some basic options.
Spring Boot supports serving up static web resources from the following paths:

/META-INF/resources/
/resources/
/static/
/public/

To craft a Bower-based app with Spring Boot, we first need to craft a .bowerrc file in the same folder we plan to create our Spring Boot CLI application in. Let's pick public/ as the folder of choice for JavaScript modules and put it in this file, as shown in the following code:

{"directory": "public/"}

Do I have to use public? No. Again, you can pick any of the folders listed previously and Spring Boot will serve up the code. It's a matter of taste and semantics. Our first step towards a Bower-based app is to define our project by answering a series of questions (this only has to be done once):

$ bower init
[?] name: app_with_bower
[?] version: 0.1.0
[?] description: Learning Spring Boot - bower sample
[?] main file:
[?] what types of modules does this package expose? amd
[?] keywords:
[?] authors: Greg Turnquist <gturnquist@pivotal.io>
[?] license: ASL
[?] homepage: http://blog.greglturnquist.com/category/learning-springboot
[?] set currently installed components as dependencies? No
[?] add commonly ignored files to ignore list? Yes
[?] would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes
...
[?] Looks good?
Yes

Now that we have set up our project, let's do something simple such as installing jQuery with the following command:

$ bower install jquery --save
bower jquery#*    cached git://github.com/jquery/jquery.git#2.1.1
bower jquery#*    validate 2.1.1 against git://github.com/jquery/jquery.git#*

These two commands will have created the following bower.json file:

{
  "name": "app_with_bower",
  "version": "0.1.0",
  "authors": [
    "Greg Turnquist <gturnquist@pivotal.io>"
  ],
  "description": "Learning Spring Boot - bower sample",
  "license": "ASL",
  "homepage": "http://blog.greglturnquist.com/category/learningspring-boot",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "public/",
    "test",
    "tests"
  ],
  "dependencies": {
    "jquery": "~2.1.1"
  }
}

It will also have installed jQuery 2.1.1 into our app with the following directory structure:

public
└── jquery
    ├── MIT-LICENSE.txt
    ├── bower.json
    └── dist
        ├── jquery.js
        └── jquery.min.js

We must include --save (two dashes) whenever we install a module. This ensures that our bower.json file is updated at the same time, allowing us to rebuild things if needed. The altered version of our app with WebJars removed should now look like this:

@Grab("thymeleaf-spring4")
@Controller
class ModernApp {
    def chapters = ["Quick Start With Groovy",
                    "Quick Start With Java",
                    "Debugging and Managing Your App",
                    "Data Access with Spring Boot",
                    "Securing Your App"]

    @RequestMapping("/")
    def home(@RequestParam(value="name", defaultValue="World") String n) {
        new ModelAndView("modern_with_bower")
            .addObject("name", n)
            .addObject("chapters", chapters)
    }
}

The view name has been changed to modern_with_bower, so it doesn't collide with the previous template if found in the same folder.
This version of the template, templates/modern_with_bower.html, should look like this:

<html>
<head>
<title>Learning Spring Boot - Chapter 1</title>
<script src="jquery/dist/jquery.min.js"></script>
<script>
$(document).ready(function() {
    $('p').animate({
        fontSize: '48px',
    }, "slow");
});
</script>
</head>
<body>
<p th:text="'Hello, ' + ${name}"></p>
<ol>
    <li th:each="chapter : ${chapters}" th:text="${chapter}"></li>
</ol>
</body>
</html>

The path to jquery is now jquery/dist/jquery.min.js. The rest is the same as the WebJars example. We just launch the app with spring run modern_with_bower.groovy and navigate to http://localhost:8080. (We might need to refresh the page to ensure loading of the latest HTML.) The animation should work just the same. The options shown in this section can quickly give us a taste of how easy it is to use popular JavaScript tools with Spring Boot. We don't have to fiddle with messy tool chains to achieve a smooth integration. Instead, we can use them the way they are meant to be used. What about an app that is all frontend with no backend? Perhaps we're building an app that gets all its data from a remote backend. In this age of RESTful backends, it's not uncommon to build a single-page frontend that is fed data updates via AJAX. Spring Boot's Groovy support provides the perfect and arguably smallest way to get started. We do so by creating pure_javascript.groovy, as shown in the following code:

@Controller
class JsApp { }

That doesn't look like much, but it accomplishes a lot. Let's see what this tiny fragment of code actually does for us:

The @Controller annotation, like @RestController, causes Spring Boot to auto-configure Spring MVC.
Spring Boot will launch an embedded Apache Tomcat server.
Spring Boot will serve up static content from resources, static, and public.
Since there are no Spring MVC routes in this tiny fragment of code, things will fall to resource resolution.
Next, we can create a static/index.html page as follows: <html>Greetings from pure HTML which can, in turn, load JavaScript!</html> Run spring run pure_javascript.groovy and navigate to http://localhost:8080. We will see the preceding plain text shown in our browser as expected. There is nothing here but pure HTML being served up by our embedded Apache Tomcat server. This is arguably the lightest way to serve up static content. Use spring jar and it's possible to easily bundle up our client-side app to be installed anywhere. Spring Boot's support for static HTML, JavaScript, and CSS opens the door to many options. We can add WebJar annotations to JsApp or use Bower to introduce third-party JavaScript libraries in addition to any custom client-side code. We might just manually download the JavaScript and CSS. No matter what option we choose, Spring Boot CLI certainly provides a super simple way to add rich-client power for app development. To top it off, RESTful backends that are decoupled from the frontend can have different iteration cycles as well as different development teams. You might need to configure CORS (http://spring.io/understanding/CORS) to properly handle making remote calls that don't go back to the original server. Adding production-ready support features So far, we have created a Spring MVC app with minimal code. We added views and JavaScript. We are on the verge of a production release. Before deploying our rapidly built and modernized web application, we might want to think about potential issues that might arise in production: What do we do when the system administrator wants to configure his monitoring software to ping our app to see if it's up? What happens when our manager wants to know the metrics of people hitting our app? What are we going to do when the Ops center supervisor calls us at 2:00 a.m. and we have to figure out what went wrong? 
The last feature we are going to introduce in this article is Spring Boot's Actuator module and CRaSH remote shell support (http://www.crashub.org). These two modules provide some super slick, Ops-oriented features that are incredibly valuable in a production environment. We first need to update our previous code (we'll call it ops.groovy), as shown in the following code:

@Grab("spring-boot-actuator")
@Grab("spring-boot-starter-remote-shell")
@Grab("org.webjars:jquery:2.1.1")
@Grab("thymeleaf-spring4")
@Controller
class OpsReadyApp {
    @RequestMapping("/")
    def home(@RequestParam(value="name", defaultValue="World") String n) {
        new ModelAndView("modern").addObject("name", n)
    }
}

This app is exactly like the WebJars example with two key differences: it adds @Grab("spring-boot-actuator") and @Grab("spring-boot-starter-remote-shell"). When you run this version of our app, the same business functionality is available that we saw earlier, but there are additional HTTP endpoints available:

/autoconfig: This reports what Spring Boot did and didn't auto-configure and why
/beans: This reports all the beans configured in the application context (including ours as well as the ones auto-configured by Boot)
/configprops: This exposes all configuration properties
/dump: This creates a thread dump report
/env: This reports on the current system environment
/health: This is a simple endpoint to check the life of the app
/info: This serves up custom content from the app
/metrics: This shows counters and gauges on web usage
/mappings: This gives us details about all Spring MVC routes
/trace: This shows details about past requests

Pinging our app for general health

Each of these endpoints can be visited using our browser or using other tools such as curl. For example, let's assume we ran spring run ops.groovy and then opened up another shell.
From the second shell, let's run the following curl command:

$ curl localhost:8080/health
{"status":"UP"}

This immediately solves our first need listed previously. We can inform the system administrator that he or she can write a management script to interrogate our app's health.

Gathering metrics

Be warned that each of these endpoints serves up a compact JSON document. Generally speaking, command-line curl probably isn't the best option. While it's convenient on *nix and Mac systems, the content is dense and hard to read. It's more practical to have:

A JSON plugin installed in our browser (such as JSONView at http://jsonview.com)
A script that uses a JSON parsing library if we're writing a management script (such as Groovy's JsonSlurper at http://groovy.codehaus.org/gapi/groovy/json/JsonSlurper.html or JSONPath at https://code.google.com/p/json-path)

Assuming we have JSONView installed, the following screenshot shows a listing of metrics: It lists counters for each HTTP endpoint. According to this, /metrics has been visited four times with a successful 200 status code. Someone tried to access /foo, but it failed with a 404 error code. The report also lists gauges for each endpoint, reporting the last response time. In this case, /metrics took 2 milliseconds. Also included are some memory stats as well as the total CPUs available. It's important to realize that the metrics start at 0. To generate some numbers, you might want to first click on some links before visiting /metrics. The following screenshot shows a trace report: It shows the entire web request and response for curl localhost:8080/health. This provides a basic framework of metrics to satisfy our manager's needs. It's important to understand that metrics gathered by Spring Boot Actuator aren't persistent across application restarts. So to gather long-term data, we have to gather them and then write them elsewhere.
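The management script promised to the system administrator can start very small. This Java sketch stubs out the HTTP call: the /health endpoint and its {"status":"UP"} payload come from the article, while the helper name and the naive string check are our own — a real script would use an HTTP client and a proper JSON parser.

```java
// Minimal health-check logic: decide whether a /health payload reports UP.
public class HealthCheckDemo {
    static boolean isUp(String healthJson) {
        // /health returns a tiny document such as {"status":"UP"};
        // stripping spaces makes the check whitespace-tolerant
        return healthJson.replace(" ", "").contains("\"status\":\"UP\"");
    }

    public static void main(String[] args) {
        System.out.println(isUp("{\"status\":\"UP\"}"));   // true
        System.out.println(isUp("{\"status\":\"DOWN\"}")); // false
    }
}
```

Wired to a scheduler and an HTTP fetch of localhost:8080/health, this is enough for a first-cut monitoring probe; anything fancier should parse the JSON properly, as the article suggests with JsonSlurper or JSONPath.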
With these options, we can perform the following:

- Write a script that gathers metrics every hour and appends them to a running spreadsheet somewhere else in the filesystem, such as a shared drive. This might be simple, but probably also crude.
- To step it up, we can dump the data into a Hadoop filesystem for raw collection and configure Spring XD (http://projects.spring.io/spring-xd/) to consume it.

Spring XD stands for Spring eXtreme Data. It is an open source product that makes it incredibly easy to chain together sources and sinks comprised of many components, such as HTTP endpoints, Hadoop filesystems, Redis metrics, and RabbitMQ messaging. Unfortunately, there is no space to dive into this subject.

With any monitoring, it's important to check that we aren't taxing the system too heavily. The same container responding to business-related web requests is also serving metrics data, so it will be wise to engage profilers periodically to ensure that the whole system is performing as expected.

Detailed management with CRaSH

So what can we do when we receive that 2:00 a.m. phone call from the Ops center? After either coming in or logging in remotely, we can access the convenient CRaSH shell we configured. Every time the app launches, it generates a random password for SSH access and prints this to the local console:

2014-06-11 23:00:18.822 ... : Configuring property ssh.port=2000 from properties
2014-06-11 23:00:18.823 ... : Configuring property ssh.authtimeout=600000 fro...
2014-06-11 23:00:18.824 ... : Configuring property ssh.idletimeout=600000 fro...
2014-06-11 23:00:18.824 ... : Configuring property auth=simple from properties
2014-06-11 23:00:18.824 ... : Configuring property auth.simple.username=user f...
2014-06-11 23:00:18.824 ... : Configuring property auth.simple.password=bdbe4a...

We can easily see that there's SSH access on port 2000 via a user if we use this information to log in:

$ ssh -p 2000 user@localhost
Password authentication
Password:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::  (v1.1.6.RELEASE) on retina
>

There's a fistful of commands:

- help: This gets a listing of available commands
- dashboard: This gets a graphic, text-based display of all the threads, environment properties, memory, and other things
- autoconfig: This prints out a report of which Spring Boot auto-configuration rules were applied and which were skipped (and why)

All of the previous commands have man pages:

> man autoconfig
NAME
autoconfig - Display auto configuration report from ApplicationContext
SYNOPSIS
autoconfig [-h | --help]
STREAM
autoconfig <java.lang.Void, java.lang.Object>
PARAMETERS
[-h | --help]
Display this help message
...

There are many commands available to help manage our application. More details are available at http://www.crashub.org/1.3/reference.html.

Summary

In this article, we learned about modernizing our Spring Boot app with JavaScript and adding production-ready support features. We plugged in Spring Boot's Actuator module as well as the CRaSH remote shell, configuring it with metrics, health, and management features so that we can monitor it in production by merely adding two lines of extra code.

Resources for Article:

Further resources on this subject:
Getting Started with Spring Security [Article]
Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
Spring Security 3: Tips and Tricks [Article]
Setting up MSMQ on your Mobile and Writing MSMQ Application with .NET Compact Framework 3.5

Packt
29 Apr 2010
3 min read
Let's get started.

Setting up Microsoft Messaging Queue Service (MSMQ) on your mobile device

MSMQ is not installed by default on the Windows Mobile platform. This section will guide you on how to install MSMQ on your mobile device or device emulator. You will first need to download the Redistributable Server Components for Windows Mobile 5.0 package (which can also be used for Windows Mobile 6.0) from this location: http://www.microsoft.com/downloads/details.aspx?FamilyID=cdfd2bb2-fa13-4062-b8d1-4406ccddb5fd&displaylang=en

After downloading and unzipping this file, you will have access to the MSMQ.arm.cab file in the following folder: Optional Windows Mobile 5.0 Server Components\msmq

Copy this file via ActiveSync to your mobile device and run it on the device. This package contains two applications (and a bunch of other DLL components) that you will be using frequently on the device:

msmqadm.exe: This is the command-line tool that allows you to start and stop the MSMQ service on the mobile device and also configure MSMQ settings. It can also be invoked programmatically from code.
visadm.exe: This tool does the same thing as above, but provides a visual interface.

These two files will be unpacked into the \Windows folder of your mobile device. The following DLL files will also be unpacked into the \Windows folder:

msmqd.dll
msmqrt.dll

Verify that these files exist. The next thing you need to do is to change the name of your device (if you haven't done so earlier). In most cases, you are probably using the Windows Mobile Emulator, which comes with an unassigned device name by default. To change your device name, navigate to Settings | System | About on your mobile device. You can change the device name in the Device ID tab. At this point, you have the files for MSMQ unpacked, but it isn't exactly installed yet. To do this, you must invoke either msmqadm.exe or visadm.exe. Launch the following application: \Windows\visadm.exe

A pop-up window will appear.
This window contains a text box and a Run button that allow you to type in the desired command and execute it. The first command you need to issue is the register install command. Type in the command and click the Run button. No message will be displayed in the window. This command will install MSMQ (as a device driver) on your device. Run the following commands in the given order next (one after the other):

register: You will need to run the register command one more time (without the install keyword) to create the MSMQ configuration keys in the registry.
enable binary: This command enables the proprietary MSMQ binary protocol to send messages to remote queues.
enable srmp: This command enables SRMP (SOAP Reliable Messaging Protocol), for sending messages to remote queues over HTTP.
start: This command starts the MSMQ service.

Verify that the MSMQ service has been installed successfully by clicking on the Shortcuts button and then clicking the Verify button in the ensuing pop-up window. You will be presented with a pop-up dialog as shown in the following screenshot:

MSMQ log information

If you scroll down in this same window, you will find the Base Dir path, which contains the MSMQ auto-generated log file. This log file, named MQLOGFILE by default, contains useful MSMQ-related information and error messages. After you've done the preceding steps, you will need to do a soft reset of your device. The MSMQ service will automatically start upon boot up.
Manage SQL Azure Databases with the Web Interface 'Houston'

Packt
21 Jan 2011
2 min read
Microsoft SQL Azure Enterprise Application Development

- Build enterprise-ready applications and projects with SQL Azure
- Develop large scale enterprise applications using Microsoft SQL Azure
- Understand how to use the various third party programs such as DB Artisan, RedGate, ToadSoft etc. developed for SQL Azure
- Master the exhaustive data migration and data synchronization aspects of SQL Azure
- Includes SQL Azure projects in incubation and more recent developments including all 2010 updates

In order to use this program and follow the article, you should have an account on the Windows Azure Platform on which, preferably, an SQL Azure server has been provisioned. This also implies that you have a Windows Live ID to access the portal. As mentioned, in this article we look at some of the features of this web-based tool and carry out a few tasks.

Click the Launch Houston button in the Project Houston CTP1 page shown here on the SQLAzureLabs portal page. This brings up a world map displaying the current Windows Azure data centers available, and you have to choose the data center on which you have an account. For the present article we will use the Southeast Asia data center and sometimes the North Central US data center. Click on the Southeast Asia location. The Silverlight application gets launched from the URL https://manage-sgp.cloudapp.net/, displaying the license information that you need to agree to before going forward. When you click OK, the login page is displayed as shown. You need to enter the server information at the Southeast Asia data center as shown. Click Connect. The connection gets established to the above SQL Azure server as shown in the next image. This is much better looking than the somewhat 'drab' looking SSMS interface (albeit fully mature) shown here for comparison.

Changing the database

If you need to work with a different database, click on Connect DB at the top left of the 'Houston' user interface, as shown in the next image.
The connection interface comes up again, where you indicate the name of the database, as shown. Here the database has been changed to master. Clicking Connect now connects you to the master database as shown.
DWR Java AJAX User Interface: Basic Elements (Part 2)

Packt
20 Oct 2009
21 min read
Implementing Tables and Lists

The first actual sample is very common in applications: tables and lists. In this sample, the table is populated using the DWR utility functions and a remoted Java class. The sample code also shows how DWR is used to do inline table editing. When a table cell is double-clicked, an edit box opens, and it is used to save new cell data. The sample will have country data in a CSV file: country Name, Long Name, two-letter Code, Capital, and user-defined Notes. The user interface for the table sample appears as shown in the following screenshot:

Server Code for Tables and Lists

The first thing to do is to get the country data. Country data is in a CSV file (named countries.csv and located in the samples Java package). The following is an excerpt of the content of the CSV file (data is from http://www.state.gov):

Short-form name,Long-form name,FIPS Code,Capital
Afghanistan,Islamic Republic of Afghanistan,AF,Kabul
Albania,Republic of Albania,AL,Tirana
Algeria,People's Democratic Republic of Algeria,AG,Algiers
Andorra,Principality of Andorra,AN,Andorra la Vella
Angola,Republic of Angola,AO,Luanda
Antigua and Barbuda,(no long-form name),AC,Saint John's
Argentina,Argentine Republic,AR,Buenos Aires
Armenia,Republic of Armenia,AM,Yerevan

The CSV file is read each time a client requests country data. Although this is not very efficient, it is good enough here. Other alternatives include an in-memory cache or a real database such as Apache Derby or IBM DB2. As an example, we have created a CountryDB class that is used to read and write the country CSV. We also have another class, DBUtils, which has some helper methods.
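Before looking at the Java classes, the letter filter they implement is easy to state on its own. This short Python sketch (an illustration only, not part of the article's code) mirrors what DBUtils.getCSVStrings() will do with the data above:

```python
def get_csv_strings(lines, letter=None):
    """Return CSV data rows, optionally only those starting with `letter`."""
    rows = lines[1:]  # skip the "Short-form name,..." header row
    if letter is None:
        return rows
    return [row for row in rows if row.startswith(letter)]

csv_lines = [
    "Short-form name,Long-form name,FIPS Code,Capital",
    "Albania,Republic of Albania,AL,Tirana",
    "Angola,Republic of Angola,AO,Luanda",
    "Argentina,Argentine Republic,AR,Buenos Aires",
]
print(get_csv_strings(csv_lines, "Al"))
# ['Albania,Republic of Albania,AL,Tirana']
```

A None letter returns every country, which is exactly how the Java code builds the full list before writing it back out.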
The DBUtils code is as follows:

package samples;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.List;
import java.util.Vector;

public class DBUtils {
  private String fileName = null;

  public void initFileDB(String fileName) {
    this.fileName = fileName;
    // copy csv file to bin-directory, for easy file access
    File countriesFile = new File(fileName);
    if (!countriesFile.exists()) {
      try {
        List<String> countries = getCSVStrings(null);
        PrintWriter pw;
        pw = new PrintWriter(new FileWriter(countriesFile));
        for (String country : countries) {
          pw.println(country);
        }
        pw.close();
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }

  protected List<String> getCSVStrings(String letter) {
    List<String> csvData = new Vector<String>();
    try {
      File csvFile = new File(fileName);
      BufferedReader br = null;
      if (csvFile.exists()) {
        br = new BufferedReader(new FileReader(csvFile));
      } else {
        InputStream is = this.getClass().getClassLoader()
            .getResourceAsStream("samples/" + fileName);
        br = new BufferedReader(new InputStreamReader(is));
        br.readLine();
      }
      for (String line = br.readLine(); line != null; line = br.readLine()) {
        if (letter == null || (letter != null && line.startsWith(letter))) {
          csvData.add(line);
        }
      }
      br.close();
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
    return csvData;
  }
}

The DBUtils class is a straightforward utility class that returns CSV content as a List of Strings. It also copies the original CSV file to the runtime directory of any application server we might be running. This may not be the best practice, but it makes it easier to manipulate the CSV file, and we always have the original CSV file untouched if and when we need to go back to the original version.
The code for CountryDB is given here:

package samples;

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Arrays;
import java.util.List;
import java.util.Vector;

public class CountryDB {
  private DBUtils dbUtils = new DBUtils();
  private String fileName = "countries.csv";

  public CountryDB() {
    dbUtils.initFileDB(fileName);
  }

  public String[] getCountryData(String ccode) {
    List<String> countries = dbUtils.getCSVStrings(null);
    for (String country : countries) {
      if (country.indexOf("," + ccode + ",") > -1) {
        return country.split(",");
      }
    }
    return new String[0];
  }

  public List<List<String>> getCountries(String startLetter) {
    List<List<String>> allCountryData = new Vector<List<String>>();
    List<String> countryData = dbUtils.getCSVStrings(startLetter);
    for (String country : countryData) {
      String[] data = country.split(",");
      allCountryData.add(Arrays.asList(data));
    }
    return allCountryData;
  }

  public String[] saveCountryNotes(String ccode, String notes) {
    List<String> countries = dbUtils.getCSVStrings(null);
    try {
      PrintWriter pw = new PrintWriter(new FileWriter(fileName));
      for (String country : countries) {
        if (country.indexOf("," + ccode + ",") > -1) {
          if (country.split(",").length == 4) {
            // no existing notes
            country = country + "," + notes;
          } else {
            if (notes.length() == 0) {
              country = country.substring(0, country.lastIndexOf(","));
            } else {
              country = country.substring(0, country.lastIndexOf(",")) + "," + notes;
            }
          }
        }
        pw.println(country);
      }
      pw.close();
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
    String[] rv = new String[2];
    rv[0] = ccode;
    rv[1] = notes;
    return rv;
  }
}

The CountryDB class is a remoted class. The getCountryData() method returns country data as an array of strings based on the country code. The getCountries() method returns all the countries that start with the specified parameter, and saveCountryNotes() saves user-added notes to the country specified by the country code.
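The notes-saving logic is the only non-obvious part: a row gains a fifth field when notes are added and loses it when they are cleared. A simplified Python restatement of saveCountryNotes() (an illustration that assumes each row has exactly four or five comma-separated fields, alongside the Java above):

```python
def save_country_notes(rows, ccode, notes):
    """Rewrite CSV rows so the row containing country code `ccode`
    ends with `notes` as its fifth field, or has no fifth field
    when `notes` is empty (mirrors saveCountryNotes above)."""
    updated = []
    for row in rows:
        if "," + ccode + "," in row:
            fields = row.split(",")
            # keep the four fixed fields; append notes only when non-empty
            row = ",".join(fields[:4] + ([notes] if notes else []))
        updated.append(row)
    return updated

rows = ["Angola,Republic of Angola,AO,Luanda"]
print(save_country_notes(rows, "AO", "visited 2009"))
# ['Angola,Republic of Angola,AO,Luanda,visited 2009']
```

Passing an empty string for notes drops the fifth field again, which is how the browser-side Cancel/clear behaviour maps back onto the CSV.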
In order to use CountryDB, the following script element must be added to the index.jsp file together with other JavaScript elements. <script type='text/javascript' src='/DWREasyAjax/dwr/interface/CountryDB.js'></script> There is one other Java class that we need to create and remote. That is the AppContent class that was already present in the JavaScript functions of the home page. The AppContent class is responsible for reading the content of the HTML file and parses the possible JavaScript function out of it, so it can become usable by the existing JavaScript functions in index.jsp file. package samples;import java.io.ByteArrayOutputStream;import java.io.IOException;import java.io.InputStream;import java.util.List;import java.util.Vector;public class AppContent { public AppContent() { } public List<String> getContent(String contentId) { InputStream is = this.getClass().getClassLoader().getResourceAsStream( "samples/"+contentId+".html"); String content=streamToString(is); List<String> contentList=new Vector<String>(); //Javascript within script tag will be extracted and sent separately to client for(String script=getScript(content);!script.equals("");script=getScript(content)) { contentList.add(script); content=removeScript(content); } //content list will have all the javascript //functions, last element is executed last //and all other before html content if(contentList.size()>1) { contentList.add(contentList.size()-1, content); } else { contentList.add(content); } return contentList; } public List<String> getLetters() { List<String> letters=new Vector<String>(); char[] l=new char[1]; for(int i=65;i<91;i++) { l[0]=(char)i; letters.add(new String(l)); } return letters; } public String removeScript(String html) { //removes first script element int sIndex=html.toLowerCase().indexOf("<script "); if(sIndex==-1) { return html; } int eIndex=html.toLowerCase().indexOf("</script>")+9; return html.substring(0, sIndex)+html.substring(eIndex); } public String 
getScript(String html) { //returns first script element int sIndex=html.toLowerCase().indexOf("<script "); if(sIndex==-1) { return ""; } int eIndex=html.toLowerCase().indexOf("</script>")+9; return html.substring(sIndex, eIndex); } public String streamToString(InputStream is) { String content=""; try { ByteArrayOutputStream baos=new ByteArrayOutputStream(); for(int b=is.read();b!=-1;b=is.read()) { baos.write(b); } content=baos.toString(); } catch(IOException ioe) { content=ioe.toString(); } return content; }} The getContent() method reads the HTML code from a file based on the contentId. ContentId was specified in the dwrapplication.properties file, and the HTML is just contentId plus the extension .html in the package directory. There is also a getLetters() method that simply lists letters from A to Z and returns a list of letters to the browser. If we test the application now, we will get an error as shown in the following screenshot: We know why the AppContent is not defined error occurs, so let's fix it by adding AppContent to the allow element in the dwr.xml file. We also add CountryDB to the allow element. The first thing we do is to add required elements to the dwr.xml file. We add the following creators within the allow element in the dwr.xml file. <create creator="new" javascript="AppContent"> <param name="class" value="samples.AppContent" /> <include method="getContent" /> <include method="getLetters" /> </create> <create creator="new" javascript="CountryDB"> <param name="class" value="samples.CountryDB" /> <include method="getCountries" /> <include method="saveCountryNotes" /> <include method="getCountryData" /></create> We explicitly define the methods we are remoting using the include elements. This is a good practice, as we don't accidentally allow access to any methods that are not meant to be remoted. Client Code for Tables and Lists We also need to add a JavaScript interface to the index.jsp page. 
Add the following with the rest of the scripts in the index.jsp file. <script type='text/javascript' src='/DWREasyAjax/dwr/interface/AppContent.js'></script> Before testing, we need the sample HTML for the content area. The following HTML is in the TablesAndLists.html file under the samples directory: <h3>Countries</h3><p>Show countries starting with <select id="letters" onchange="selectLetter(this);return false;"> </select><br/>Doubleclick "Notes"-cell to add notes to country.</p><table border="1"> <thead> <tr> <th>Name</th> <th>Long name</th> <th>Code</th> <th>Capital</th> <th>Notes</th> </tr> </thead> <tbody id="countryData"> </tbody></table><script type='text/javascript'>//TO BE EVALEDAppContent.getLetters(addLetters);</script> The script element at the end is extracted by our Java class, and it is then evaluated by the browser when the client-side JavaScript receives the HTML. There is the select element, and its onchange event calls the selectLetter() JavaScript function. We will implement the selectLetter() function shortly. JavaScript functions are added in the index.jsp file, and within the head element. Functions could be in separate JavaScript files, but the embedded script is just fine here. 
function selectLetter(selectElement){ var selectedIndex = selectElement.selectedIndex; var selectedLetter= selectElement.options[selectedIndex ].value; CountryDB.getCountries(selectedLetter,setCountryRows);}function addLetters(letters){dwr.util.addOptions('letters',['letter...']);dwr.util.addOptions('letters',letters);}function setCountryRows(countryData){var cellFuncs = [ function(data) { return data[0]; }, function(data) { return data[1]; }, function(data) { return data[2]; }, function(data) { return data[3]; }, function(data) { return data[4]; }];dwr.util.removeAllRows('countryData');dwr.util.addRows( 'countryData',countryData,cellFuncs, { cellCreator:function(options) { var td = document.createElement("td"); if(options.cellNum==4) { var notes=options.rowData[4]; if(notes==undefined) { notes='&nbsp;';// + options.rowData[2]+'notes'; } var ccode=options.rowData[2]; var divId=ccode+'_Notes'; var tdId=divId+'Cell'; td.setAttribute('id',tdId); var html=getNotesHtml(ccode,notes); td.innerHTML=html; options.data=html; } return td; }, escapeHtml:false });}function getNotesHtml(ccode,notes){ var divId=ccode+'_Notes'; return "<div onDblClick="editCountryNotes('"+divId+"','"+ccode+"');" id=""+divId+"">"+notes+"</div>";}function editCountryNotes(id,ccode){ var notesElement=dwr.util.byId(id); var tdId=id+'Cell'; var notes=notesElement.innerHTML; if(notes=='&nbsp;') { notes=''; } var editBox='<input id="'+ccode+'NotesEditBox" type="text" value="'+notes+'"/><br/>'; editBox+="<input type='button' id='"+ccode+"SaveNotesButton' value='Save' onclick='saveCountryNotes(""+ccode+"");'/>"; editBox+="<input type='button' id='"+ccode+"CancelNotesButton' value='Cancel' onclick='cancelEditNotes(""+ccode+"");'/>"; tdElement=dwr.util.byId(tdId); tdElement.innerHTML=editBox; dwr.util.byId(ccode+'NotesEditBox').focus();}function cancelEditNotes(ccode){ var countryData=CountryDB.getCountryData(ccode, { callback:function(data) { var notes=data[4]; if(notes==undefined) { notes='&nbsp;'; } var 
html=getNotesHtml(ccode,notes); var tdId=ccode+'_NotesCell'; var td=dwr.util.byId(tdId); td.innerHTML=html; } });}function saveCountryNotes(ccode){ var editBox=dwr.util.byId(ccode+'NotesEditBox'); var newNotes=editBox.value; CountryDB.saveCountryNotes(ccode,newNotes, { callback:function(newNotes) { var ccode=newNotes[0]; var notes=newNotes[1]; var notesHtml=getNotesHtml(ccode,notes); var td=dwr.util.byId(ccode+"_NotesCell"); td.innerHTML=notesHtml; } });} There are lots of functions for table samples, and we go through each one of them. The first is the selectLetter() function. This function gets the selected letter from the select element and calls the CountryDB.getCountries() remoted Java method. The callback function is setCountryRows. This function receives the return value from the Java getCountries() method, that is List<List<String>>, a List of Lists of Strings. The second function is addLetters(letters), and it is a callback function for theAppContent.getLetters() method, which simply returns letters from A to Z. The addLetters() function uses the DWR utility functions to populate the letter list. Then there is a callback function for the CountryDB.getCountries() method. The parameter for the function is an array of countries that begin with a specified letter. Each array element has a format: Name, Long name, (country code) Code, Capital, Notes. The purpose of this function is to populate the table with country data; and let's see how it is done. The variable, cellFuncs, holds functions for retrieving data for each cell in a column. The parameter named data is an array of country data that was returned from the Java class. The table is populated using the DWR utility function, addRows(). The cellFuncs variable is used to get the correct data for the table cell. The cellCreator function is used to create custom HTML for the table cell. 
Default implementation generates just a td element, but our custom implementation generates the td-element with the div placeholder for user notes. The getNotesHtml() function is used to generate the div element with the event listener for double-click. The editCountryNotes() function is called when the table cell is double-clicked. The function creates input fields for editing notes with the Save and Cancel buttons. The cancelEditNotes() and saveCountryNotes() functions cancel the editing of new notes, or saves them by calling the CountryDB.saveCountryNotes() Java method. The following screenshot shows what the sample looks like with the populated table: Now that we have added necessary functions to the web page we can test the application. Testing Tables and Lists The application should be ready for testing if we have had the test environment running during development. Eclipse automatically deploys our new code to the server whenever something changes. So we can go right away to the test page http://127.0.0.1:8080/DWREasyAjax. On clicking Tables and lists we can see the page we have developed. By selecting some letter, for example "I" we get a list of all the countries that start with letter "I" (as shown in the previous screenshot). Now we can add notes to countries. We can double-click any table cell under Notes. For example, if we want to enter notes to Iceland, we double-click the Notes cell in Iceland's table row, and we get the edit box for the notes as shown in the following screenshot: The edit box is a simple text input field. We didn't use any forms. Saving and canceling editing is done using JavaScript and DWR. If we press Cancel, we get the original notes from the CountryDB Java class using DWR and saving also uses DWR to save data. CountryDB.saveCountryNotes() takes the country code and the notes that the user entered in the edit box and saves them to the CSV file. 
When notes are available, the application will show them in the country table together with other country information as shown in the following screenshot: Afterword The sample in this section uses DWR features to get data for the table and list from the server. We developed the application so that most of the application logic is written in JavaScript and Java beans that are remoted. In principle, the application logic can be thought of as being fully browser based, with some extensions in the server. Implementing Field Completion Nowadays, field completion is typical of many web pages. A typical use case is getting a stock quote, and field completion shows matching symbols as users type letters. Many Internet sites use this feature. Our sample here is a simple license text finder. We enter the license name in the input text field, and we use DWR to show the license names that start with the typed text. A list of possible completions is shown below the input field. The following is a screenshot of the field completion in action: Selected license content is shown in an iframe element from http://www.opensource.org. Server Code for Field Completion We will re-use some of the classes we developed in the last section. AppContent is used to load the sample page, and the DBUtils class is used in the LicenseDB class. 
The LicenseDB class is shown here:

package samples;

import java.util.List;
import java.util.Vector;

public class LicenseDB {
  private DBUtils dbUtils = new DBUtils();

  public LicenseDB() {
    dbUtils.initFileDB("licenses.csv");
  }

  public List<String> getLicensesStartingWith(String startLetters) {
    List<String> list = new Vector<String>();
    List<String> licenses = dbUtils.getCSVStrings(startLetters);
    for (String license : licenses) {
      list.add(license.split(",")[0]);
    }
    return list;
  }

  public String getLicenseContentUrl(String licenseName) {
    List<String> licenses = dbUtils.getCSVStrings(licenseName);
    if (licenses.size() > 0) {
      return licenses.get(0).split(",")[1];
    }
    return "";
  }
}

The getLicensesStartingWith() method goes through the license data and returns the names of the licenses that start with the given letters, while getLicenseContentUrl() returns the URL for a given license name. Similar to the data in the previous section, license data is in a CSV file named licenses.csv in the package directory. The following is an excerpt of the file content:

Academic Free License, http://opensource.org/licenses/afl-3.0.php
Adaptive Public License, http://opensource.org/licenses/apl1.0.php
Apache Software License, http://opensource.org/licenses/apachepl-1.1.php
Apache License, http://opensource.org/licenses/apache2.0.php
Apple Public Source License, http://opensource.org/licenses/apsl-2.0.php
Artistic license, http://opensource.org/licenses/artistic-license-1.0.php
...

There are quite a few open-source licenses. Some are more popular than others (like the Apache Software License) and some cannot be re-used (like the IBM Public License). We want to remote the LicenseDB class, so we add the following to the dwr.xml file:

<create creator="new" javascript="LicenseDB">
  <param name="class" value="samples.LicenseDB"/>
  <include method="getLicensesStartingWith"/>
  <include method="getLicenseContentUrl"/>
</create>

Client Code for Field Completion

The following script element will go in the index.jsp page.
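The contract the client code relies on is simple prefix matching. This Python sketch (hypothetical names, not taken from the article's code) shows the behaviour the browser-side functions expect: several matches keep the popup open, while exactly one match triggers autocompletion of the edit box:

```python
def complete(licenses, typed):
    """Return the license names matching the typed prefix; the caller
    autocompletes the edit box when exactly one name remains."""
    return [name for name in licenses if name.startswith(typed)]

licenses = ["Apache License", "Apache Software License", "Artistic license"]
print(complete(licenses, "Apache"))   # two matches: keep the popup open
print(complete(licenses, "Artistic")) # one match: fill the edit box with it
```

This is the same filter getLicensesStartingWith() performs on the server; DWR simply moves the matching to the server side so the full license list never has to be shipped to the browser.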
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/LicenseDB.js'></script>

The HTML for the field completion is as follows:

<h3>Field completion</h3>
<p>Enter Open Source license name to see its contents.</p>
<input type="text" id="licenseNameEditBox" value="" onkeyup="showPopupMenu()" size="40"/>
<input type="button" id="showLicenseTextButton" value="Show license text" onclick="showLicenseText()"/>
<div id="completionMenuPopup"></div>
<div id="licenseContent"></div>

The input element, where we enter the license name, listens to the onkeyup event, which calls the showPopupMenu() JavaScript function. Clicking the Show license text button calls the showLicenseText() function (the JavaScript functions are explained shortly). Finally, the two div elements are placeholders for the pop-up menu and the iframe element that shows license content.

For the pop-up box functionality, we use existing code and modify it for our purpose (many thanks to http://www.jtricks.com). The following is the popup.js file, which is located under the WebContent | js directory.

//<script type="text/javascript"><!--
/* Original script by: www.jtricks.com
 * Version: 20070301
 * Latest version:
 * www.jtricks.com/javascript/window/box.html
 *
 * Modified by Sami Salkosuo.
 */

// Moves the box object to be directly beneath an object.
function move_box(an, box)
{
    var cleft = 0;
    var ctop = 0;
    var obj = an;
    while (obj.offsetParent)
    {
        cleft += obj.offsetLeft;
        ctop += obj.offsetTop;
        obj = obj.offsetParent;
    }
    box.style.left = cleft + 'px';
    ctop += an.offsetHeight + 8;
    // Handle Internet Explorer body margins,
    // which affect normal document, but not
    // absolute-positioned stuff.
    if (document.body.currentStyle &&
        document.body.currentStyle['marginTop'])
    {
        ctop += parseInt(
            document.body.currentStyle['marginTop']);
    }
    box.style.top = ctop + 'px';
}

var popupMenuInitialised = false;

// Shows a box if it wasn't shown yet or is hidden
// or hides it if it is currently shown
function show_box(html, width, height, borderStyle, id)
{
    // Create box object through DOM
    var boxdiv = document.getElementById(id);
    boxdiv.style.display = 'block';
    if (popupMenuInitialised == false)
    {
        //boxdiv = document.createElement('div');
        boxdiv.setAttribute('id', id);
        boxdiv.style.display = 'block';
        boxdiv.style.position = 'absolute';
        boxdiv.style.width = width + 'px';
        boxdiv.style.height = height + 'px';
        boxdiv.style.border = borderStyle;
        boxdiv.style.textAlign = 'right';
        boxdiv.style.padding = '4px';
        boxdiv.style.background = '#FFFFFF';
        boxdiv.style.zIndex = '99';
        popupMenuInitialised = true;
        //document.body.appendChild(boxdiv);
    }
    var contentId = id + 'Content';
    var contents = document.getElementById(contentId);
    if (contents == null)
    {
        contents = document.createElement('div');
        contents.setAttribute('id', id + 'Content');
        contents.style.textAlign = 'left';
        boxdiv.contents = contents;
        boxdiv.appendChild(contents);
    }
    move_box(html, boxdiv);
    contents.innerHTML = html;
    return false;
}

function hide_box(id)
{
    document.getElementById(id).style.display = 'none';
    var boxdiv = document.getElementById(id + 'Content');
    if (boxdiv != null)
    {
        boxdiv.parentNode.removeChild(boxdiv);
    }
    return false;
}
//--></script>

Functions in the popup.js file are used to show menu options directly below the edit box. The show_box() function takes the following arguments: the HTML code for the pop-up, the width, height, and border style of the pop-up window, and the id of the pop-up element. The function then creates a pop-up window using DOM. The move_box() function is used to move the pop-up window to its correct place under the edit box, and the hide_box() function hides the pop-up window by removing the pop-up content from the DOM tree.

In order to use the functions in popup.js, we need to add the following script element to the index.jsp file:

<script type='text/javascript' src='js/popup.js'></script>

Our own JavaScript code for the field completion is in the index.jsp file. The following are the JavaScript functions, and an explanation follows the code:

function showPopupMenu()
{
    var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
    var startLetters = licenseNameEditBox.value;
    LicenseDB.getLicensesStartingWith(startLetters, {
        callback: function(licenses)
        {
            var html = "";
            if (licenses.length == 0)
            {
                return;
            }
            if (licenses.length == 1)
            {
                hidePopupMenu();
                licenseNameEditBox.value = licenses[0];
            }
            else
            {
                for (index in licenses)
                {
                    var licenseName = licenses[index];//.split(",")[0];
                    licenseName = licenseName.replace(/"/g, "&quot;");
                    html += "<div style=\"border:1px solid #777777;margin-bottom:5;\" onclick=\"completeEditBox('" + licenseName + "');\">" + licenseName + "</div>";
                }
                show_box(html, 200, 270, '1px solid', 'completionMenuPopup');
            }
        }
    });
}

function hidePopupMenu()
{
    hide_box('completionMenuPopup');
}

function completeEditBox(licenseName)
{
    var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
    licenseNameEditBox.value = licenseName;
    hidePopupMenu();
    dwr.util.byId('showLicenseTextButton').focus();
}

function showLicenseText()
{
    var licenseNameEditBox = dwr.util.byId('licenseNameEditBox');
    var licenseName = licenseNameEditBox.value;
    LicenseDB.getLicenseContentUrl(licenseName, {
        callback: function(licenseUrl)
        {
            var html = '<iframe src="' + licenseUrl + '" width="100%" height="600"></iframe>';
            var content = dwr.util.byId('licenseContent');
            content.style.zIndex = "1";
            content.innerHTML = html;
        }
    });
}

The showPopupMenu() function is called each time a user enters a letter in the input box. The function gets the value of the input field and calls the LicenseDB.getLicensesStartingWith() method. The callback function is specified in the function parameters.
The callback function receives all the licenses that match the entered prefix and, based on the length of the returned array, either shows a pop-up box with all the matching license names or, if the array length is one, hides the pop-up box and inserts the full license name in the text field. In the pop-up box, each license name is wrapped in a div element with an onclick event listener that calls the completeEditBox() function. The hidePopupMenu() function just closes the pop-up menu, and the completeEditBox() function inserts the clicked license name in the input box and moves the focus to the button.

The showLicenseText() function is called when we click the Show license text button. It calls the LicenseDB.getLicenseContentUrl() method, and the callback function creates an iframe element to show the license content directly from http://www.opensource.org, as shown in the following screenshot:

Afterword

Field completion improves the user experience in web pages, and the sample code in this section showed one way of doing it using DWR. It should be noted that the field completion sample presented here is only for demonstration purposes.
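The server-side half of this completion is just a prefix filter over the license names. As a rough sketch of what a method like LicenseDB.getLicensesStartingWith() could do (the class name comes from the article, but the implementation below is an assumption, written as plain Java rather than as a DWR-exposed bean):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

class LicensePrefixFilter {
    // Returns every license name that starts with the given prefix,
    // ignoring case -- the kind of array the DWR callback receives.
    public static List<String> getLicensesStartingWith(List<String> licenses, String prefix) {
        List<String> matches = new ArrayList<>();
        String p = prefix.toLowerCase(Locale.ROOT);
        for (String name : licenses) {
            if (name.toLowerCase(Locale.ROOT).startsWith(p)) {
                matches.add(name);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> db = List.of("Apache License 2.0", "Artistic License", "BSD License");
        System.out.println(getLicensesStartingWith(db, "a"));   // two matches -> pop-up menu shown
        System.out.println(getLicensesStartingWith(db, "bsd")); // one match -> edit box completed
    }
}
```

When the returned list has more than one entry, the client builds the pop-up menu; when it has exactly one, the edit box is completed directly, mirroring the branch in showPopupMenu().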
Packt
23 Oct 2009
10 min read

Tapestry 5 Advanced Components

Following are some of the components we'll examine in Tapestry 5:

The Grid component allows us to display different data in a fairly sophisticated table. We are going to use it to display our collection of celebrities.
The BeanEditForm component greatly simplifies creating forms for accepting user input. We shall use it for adding a new celebrity to our collection.
The DateField component provides an easy and attractive way to enter or edit a date.
The FCKEditor component is a rich text editor, and it is as easy to incorporate into a Tapestry 5 web application as a basic TextField. This is a third-party component, and the main point here is to show that using a library of custom components in a Tapestry 5 application requires no extra effort. It is likely that a similar core component will appear in a future version of the framework.

Grid Component

It is possible to display our collection of celebrities with the help of the Loop component. It isn't difficult, and in many cases, that will be exactly the solution you need for the task at hand. But, as the number of displayed items grows (our collection grows), different problems may arise. We might not want to display the whole collection on one page, so we'll need some kind of pagination mechanism and some controls to enable navigation from page to page. Also, it would be convenient to be able to sort celebrities by first name, last name, occupation, and so on. All this can be achieved by adding more controls and more code to finally achieve the result that we want, but a table with pagination and sortable columns is a very common part of a user interface, and recreating it each time wouldn't be efficient. Thankfully, the Grid component brings with it plenty of ready-to-use functionality, and it is very easy to deal with.
Open the ShowAll.tml template in an IDE of your choice and remove the Loop component and all its content, together with the surrounding table:

<table width="100%">
    <tr t:type="loop" t:source="allCelebrities" t:value="celebrity">
        <td>
            <a href="#" t:type="PageLink" t:page="Details" t:context="celebrity.id">
                ${celebrity.lastName}
            </a>
        </td>
        <td>${celebrity.firstName}</td>
        <td>
            <t:output t:format="dateFormat" t:value="celebrity.dateOfBirth"/>
        </td>
        <td>${celebrity.occupation}</td>
    </tr>
</table>

In place of this code, add the following line:

<t:grid t:source="allCelebrities"/>

Run the application, log in to be able to view the collection, and you should see the following result:

Quite an impressive result for a single short line of code, isn't it? Not only are our celebrities now displayed in a neatly formatted table, but also, we can sort the collection by clicking on the columns' headers. Also note that occupation now has only the first character capitalized—much better than the fully capitalized version we had before. Here, we see the results of some clever guesses on Tapestry's side.

The only required parameter of the Grid component is source, the same as the required parameter of the Loop component. Through this parameter, Grid receives a number of objects of the same class. It takes the first object of this collection and finds out its properties. It tries to create a column for each property, transforming the property's name for the column's header (for example, the lastName property name gives the Last Name column header), and makes some additional sensible adjustments, like changing the case of the occupation property values in our example. All this is quite impressive, but the table, as it is displayed now, has a number of deficiencies:

All celebrities are displayed on one page, while we wanted to see how pagination works. This is because the default number of records per page for the Grid component is 25—more than we have in our collection at the moment.
The last name of the celebrities does not provide a link to the Details page anymore.
It doesn't make sense to show the Id column.
The order of the columns is wrong. It would be more sensible to have the Last Name in the first column, then First Name, and finally the Date of Birth. By default, to define the display order of columns in the table, Tapestry will use the order in which getter methods are defined in the displayed class. In the Celebrity class, the getFirstName method is the first of the getters, and so the First Name column will go first, and so on.

There are also some other issues we might want to take care of, but let's first deal with these four.

Tweaking the Grid

First of all, let's change the number of records per page. Just add the following parameter to the component's declaration:

<t:grid t:source="allCelebrities" rowsPerPage="5"/>

Run the application, and here is what you should see:

You can now easily page through the records using the attractive pager control that appeared at the bottom of the table. If you would rather have the pager at the top, add another parameter to the Grid declaration:

<t:grid t:source="allCelebrities" rowsPerPage="5" pagerPosition="top"/>

You can even have two pagers, at the top and at the bottom, by specifying pagerPosition="both", or no pagers at all (pagerPosition="none"). In the latter case, however, you will have to provide some custom way of paging through records.

The next enhancement will be a link surrounding the celebrity's last name, linking to the Details page. We'll be adding a PageLink and will need to know which celebrity to link to, so we'll have the Grid store the current row's object in a page property, using the row parameter. This is how the Grid declaration will look:

<t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity"/>

As for the page class, we already have the celebrity property in it. It should have been left from our experiments with the Loop component.
It will also be used in exactly the same way as with Loop: while iterating through the objects provided by its source parameter, Grid will assign the object used to display the current row to the celebrity property. The next thing to do is to tell Tapestry that when it comes to the contents of the Last Name column, we do not want Grid to display it in the default way. Instead, we shall provide our own way of displaying the cells of the table that contain the last name. Here is how we do this:

<t:grid t:source="allCelebrities" rowsPerPage="5" row="celebrity">
    <t:parameter name="lastNameCell">
        <t:pagelink t:page="details" t:context="celebrity.id">
            ${celebrity.lastName}
        </t:pagelink>
    </t:parameter>
</t:grid>

Here, the Grid component contains a special Tapestry element, <t:parameter>, similar to the one that we used in the previous chapter, inside the If component. As before, it serves to provide alternative content to display, in this case, the content which will fill in the cells of the Last Name column. How does Tapestry know this? By the name of the element, lastNameCell. The first part of this name, lastName, is the name of one of the properties of the displayed objects. The last part, Cell, tells Tapestry that it is about the content of the table cells displaying the specified property. Finally, inside <t:parameter>, you can see an expansion displaying the name of the current celebrity, surrounded by the PageLink component that has for its context the ID of the current celebrity.

Run the application, and you should see that we have achieved what we wanted:

Click on the last name of a celebrity, and you should see the Details page with the appropriate details on it. All that is left now is to remove the unwanted Id column and to change the order of the remaining columns. For this, we'll use two properties of the Grid—remove and reorder.
Modify the component's definition in the page template to look like this:

<t:grid t:source="celebritySource" rowsPerPage="5" row="celebrity"
        remove="id" reorder="lastName,firstName,occupation,dateOfBirth">
    <t:parameter name="lastNameCell">
        <t:pagelink t:page="details" t:context="celebrity.id">
            ${celebrity.lastName}
        </t:pagelink>
    </t:parameter>
</t:grid>

Please note that re-ordering doesn't delete columns. If you omit some columns while specifying their order, they will simply end up last in the table. Now, if you run the application, you should see that the table with a collection of celebrities is displayed exactly as we wanted:

Changing the Column Titles

Column titles are currently generated by Tapestry automatically. What if we want to have different titles? Say we want to have the title, Birth Date, instead of Date Of Birth. The easiest and the most efficient way to do this is to use the message catalog, the same one that we used while working with the Select component in the previous chapter. Add the following line to the app.properties file:

dateOfBirth-label=Birth Date

Run the application, and you will see that the column title has changed appropriately. This way, appending -label to the name of the property displayed by the column, you can create the key for a message catalog entry, and thus change the title of any column.

Now you should be able to adjust the Grid component to most of the possible requirements and to display many different kinds of objects with its help. However, one scenario can still raise a problem. Add an output statement to the getAllCelebrities method in the ShowAll page class, like this:

public List<Celebrity> getAllCelebrities()
{
    System.out.println("Getting all celebrities...");
    return dataSource.getAllCelebrities();
}

The purpose of this is simply to be aware when the method is called. Run the application, log in, and as soon as the table with celebrities is shown, you will see the output, as follows:

Getting all celebrities...
The Grid component has the allCelebrities property defined as its source, so it invokes the getAllCelebrities method to obtain the content to display. Note however that Grid, after invoking this method, receives a list containing all 15 celebrities in collection, but displays only the first five. Click on the pager to view the second page—the same output will appear again. Grid requested for the whole collection again, and this time displayed only the second portion of five celebrities from it. Whenever we view another page, the whole collection is requested from the data source, but only one page of data is displayed. This is not too efficient but works for our purpose. Imagine, however, that our collection contains as many as 10,000 celebrities, and it's stored in a remote database. Requesting for the whole collection would put a lot of strain on our resources, especially if we are going to have 2,000 pages. We need to have the ability to request the celebrities, page-by-page—only the first five for the first page, only the second five for the second page and so on. This ability is supported by Tapestry. All we need to do is to provide an implementation of the GridDataSource interface. Here is a somewhat simplified example of such an implementation.
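The excerpt ends before that listing, so as a stand-in, the following plain-Java sketch shows only the paging arithmetic such a data source performs (it is not the actual Tapestry GridDataSource interface): given the full collection, a page index, and rowsPerPage, it returns just the rows for that page.

```java
import java.util.List;

class PagedSource {
    // Returns only the rows for one page -- e.g. page index 1 of a
    // 15-item collection with rowsPerPage=5 yields items 6..10.
    public static <T> List<T> page(List<T> all, int pageIndex, int rowsPerPage) {
        int from = pageIndex * rowsPerPage;
        int to = Math.min(from + rowsPerPage, all.size());
        if (from >= to) {
            return List.of(); // page index beyond the data
        }
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
        System.out.println(page(ids, 0, 5)); // [1, 2, 3, 4, 5]
        System.out.println(page(ids, 1, 5)); // [6, 7, 8, 9, 10]
    }
}
```

A real GridDataSource would push this from/to range into the database query (for example, with LIMIT and OFFSET), so that only one page of celebrities ever crosses the wire.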
Packt
31 Jul 2013
4 min read

Setting up environment for Cucumber BDD Rails

(For more resources related to this topic, see here.)

Getting ready

This article will focus on how to use Cucumber in daily BDD development on the Ruby on Rails platform. Please install the following software to get started:

Ruby Version Manager
Version 1.9.3 of Ruby
Version 3.2 of Rails
The latest version of Cucumber
A handy text editor; Vim or Sublime Text

How to do it...

To install RVM, bundler, and Rails we need to complete the following steps:

Install RVM (read the latest installation guide from http://rvm.io).

$ curl -L https://get.rvm.io | bash -s stable --ruby

Install the latest version of Ruby as follows:

$ rvm install ruby-1.9.3

Install bundler as follows:

$ gem install bundler

Install the latest version of Rails as follows:

$ gem install rails

Cucumber is a Ruby gem. To install it, we can run the following command in the terminal:

$ gem install cucumber

If you are using bundler in your project, you need to add the following line to your Gemfile:

gem 'cucumber'

How it works...

Cucumber contains two parts: features and step definitions. They are explained in the following section. We will have to go through the following files to see how this recipe works:

Feature files (their extension is .feature): Each feature is captured as a "story", which defines the scope of the feature along with its acceptance criteria. A feature contains a feature title and a description of one or more scenarios. One scenario contains describing steps.

Feature: A unique feature title within the project scope with a description. Its format is as follows:

Feature: <feature title>
<feature description>

Scenario: This elaborates how the feature ought to behave. Its format is as follows:

Scenario: <Scenario short description>
Given <some initial context>
When <an event occurs>
Then <ensure some outcomes>

Step definition files: A step definition is essentially a block of code associated with one or more steps by a regular expression (or, in simple cases, an exact equivalent string).
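That regular-expression association can be illustrated outside Cucumber with a small sketch. The code below is plain Java, not Cucumber's real API; the step pattern and the captured argument are invented for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class StepMatcher {
    // A definition comparable to: Given /^I log in as "([^"]*)"$/
    static final Pattern LOGIN_STEP = Pattern.compile("^I log in as \"([^\"]*)\"$");

    // Returns the captured argument if the step text matches the
    // definition's pattern, or null -- which Cucumber would report
    // as an undefined step.
    public static String match(String stepText) {
        Matcher m = LOGIN_STEP.matcher(stepText);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(match("I log in as \"wayne\"")); // wayne
        System.out.println(match("I log out"));             // null -> undefined step
    }
}
```

The engine performs this kind of match for every step line, passing the captured groups to the step's code block as arguments.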
Given "I log into system through login page" do
  visit login_page
  fill_in "User name", :with => "wayne"
  fill_in "Password", :with => "123456"
  click_button "Login"
end

When running a Cucumber feature, each step in the feature file is like a method invocation targeting the related step definition. Each step definition is like a Ruby method that takes one or more arguments (the arguments are interpreted and captured by the Cucumber engine and passed to the step method; this is essentially done by regular expression). The engine reads the feature steps and tries to find the step definitions one by one. If all the steps match and are executed without any exceptions thrown, then the result will be passed; otherwise, if one or more exceptions are thrown during the run, the exception can be one of the following:

Cucumber::Undefined: The step was undefined
Cucumber::Pending: The step was defined but is pending implementation
Ruby runtime exception: Any kind of exception thrown during step execution

Similarly to other unit-testing frameworks, Cucumber runs will either pass or fail depending on whether or not exceptions are thrown, whereas the difference is that, according to the different types of exceptions, running a Cucumber feature could have one of the following four results:

Passed
Pending
Undefined
Failed

The following figure demonstrates the flow chart of running a Cucumber feature:

There's more...

Cucumber is not only for Rails, and Cucumber features can be written in many languages other than English.

Cucumber in other languages/platforms

Cucumber is now available on many platforms. The following is a list of a number of popular ones:

JVM: Cucumber-JVM
.NET: SpecFlow
Python: RubyPython, Lettuce
PHP: Behat
Erlang: Cucumberl

Cucumber in your mother language

We can actually write Gherkin in languages other than English too, which is very important because domain experts might not speak English. Cucumber now supports 37 different languages.
There are many great resources online for learning Cucumber:

The Cucumber home page: http://cukes.info/
The Cucumber project on GitHub: https://github.com/cucumber/cucumber
The Cucumber entry on Wikipedia: http://en.wikipedia.org/wiki/Cucumber_(software)
The Cucumber backgrounder: https://github.com/cucumber/cucumber/wiki/Cucumber-Backgrounder

Summary

In this article we saw what Cucumber is, how to use Cucumber in daily BDD development on Ruby on Rails, how to install RVM, bundler, and Rails, how to run a Cucumber feature, and how Cucumber is available on different languages and platforms.

Resources for Article:

Further resources on this subject:

Introducing RubyMotion and the Hello World app [Article]
Building tiny Web-applications in Ruby using Sinatra [Article]
Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article]
Packt
30 Aug 2010
3 min read

Microsoft LightSwitch Application using SQL Azure Database

(For more resources on Microsoft, see here.)

Your computer has to satisfy the system requirements, which you can look up at the product site (while downloading), and you should have an account on Microsoft Windows Azure Services. Although this article retrieves data from SQL Azure, you can retrieve a database from a local server or other data sources as well. However, LightSwitch is presently limited to SQL Server databases. The article content was developed using Microsoft LightSwitch Beta 1 and a SQL Azure database, on an Acer 4810TZ-4011 notebook with Windows 7 Ultimate OS.

Installing Microsoft LightSwitch

The LightSwitch beta is now available at the following site; the file name is vs_vslsweb.exe: http://www.microsoft.com/visualstudio/en-us/lightswitch

When you download and install the program, you may run into the problem of some requirement not being present. While installing the program for this article there was an initial problem: Microsoft LightSwitch requires Microsoft SQL Server Compact 3.5 SP2. Although this was already present on the computer, the installer did not recognize it. In addition to SP2, Microsoft SQL Server Compact 3.5 SP1 as well as SQL Server Compact 4.0 were also present. After removing Microsoft SQL Server Compact SP1 and SP2 and then installing SQL Server Compact 3.5 SP2 again, the program installed without further problems. Please review this link (http://hodentek.blogspot.com/2010/08/are-you-ready-to-see-light-with.html) for more detailed information. The next image shows the Compact products presently installed on this machine.

Creating a LightSwitch Program

After installation, you may not find a shortcut that displays an icon for Microsoft LightSwitch, but you may find a Visual Studio 2010 shortcut, as shown. Visual Studio 2010 Express is a different product, which is free to install; you cannot create a LightSwitch application with Visual Studio 2010 Express. Click on Microsoft Visual Studio 2010, shown highlighted.
This opens the program with a splash screen. After a while, the user interface displays the Start Page as shown. You can have more than one instance open at a time. The Recent Projects list is a catalog of all projects in the Visual Studio 2010 default project directory. Just as you cannot develop a LightSwitch application with VS 2010 Express, you cannot open a project developed in VS 2010 Express with the LightSwitch interface; you will encounter the message shown. This means that LightSwitch projects are isolated in the development environment, although the same shell program is used.

When you click File | New Project, you will see the New Project window displayed as shown here. Make sure you set the target to .NET Framework 4.0; otherwise you may not see any projects. It is strictly .NET Framework 4.0 for now. Also, trying to create File | New web site will not show any templates, no matter which .NET Framework you have chosen. In order to see Team Project, you must have a Team Foundation Server present.

In what follows, we will be creating a LightSwitch application (the default name is Application1 for both C# and VB). From what is displayed, you will see more Silverlight project templates than LightSwitch project templates. In fact, you have just one template, either in C# or in VB. Highlight LightSwitch Application (VB) and change the default name from Application1 to something different. Herein it is named SwitchOn, as shown. If you were to look at the project properties in the property window, you will see that the filename of the project is SwitchOn.lsproj. This file type is used exclusively by LightSwitch. The folder structure of the project is deceptively simple, consisting of Data Sources and Screens.
Packt
23 Nov 2011
11 min read

Introducing Sametime 8.5.2

(For more resources on IBM Sametime, see here.)

What's new in Sametime 8.5.2

IBM Sametime 8.5 and 8.5.2 introduce many new capabilities to the Sametime product suite. In addition to the numerous features already included with the Sametime 8.x family of clients, Sametime 8.5.2 has extended client usability and collaboration. Let us take a look at a few of those enhancements:

Sametime Connect Client software is now supported on Microsoft Windows 7.0, Apple Macintosh 10.6, and Linux desktop operating systems including Red Hat Enterprise Desktop (RHED), Ubuntu, and SUSE Linux Enterprise Desktop (SLED)
A lightweight browser-based client that requires no additional downloads is available for instant messaging for Apple iPhone and iPad users
A browser-based client is available for Sametime meetings
Sametime Mobile Client support has been added for Android devices (OS 2.0 and higher), Blackberry 5.0 and 6.0 devices, and Microsoft Mobile 6.5 devices
Rich text messaging is now available for chats with users connected through the Sametime Gateway

If you deployed Sametime Standard in a previous release, or are interested in the online meeting conferencing features of Sametime 8.5.2, then you and your users will be happy to know that meeting attendees can now attend online meetings "instantly", without having to load any additional software in their browser. Meetings start quickly and are retained for future use.

Probably the most significant change for you as a Sametime administrator is the introduction of IBM WebSphere Application Server (WAS) as an application hosting platform for Sametime. In previous versions of Sametime, with the exception of the Sametime Advanced and Sametime Gateway features, the Sametime server was deployed on Lotus Domino servers. If you know how to install and manage a Lotus Domino server, then you will most likely be the same individual who will manage a Sametime server, as the skill sets are similar.
But with the addition of WAS comes flexibility in server architecture. As an administrator, you have the ability to choose features and configure servers based on your organization's unique needs. The linkage between Domino and Sametime still exists through the Sametime Community Server. So not only can Sametime be sized appropriately for the needs of your organization, it can also run on multiple operating systems and servers as per your requirements. Some highlights include:

With the release of Sametime 8.5.2, Lotus Domino 8.5.2 is now supported.
A Sametime Proxy Server has been introduced as a component of the Sametime server architecture. The Sametime Proxy Server hosts the lightweight browser-based Sametime client. It runs on WAS and is different from the WAS Proxy Server.
The Media Manager Server is another new Sametime server component. This server manages conferences using the Session Initiation Protocol (SIP) to support point-to-point and multi-point calls, and integrates into the Sametime environment through your Community Server.
Sametime 8.5.2 introduces support for standard audio and video codecs for improved integration in the Sametime client and the Sametime Meeting Center. This allows for interoperability with third-party conferencing systems.
The Traversal Using Relay NAT (TURN) server is a Java program that runs in conjunction with the Media Manager Server and behaves as a reflector, routing audio and video traffic between clients on different networks. The technology used by this Network Address Translation (NAT) traversal server (ICE) uses both the TURN and Session Traversal Utilities for NAT (STUN) protocols and behaves similarly to the Sametime reflector service that was part of earlier versions of Sametime.
Improved network performance and support for IPv6 networking.
A new central administration console called the Sametime System Console (SSC) for managing Sametime server and configuration resources from a consolidated web interface.
Sametime Bandwidth Manager is a new optional WAS-based Sametime server component that allows you to create rules and policies that determine the use of audio and video within Sametime. The Bandwidth Manager monitors Sametime traffic and uses your rules to dynamically select the codec and quality of video streams as calls are initiated by users.

Whether you are new to Sametime or a long-time Sametime administrator, our aim is to guide you through the planning, installation, management, and troubleshooting steps so that you can successfully implement and support Sametime 8.5.2 in your environment.

Sametime 8.5.2 server architecture

As we have described briefly, the server architecture for Sametime 8.5.2 has changed significantly from previous versions. Prior to this version, Sametime was a single-server installation and ran as an add-in task under a Domino server. It provided both instant messaging and web conferencing features combined into a single server. Although there was a license model that only installed and enabled the instant messaging features (Sametime Entry), the installer was the same if you wanted to include web conferencing functionality as well.

The new architecture still includes a Domino-based component, but the Domino server is intended strictly for instant messaging and awareness. All other Sametime functionality has been re-engineered into separate server components running on top of the WAS platform. By moving all but the instant messaging and awareness services from Domino onto WebSphere, IBM has constructed an environment better suited to the needs of enterprise customers who have a high demand for services that require significant non-Domino resources such as audio, video, and web conferencing. Additionally, the new architecture of Sametime 8.5.2 is about enhancing the client experience, dramatically improving performance, and bringing the technology in line with modern audio, video, and browser standards.
Let us begin by taking a look at the new server components and learning about their role and function.

Sametime System Console

Core to the entire Sametime multi-server architecture is the management interface, which runs as a WebSphere application. It is called the Sametime System Console (SSC). The SSC actually plugs into the standard WAS 7.x menu as an additional option. The SSC provides the configuration and management tools needed to work with all the other Sametime components, including the Domino-based Instant Messaging server. It also comes with a series of step-by-step guides, called Sametime Guided Activities, to walk you through the installation of each server component in the proper sequence. The SSC also has a Sametime Servers section that allows you to manage the Sametime servers. The SSC installs as an add-in to WAS and is accessed through a browser on its own dedicated port. It also uses a custom DB2 database named STSC for storage of its management information.

Sametime Community Server

The Sametime Community Server is the instant messaging and presence awareness component of Sametime, which is installed as an add-in task for Domino. It must be installed on Domino version 8.5 or 8.5.1, but it can work with earlier versions of Sametime already installed in your environment. Keep in mind, however, that pre-8.5.x clients will not benefit from many of the new features provided by your Sametime 8.5.2 servers. If your requirement is solely for instant messaging, then this is the only component you will need installed alongside Domino itself. The Sametime Community Server "standard" install also includes the original Domino-based Meeting Center. This browser-based component has not been updated in any way from pre-8.5.x versions and is there purely for backwards compatibility and to maintain any existing scheduled meetings. There is no integration or interaction between the Domino-based Meeting Center and the Sametime 8.5.2 Meeting Center(s).
Other than being updated to run on top of a Domino 8.5 or 8.5.1 server, the actual Community Server component has changed very little and includes no significant new features compared with previous versions. Its browser administration interface and options remain the same. However, if you have deployed the SSC, the native Domino administration is overridden. Following is a chart of the Sametime Community Server infrastructure. Note the optional management of the server by the SSC. Although the use of Domino as a directory is still supported, it is highly recommended that you deploy Sametime using a Lightweight Directory Access Protocol (LDAP) directory. If you will be deploying other Sametime 8.5.2 components, then your deployment will usually require an LDAP directory to be used.

Sametime Meeting Server

The Sametime Meeting Server has been completely re-engineered to bring it up to the standards of modern web conferencing solutions. It is also better aligned with IBM's Sametime Unyte online service. The new Sametime Meeting Server (versus the Domino-based Meeting Center) runs as an application under WAS. In addition, as it requires a data store to hold meeting information, it utilizes a dedicated DB2 database for managing the content of each meeting room. The previous Sametime meeting client was entirely browser-based. To improve performance and functionality for 8.5.2, a rich meeting center client has been introduced, which plugs into the Sametime Eclipse environment. A browser interface for meetings is still available, but it provides a reduced set of functions.

Sametime Proxy Server

The Sametime Proxy Server re-introduces a lightweight browser-based client for Sametime, which has not been available in versions shipped since 6.5. The new browser client is designed to be lightweight and fully customizable; it is based on Ajax technology and themed using CSS. This allows it to launch quickly and be customized to match your organization's design.
The Proxy Server installs as an application under WAS, although it has no data store of its own and does not require any database connectivity. In the configuration for the Proxy Server, you direct it to a specific Community Server to supply the Sametime services. The following diagram gives a brief overview.

The Proxy Server ships with a default client designed as a JavaServer Page, which can be modified using customizable style sheets. It gives a feature-rich Sametime experience including multi-way chats, browser-based meetings, and privacy settings.

Sametime Media Manager

The Sametime Media Manager takes on the role of providing audio and video services both for peer-to-peer VoIP and video chats between Sametime clients, and for web conferencing within the meeting rooms in the new meeting center. It is designed to provide services for multiple Meeting Servers and, through them, for instant meetings from the Sametime client. Installed on a WAS platform, it has no need for a data store and does not require any database connectivity. The Media Manager is designed to provide a multi-way audio and video conferencing experience using modern codecs; however, it does not support Sametime clients in versions prior to 8.5.2. It is the audio and video "glue" that connects all the other Sametime server elements in 8.5.2.

Sametime TURN Server

In its default configuration, the Media Manager creates a SIP connection from itself to the requesting client. However, where the client is not on the same network as the Media Manager, no SIP connection can be made directly. To address this issue, which affects users outside of your firewall as well as those on different internal networks, IBM has introduced the TURN Server with Sametime 8.5.2. The TURN Server uses both the TURN and STUN protocols to create a connection with the client. It routes audio and video traffic between itself and the Media Manager, allowing connections between clients across networks.
The technology is sometimes referred to as a reflector, and pre-8.5 versions of Sametime came with a reflector service of their own. The TURN Server is a Java program that runs in a command window on any Windows or Linux server sharing the same subnet as the Media Manager. It doesn't require WAS or any data store but runs with a separately installed IBM Java Virtual Machine (JVM).

Sametime Bandwidth Manager

The Sametime Bandwidth Manager is a new optional WAS-based component that is designed to help Sametime administrators manage the traffic generated by the Media Manager and its audio and video services. Within the Bandwidth Manager configuration, an administrator can create sites, links, and call-rate policies that define the service provided by the Media Manager. The Bandwidth Manager analyzes its rules when a new call is initiated and instructs the Media Manager on how to service that call. Among the extremely granular levels of customization available are options for sites to have link rules that constrain the traffic between them. You can also create specific policies that specify the services available to named users or groups during peak and off-peak periods. Depending upon network load, user identity, and call participation, the Bandwidth Manager can be configured to control the bandwidth. It can do this by reducing the audio to a lower codec, reducing the video frame rate, or even denying video completely, informing the user that they should retry at a later time.
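To make the policy idea concrete, here is a small sketch of the kind of decision such link rules encode. Every name and number in it (codec bit rates, link limits) is an invented illustration for this article, not the actual Bandwidth Manager rule engine:

```python
# Illustrative sketch of link-based call admission, loosely modeled on the
# behaviour described above. All codec rates and limits are invented examples.

def admit_call(link_used_kbps, link_limit_kbps, wants_video):
    """Decide how to service a new call on a bandwidth-constrained link."""
    audio_hd, audio_low = 64, 24      # hypothetical audio codec rates (kbps)
    video_kbps = 384                  # hypothetical video stream rate (kbps)

    remaining = link_limit_kbps - link_used_kbps
    if remaining >= audio_hd + (video_kbps if wants_video else 0):
        # Enough headroom: full-quality audio, video as requested.
        return {"audio_kbps": audio_hd, "video": wants_video}
    if remaining >= audio_low:
        # Degrade gracefully: drop to the lower audio codec, deny video.
        return {"audio_kbps": audio_low, "video": False}
    return None  # deny the call; the user should retry later

print(admit_call(600, 1024, wants_video=True))
```

A policy engine like the real one would layer user- and time-of-day rules on top of this, but the core trade-off (lower codec, reduced video, or outright denial) is the same.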
Packt
23 Oct 2009
19 min read

Troubleshooting Lotus Notes/Domino 7 applications

Introduction

The major topics that we'll cover in this article are:

- Testing your application (in other words, uncovering problems before your users do it for you)
- Asking the right questions when users do discover problems
- Using logging to help troubleshoot your problems

We'll also examine two important new Notes/Domino 7 features that can be critical for troubleshooting applications:

- Domino Domain Monitoring (DDM)
- Agent Profiler

For more troubleshooting issues visit: TroubleshootingWiki.org

Testing your Application

Testing an application before you roll it out to your users may sound like an obvious thing to do. However, during the life cycle of a project, testing is often not allocated adequate time or money. Proper testing should include the following:

- A meaningful amount of developer testing and bug fixing: This allows you to catch most errors, which saves time and frustration for your user community.
- User representative testing: A user representative, who is knowledgeable about the application and how users use it, can often provide more robust testing than the developer. This also provides early feedback on features.
- Pilot testing: In this phase, the product is assumed to be complete, and a pilot group uses it in production mode. This allows for limited stress testing as well as more thorough testing of the feature set.

In addition to feature testing, you should test the performance of the application. This is the most frequently skipped type of testing, because some consider it too complex and difficult. In fact, it can be difficult to test user load, but in general, it's not difficult to test data load. So, as part of any significant project, it is a good practice to programmatically create the projected number of documents that will exist within the application, one or two years after it has been fully deployed, and have a scheduled agent trigger the appropriate number of edits-per-hour during the early phases of feature testing.
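As a rough illustration, sizing that generated test data comes down to a few lines of arithmetic. The figures below are invented placeholders; substitute your own projections:

```python
# Hypothetical sizing arithmetic for a data-load test; every number here is
# an assumed example, not a recommendation from the article.

users = 400                    # projected user population
docs_per_user_per_week = 5     # assumed document creation rate
weeks = 104                    # two years of use

projected_docs = users * docs_per_user_per_week * weeks

edits_per_user_per_day = 3     # assumed edit rate
working_hours = 8              # business hours per day
edits_per_hour = users * edits_per_user_per_day // working_hours

print(projected_docs)   # documents to pre-create before feature testing
print(edits_per_hour)   # edits the scheduled test agent should trigger hourly
```

The point is not precision but scale: testing against a few hundred documents tells you nothing about how the application behaves against two hundred thousand.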
Although this will not give a perfect picture of performance, it will certainly help ascertain whether and why the time to create a new document is unacceptable (for example, because the @Db formulas are taking too long, or because the scheduled agent that runs every 15 minutes takes too long due to slow document searches).

Asking the Right Questions

Suppose that you've rolled out your application and people are using it. Then the support desk starts getting calls about a certain problem. Maybe your boss is getting an earful at meetings about sluggish performance or is hearing gripes about error messages whenever users try to click a button to perform some action. In this section, we will discuss a methodology to help you troubleshoot a problem when you don't necessarily have all the information at your disposal. We will include some specific questions that can be asked verbatim for virtually any application.

The first key to success in troubleshooting an application problem is to narrow down where and when it happens. Let's take these two very different problems suggested above (slow performance and error messages), and pose questions that might help unravel them:

- Does the problem occur when you take a specific action? If so, what is that action? Your users might say, "It's slow whenever I open the application", or "I get an error when I click this particular button in this particular form".
- Does the problem occur for everyone who does this, or just for certain people? If just certain people, what do they have in common? This is a great way to get your users to help you help them. Let them be a part of the solution, not just "messengers of doom". For example, you might ask questions such as, "Is it slow only for people in your building or your floor? Is it slow only for people accessing the application remotely? Is it slow only for people who have your particular access (for example, SalesRep)?"
- Does this problem occur all the time, at random times, or only at certain times? It's helpful to check whether or not the time of day or the day of week/month is relevant. So typical questions might be similar to the following: "Do you get this error every time you click the button or just sometimes? If just sometimes, does it give you the error during the middle of the day, but not if you click it at 7 AM when you first arrive? Do you only get the error on Mondays or some other day of the week? Do you only see the error if the document is in a certain status or has certain data in it? If it just happens for a particular document, please send me a link to that document so that I can inspect it carefully to see if there is invalid or unexpected data."

Logging

Ideally, your questions have narrowed down the type of problem it could be. So at this point, the more technical troubleshooting can start. You will likely need to gather concrete information to confirm or refine what you're hearing from the users. For example, you could put a bit of debugging code into the button that they're clicking so that it gives more informative errors, or sends you an email (or creates a log document) whenever it's clicked or whenever an error occurs. Collecting the following pieces of information might be enough to diagnose the problem very quickly:

- Time/date
- User name
- Document UNID (if the button is pushed in a document)
- Error
- Status or any other likely field that might affect your code

By looking for common denominators (such as the status of the documents in question, or access or roles of the users), you will likely be able to further narrow down the possibilities of why the problem is happening. This doesn't solve your problem of course, but it helps in advancing you a long way towards that goal. A trickier problem to troubleshoot might be one we mentioned earlier: slow performance.
Typically, after you've determined that there is some kind of performance delay, it's a good idea to first collect some server logging data. Set the following Notes.ini variables in the Server Configuration document in your Domino Directory, on the Notes.ini tab:

Log_Update=1
Log_AgentManager=1

These variables instruct the server to write output to the log.nsf database in the Miscellaneous Events view. Note that they may already be set in your environment. If not, they're fairly unobtrusive, and shouldn't trouble your administration group. Set them for a 24-hour period during a normal business week, and then examine the results to see if anything pops out as being suspicious. For view indexing, you should look for lines like these in the Miscellaneous Events (Log_Update=1):

07/01/2006 09:29:57 AM  Updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Finished updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Updating views in apps\Tracking.nsf
07/01/2006 09:30:17 AM  Finished updating views in apps\Tracking.nsf
07/01/2006 09:30:17 AM  Updating views in apps\ZooSchedule.nsf
07/01/2006 09:30:18 AM  Finished updating views in apps\ZooSchedule.nsf

And lines like these for Agent execution (Log_AgentManager=1):

06/30/2006 09:43:49 PM  AMgr: Start executing agent 'UpdateTickets' in 'apps\SalesPipeline.nsf' by Executive '1'
06/30/2006 09:43:52 PM  AMgr: Start executing agent 'ZooUpdate' in 'apps\ZooSchedule.nsf' by Executive '2'
06/30/2006 09:44:44 PM  AMgr: Start executing agent 'DirSynch' in 'apps\Tracking.nsf' by Executive '1'

Let's examine these lines to see whether or not there is anything we can glean from them. Starting with the Log_Update=1 setting, we see that it gives us the start and stop times for every database that gets indexed. We also see that the database file paths appear alphabetically.
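Pairing those "Updating views" / "Finished updating" lines is easy to script once log.nsf grows large. The sketch below is a generic log parser written for this article (not a Domino tool), fed with the sample lines shown above:

```python
import re
from datetime import datetime

# Sample Log_Update=1 output, copied from the log excerpt above.
LOG = """\
07/01/2006 09:29:57 AM  Updating views in apps\\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Finished updating views in apps\\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Updating views in apps\\Tracking.nsf
07/01/2006 09:30:17 AM  Finished updating views in apps\\Tracking.nsf
07/01/2006 09:30:17 AM  Updating views in apps\\ZooSchedule.nsf
07/01/2006 09:30:18 AM  Finished updating views in apps\\ZooSchedule.nsf
"""

LINE = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2} [AP]M)\s+"
                  r"(Updating|Finished updating) views in (\S+)")

def index_durations(log_text):
    """Return {database path: seconds spent indexing} from Log_Update=1 lines."""
    started, durations = {}, {}
    for m in LINE.finditer(log_text):
        stamp = datetime.strptime(m.group(1), "%m/%d/%Y %I:%M:%S %p")
        verb, db = m.group(2), m.group(3)
        if verb == "Updating":
            started[db] = stamp
        elif db in started:
            durations[db] = (stamp - started.pop(db)).total_seconds()
    return durations

print(index_durations(LOG))
```

Run against a full day's Miscellaneous Events export, a table like this makes the slow databases stand out immediately.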
This means that, if we search for the text string updating views and pull out all these lines covering (for instance) an hour during a busy part of the day, and copy/paste these lines into a text editor so that they're all together, then we should see complete database indexing from A to Z on your server repeating every so often. In the log.nsf database, there may be many thousands of lines that have nothing to do with your investigation, so culling the important lines is imperative for you to be able to make any sense of what's going on in your environment.

You will likely see dozens or even hundreds of databases referenced. If you have hundreds of active databases on your server, then culling all these lines might be impractical, even programmatically. Instead, you might focus on the largest group of databases. You will notice that the same databases are referenced every so often. This is the Update Cycle, or view indexing cycle. It's important to get a sense of how long this cycle takes, so make sure you don't miss any references to your group of databases. Imagine that SalesPipeline.nsf and Tracking.nsf were the two databases that you wanted to focus on.
You might cull the lines out of the log that have updating views and which reference these two databases, and come up with something like the following:

07/01/2006 09:29:57 AM  Updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Finished updating views in apps\SalesPipeline.nsf
07/01/2006 09:30:17 AM  Updating views in apps\Tracking.nsf
07/01/2006 09:30:20 AM  Finished updating views in apps\Tracking.nsf
07/01/2006 10:15:55 AM  Updating views in apps\SalesPipeline.nsf
07/01/2006 10:16:33 AM  Finished updating views in apps\SalesPipeline.nsf
07/01/2006 10:16:33 AM  Updating views in apps\Tracking.nsf
07/01/2006 10:16:43 AM  Finished updating views in apps\Tracking.nsf
07/01/2006 11:22:31 AM  Updating views in apps\SalesPipeline.nsf
07/01/2006 11:23:33 AM  Finished updating views in apps\SalesPipeline.nsf
07/01/2006 11:23:33 AM  Updating views in apps\Tracking.nsf
07/01/2006 11:23:44 AM  Finished updating views in apps\Tracking.nsf

This gives us some very important information: the Update task (view indexing) is taking approximately an hour to cycle through the databases on the server; that's too long. The Update task is supposed to run every 15 minutes, and ideally should only run for a few minutes each time it executes. If the cycle is an hour, then that means Update is running full tilt for that hour, and as soon as it stops, it realizes that it's overdue and kicks off again.

It's possible that if you examine each line in the log, you'll find that certain databases are taking the bulk of the time, in which case it might be worth examining the design of those databases. But it might be that every database seems to take a long time, which might be more indicative of a general server slowdown. In any case, we haven't solved the problem; but at least we know that the problem is probably server-wide. More complex applications, and newer applications, tend to reflect server-performance problems more readily, but that doesn't necessarily mean they carry more responsibility for the problem.
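Measuring that cycle is just a matter of taking the gaps between successive start times for one database. A quick sketch using the timestamps culled above:

```python
from datetime import datetime

# Start times for apps\SalesPipeline.nsf, copied from the culled log above.
starts = ["07/01/2006 09:29:57 AM",
          "07/01/2006 10:15:55 AM",
          "07/01/2006 11:22:31 AM"]

stamps = [datetime.strptime(s, "%m/%d/%Y %I:%M:%S %p") for s in starts]

# Minutes between consecutive indexing passes over the same database.
cycles = [(b - a).total_seconds() / 60 for a, b in zip(stamps, stamps[1:])]
print([round(c, 1) for c in cycles])
```

Both gaps come out far above the intended 15 minutes, confirming that the Update task is running continuously rather than idling between passes.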
In a sense, they are the "canary in the coal mine". If you suspect the problem is confined to one database (or a few), then you can increase the logging detail by setting Log_Update=2. This will give you the start time for every view in every database that the Update task indexes. If you see particular views taking a long time, then you can examine the design of those views.

If no database(s) stand out, then you might want to see if the constant indexing occurs around the clock or just during business hours. If it's around the clock, then this might point to some large quantities of data that are changing in your databases. For example, you may be programmatically synchronizing many gigabytes of data throughout the day, not realizing the cost this brings in terms of indexing. If slow indexing only occurs during business hours, then perhaps the user/data load has not been planned out well for this server. As the community of users ramps up in the morning, the server starts falling behind and never catches up until evening. There are server statistics that can help you determine whether or not this is the case. (These server statistics go beyond the scope of this book, but you can begin your investigation by searching on the various Notes/Domino forums for "server AND performance AND statistics".)

As may be obvious at this point, troubleshooting can be quite time-consuming. The key is to make sure that you think through each step so that it either eliminates something important, or gives you a forward path. Otherwise, you can find yourself still gathering information weeks and months later, with users and management feeling very frustrated.

Before moving on from this section, let's take a quick look at agent logging. Agent Manager can run multiple agents in different databases, as determined by settings in your server document.
Typically, production servers only allow two or three concurrent agents to run during business hours, and these are marked in the log as Executive '1', Executive '2', and so on. If your server is often busy with agent execution, then you can track Executive '1' and see how many different agents it runs, and for how long. If there are big gaps between when one agent starts and when the next one does (for Executive '1'), this might raise suspicion that the first agent took that whole time to execute.

To verify this, turn up the logging by setting the Notes.ini variable debug_amgr=*. (This will output a fair amount of information into your log, so it's best not to leave it on for too long, but normally one day is not a problem.) Doing this will give you a very important piece of information: the number of "ticks" it took for the agent to run. One second equals 100 ticks, so if the agent takes 246,379 ticks, this equals 2,463 seconds (about 41 minutes). As a general rule, you want scheduled agents to run in seconds, not minutes; so any agent that is taking this long will require some examination. In the next section, we will talk about some other ways you can identify problematic agents.

Domino Domain Monitoring (DDM)

Every once in a while, a killer feature is introduced: a feature so good, so important, so helpful, that after using it, we just shake our heads and wonder how we ever managed without it for so long. Domino Domain Monitoring (DDM) is just such a feature. DDM is too large to be completely covered in this one section, so we will confine our overview to what it can do in terms of troubleshooting applications. For a more thorough explanation of DDM and all its features, see the book, Upgrading to Lotus Notes and Domino (www.packtpub.com/upgrading_lotus/book).

In the events4.nsf database, you will find a new group of documents you can create for tracking agent or application performance.
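The tick arithmetic described just above (one second equals 100 ticks) is worth wrapping in a tiny helper when scanning that debug output. This is a generic sketch for this article, not part of Domino:

```python
def ticks_to_minutes(ticks):
    """Convert Agent Manager 'ticks' (100 per second) to (seconds, minutes)."""
    seconds = ticks // 100
    return seconds, seconds // 60

# The example from the text: 246,379 ticks is roughly 2,463 seconds (~41 minutes).
print(ticks_to_minutes(246_379))
```

Any scheduled agent whose tick count converts to minutes rather than seconds is a candidate for the profiling techniques discussed below.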
On Domino 7 servers, a new database is created automatically with the filename ddm.nsf. This stores the DDM output you will examine. For application troubleshooting, some of the most helpful areas to track using DDM are the following:

- Full-text index needs to be built: If you have agents that are creating a full-text index on the fly because the database has no full-text index built, DDM can track that potential problem for you. Especially useful is the fact that DDM compiles the frequency per database, so (for instance) you can see if it happens once per month or once per hour. Creating full-text indexes on the fly can result in a significant demand on server resources, so having this notification is very useful. We discuss an example of this later in this section.
- Agent security warnings: You can manually examine the log to try to find errors about agents not being able to execute due to insufficient access. However, DDM will do this for you, making it much easier to find (and therefore fix) such problems.
- Resource utilization: You can track memory, CPU, and time utilization of your agents as run by Agent Manager or by the HTTP task. This means that at any time you can open the ddm.nsf database and spot the worst offenders in these categories, over your entire server/domain. We will discuss an example of CPU usage later in this section.

The following illustration shows the new set of DDM views in the events4.nsf (Monitoring configuration) database. The following screenshot displays the By Probe Server view after we've made a few document edits. Notice that there are many probes included out-of-the-box (identified by the property "author = Lotus Notes Template Development") but set to disabled. In this view, there are three that have been enabled (ones with checkmarks) and were created by one of the authors of this book.
If you edit the probe document highlighted above, Default Application Code/Agents Evaluated By CPU Usage (Agent Manager), the document consists of three sections. The first section is where you choose the type of probe (in this case Application Code) and the subtype (in this case Agents Evaluated By CPU Usage).

The second section allows you to choose the servers to run against, and whether you want this probe to run against agents/code executed by Agent Manager or by the HTTP task (as shown in the following screenshot). This is an important distinction. For one thing, they are different tasks, and therefore one can hit a limit while the other still has room to "breathe". But perhaps more significantly, if you choose a subtype of Agents Evaluated By Memory Usage, then the algorithms used to evaluate whether or not an agent is using too much memory are very different. Agents run by the HTTP task will be judged much more harshly than those run by the Agent Manager task. This is because with the HTTP task, it is possible to run the same agent with up to hundreds of thousands of concurrent executions. But with Agent Manager, you are effectively limited to ten concurrent instances, and none within the same database.

The third section allows you to set your threshold for when DDM should report the activity. You can select up to four levels of warning: Fatal, Failure, Warning (High), and Warning (Low). Note that you do not have the ability to change the severity labels (which appear as icons in the view). Unless you change the database design of ddm.nsf, the icons displayed in the view and documents are non-configurable. Experiment with these settings until you find the approach that is most useful for your corporation. Typically, customers start by overwhelming themselves with information, and then fine-tuning the probes so that much less information is reported.
In this example, only two statuses are enabled: one for six seconds, with a label of Warning (High), and one for 60 seconds, with a label of Failure. Here is a screenshot of the DDM database. Notice that there are two Application Code results, one with a status of Failure (because that agent ran for more than 60 seconds), and one with a status of Warning (High) (because that agent ran for more than six seconds but less than 60 seconds). These are the parameters set in the Probe document shown previously, which can easily be changed by editing that Probe document. If you want these labels to be different, you must enable different rows in the Probe document.

If you open one of these documents, there are three sections. The top section gives header information about this event, such as the server name, the database and agent name, and so on. The second section includes the following table, with a tab for the most recent infraction and a tab for previous infractions. This allows you to see how often the problem is occurring, and with what severity. The third section provides some possible solutions, and (if applicable) automation. For example, in our case, you might want to "profile" your agent. (We will profile one of our agents in the final section of this article.)

DDM can also capture full-text operations against a database that is not full-text indexed. It tracks the number of times this happens, so you can decide whether to full-text index the database, change the agent, or neither. For a more complete list of the errors and problems that DDM can help resolve, check the Domino 7 online help or the product documentation (www.lotus.com).

Agent Profiler

If any of the troubleshooting tips or techniques we've discussed in this article causes you to look at an agent and think, "I wonder what makes this agent so slow", then the Agent Profiler should be the next tool to consider. Agent Profiler is another new feature introduced in Notes/Domino 7.
It gives you a breakdown of many methods/properties in your LotusScript agent, telling you how often each one was executed and how long they took to execute. In Notes/Domino 7, the second (security) tab of Agent properties now includes a checkbox labeled Profile this agent. You can select this option if you want an agent to be profiled. The next time the agent runs, a profile document in the database is created and filled with the information from that execution. This document is then updated every time the agent runs. You can view these results from the Agent View by highlighting your agent and selecting Agent | View Profile Results. The following is a profile for an agent that performed slow mail searches.

Although this doesn't completely measure (and certainly does not completely troubleshoot) your agents, it is an important step forward in troubleshooting code. Imagine the alternative: dozens of print statements, and then hours of collating results!

Summary

In closing, we hope that this article has opened your eyes to new possibilities in troubleshooting, both in terms of techniques and new Notes/Domino 7 features. Every environment has applications that users wish ran faster, but with a bit of care, you can troubleshoot your performance problems and find resolutions. After you have your servers running Notes/Domino 7, you can use DDM and Agent Profiler (both exceptionally easy to use) to help nail down poorly performing code in your applications. These tools really open a window on what had previously been a room full of mysterious behavior. Full-text indexing on the fly, code that uses too much memory, and long-running agents are all quickly identified by Domino Domain Monitoring (DDM). Try it!
Packt
21 Sep 2010
6 min read

Getting Started with OpenStreetMap

Not all the tools and features on the site are obvious from the front page, so we'll go on a tour of the site, and cover some other tools hosted by the project. By the end of the article, you should have a good idea about where to find answers to the questions you have about OpenStreetMap.

A quick tour of the front page

The project's main "shop front" is www.openstreetmap.org. It's the first impression most people get of what OpenStreetMap does, and is designed to be easy to use, rather than show as much information as possible. In the following diagram, you can see the layout of the front page. We'll be referring to many of the features on the front page, so let's have a look at what's there:

- Most of the page is taken up by the map viewer, which is nicknamed the slippy map by mappers. This has its own controls, which we'll cover later in the article.
- Along the top of the map are the navigation tabs, showing most of the data management tools on openstreetmap.org. To the right of these are the user account links.
- Down the left-hand side of the page is the sidebar, containing links to the wiki, news blog, merchandise page, and map key. The wiki is covered later in this article. The news blog is www.opengeodata.org, and it's an aggregation of many OSM-related blogs. The Shop page is a page on the wiki listing various pieces of OpenStreetMap-related merchandise from several sources. Most merchandise generates income for the OpenStreetMap Foundation or a local group.
- Clicking on the map key will show the key on the left-hand side of the map. As you'd expect, the key shows what the symbols and shading on the map mean. The key is dynamic, and will change with zoom level and which base layer you're looking at. Not all base layers are supported by the dynamic map key at present.
- Below this is the search box. The site search uses two separate engines:

Nominatim: This is an OpenStreetMap search engine, or geocoder. It uses the OpenStreetMap database to find features by name, including settlements, streets, and points of interest. Nominatim is usually fast and accurate, but can only find places that have been mapped in OpenStreetMap.

Geonames: This is an external location service that has greater coverage than OpenStreetMap at present, but can sometimes be inaccurate. Geonames contains settlement names and postcodes, but few other features.

Clicking on a result from either search engine will center the map on that result and mark it with an arrow.

Creating your account

To register, go to http://www.openstreetmap.org/, and choose sign up in the top right-hand corner. This will take you to the following registration form.

At present, you only really need an account on openstreetmap.org if you're planning to contribute mapping data to the project. Outside the main site and API, only the forums and issue tracker use the same username and password as openstreetmap.org. You don't need to register to download data, export maps, or subscribe to the mailing lists. Conversely, even if you're not planning to do any mapping, there are still good reasons to register at the site, such as the ability to contact and be contacted by other mappers.

OpenStreetMap doesn't allow truly anonymous editing of data. The OSM community decided to disallow this in 2007, so that any contributors could be contacted if necessary. If you're worried about privacy, you can register using a pseudonym, and this will be the only identifying information used for your account. Registering with openstreetmap.org requires a valid e-mail address, but this is never disclosed to any other user under any circumstance, unless you choose to do so. It is possible to change your display name after registration, and this changes it for all current OpenStreetMap data. However, it won't change in any archived data, such as old planet files.
Once you've completed the registration form, you'll receive an e-mail asking you to confirm the registration. Your account won't be active until you click on the link in this e-mail. Once you've activated your account, you can change your settings, as follows:

- You can add a short description of yourself if you like, and add a photo of yourself or some other avatar.
- You can also set your home location by clicking on it in the small slippy map on your settings page. This allows other mappers nearby to see who else is contributing in their area, and allows you to see them. You don't have to use your house or office as your home location; any place that gives a good idea of where you'll be mapping is enough. Adding a location may lead to you being invited to OpenStreetMap-related events in your area, such as mapping parties or social events. If you do add a location, you get a home link in your user navigation on the home page that will take the map view back to that place. You'll also see a map on your user page showing other nearby mappers, limited to the nearest 10 users within 50 km.
- If you know other mappers personally, you can indicate this by adding them as your friend on openstreetmap.org. This is just a convenience to you, and your friends aren't publicly shown on your user page, although anyone you add as a friend will receive an e-mail telling them you've done it.

Once you've completed the account settings, you can view your user page (shown in the following screenshot). You can do this at any time by clicking on your display name in the top right-hand corner. This shows the information about yourself that you've just entered, links to your diary and to add a new diary entry, a list of your edits to OpenStreetMap, your GPS traces, and your settings. These will be useful once you've done some mapping, and when you need to refer to others' activities on the site.
Every user on openstreetmap.org has a diary that they can use to keep the community informed of what they've been up to. Each diary entry can have a location attached, so you can see where people have been mapping. There's an RSS feed for each diary, and a combined feed for all diary entries. You can find any mapper's diary using the link on their user page, and you can comment on other mappers' diary entries, and they'll get an e-mail notification when you do.