How-To Tutorials - Programming


Java Development

Packt
18 Jul 2013
16 min read
Creating a Java project

To create a new Java project, navigate to File | New | Project. You will be presented with the New Project wizard window. Choose the Java Project option, and click on Next.

The next page of the wizard contains the basic configuration of the project that you will create. The JRE section allows you to use a specific JRE to compile and run your project. The Project layout section allows you to choose whether both source and binary files are created in the project's root folder or whether they are separated into different folders (src and bin by default). The latter is the default option.

You can create your project inside a working set. This is a good idea if you have too many projects in your workspace and want to keep them organized. Check the Creating working sets section of this article for more information on how to use and manage working sets.

The next page of the wizard contains build path options. In the Managing the project build path section of this article, we will talk more about these options. You can leave everything as the default for now, and make the necessary changes after the project is created.

Creating a Java class

To create a new Java class, right-click on the project in the Package Explorer view and navigate to New | Class. You will be presented with the New Java Class window, where you will input information about your class. You can change the class's superclass and add interfaces that it implements, as well as add stubs for abstract methods inherited from interfaces and abstract superclasses, add constructors from superclasses, and add the main method.

To create your class inside a package, simply enter the package's name in the appropriate field, or click on the Browse button beside it and select the package. If you input a package name that doesn't exist, Eclipse will create it for you. New packages can also be created by right-clicking on the project in the Package Explorer and navigating to New | Package. Right-clicking on a package instead of a project in the Package Explorer and navigating to New | Class will cause the class to be created inside that package.

Creating working sets

Working sets provide a way to organize your workspace's projects into subsets. When you have too many projects in your workspace, it gets hard to find the project you're looking for in the Package Explorer view. Projects you are not currently working on, for example, can be kept in separate working sets. They won't get in the way of your current work but will be there in case you need them.

To create a new working set, open the Package Explorer's view menu (the white triangle in the top-right corner of the view), and choose Select Working Set. Click on New and select the type of projects that the working set will contain (Java, in this case). On the next page, insert the name of the working set, and choose which projects it will contain. Once the working set is created, choose the Selected Working Sets option, and mark your working set. Click on OK, and the Package Explorer will only display the projects inside the working set you've just created.

Once your working sets are created, they are listed in the Package Explorer's view menu. Selecting one of them will make it the only working set visible in the Package Explorer. To view more than one working set at once, choose the Select Working Set option and mark the ones you want to show.
To view the whole workspace again, choose Deselect Workspace in the view menu. You can also view all the working sets with their nested projects by selecting working sets as the top-level element of the Package Explorer view. To do this, navigate to Top Level Elements | Working Sets in the view menu.

Although you don't see projects that belong to other working sets when a working set is selected, they are still loaded in your workspace, and therefore use your machine's resources. To avoid wasting these resources, you can close unrelated projects by right-clicking on them and selecting Close Project. You can select all the projects in a working set by using the Ctrl + A keyboard shortcut.

If you have a large number of projects, but you never work with all of them at the same time (personal/business projects, different clients' projects, and so on), you can also create a specific workspace for each project set. To create a new workspace, navigate to File | Switch Workspace | Other in the menu, enter the folder name of your new workspace, and click on OK. You can choose to copy the current workspace's layout and working sets in the Copy Settings section.

Importing a Java project

If you are going to work on an existing project, there are a number of different ways you can import it into Eclipse, depending on how you have obtained the project's source code. To open the Import wizard, navigate to File | Import. Let's go through the options under the General category:

Archive File: Select this option if the project you are working on already exists in your workspace, and you just want to import an archive file containing new resources into it. The Import wizard will list all the resources inside the archive file so that you can select the ones you wish to import. To select the project into which the resources will be imported, click on the Browse button. You can also select the folder in which the resources are to be included. Click on Finish when you are done. The imported resources will be decompressed and copied into the project's folder.

Existing Projects into Workspace: If you want to import a new project, select this option from the Import wizard. If the project's source has been compressed into an archive file (the .zip, .tar, .jar, or .tgz format), there's no need to decompress it; just mark the Select archive file option on the following page of the wizard, and point to the archive file. If you have already decompressed the code, mark Select root directory and point to the project. The wizard will list all the Eclipse projects found in the folder or archive file. Select the ones you wish to import and click on Finish. You can add the imported projects to a specific working set and choose whether or not the projects are to be copied into your workspace folder. Copying them is highly recommended for both simplicity and portability: you know where all your Eclipse projects are, and it's easy to back up or move all of them to a different machine.

File System: Use this wizard if you already have a project in your workspace and want to add existing resources from your filesystem to it. On the next page, select the resources you wish to import by checking them. Click on the Browse button to select the project and the folder where the resources will be imported. When you click on the Finish button, the resources will be copied to the project's folder inside your workspace.

Preferences: You can import Eclipse preferences files into your workspace by selecting this option.
A preferences file contains code style and compiler preferences, the list of installed JREs, and the Problems view configurations. You can choose which of these preferences you wish to import from the selected configuration file.

Importing a project from version control servers

Projects that are stored in version control servers can be imported directly into Eclipse. There are a number of version control systems, each with its pros and cons, and most of them are supported by Eclipse via plugins. Git is one of the most widely used version control systems. CVS is the only version control system supported by default. To import a project managed by it, navigate to CVS | Projects from CVS in the Import wizard. Fill in the server information on the following page, and click on Finish.

Introducing Java views

Eclipse's user interface consists of elements called views. The following sections will introduce the main views related to Java development.

The Package Explorer view

The Package Explorer view is the default view used to display a project's contents. As the name implies, it uses the package hierarchy of the project to display its classes, regardless of the actual file hierarchy. This view also displays the project's build path.

The Java Editor view

The Java Editor is the Eclipse component that will be used to edit Java source files. It is the main view in the Java perspective and is located in the middle of the screen. The Java Editor is much more than an ordinary text editor. It contains a number of features that make it easy for newcomers to start writing Java code and increase the productivity of experienced Java programmers. Let's talk about some of these features.

Compiling errors and warnings annotations

As you will see in more detail in the Building and running section, Eclipse builds your code automatically after every saved modification by default. This allows Eclipse to get the Java compiler output and mark errors and warnings throughout the code, making it easier to spot and correct them. Warnings are underlined in yellow and errors in red.

Content assist

This is probably the most used Java Editor feature, by both novice and experienced Java programmers. It allows you to list all the methods callable on a given instance, along with their documentation. This feature works by default for the standard Java classes and for the ones in your workspace. To enable it for external libraries, you will have to configure the build path for your project. We'll talk more about build paths further in this article, in the Managing the project build path section.

To see this feature in action, open a Java Editor, and create a new String instance:

String s = new String();

Now add a reference to this String instance, followed by a dot, and press Ctrl + Space. You will see a list of all the String and Object methods. This is much more practical than searching for the class's API in the Java documentation or memorizing it.

This list can be filtered by typing the beginning of the method's name after the dot. Let's suppose you want to replace some characters in this String instance. As a novice Java programmer, you are not sure if there's a method for that; and if there is, you are not sure which parameters it receives. It's a fair guess that the method's name probably starts with replace, right?
So go ahead and type:

s.replace

When you press Ctrl + Space, you will get a list of all the String methods whose names start with replace. By choosing one of them and pressing Enter, the editor completes the code with the rest of the method's name and its parameters. It will even suggest some variables in your code that you might want to use as parameters, as shown in the following screenshot.

Content assist will work with all classes in the project's classpath. You can disable content assist's automatic activation by navigating to Java | Editor | Content Assist in the Preferences window and unmarking Enable auto activation.

Code navigation

When the project you are working on is big enough, finding a class in the Package Explorer can be a pain. You will frequently find yourself asking, "In which package is that class again?". You can leave the source code of the classes you are working on open in different tabs, but soon enough you will have more open tabs than you would like. Eclipse has an easy solution for this. In the toolbar, select Navigate | Open Type. Now, just type in the class's name, and click on OK. If you don't remember the full name of the class, you can use the wildcard characters ? (matches one character) and * (matches any number of characters). You can also use only the uppercase letters of a CamelCase name (for example, SIOOBE for StringIndexOutOfBoundsException). The shortcut for the Open Type dialog is Ctrl + Shift + T.

There's also an equivalent feature for finding and opening resources other than Java classes, such as HTML files, images, and plain text files. The shortcut for the Open Resource dialog is Ctrl + Shift + R.

You can also navigate to a class's source file by holding Ctrl and clicking on a reference to that class in the code. To navigate to a method's implementation or definition directly, hold Ctrl and click on the method's call.

Another useful feature that makes it easy to browse through your project's source files is the Link With Editor feature in the Package Explorer view. By enabling it, the selected resource in the Package Explorer will always be the one that's open in the editor. Using this feature together with Open Type is certainly the easiest way of finding a resource in the Package Explorer.

Quick fix

Whenever there's an error or warning marker in your code, Eclipse might have some suggestions on how to get rid of it. To open the Quick Fix menu containing the suggestions, place the caret on the marked piece of code related to the error or warning, right-click on it, and choose Quick Fix. You can also use the shortcut by pressing Ctrl + 1 with the caret placed on the marked piece of code. The following screenshot shows the quick fix feature suggesting that you either get rid of an unused variable, create getters and setters for it, or add a SuppressWarnings annotation.

Let's see some of the most used quick fixes provided by Eclipse. You can take advantage of these quick fixes to speed up your code writing. You can, for example, deliberately call a method that throws an exception without the try/catch block, and use the quick fix to generate the block instead of writing it yourself.

Unhandled exceptions: When a method that throws an exception is called, and the exception is not caught or thrown, Eclipse will mark the call with an error. You can use the quick fix feature to surround the code with a proper try/catch block automatically.
Just open the Quick Fix menu, and choose Surround with Try/Catch. It will generate a catch block that calls the printStackTrace() method of the thrown exception. If the method call is already inside a try block, you can also choose the Add catch clause to surrounding try option. If the exception shouldn't be handled in the current method, you can use the Add throws declaration quick fix instead.

References to nonexisting methods and variables: With quick fix, Eclipse can create a stub for a method that is referenced in the code but doesn't exist yet. To illustrate this feature's usefulness, let's suppose you are working on a class's code, and you realize that you will need a method that performs some specific operation on two integers, returning another integer value. You can simply use the method, pretending that it exists:

int b = 4;
int c = 5;
int a = performOperation(b, c);

The method call will be marked with an error that says performOperation is undefined. To create a stub for this method, place the caret over the method's name, open the Quick Fix menu, and choose Create method performOperation(int, int). A private method will be created with the correct parameters and return type, as well as a TODO marker inside it, reminding you that you still have to implement the method.

You can also use a quick fix to create methods in other classes. Using the same example, you can create the performOperation() method in a different class, such as the following:

OperationPerformer op = new OperationPerformer();
int a = op.performOperation(b, c);

Speaking of classes, quick fix can also create one if you add a call to a non-existing class constructor. Non-existing variables can also be created with quick fix. As with method creation, just refer to a variable that doesn't exist yet, place the caret over it, and open the Quick Fix menu. You can create the variable either as a local variable, a field, or a parameter.

Remove dead code: Unused methods, constructors, and fields with private visibility are all marked with warnings. While the quick fix provided for unused methods and constructors is the most evident one (remove the dead code), it's also possible to generate getters and setters for unused private fields with a quick fix.

Customizing the editor

Like almost everything in Eclipse, you can customize the Java Editor's appearance and behavior. There are plenty of configurations in the Preferences window (Window | Preferences) that will certainly allow you to tailor the editor to suit your needs. Appearance-related configurations are mostly found under General | Appearance | Colors and Fonts, and behavior and feature configurations are mostly under General | Editors | Text Editors. Since there are lots of different categories and configurations, the filter text in the Preferences window might help you find what you want. A short list of the preferences you will most likely want to change is as follows:

Colors and fonts: Navigate to General | Appearance. In the Colors and Fonts configuration screen, you can see that options are organized by categories. The ones inside the Basic and Java categories will affect the Java Editor.

Enable/disable spell checking: The Eclipse editor comes with a spellchecker. While in some cases it can be useful, in many others you won't find much use for it. To disable or configure it, navigate to General | Editors | Text Editors | Spelling.
Annotations: You can edit the way annotations (warnings and errors, among others) are shown in the editor by navigating to General | Editors | Text Editors | Annotations inside the Preferences window. You can change colors, the way annotations are highlighted in the code (underline, squiggly line, box, among others), and whether they are shown in the vertical bar beside the code.

Show line numbers: To show line numbers on the left-hand side of the editor, mark the corresponding checkbox under General | Editors | Text Editors. Right-clicking on the bar on the editor's left-hand side brings up a menu in which you can also enable or disable line numbers.
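To tie the quick fix examples above together, here is a minimal, self-contained sketch of roughly what the generated code looks like once the fixes have been applied. The class and method names (OperationPerformer, performOperation) are the hypothetical ones used in this article; the file names, the FileReader call, and the exact generated comments are illustrative assumptions rather than Eclipse's literal output.

In OperationPerformer.java:

// Hypothetical class holding the stub produced by the
// "Create method performOperation(int, int)" quick fix.
public class OperationPerformer {

    // The stub is generated with the correct parameters and return type,
    // plus a TODO marker reminding you to implement the method.
    public int performOperation(int x, int y) {
        // TODO Auto-generated method stub
        return 0;
    }
}

In QuickFixDemo.java:

import java.io.FileReader;
import java.io.IOException;

public class QuickFixDemo {
    public static void main(String[] args) {
        int b = 4;
        int c = 5;
        OperationPerformer op = new OperationPerformer();
        int a = op.performOperation(b, c);
        System.out.println("performOperation returned " + a);

        // A call that throws a checked exception; the "Surround with Try/Catch"
        // quick fix wraps it and generates a catch block that calls
        // printStackTrace() on the caught exception.
        try {
            FileReader reader = new FileReader("data.txt");
            reader.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}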


Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
Roles

In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects, with their associated privileges. It allows us, as developers, to define self-contained units of authorization. In the same way that, at the start of this book, we created an attribute view giving us a coherent view of our customer data that we could reuse at will in more advanced developments, authorization roles allow us to create coherent sets of authorizations that we can then assign to users at will, making sure that users who are supposed to have the same rights always have the same rights.

If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later we would forget someone in a department, and they would not be able to access the data they needed to do their everyday work. Worse, we might not give quite the same authorizations to one person, and have to spend valuable time correcting our error when they couldn't see the data they needed (or, worse, more dangerous and less obvious to us as developers, when the user could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users than to assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges, we can just remove the first role and assign a second one. Since we're just starting out using authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on.

Creating a role

Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective.

In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article.

You will see that we have a Security folder in the Navigator panel. Please find the Security folder and then expand it. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which opens, you will see a number of tabs representing the different authorization objects we can create. We'll be looking at each of these in turn in the following sections, so for the moment just give your role a Name (BOOKUSER might be appropriate, if not very original).

Granted roles

Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. If we had, for example, a company with two teams (Sales and Purchasing) and two countries (say, France and Germany), we could create a role giving access to sales analytic views, one giving access to purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which don't actually contain any authorization objects themselves, but contain only the Sales and the France roles. The role definition is much simpler to understand and to maintain than if we had directly created a Sales-France role and a Sales-Germany role with all the underlying objects.
Once again, as with other development objects, creating small self-contained roles and reusing them when possible will make your (maintenance) life easier. In the Granted Roles tab we can see the list of subroles this main role contains. Note that this list is only a pointer; you cannot modify the actual authorizations of the other roles given here. You would need to open the individual role and make changes there.

Part of roles

The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab. This tab lists all other roles of which this role is a subrole. It is very useful for tracking authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such; it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole.

SQL privileges

An SQL privilege is the lowest level at which we can define restrictions on the use of database objects. SQL privileges apply to the simplest objects in the database, such as schemas, tables, and so on. No attribute, analytic, or calculation view is covered by SQL privileges. This is not strictly true, though you can consider it so: what we have seen as an analytic view, for example (the graphical definition, the drag and drop, the checkboxes), is transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but this is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this.

An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to the object itself, but do not at any point have any impact on the object's contents. For example, we can decide that one of our users can have access to the CUSTOMER table, but we couldn't restrict their access to only CUSTOMER values from the COUNTRY USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel.

Let's add some authorizations to our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Click on this button to get the Select Catalog Object dialog. In the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and list all database objects whose names contain the letters you typed. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book in Chapter 2, SAP HANA Studio - Installation and First Look. Please select the BOOK item, and then click on OK to add it to our new role.

The first thing to notice is the warning icon on the SQL Privileges tab itself. This means that your role definition is incomplete, and the role cannot be activated and used as yet. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via a role, you need to decide which of these options to include in the role.
The individual authorization names are self-explanatory. For example, the CREATE ANY authorization allows creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as a schema is not an object which can support such instructions. However, the usage is actually quite elegant: if a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit the existing authorizations you have defined.

On the far right of the screen, alongside each authorization, is a radio button which grants an additional privilege: the possibility for a given user to, in turn, give the rights to a second user. This is an option which should not be given to all users, and so should not be present in all roles you create; the right to grant privileges to other users should be limited to your administrators. If you give just any user the right to pass on their authorizations, you will soon find that you are no longer able to determine who can do what in your database. For the moment we are creating a simple role to show the workings of the authorization concept in SAP HANA, so we will check all the checkboxes, and leave the radio buttons at No.

There are some SQL privileges which are necessary for any user to be able to do work in SAP HANA. They give access to the system objects describing the development models we create in SAP HANA, and if a user does not have these privileges, nothing will work at all; the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are: the SELECT privilege on the _SYS_BI schema, the SELECT privilege on the _SYS_REPO schema, and the EXECUTE privilege on the REPOSITORY_REST procedure. Please add these SQL privileges to your role now.

As you can see, with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on that object. They do not, however, allow us to specify particular authorizations on the contents of the object. In order to use such fine-grained rights, we need to create an analytic privilege, and then add it to our role, so let's do that now.

Analytic privileges

An analytic privilege is an artifact unique to SAP HANA; it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view, which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region. For example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA.

An analytic privilege is created through the Quick Launch panel of the Modeler, so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open; we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege, and then Create.
As usual with SAP HANA, we are asked to give a Name and Description, and to select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New) or copying an existing privilege (Copy From). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard.

On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value we specify of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name, and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step.

You will be presented with the analytic privilege development panel. This page allows us to define our analytic privilege completely. On the left we have the list of database views we have included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions on individual values, or sets of values. In the top box, Associated Attributes Restrictions, we define on which attributes we want to restrict access (country code or region, maybe). In the bottom box, Assign Restrictions, we define the individual values on which to restrict (for example, for company code, we could restrict to value 0001 or US22; for region, we could limit access to EU or USA).

Let's add a restriction on the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box to see the Select Object dialog. As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog; we cannot restrict access to a view according to numeric values. We cannot, therefore, restrict to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now.

Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined according to the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal. In the Value column, we can either type the appropriate value directly, or use the value help button and the familiar Value Help Dialog which appears to select the value from those available in the view.
Please add the EU value now, either by typing it or by having SAP HANA find it for us.

There is one more field which needs to be added to our analytic privilege, and the reason behind it might at first seem a little strange. This point is valid for SAP HANA SP5, up to and including (at least) release 50 of the software. If this point turns out to be a bug, then it might not be necessary in later versions of the software. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV thanks to the included attribute view CUST_ATTR. In its current state, the analytic privilege will not work, because no fields native to the analytic view are actually present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to define any restriction on the field; however, it needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors: "Only if a view is included in one of the cube restrictions and at least one of its attribute is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege." Not an explicit description of the workings of the authorization concept, but close.

Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished: the Count column lists the number of restrictions in effect for the associated field, and for the CURRENCY field, no restrictions are defined.

We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one we have used up until now to activate the modeling views: the round green button with the right-facing white arrow at the top-right of the panel. Please activate the analytic privilege now. Once that has been done, we can add it to our role.

Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which opens. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed. Click on OK to add the privilege to the role.

We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges.

System privileges

As the name suggests, system privileges give a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view.
These are particular rights which should not be given to just any user, but should be reserved for those users who need to perform a particular task. We'll not be adding any of these privileges to our role; however, we'll take a look at the available options and what they are used for. Click on the green plus-sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog will search on all available values; there are only fifteen or so, but you can as usual filter them down using the filter box at the top of the dialog. For a full list of the system privileges available and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html.

Package privileges

The last tab in the role definition screen concerns Package Privileges. These allow a given user to access the objects in a package. In our example, the package is called book, so we add the book package to our role in the Package Privileges tab. Assigning package privileges is similar to assigning the SQL privileges we saw earlier. We first add the required object (here our book package), then we need to indicate exactly which rights we give to the role. As we can see, a series of checkboxes appears on the right-hand side of the window. At least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory: REPO.READ gives access to read the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example.

The role we are creating is destined for an end user who will need to see the data, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege, on our book package, to our role. Again we can decide whether the end user can in turn assign this privilege to others, and again, we don't need this feature in our role.

At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always) we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now.

Users

Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place. In technical terms, a user is just another database object. Users are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence the actions that the person who connects using the user can perform.

Up until now we have been using the SYSTEM user (or the user that your database administrator assigned to you). This user is defined by SAP, and basically has the authorization to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments.
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If however you use a development user with less authorization, then you wouldn't have been allowed to do the deletion, saving a lot of tears. Of course, the question then arises, why have you been using the SYSTEM user for the last couple of hundred pages of development. The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10.

Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on User, and select New User from the menu to obtain the user creation screen. Defining a user requires remarkably little information:

User Name: The login that the user will use. Your company might have a naming convention for users. Users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.

Authentication: How will SAP HANA know that the user connecting with the name of ANNE really is Anne? There are three (currently) ways of authenticating a user with SAP HANA:

Password: This is the most common authentication system. SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format. By default this format is one capital, one lowercase, one number, and at least eight characters. You can see and change the password policy in the system configuration. Double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy. The password format in force on your system is listed as password_layout. By default this is A1a, meaning capitals, numbers, and lowercase letters are allowed. The value can also contain the # character, meaning that special characters must also be contained in the password. The only special characters allowed by SAP HANA are currently the underscore, dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force you to change your password).

Kerberos and SAML: These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.

Session Client: As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to a partition system of the SAP ERP database. In the SAP ERP, different users can work in different Clients. In our development, we filtered on Client 100.
A much better way of handling filtering is to define the default client for a user when we define their account. The Session Client field can be filled with the ERP Client in which the user works. In this way we do not need to filter on the analytic models; we can leave their client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again this means maintenance of our developments is a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and then do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.

We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login.

At the bottom of the user definition screen, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than assign authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles.

The Granted Roles tab lists the roles assigned to the user, and allows adding and removing roles from that list. By default when we create a user, they have no roles assigned, and hence have no authorizations at all in the system. They will be able to log in to SAP HANA but will be able to do no development work, and will see no data from the system. Please click on the green plus sign button in the Granted Roles tab of the user definition screen to add a role to the user account. You will be provided with the Select Role dialog. This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Here our role was called BOOKUSER, so please do a search for it, then select it in the list and click on OK to add it to the user account.

Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top-right of the screen. Please do this now.

Testing our user and role

The only real way to check if the authorizations we have defined are appropriate to the business requirements is to create a user and then try out the role to see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right-click on the SAP HANA system name, and select Add Additional User from the menu which appears.
This will give you the Add additional user dialog. Enter the name of the user you just created (BOOKU) and the password you assigned to the user. You will be required to change the password immediately. Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user or our BOOKU user. We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the list for the BOOKU user.

Let's try to do something with our BOOKU user and see how the system reacts. The way the Studio lets you handle multiple users is very elegant: since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview from the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will show the data according to SYSTEM's authorizations.

Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference. As we can see, there are 12 rows of data retrieved, and we have data from the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we can see much less data: BOOKU can only see nine of the 12 data rows in our view, as no data from the NAR region is visible to the BOOKU user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user.

Summary

In this article, we have taken a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system: SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization concepts to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentication available, and the assignment of roles to users. Finally, we logged into the Studio with our new user account and found out first-hand the effect our authorizations had on what the user could see and do. In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them.

Further resources on this subject: SAP NetWeaver: Accessing the MDM System; SAP HANA integration with Microsoft Excel; Exporting SAP BusinessObjects Dashboards into Different Environments.
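As a closing aside, the grants built through the Studio in this article can also be issued with plain SQL (see the SQL Reference linked in the System privileges section). The following is a minimal, hypothetical sketch of scripting those grants over JDBC. The host, port, and administrator credentials are placeholders; the schema (BOOK), role (BOOKUSER), and user (BOOKU) names are the ones used above; and the sketch assumes the SAP HANA JDBC driver (ngdbc.jar) is on the classpath and that the role already exists as a catalog role.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GrantBookRole {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, port, and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sap://hana-host:30015", "ADMIN_USER", "AdminPassword1");
             Statement stmt = conn.createStatement()) {

            // Schema-level object privileges, equivalent to the SQL Privileges tab.
            stmt.execute("GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA BOOK TO BOOKUSER");

            // The basic system-object privileges every user needs
            // (REPOSITORY_REST lives in the SYS schema).
            stmt.execute("GRANT SELECT ON SCHEMA _SYS_BI TO BOOKUSER");
            stmt.execute("GRANT SELECT ON SCHEMA _SYS_REPO TO BOOKUSER");
            stmt.execute("GRANT EXECUTE ON SYS.REPOSITORY_REST TO BOOKUSER");

            // Finally, assign the role to the test user.
            stmt.execute("GRANT BOOKUSER TO BOOKU");
        }
    }
}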


Rules and Events

Packt
12 Jul 2013
10 min read
Handling specific events is something everybody expects from an application. While JavaScript has its own event handling model, working with Dynamics CRM offers a different set of events that we can take advantage of. The JavaScript event model, while it might work, is not supported, and is definitely not the approach you want to take when working within the context of Dynamics CRM. Some of the most notable events and their counterparts in JavaScript are described below:

OnLoad (JavaScript: onload): A form event. It executes when a form is loaded. Its most common use is to filter and hide elements on the form.
OnSave (JavaScript: onsubmit): A form event. It executes when a form is saved. Its most common use is to stop an operation from executing, as a result of a failed validation procedure.
TabStateChange (no JavaScript equivalent): A form event. It executes when the DisplayState of the tab changes.
OnChange (JavaScript: onchange): A field-specific event. It executes when tabbing out of a field where you've changed the value. Please note that there is no equivalent for onfocus and onblur.
OnReadyStateComplete (no JavaScript equivalent): This event indicates that the content of an IFrame has completed loading.

Additional details on Dynamics CRM 2011 specific events can be found on MSDN at http://msdn.microsoft.com/en-us/library/gg334481.aspx.

Form load event usage

In this recipe, we will focus on executing a few operations triggered by the form load event. We can check the value of a specific field on the form, and based on that we can decide to hide a tab, hide a field, and prepopulate a text field with a predefined value.

Getting ready...

Just as with any of the previous recipes, you will need access to an environment and permissions to make customizations. You should be a system administrator, a system customizer, or a custom role configured to allow you to perform the following operations.

How to do it...

For the purpose of this exercise, we will add to the Contact entity a new tab called Special Customer, with some additional custom fields. We will also add an option set that we will check to determine whether or not to hide the fields, as well as two new fields: one text field and one lookup field. So let's get started!

1. Open the contact's main form for editing.
2. Add a new tab by going to Insert | Tab | One Column.
3. Double-click on the newly added tab to open the Tab Properties window.
4. Change the Label field of the tab to Special Customer. Make sure the show label, expanded by default, and visible checkboxes are checked. Click on OK.
5. Add a few additional text fields on this tab. We will be hiding the tab along with the content within it.
6. Add a new field, named Is Special Customer (new_IsSpecialCustomer). Leave the default yes/no values.
7. Add the newly created field to the general form for the contact.
8. Add another new text field, named Customer Classification (new_CustomerClassification). Leave the Format as Text, and the default Maximum Length of 100.
9. Add the newly created text field to the general form, under the previously added field.
10. Add a new lookup field, called Partner (new_Partner). Make it a lookup for a contact.
11. Add this new field to the general form, under the other two fields.
12. Save and Publish the Contact form.

Your form should look similar to the screenshot referenced above. Observe that I have ordered the three fields one on top of the other.
The reason for this is that the default tab order in CRM is vertical and across. This way, when all the fields are visible, I can tab right from one to another.

In your solution where you made the previous changes, add a new web resource named FormLoader (new_FormLoader). Set the Type to JScript. Click on the Text Editor button and insert the following function:

function IsSpecialCustomer() {
    var _isSpecialSelection = null;
    var _isSpecial = Xrm.Page.getAttribute("new_isspecialcustomer");
    if (_isSpecial != null) {
        _isSpecialSelection = _isSpecial.getValue();
    }
    if (_isSpecialSelection == false) {
        // hide the Special Customer tab
        Xrm.Page.ui.tabs.get("tab_5").setVisible(false);
        // hide the Customer Classification field
        Xrm.Page.ui.controls.get("new_customerclassification").setVisible(false);
        // hide the Partner field
        Xrm.Page.ui.controls.get("new_partner").setVisible(false);
    }
}

Save and Publish the web resource. Go back to the Contact form, and on the ribbon select Form Properties. On the Events tab, add the library created as a web resource in the Form Libraries section, and in the Event Handlers area, on the form OnLoad event, add the function we created. Click on OK, then click on Save and Publish the form.

Test your configuration by opening a new contact and setting the Is Special Customer field to No. Save and close the contact. Open it again, and the tab and fields should be hidden.

How it works...

The whole idea of this script is not much different from what we have demonstrated in some of the previous recipes. Based on a set form value, we hide a tab and some fields. Where we capture the difference is where we set the script to execute. Working with scripts executing when the form loads gives us a whole new way of handling various scenarios.

There's more...

In many scenarios, working with the form load events in conjunction with the other field events can potentially result in a very complex solution. When debugging, always pay close attention to the type of event you associate your script function with.

See also

See the Combining events recipe towards the end of this article for a more complex recipe detailing how to work with multiple events to achieve the expected result.

Form save event usage

While working with the Form OnLoad event can help us format and arrange the user interface, working with the Form OnSave event opens up a new door towards validation of user input and execution of business processes, amongst others.

Getting ready

Using the same solution we have worked on in the previous recipe, we will continue to demonstrate a few other aspects of working with forms in Dynamics CRM 2011. In this recipe the focus is on handling the Form OnSave event.

How to do it...

First off, in order to kick this off, we might want to verify a set of fields for a condition, or perform a calculation based on a formula. In order to simplify this process, we can just check a simple yes/no condition on a form.

How it works...

Using the previously customized solution, we will be taking advantage of the Contact entity and the fields that we have already customized on that form. If you are starting with this recipe fresh, take the following step before delving into this recipe: add a new two-options field, named Is Special Customer (new_IsSpecialCustomer), and leave the default yes/no values.

Using this field, if the answer is No, we will stop the save process. In your solution add a new web resource. I have named it new_ch4rcp2. Set its type to JScript.
Enter the following function in your resource:

function StopSave(context) {
    var _isSpecialSelection = null;
    var _isSpecial = Xrm.Page.getAttribute("new_isspecialcustomer");
    if (_isSpecial != null) {
        _isSpecialSelection = _isSpecial.getValue();
    }
    if (_isSpecialSelection == false) {
        alert("You cannot save your record while the Customer is not a friend!");
        context.getEventArgs().preventDefault();
    }
}

The function basically checks for the value in our Is Special Customer field. If a value is retrieved, and that value is No, we can bring up an alert and stop the Save and Close event. Now, back on to the contact's main form, we attach this new function to the form's OnSave event. Save and Publish your solution. In order to test this functionality, we will create a new contact, populate all the required fields, and set the Is Special Customer field to No. Now try to click on Save and Close. You will get an alert as seen in the following screenshot, and the form will not close nor be saved. Changing the Is Special Customer selection to Yes and saving the form will now save and close the form.

There's more...
While this recipe only describes in a very simplistic manner the way to stop a form from saving and closing, the possibilities here are immense. Think about what you can do on form save, and what you can achieve if a condition should be met in order to allow the form to be saved.

Starting a process instead of saving the form
Another good use for blocking the save and close form is to take a different path. Let's say we want to kick off a workflow when we block the save form. We can call from the previous function a new function as follows:

function launchWorkflow(dialogID, typeName, recordId) {
    var serverUri = Mscrm.CrmUri.create('/cs/dialog/rundialog.aspx');
    window.showModalDialog(serverUri + '?DialogId=' + dialogID +
        '&EntityName=' + typeName +
        '&ObjectId=' + recordId, null,
        'width=615,height=480,resizable=1,status=1,scrollbars=1');
    // Reload form
    window.location.reload(true);
}

We pass to this function the following three parameters: GUID of the Workflow or Dialog, the type name of the entity, and the ID of the record.

See also
For more details on parameters see the following article on MSDN: http://msdn.microsoft.com/en-us/library/gg309332.aspx

Field change event usage
In this recipe we will drill down to a lower level. We have handled form events, and now it is time to handle field events. The following recipe will show you how to bring all these together and achieve exactly the result you need.

Getting ready
For the purpose of this recipe, let's focus on reusing the previous solution. We will check the value of a field, and act upon it.

How to do it...
In order to walk through this recipe, follow these steps: Create a new form field called new_changeevent, with a label of Change Event, and a Type of Two Options. Leave the default values of No and Yes. Leave the Default Value as No. Add this field to your main Contact form. Add the following script to a new JScript web resource:

function ChangeEvent() {
    var _changeEventSelection = null;
    var _isChanged = Xrm.Page.getAttribute("new_changeevent");
    if (_isChanged != null) {
        _changeEventSelection = _isChanged.getValue();
    }
    if (_changeEventSelection == true) {
        alert("Change event is set to True");
        // perform other actions here
    } else {
        alert("Change event is set to False");
    }
}

This function, as seen in the previous recipes, checks the value of the Two Options field, and performs an action based on the user selection. The action in this example is simply bringing an alert message up.
Add the new web resource to the form libraries. Associate this new function with the OnChange event of the field we have just created. Save and Publish your solution. Create a new contact, and try changing the Change Event value from No to Yes and back. Every time the selection is changed, a different message comes up in the alert.

How it works...
Handling events at the field level, specifically the OnChange event, allows us to dynamically execute various other functions. We can easily take advantage of this functionality to modify the form displayed to a user dynamically, based on a selection. Based on a field value, we can define areas or fields on the form to be hidden or shown.
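As a minimal sketch of that idea (reusing only the Xrm.Page calls already shown in these recipes; the function name and the choice of keying off the Customer Classification field are illustrative assumptions, not part of the original recipe), an OnChange handler that shows or hides the Partner lookup could look like this:

function ToggleSpecialFields() {
    // Read the Customer Classification field added earlier in this article
    var classification = Xrm.Page.getAttribute("new_customerclassification");
    if (classification != null) {
        // Show the Partner lookup only when a classification has been entered
        var hasValue = classification.getValue() != null;
        Xrm.Page.ui.controls.get("new_partner").setVisible(hasValue);
    }
}

You would register it against the OnChange event of the Customer Classification field in the same way the ChangeEvent function was registered above.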

Database, Active Record, and Model Tricks

Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.) Getting data from a database Most applications today use databases. Be it a small website or a social network, at least some parts are powered by databases. Yii introduces three ways that allow you to work with databases: Active Record Query builder SQL via DAO We will use all these methods to get data from the film, film_actor, and actor tables and show it in a list. We will measure the execution time and memory usage to determine when to use these methods. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Download the Sakila database from the following URL:http://dev.mysql.com/doc/index-other.html Execute the downloaded SQLs; first schema then data. Configure the DB connection in protected/config/main.php to use the Sakila database. Use Gii to create models for the actor and film tables. How to do it... We will create protected/controllers/DbController.php as follows: <?php class DbController extends Controller { protected function afterAction($action) { $time = sprintf('%0.5f', Yii::getLogger() ->getExecutionTime()); $memory = round(memory_get_peak_usage()/(1024*1024),2)."MB"; echo "Time: $time, memory: $memory"; parent::afterAction($action); } public function actionAr() { $actors = Actor::model()->findAll(array('with' => 'films', 'order' => 't.first_name, t.last_name, films.title')); echo '<ol>'; foreach($actors as $actor) { echo '<li>'; echo $actor->first_name.' '.$actor->last_name; echo '<ol>'; foreach($actor->films as $film) { echo '<li>'; echo $film->title; echo '</li>'; } echo '</ol>'; echo '</li>'; } echo '</ol>'; } public function actionQueryBuilder() { $rows = Yii::app()->db->createCommand() ->from('actor') ->join('film_actor', 'actor.actor_id=film_actor.actor_id') ->leftJoin('film', 'film.film_id=film_actor.film_id') ->order('actor.first_name, actor.last_name, film.title') ->queryAll(); $this->renderRows($rows); } public function actionSql() { $sql = "SELECT * FROM actor a JOIN film_actor fa ON fa.actor_id = a.actor_id JOIN film f ON fa.film_id = f.film_id ORDER BY a.first_name, a.last_name, f.title"; $rows = Yii::app()->db->createCommand($sql)->queryAll(); $this->renderRows($rows); } public function renderRows($rows) { $lastActorName = null; echo '<ol>'; foreach($rows as $row) { $actorName = $row['first_name'].' '.$row['last_name']; if($actorName!=$lastActorName){ if($lastActorName!==null){ echo '</ol>'; echo '</li>'; } $lastActorName = $actorName; echo '<li>'; echo $actorName; echo '<ol>'; } echo '<li>'; echo $row['title']; echo '</li>'; } echo '</ol>'; } } Here, we have three actions corresponding to three different methods of getting data from a database. After running the preceding db/ar, db/queryBuilder and db/sql actions, you should get a tree showing 200 actors and 1,000 films they have acted in, as shown in the following screenshot: At the bottom there are statistics that give information about the memory usage and execution time. Absolute numbers can be different if you run this code, but the difference between the methods used should be about the same: Method Memory usage (megabytes) Execution time (seconds) Active Record 19.74 1.14109 Query builder 17.98 0.35732 SQL (DAO) 17.74 0.35038 How it works... Let's review the preceding code. The actionAr action method gets model instances by using the Active Record approach. 
We start with the Actor model generated with Gii to get all the actors and specify 'with' => 'films' to get the corresponding films using a single query or eager loading through relation, which Gii builds for us from InnoDB table foreign keys. We then simply iterate over all the actors and for each actor—over each film. Then for each item, we print its name. The actionQueryBuilder function uses query builder. First, we create a query command for the current DB connection with Yii::app()->db->createCommand(). We then add query parts one by one with from, join, and leftJoin. These methods escape values, tables, and field names automatically. The queryAll function returns an array of raw database rows. Each row is also an array indexed with result field names. We pass the result to renderRows, which renders it. With actionSql, we do the same, except we pass SQL directly instead of adding its parts one by one. It's worth mentioning that we should escape parameter values manually with Yii::app()->db->quoteValue before using them in the query string. The renderRows function renders the query builder. The DAO raw row requires you to add more checks and generally, it feels unnatural compared to rendering an Active Record result. As we can see, all these methods give the same result in the end, but they all have different performance, syntax, and extra features. We will now do a comparison and figure out when to use each method: Method Active Record Query Builder SQL (DAO) Syntax This will do SQL for you. Gii will generate models and relations for you. Works with models, completely OO-style, and very clean API. Produces array of properly nested models as the result. Clean API, suitable for building query on the fly. Produces raw data arrays as the result. Good for complex SQL. Manual values and keywords quoting. Not very suitable for building query on the fly. Produces raw data arrays as results. Performance Higher memory usage and execution time compared to SQL and query builder. Okay. Okay. Extra features Quotes values and names automatically. Behaviors. Before/after hooks. Validation. Quotes values and names automatically. None. Best for Prototyping selects. Update, delete, and create actions for single models (model gives a huge benefit when using with forms). Working with large amounts of data, building queries on the fly. Complex queries you want to do with pure SQL and have maximum possible performance. There's more... In order to learn more about working with databases in Yii, refer to the following resources: http://www.yiiframework.com/doc/guide/en/database.dao http://www.yiiframework.com/doc/guide/en/database.query-builder http://www.yiiframework.com/doc/guide/en/database.ar See also The Using CDbCriteria recipe Defining and using multiple DB connections Multiple database connections are not used very often for new standalone web applications. However, when you are building an add-on application for an existing system, you will most probably need another database connection. From this recipe you will learn how to define multiple DB connections and use them with DAO, query builder, and Active Record models. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Create two MySQL databases named db1 and db2. 
Create a table named post in db1 as follows: DROP TABLE IF EXISTS `post`; CREATE TABLE IF NOT EXISTS `post` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `title` VARCHAR(255) NOT NULL, `text` TEXT NOT NULL, PRIMARY KEY (`id`) ); Create a table named comment in db2 as follows: DROP TABLE IF EXISTS `comment`; CREATE TABLE IF NOT EXISTS `comment` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `text` TEXT NOT NULL, `postId` INT(10) UNSIGNED NOT NULL, PRIMARY KEY (`id`) ); How to do it... We will start with configuring the DB connections. Open protected/config/main.php and define a primary connection as described in the official guide: 'db'=>array( 'connectionString' => 'mysql:host=localhost;dbname=db1', 'emulatePrepare' => true, 'username' => 'root', 'password' => '', 'charset' => 'utf8', ), Copy it, rename the db component to db2, and change the connection string accordingly. Also, you need to add the class name as follows: 'db2'=>array( 'class'=>'CDbConnection', 'connectionString' => 'mysql:host=localhost;dbname=db2', 'emulatePrepare' => true, 'username' => 'root', 'password' => '', 'charset' => 'utf8', ), That is it. Now you have two database connections and you can use them with DAO and query builder as follows: $db1Rows = Yii::app()->db->createCommand($sql)->queryAll(); $db2Rows = Yii::app()->db2->createCommand($sql)->queryAll(); Now, if we need to use Active Record models, we first need to create Post and Comment models with Gii. Starting from Yii version 1.1.11, you can just select an appropriate connection for each model.Now you can use the Comment model as usual. Create protected/controllers/DbtestController.php as follows: <?php class DbtestController extends CController { public function actionIndex() { $post = new Post(); $post->title = "Post #".rand(1, 1000); $post->text = "text"; $post->save(); echo '<h1>Posts</h1>'; $posts = Post::model()->findAll(); foreach($posts as $post) { echo $post->title."<br />"; } $comment = new Comment(); $comment->postId = $post->id; $comment->text = "comment #".rand(1, 1000); $comment->save(); echo '<h1>Comments</h1>'; $comments = Comment::model()->findAll(); foreach($comments as $comment) { echo $comment->text."<br />"; } } } Run dbtest/index multiple times and you should see records added to both databases, as shown in the following screenshot: How it works... In Yii you can add and configure your own components through the configuration file. For non-standard components, such as db2, you have to specify the component class. Similarly, you can add db3, db4, or any other component, for example, facebookApi. The remaining array key/value pairs are assigned to the component's public properties respectively. There's more... Depending on the RDBMS used, there are additional things we can do to make it easier to use multiple databases. Cross-database relations If you are using MySQL, it is possible to create cross-database relations for your models. 
In order to do this, you should prefix the Comment model's table name with the database name as follows: class Comment extends CActiveRecord { //… public function tableName() { return 'db2.comment'; } //… } Now, if you have a comments relation defined in the Post model relations method, you can use the following code: $posts = Post::model()->with('comments')->findAll(); Further reading For further information, refer to the following URL: http://www.yiiframework.com/doc/api/CActiveRecord See also The Getting data from a database recipe Using scopes to get models for different languages Internationalizing your application is not an easy task. You need to translate interfaces, translate messages, format dates properly, and so on. Yii helps you to do this by giving you access to the Common Locale Data Repository ( CLDR ) data of Unicode and providing translation and formatting tools. When it comes to applications with data in multiple languages, you have to find your own way. From this recipe, you will learn a possible way to get a handy model function that will help to get blog posts for different languages. Getting ready Create a new application by using yiic webapp as described in the official guide at the following URL:http://www.yiiframework.com/doc/guide/en/quickstart.first-app Set up the database connection and create a table named post as follows: DROP TABLE IF EXISTS `post`; CREATE TABLE IF NOT EXISTS `post` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `lang` VARCHAR(5) NOT NULL DEFAULT 'en', `title` VARCHAR(255) NOT NULL, `text` TEXT NOT NULL, PRIMARY KEY (`id`) ); INSERT INTO `post`(`id`,`lang`,`title`,`text`) VALUES (1,'en_us','Yii news','Text in English'), (2,'de','Yii Nachrichten','Text in Deutsch'); Generate a Post model using Gii. How to do it... Add the following methods to protected/models/Post.php as follows: class Post extends CActiveRecord { public function defaultScope() { return array( 'condition' => "lang=:lang", 'params' => array( ':lang' => Yii::app()->language, ), ); } public function lang($lang){ $this->getDbCriteria()->mergeWith(array( 'condition' => "lang=:lang", 'params' => array( ':lang' => $lang, ), )); return $this; } } That is it. Now, we can use our model. Create protected/controllers/ DbtestController.php as follows: <?php class DbtestController extends CController { public function actionIndex() { // Get posts written in default application language $posts = Post::model()->findAll(); echo '<h1>Default language</h1>'; foreach($posts as $post) { echo '<h2>'.$post->title.'</h2>'; echo $post->text; } // Get posts written in German $posts = Post::model()->lang('de')->findAll(); echo '<h1>German</h1>'; foreach($posts as $post) { echo '<h2>'.$post->title.'</h2>'; echo $post->text; } } } Now, run dbtest/index and you should get an output similar to the one shown in the following screenshot: How it works... We have used Yii's Active Record scopes in the preceding code. The defaultScope function returns the default condition or criteria that will be applied to all the Post model query methods. As we need to specify the language explicitly, we create a scope named lang, which accepts the language name. With $this->getDbCriteria(), we get the model's criteria in its current state and then merge it with the new condition. As the condition is exactly the same as in defaultScope, except for the parameter value, it overrides the default scope. In order to support chained calls, lang returns the model instance by itself. There's more... 
For further information, refer to the following URLs: http://www.yiiframework.com/doc/guide/en/database.ar http://www.yiiframework.com/doc/api/CDbCriteria/

See also
The Getting data from a database recipe
The Using CDbCriteria recipe

Processing model fields with AR event-like methods
Active Record implementation in Yii is very powerful and has many features. One of these features is event-like methods, which you can use to preprocess model fields before putting them into the database or getting them from a database, as well as deleting data related to the model, and so on. In this recipe, we will linkify all URLs in the post text and we will list all existing Active Record event-like methods.

Getting ready
Create a new application by using yiic webapp as described in the official guide at the following URL: http://www.yiiframework.com/doc/guide/en/quickstart.first-app Set up a database connection and create a table named post as follows:

DROP TABLE IF EXISTS `post`;
CREATE TABLE IF NOT EXISTS `post` (
  `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
  `title` VARCHAR(255) NOT NULL,
  `text` TEXT NOT NULL,
  PRIMARY KEY (`id`)
);

Generate the Post model using Gii.

How to do it...
Add the following method to protected/models/Post.php as follows:

protected function beforeSave()
{
    $this->text = preg_replace('~((?:https?|ftps?)://.*?)( |$)~iu',
        '<a href="\1">\1</a>\2', $this->text);
    return parent::beforeSave();
}

That is it. Now, try saving a post containing a link. Create protected/controllers/TestController.php as follows:

<?php
class TestController extends CController
{
    function actionIndex()
    {
        $post = new Post();
        $post->title = 'links test';
        $post->text = 'test http://www.yiiframework.com/ test';
        $post->save();
        print_r($post->text);
    }
}

Run test/index. You should get the following:

How it works...
The beforeSave method is implemented in the CActiveRecord class and executed just before saving a model. By using a regular expression, we replace everything that looks like a URL with a link that uses this URL and call the parent implementation, so that real events are raised properly. In order to prevent saving, you can return false.

There's more...
There are more event-like methods available as shown in the following table:

Method name | Description
afterConstruct | Called after a model instance is created by the new operator
beforeDelete/afterDelete | Called before/after deleting a record
beforeFind/afterFind | Method is invoked before/after each record is instantiated by a find method
beforeSave/afterSave | Method is invoked before/after saving a record successfully
beforeValidate/afterValidate | Method is invoked before/after validation ends

Further reading
In order to learn more about using event-like methods in Yii, you can refer to the following URLs: http://www.yiiframework.com/doc/api/CActiveRecord/ http://www.yiiframework.com/doc/api/CModel

See also
The Using Yii events recipe
The Highlighting code with Yii recipe
The Automating timestamps recipe
The Setting up an author automatically recipe
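To round this out, here is a minimal sketch of the afterFind counterpart (the title trimming is an assumed example use, not something from the original recipe); it runs every time a Post record is instantiated from the database:

protected function afterFind()
{
    // Example: normalize the title whenever a record is loaded
    $this->title = trim($this->title);
    parent::afterFind();
}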

Why MyBatis

Packt
10 Jul 2013
8 min read
(For more resources related to this topic, see here.) Eliminates a lot of JDBC boilerplate code Java has a Java DataBase Connectivity (JDBC) API to work with relational databases. But JDBC is a very low-level API, and we need to write a lot of code to perform database operations. Let us examine how we can implement simple insert and select operations on a STUDENTS table using plain JDBC. Assume that the STUDENTS table has STUD_ID, NAME, EMAIL, and DOB columns. The corresponding Student JavaBean is as follows: package com.mybatis3.domain; import java.util.Date; public class Student { private Integer studId; private String name; private String email; private Date dob; // setters and getters } The following StudentService.java program implements the SELECT and INSERT operations on the STUDENTS table using JDBC. public Student findStudentById(int studId) { Student student = null; Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "SELECT * FROM STUDENTS WHERE STUD_ID=?"; //create PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, studId); ResultSet rs = pstmt.executeQuery(); //fetch results from database and populate into Java objects if(rs.next()) { student = new Student(); student.setStudId(rs.getInt("stud_id")); student.setName(rs.getString("name")); student.setEmail(rs.getString("email")); student.setDob(rs.getDate("dob")); } } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } return student; } public void createStudent(Student student) { Connection conn = null; try{ //obtain connection conn = getDatabaseConnection(); String sql = "INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(?,?,?,?)"; //create a PreparedStatement PreparedStatement pstmt = conn.prepareStatement(sql); //set input parameters pstmt.setInt(1, student.getStudId()); pstmt.setString(2, student.getName()); pstmt.setString(3, student.getEmail()); pstmt.setDate(4, new java.sql.Date(student.getDob().getTime())); pstmt.executeUpdate(); } catch (SQLException e){ throw new RuntimeException(e); }finally{ //close connection if(conn!= null){ try { conn.close(); } catch (SQLException e){ } } } } protected Connection getDatabaseConnection() throws SQLException { try{ Class.forName("com.mysql.jdbc.Driver"); return DriverManager.getConnection ("jdbc:mysql://localhost:3306/test", "root", "admin"); } catch (SQLException e){ throw e; } catch (Exception e){ throw new RuntimeException(e); } } There is a lot of duplicate code in each of the preceding methods, for creating a connection, creating a statement, setting input parameters, and closing the resources, such as the connection, statement, and result set. MyBatis abstracts all these common tasks so that the developer can focus on the really important aspects, such as preparing the SQL statement that needs to be executed and passing the input data as Java objects. In addition to this, MyBatis automates the process of setting the query parameters from the input Java object properties and populates the Java objects with the SQL query results as well. Now let us see how we can implement the preceding methods using MyBatis: Configure the queries in a SQL Mapper config file, say StudentMapper.xml. 
<select id="findStudentById" parameterType="int" resultType="Student">
  SELECT STUD_ID AS studId, NAME, EMAIL, DOB
  FROM STUDENTS
  WHERE STUD_ID=#{Id}
</select>
<insert id="insertStudent" parameterType="Student">
  INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB)
  VALUES(#{studId},#{name},#{email},#{dob})
</insert>

Create a StudentMapper interface.

public interface StudentMapper
{
    Student findStudentById(Integer id);
    void insertStudent(Student student);
}

In Java code, you can invoke these statements as follows:

SqlSession session = getSqlSessionFactory().openSession();
StudentMapper mapper = session.getMapper(StudentMapper.class);
// Select Student by Id
Student student = mapper.findStudentById(1);
// To insert a Student record
mapper.insertStudent(student);

That's it! You don't need to create the Connection, PreparedStatement, extract, and set parameters and close the connection by yourself for every database operation. Just configure the database connection properties and SQL statements, and MyBatis will take care of all the ground work. Don't worry about what SqlSessionFactory, SqlSession, and Mapper XML files are. Along with these, MyBatis provides many other features that simplify the implementation of persistence logic. It supports the mapping of complex SQL result set data to nested object graph structures It supports the mapping of one-to-one and one-to-many results to Java objects It supports building dynamic SQL queries based on the input data Low learning curve One of the primary reasons for MyBatis' popularity is that it is very simple to learn and use because it depends on your knowledge of Java and SQL. If developers are familiar with Java and SQL, they will find it fairly easy to get started with MyBatis. Works well with legacy databases Sometimes we may need to work with legacy databases that are not in a normalized form. It is possible, but difficult, to work with these kinds of legacy databases with fully-fledged ORM frameworks such as Hibernate because they attempt to statically map Java objects to database tables. MyBatis works by mapping query results to Java objects; this makes it easy for MyBatis to work with legacy databases. You can create Java domain objects following the object-oriented model, execute queries against the legacy schema, and map the results onto those objects. Embraces SQL Full-fledged ORM frameworks such as Hibernate encourage working with entity objects and generate SQL queries under the hood. Because of this SQL generation, we may not be able to take advantage of database-specific features. Hibernate allows you to execute native SQL, but that might defeat the promise of database-independent persistence. The MyBatis framework embraces SQL instead of hiding it from developers. As MyBatis won't generate any SQL and developers are responsible for preparing the queries, you can take advantage of database-specific features and prepare optimized SQL queries. Also, working with stored procedures is supported by MyBatis. Supports integration with Spring and Guice frameworks MyBatis provides out-of-the-box integration support for the popular dependency injection frameworks Spring and Guice; this further simplifies working with MyBatis. Supports integration with third-party cache libraries MyBatis has inbuilt support for caching SELECT query results within the scope of SqlSession level ResultSets. In addition to this, MyBatis also provides integration support for various third-party cache libraries, such as EHCache, OSCache, and Hazelcast.
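As a rough illustration of what the SqlSession-level caching means in practice (a sketch assuming the StudentMapper interface and the getSqlSessionFactory() helper shown earlier in this article), repeating the same select within a single session is served from the cache rather than hitting the database again:

SqlSession session = getSqlSessionFactory().openSession();
try {
    StudentMapper mapper = session.getMapper(StudentMapper.class);
    Student first = mapper.findStudentById(1);   // executes the SQL query
    Student second = mapper.findStudentById(1);  // served from the session-level cache
    System.out.println(first == second);         // typically prints true within one session
} finally {
    session.close();
}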
Better performance Performance is one of the key factors for the success of any software application. There are lots of things to consider for better performance, but for many applications, the persistence layer is a key for overall system performance. MyBatis supports database connection pooling that eliminates the cost of creating a database connection on demand for every request. MyBatis has an in-built cache mechanism which caches the results of SQL queries at the SqlSession level. That is, if you invoke the same mapped select query, then MyBatis returns the cached result instead of querying the database again. MyBatis doesn't use proxying heavily and hence yields better performance compared to other ORM frameworks that use proxies extensively. There are no one-size-fits-all solutions in software development. Each application has a different set of requirements, and we should choose our tools and frameworks based on application needs. In the previous section, we have seen various advantages of using MyBatis. But there will be cases where MyBatis may not be the ideal or best solution. If your application is driven by an object model and wants to generate SQL dynamically, MyBatis may not be a good fit for you. Also, if you want to have a transitive persistence mechanism (saving the parent object should persist associated child objects as well) for your application, Hibernate will be better suited for it. Installing and configuring MyBatis We are assuming that the JDK 1.6+ and MySQL 5 database servers have been installed on your system. The installation process of JDK and MySQL is outside the scope of this article. At the time of writing this article, the latest version of MyBatis is MyBatis 3.2.2. Even though it is not mandatory to use IDEs, such as Eclipse, NetBeans IDE, or IntelliJ IDEA for coding, they greatly simplify development with features such as handy autocompletion, refactoring, and debugging. You can use any of your favorite IDEs for this purpose. This section explains how to develop a simple Java project using MyBatis: By creating a STUDENTS table and inserting sample data By creating a Java project and adding mybatis-3.2.2.jar to the classpath By creating the mybatis-config.xml and StudentMapper.xml configuration files By creating the MyBatisSqlSessionFactory singleton class By creating the StudentMapper interface and the StudentService classes By creating a JUnit test for testing StudentService Summary In this article, we discussed MyBatis and the advantages of using MyBatis instead of plain JDBC for database access. Resources for Article: Further resources on this subject: Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article] New Features in JPA 2.0 [Article] An Introduction to Hibernate and Spring: Part 1 [Article]

Optimizing Performance

Packt
02 Jul 2013
14 min read
(For more resources related to this topic, see here.) Improving relevance and Quality Score AdWords rewards advertisers who choose relevant keywords and write compelling ads with good Quality Scores. The better your Quality Scores, the less you'll need to pay for each click, resulting in more profits for you. This ecosystem evolved to benefit users, Google, and advertisers. If the ads on Google were irrelevant and of poor quality, users would get frustrated and not click on them, and Google would lose revenue. From an advertiser's perspective, when users click on irrelevant ads, they tend to leave your website, costing you money and not contributing to your bottom line. AdWords was designed to encourage high-quality ads, and as an advertiser you'll reap many benefits from optimizing them to improve relevance. Getting ready First, check your Quality Scores to identify low quality keywords to focus on. Go to the Campaigns tab. Click on the Keywords tab. Go to Columns and choose Customize columns. From the Attributes section, choose Qual. score. Click on Apply and you will see an extra column with your Quality Scores. In your Keywords tab, sort the Qual. score column to review low Quality Score keywords. Generally, Quality Score 1 to 3 is considered low, 4 to 6 is average with room for improvement, 7 to 9 is good, and 10 is considered great. Another way you can identify low-quality keywords is with filters. Create a keyword filter to see all keywords that are below a certain Quality Score. Download this report to have an easy to refer to summary of all keywords you'll need to focus on. How to do it... To improve your Quality Scores, follow these 10 tips: Start with low Quality Score keywords that get the most impressions. This is where you'll have the biggest impact. Re-organize your keywords into more tightly themed ad groups. If a keyword has a low Quality Score, try moving it to its own ad group with more specific ad text and its own negative keywords. Your broad match keywords may be getting expanded to irrelevant variations. Try changing them to a more specific match type. Add negative keywords to eliminate irrelevant impressions and increase your CTR. For example, add free as a negative keyword to eliminate someone looking for free products and services online. Run a search terms report to see what queries are triggering clicks and get new negative keyword ideas. Some of your low quality keywords may not be relevant to your website. If a keyword has a very low Quality Score and rarely shows, it could be negatively impacting the rest of your account. Consider deleting it. Write new ads for your low Quality Score keywords, placing each keyword in your ad text, ideally in your headline. Test multiple ad versions to see which one resonates better with your customers. Experiment with different calls-to-action, promotions, and ways to describe the unique benefits of your products and services. Pause the lower performing ads in each ad group, if you are testing multiple variations to ensure that ads getting a better CTR show more often. Try implementing dynamic keyword insertion to have AdWords automatically insert your keywords into the ad titles or description lines. Choose more specific landing pages. Your landing page should be relevant to your keywords and contain your keywords on the page. If it does not, consider creating new landing pages for your most important keywords. How it works... 
Quality Score is a measure of relevance and is calculated by taking into account the following factors: Your keyword's CTR: Your CTR is like an online voting system; people in the search auction vote on how relevant your ads are with their clicks. Your display URL's CTR: Your display URL's past CTR affects your Quality Scores. How relevant your keywords are: Some keywords you choose will be more relevant to your business than others. If you sell snowboards, but would like to run on a keyword like "snow," a generic term that's not as relevant to your business, you will receive a much lower Quality Score. Pick specific keywords that clearly describe your products and stay away from general keywords that could apply to many different businesses. The relevance of your ads to your keywords: Your ads need to include your keywords in the ad text. If you have too many keywords for them all to be reflected in your ad copy, create additional, smaller ad groups. When a searched keyword is included in an ad text, that term is highlighted by Google in your ad, helping it stand out even more on the Google search results page. Landing page quality: The keywords you choose should be included in your ad text and further mirrored on your landing page. In addition to your landing page being relevant to your keywords, it also needs to be transparent and easy to navigate. Historical account performance: Advertisers who continue to choose poor quality keywords will receive low Quality Scores when adding new keywords. This system helps Google discourage advertisers who continue to choose irrelevant keywords and encourage advertisers who create relevant, quality keywords and ads. Performance in the regions you are targeting: The regions you target via your campaign settings page will affect your Quality Scores. Performance on the devices you are targeting: You may get different Quality Scores on mobile and tablet devices, if your keywords perform differently depending on device. Quality Score is dynamic and is calculated every time a search triggers your ad. In order to achieve better Quality Scores, you'll need to focus on tying together all of the various elements that comprise Quality Score. Increasing relevance helps you achieve a better ad rank and pay less for each click. The Quality Score algorithm is designed to reward relevancy and encourage advertisers to create high-quality accounts, which will in turn help you achieve better ROI with AdWords. There's more… The more general your keywords are, the more difficult it will be to obtain a high Quality Score for them, even after following all of the recommended AdWords best practices. In such cases, you'll need to weigh whether the lower Quality Score is worth the traffic and conversions you get from these keywords. Keep in mind that if you continue to choose low-quality keywords, this will hurt your overall account performance. Improving ad rank Your ad position is going to heavily impact visibility and traffic, with the top-ranked ads receiving the most clicks. Obviously, the more competitive your keywords are, the more costly it will be to have your ads show in the #1 spot. However, there are specific short- and long-term strategies that will help you obtain the best possible ad rank.
Getting ready First, isolate the keywords that are not ranked optimally: Identify keywords that are not showing on the first page of Google's search results If you have a specific ad position in mind, use filters in your Keywords tab to see which keywords are not meeting this criteria Quickly diagnose your keywords to figure out if they are showing or are restricted by Quality Scores and bids. On your Keywords tab, click on Keyword details and select Diagnose keywords. How to do it... To improve your ad rank, you can: Increase your bid Improve your Quality Score Increasing your bids is the easy fix short-term solution. However, continuing to increase how much you spend on each click when your ad rank slips is not going to be profitable in the long run. The long-term strategy to improving ad position is to raise your Quality Scores. To improve Quality Score, start with the following: Refine your campaign structure, breaking out related keywords into their own ad groups, which will help you write more relevant ads. Refine ads with more compelling ad copy, using keywords in ad text. Pause lower CTR ads if you are running multiple ad variations. Add negative keywords to weed out impressions that are not relevant and are weighing down your CTR. How it works... Your ad rank determines your ad position, or where your ads show in relation to other advertisers. The ad rank formula consists of your Quality Score and your bid: Ad Rank = Quality Score x Max CPC Ad rank is calculated each time your ad enters the ad auction. This means that for each new query your ads could appear in a different position. There's more… The higher your Quality Score, the less you'll need to bid to maintain your ad rank. This strategy helps AdWords ensure high quality ads on Google.com and encourages advertisers to optimize their accounts. Changing keyword match types Keyword match types control who sees your ads and how the keywords you have chosen are expanded to match other relevant queries. Using too many of your keywords in the most restrictive match types can limit your traffic, while using too many broad keywords can generate some or a lot of irrelevant clicks. Getting ready Determine which keywords you might want to change match types for. Here are a couple of common edits advertisers make: Broad match keywords with low Quality Scores and no conversions. Change to phrase or exact match to restrict variations. Exact match keywords with no impressions. Change to more general match type to broaden reach. How to do it... To change a single keyword's match type: Go to the Campaigns tab. Click on the Keywords tab or click on a specific campaign and ad group first. In your keyword table, click on the keyword you'd like to edit. Before you can proceed, you might need to agree to the system warning by clicking on Yes, I understand. The system warns you that if you edit a keyword, it will be deleted and treated as a new keyword in AdWords. You can check the Don't show this message again checkbox so you don't have to see this warning each time you edit a keyword. Next, you'll be able to choose a different match type from the drop-down menu. In this screenshot, we are choosing to change a broad keyword to a more specific match type. Click on Save. To change match types for multiple keywords: From your Keywords tab, check all of the keywords you'd like to edit. From the Edit drop-down menu, choose Change match type. Choose what you'd like to change your match type from and to. 
Since changing a match type deletes the old keyword and creates a new one, you have the option to create duplicate versions of the keywords you have selected and add them in the new match types. To use that option, check Duplicate keywords and change match type in duplicates. You can preview your changes before they go live by clicking on Preview changes. Click on Make changes. How it works... Changing a keyword's match type deletes the old keyword and creates a brand new keyword in your account. It also resets a keyword's history to 0, but performance data will still be available for all deleted keywords. Scheduling ads to run during key days and times Many advertisers choose to run AdWords campaigns only during hours when they have customer support available. If you have a limited budget, you might want to focus your ad budgets on days and times your customers are most likely to be looking for you. Getting ready Determine if ad scheduling is necessary and appropriate for your business. Advertisers that may benefit from this include businesses that operate primarily during specific hours. For example, a website with customer support available to take calls during business hours only, or a pizza delivery service that only delivers evenings. Review performance by day and hour of day, keeping in mind that you will see fewer clicks and impressions during less busy times, so you have focus on conversion rates and CPA instead. Some advertisers get great conversion rates during off peak hours, late at night and in the early mornings, when fewer advertisers are competing in the ad auction. Keep in mind how your customers interact with you. If you rely on calls and only have customer support during specific hours, make sure your ads are focused on when you have the proper support available. How to do it... To enable ad scheduling: Go to the Campaigns tab. Click on the specific campaign you'd like to edit. Go to the Settings tab. Select Ad schedule. Click on Edit ad schedule. Click on + Create custom schedule. From the drop-down menu, choose to create a schedule for all days, Monday through Friday, or specific days of the week, and then set your hours. Click on +Add to add additional parameters. Click on Save. How it works... Ad scheduling helps you control when your ads appear to potential customers. Ad scheduling is set at the campaign level, which means that it applies to all keywords and ads within a single campaign. By default, AdWords campaigns are set to run all days of the week and all hours of the day. There's more… When you set up ad scheduling, keep in mind your account's time zone. You can find out your time zone by going to My Account | Preferences. AdWords will also reference your time zone as you create a custom schedule for each campaign. You cannot change your time zone. Expanding your keyword list Expanding your keywords will be one of your main strategies to increase clicks as well as conversions. Just as markets evolve and search patterns change, your keywords also need to be updated in order not to become stagnant. Here we will discuss several tools you can use to build up and refresh your keyword list. Getting ready Review your website and compare your list of products and services to your AdWords account. Are your current keywords covering all of the categories you specialize in? Are there other ways to describe some of your key offerings? Who are your main competitors and are they doing PPC? How to do it... To expand your keyword list, try one of the following strategies. 
Automated keyword suggestions To see automated keyword ideas relevant to your website, follow these steps: Click on the Campaigns tab. Go into a specific campaign and ad group. Click on + Add keywords above your ad group's current keyword summary. AdWords will suggest new sample keywords based on a scan of your website grouped into related categories. Click to expand each category and review the suggested keywords. If you like a keyword, click on Add to move it to the Add keywords box. Do not simply add all of the automated suggestions, as not all of them will be specific enough. You as a business owner know your audience best and should pick and choose only the keywords that are the most relevant. Make sure that you are not adding keywords that may be already present in your other campaigns or ad groups. Click on Save after adding all of the relevant keywords. Search terms report Review your search terms report regularly and add any relevant keywords that resulted in clicks and conversions. Click on Add as keyword after viewing your search terms to add them to your account. Competitor keywords Use websites such as spyfu.com to see what keywords your competitors' ads are appearing on and to download their keyword lists. Enter a competitor's URL into the search box to uncover profitable keywords you missed. You can download a competitor's full keyword list, sort, and filter it, or export it to an AdWords-friendly format. The tool can even organize a domain's keywords into targeted ad groups so you have less manual work to do. Google's keyword tool In addition to entering your own domain into Google's keyword tool, try typing in a competitor's website and see what keywords are being recommended. How it works... Adding new relevant keywords to your AdWords account will help drive more impressions and clicks. With new and unique keywords, you can capitalize on previously untapped opportunities to drive new leads and sales.

Hubs

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.) Moving up one level While PersistentConnection seems very easy to work with, it is the lowest level in SignalR. It does provide the perfect abstraction for keeping a connection open between a client and a server, but that's just about all it does provide. Working with different operations is not far from how you would deal with things in a regular socket connection, where you basically have to parse whatever is coming from a client and figure out what operation is being asked to be performed based on the input. SignalR provides a higher level of abstraction that removes this need and you can write your server-side code in a more intuitive manner. In SignalR, this higher level of abstraction is called a Hub. Basically, a Hub represents an abstraction that allows you to write classes with methods that take different parameters, as you would with any API in your application, and then makes it completely transparent on the client—at least for JavaScript. This resembles a concept called Remote Procedure Call (RPC), with many incarnations of it out there. For our chat application at this stage, we basically just want to be able to send a message from a client to the server and have it send the message to all of the other clients connected. To do this, we will now move away from the PersistentConnection and introduce a new class called Hub using the following steps: First, start off by deleting the ChatConnection class from your Web project. Now we want to add a Hub implementation instead. Right-click on the SignalRChat project and select Add | New Item. In the dialog, choose Class and give it the name Chat.cs. This is the class that will represent our Hub. Make it inherit from Hub: public class Chat : Hub Add the necessary import statement at the top of the file: using Microsoft.AspNet.SignalR.Hubs; In the class we will add a simple method that the clients will call to send a message. We call the method Send and take one parameter into it; a string which contains the message being sent by the client: public void Send(string message){} From the base class of Hub, we get a few things that we can use. For now we'll be using the Clients property to broadcast to all other clients connected to the Hub. On the Clients property, you'll find an All property which is dynamic; on this we can call anything and the client will just have to subscribe to the method we call, if the client is interested. It is possible to change the name of the Hub to not be the same as the class name. An attribute called HubName() can be placed in front of the class to give it a new name. The attribute takes one parameter; the name you want for your Hub. Similarly, for methods inside your Hub, you can use an attribute called HubMethodName() to give the method a different name. The next thing we need to do is to go into the Global.asax.cs file, and make some changes. Firstly, we remove the .MapConnection(…) line and replace it with a .MapHubs() line. This will make all Hubs in your application automatically accessible from a default URL. All Hubs in the application will be mapped to /signalr/<name of hub>; so more concretely the path will be: http://<your-site>:port/signalr/<name of hub>. We're going with the defaults for now. It should cover the needs on the server-side code. Moving into the JavaScript/HTML part of things, SignalR comes with a JavaScript proxy generator that can generate JavaScript proxies from your Hubs mapped using .MapHubs().
This is also subject to the same default URL but will follow the configuration given to .MapHubs(). We will need to include a script reference in the HTML code right after the line that references the SignalR JavaScript file. We add the following: <script src="/signalr/hubs" type="text/javascript"></script> This will include the generated proxies for our JavaScript client. What this means is that we get whatever is exposed on a Hub generated for us and we can start using it straight away. Before we get started with the concrete implementation for our web client, we can remove all of the custom code relating to PersistentConnection altogether. We then want to get to our proxy, and work with it. It sits on the connection object that SignalR adds to jQuery. So, for us, that means an object called chat will be there. On the chat object sit two important properties, one representing the client functions that get invoked when the server "calls" something on the client. And the second one is the property representing the server and all of the functionalities that we can call from the client. Let's start by hooking up the client and its methods. Earlier we implemented in the Hub sitting on the server a call to addMessage() with the message. This can be added to the client property inside the chat Hub instance: Basically, whenever the server calls that method, our client counterpart will be called. Now what we need to do is to start the Hub and print out when we are connected to the chat window:

$.connection.hub.start().done(function() {
    $("#chatWindow").val("Connected\n");
});

Then we need to hook up the click event on the button and call the server to send messages. Again, we use the server property sitting on the chat hub instance in the client, which corresponds to a method on the Hub:

$("#sendButton").click(function() {
    chat.server.send($("#messageTextBox").val());
    $("#messageTextBox").val("");
});

You should now have something that looks as follows: You may have noticed that the send function on the client is in camelCase and the server-side C# code has it in PascalCase. SignalR automatically translates between the two case types. In general, camelCase is the preferred and the most broadly used casing style in JavaScript—while PascalCase is the most used in C#. You should now have a full sample in HTML/JavaScript that looks like the following screenshot: Running it should produce the same result as before, with the exception of the .NET terminal client, which also needs alterations. In fact, let's just get rid of the code inside Program.cs and start over. The client API is a bit rougher in C#; this comes from the more statically typed nature of C#. Sure, it is possible—technically—to get pretty close to what has been done in JavaScript, but it hasn't been a focal point for the SignalR team. Basically, we need a different connection than the PersistentConnection class. We'll be needing a HubConnection class. From the HubConnection class we can create a proxy for the chat Hub: As with JavaScript, we can hook up client-side methods that get invoked when the server calls any client. Although as mentioned, not as elegantly as in JavaScript. On the chat Hub instance, we get a method called On(), which can be used to specify a client-side method corresponding to the client call from the server. So we set addMessage to point to a method which, in our case, is for now just an inline lambda expression.
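A minimal sketch of that .NET client setup (the base URL is just an assumed example, and this presumes the Microsoft.AspNet.SignalR.Client package and the default /signalr endpoint mapped earlier) could look like the following; the lambda passed to On() plays the role of the addMessage client method:

var hubConnection = new HubConnection("http://localhost:8080/");
var chat = hubConnection.CreateHubProxy("Chat");
// Invoked whenever the server calls addMessage on its clients
chat.On<string>("addMessage", message => Console.WriteLine(message));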
Now we need, as with PersistentConnection, to start the connection and wait until it's connected: hubConnection.Start().Wait(); Now we can get user input and send it off to the server. Again, as with client methods called from the server, we have a slightly different approach than with JavaScript; we call the Invoke method giving it the name of the method to call on the server and any arguments. The Invoke() method takes a params argument, so you can specify any number of arguments which will then be sent to the server: The finished result should look something like the following screenshot, and now work in full correspondence with the JavaScript chat: Summary Exposing our functionality through Hubs makes it easier to consume on the client, at least on JavaScript based clients, due to the proxy generation. It basically brings it to the client as if it was on the client. With the Hub you also get the ability to call the client from the server in a more natural manner. One of the things often important for applications is the ability to filter out messages so you only get messages relevant for your context. Resources for Article: Further resources on this subject: Working with Microsoft Dynamics AX and .NET: Part 1 [Article] Working with Microsoft Dynamics AX and .NET: Part 2 [Article] Deploying .NET-based Applications on to Microsoft Windows CE Enabled Smart Devices [Article]

Python Libraries for Geospatial Development

Packt
17 Jun 2013
14 min read
(For more resources related to this topic, see here.) Reading and writing geospatial data While you could in theory write your own parser to read a particular geospatial data format, it is much easier to use an existing Python library to do this. We will look at two popular libraries for reading and writing geospatial data: GDAL and OGR. GDAL/OGR Unfortunately, the naming of these two libraries is rather confusing. The Geospatial Data Abstraction Library (GDAL) was originally just a library for working with raster geospatial data, while the separate OGR library was intended to work with vector data. However, the two libraries are now partially merged, and are generally downloaded and installed together under the combined name of "GDAL". To avoid confusion, we will call this combined library GDAL/OGR and use "GDAL" to refer to just the raster translation library. A default installation of GDAL supports reading 116 different raster file formats, and writing to 58 different formats. OGR by default supports reading 56 different vector file formats, and writing to 30 formats. This makes GDAL/OGR one of the most powerful geospatial data translators available, and certainly the most useful freely-available library for reading and writing geospatial data. GDAL design GDAL uses the following data model for describing raster geospatial data: Let's take a look at the various parts of this model: A dataset holds all the raster data, in the form of a collection of raster "bands", along with information that is common to all these bands. A dataset normally represents the contents of a single file. A raster band represents a band, channel, or layer within the image. For example, RGB image data would normally have separate bands for the red, green, and blue components of the image. The raster size specifies the overall width and height of the image, in pixels. The georeferencing transform converts from (x, y) raster coordinates into georeferenced coordinates—that is, coordinates on the surface of the earth. There are two types of georeferencing transforms supported by GDAL: affine transformations and ground control points. An affine transformation is a mathematical formula allowing the following operations to be applied to the raster data: offsetting, scaling, and shearing the image data. More than one of these operations can be applied at once; this allows you to perform sophisticated transforms such as rotations. Affine transformations are sometimes referred to as linear transformations. Ground Control Points (GCPs) relate one or more positions within the raster to their equivalent georeferenced coordinates, as shown in the following figure: Note that GDAL does not translate coordinates using GCPs—that is left up to the application, and generally involves complex mathematical functions to perform the transformation. The coordinate system describes the georeferenced coordinates produced by the georeferencing transform. The coordinate system includes the projection and datum, as well as the units and scale used by the raster data. The metadata contains additional information about the dataset as a whole. Each raster band contains the following (among other things): The band raster size: This is the size (number of pixels across and number of lines high) for the data within the band. This may be the same as the raster size for the overall dataset, in which case the dataset is at full resolution, or the band's data may need to be scaled to match the dataset. Some band metadata providing extra information specific to this band.
A color table describing how pixel values are translated into colors.

The raster data itself.

GDAL provides a number of drivers which allow you to read (and sometimes write) various types of raster geospatial data. When reading a file, GDAL selects a suitable driver automatically based on the type of data; when writing, you first select the driver and then tell the driver to create the new dataset you want to write to.

GDAL example code

A Digital Elevation Model (DEM) file contains height values. In the following example program, we use GDAL to calculate the average of the height values contained in a sample DEM file. In this case, we use a DEM file downloaded from the GLOBE elevation dataset:

from osgeo import gdal, gdalconst
import struct

dataset = gdal.Open("data/e10g")
band = dataset.GetRasterBand(1)

fmt = "<" + ("h" * band.XSize)

totHeight = 0
for y in range(band.YSize):
    scanline = band.ReadRaster(0, y, band.XSize, 1,
                               band.XSize, 1, band.DataType)
    values = struct.unpack(fmt, scanline)
    for value in values:
        if value == -500:
            # Special height value for the sea -> ignore.
            continue
        totHeight = totHeight + value

average = totHeight / (band.XSize * band.YSize)
print "Average height =", average

As you can see, this program obtains the single raster band from the DEM file, and then reads through it one scanline at a time. We then use the struct standard Python library module to read the individual height values out of the scanline. Because the GLOBE dataset uses a special height value of -500 to represent the ocean, we exclude these values from our calculations. Finally, we use the remaining height values to calculate the average height, in meters, over the entire DEM data file.

OGR design

OGR uses the following model for working with vector-based geospatial data. Let's take a look at this design in more detail:

The data source represents the file you are working with—though it doesn't have to be a file. It could just as easily be a URL or some other source of data.

The data source has one or more layers, representing sets of related data. For example, a single data source representing a country may contain a "terrain" layer, a "contour lines" layer, a "roads" layer, and a "city boundaries" layer. Other data sources may consist of just one layer.

Each layer has a spatial reference and a list of features. The spatial reference specifies the projection and datum used by the layer's data.

A feature corresponds to some significant element within the layer. For example, a feature might represent a state, a city, a road, an island, and so on.

Each feature has a list of attributes and a geometry. The attributes provide additional meta-information about the feature. For example, an attribute might provide the name for a city's feature, its population, or the feature's unique ID used to retrieve additional information about the feature from an external database.

Finally, the geometry describes the physical shape or location of the feature. Geometries are recursive data structures that can themselves contain sub-geometries—for example, a "country" feature might consist of a geometry that encompasses several islands, each represented by a sub-geometry within the main "country" geometry.

The geometry design within OGR is based on the Open Geospatial Consortium's "Simple Features" model for representing geospatial geometries. For more information, see http://www.opengeospatial.org/standards/sfa.
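To make this recursive geometry structure a little more concrete, here is a small sketch (not taken from the article; the WKT coordinates are invented purely for illustration) that builds a multi-polygon "country" geometry out of two island polygons and then walks through its sub-geometries:

from osgeo import ogr

# Two made-up island polygons, created from Well-Known Text.
island1 = ogr.CreateGeometryFromWkt("POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))")
island2 = ogr.CreateGeometryFromWkt("POLYGON((2 2, 3 2, 3 3, 2 3, 2 2))")

# A multi-polygon acts as the containing "country" geometry.
country = ogr.Geometry(ogr.wkbMultiPolygon)
country.AddGeometry(island1)
country.AddGeometry(island2)

print country.GetGeometryName()   # MULTIPOLYGON
print country.GetGeometryCount()  # 2 sub-geometries
for i in range(country.GetGeometryCount()):
    print country.GetGeometryRef(i).GetGeometryName()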
Like GDAL, OGR also provides a number of drivers which allow you to read (and sometimes write) various types of vector-based geospatial data. When reading a file, OGR selects a suitable driver automatically; when writing, you first select the driver and then tell the driver to create the new data source to write to.

OGR example code

The following example program uses OGR to read through the contents of a shapefile, printing out the value of the NAME attribute for each feature, along with the geometry type:

from osgeo import ogr

shapefile = ogr.Open("TM_WORLD_BORDERS-0.3.shp")
layer = shapefile.GetLayer(0)

for i in range(layer.GetFeatureCount()):
    feature = layer.GetFeature(i)
    name = feature.GetField("NAME")
    geometry = feature.GetGeometryRef()
    print i, name, geometry.GetGeometryName()

Documentation

GDAL and OGR are well documented, but with a catch for Python programmers. The GDAL/OGR library and associated command-line tools are all written in C and C++. Bindings are available which allow access from a variety of other languages, including Python, but the documentation is all written for the C++ version of the libraries. This can make reading the documentation rather challenging—not only are all the method signatures written in C++, but the Python bindings have changed many of the method and class names to make them more "pythonic".

Fortunately, the Python libraries are largely self-documenting, thanks to all the docstrings embedded in the Python bindings themselves. This means you can explore the documentation using tools such as Python's built-in pydoc utility, which can be run from the command line like this:

% pydoc -g osgeo

This will open up a GUI window allowing you to read the documentation using a web browser. Alternatively, if you want to find out about a single method or class, you can use Python's built-in help() command from the Python command line, like this:

>>> import osgeo.ogr
>>> help(osgeo.ogr.DataSource.CopyLayer)

Not all the methods are documented, so you may need to refer to the C++ docs on the GDAL website for more information, and some of the docstrings are copied directly from the C++ documentation—but in general the documentation for GDAL/OGR is excellent, and should allow you to quickly come up to speed using this library.

Availability

GDAL/OGR runs on modern Unix machines, including Linux and Mac OS X, as well as most versions of Microsoft Windows. The main website for GDAL can be found at:

http://gdal.org

The main website for OGR is at:

http://gdal.org/ogr

To download GDAL/OGR, follow the Downloads link on the main GDAL website. Windows users may find the FWTools package useful, as it provides a wide range of geospatial software for win32 machines, including GDAL/OGR and its Python bindings. FWTools can be found at:

http://fwtools.maptools.org

For those running Mac OS X, prebuilt binaries can be obtained from:

http://www.kyngchaos.com/software/frameworks

Make sure that you install GDAL Version 1.9 or later, as you will need this version to work through the examples in this book.

Being an open source package, the complete source code for GDAL/OGR is available from the website, so you can compile it yourself. Most people, however, will simply want to use a prebuilt binary version.

Dealing with projections

One of the challenges of working with geospatial data is that geodetic locations (points on the Earth's surface) are mapped into a two-dimensional Cartesian plane using a cartographic projection.
Whenever you have some geospatial data, you need to know which projection that data uses. You also need to know the datum (model of the Earth's shape) assumed by the data. A common challenge when dealing with geospatial data is that you have to convert data from one projection/datum to another. Fortunately, there is a Python library pyproj which makes this task easy. pyproj pyproj is a Python "wrapper" around another library called PROJ.4. "PROJ.4" is an abbreviation for Version 4 of the PROJ library. PROJ was originally written by the US Geological Survey for dealing with map projections, and has been widely used in geospatial software for many years. The pyproj library makes it possible to access the functionality of PROJ.4 from within your Python programs. Design The pyproj library consists of the following pieces: pyproj consists of just two classes: Proj and Geod. Proj converts from longitude and latitude values to native map (x, y) coordinates, and vice versa. Geod performs various Great Circle distance and angle calculations. Both are built on top of the PROJ.4 library. Let's take a closer look at these two classes. Proj Proj is a cartographic transformation class, allowing you to convert geographic coordinates (that is, latitude and longitude values) into cartographic coordinates (x, y values, by default in meters) and vice versa. When you create a new Proj instance, you specify the projection, datum, and other values used to describe how the projection is to be done. For example, to use the Transverse Mercator projection and the WGS84 ellipsoid, you would do the following: projection = pyproj.Proj(proj='tmerc', ellps='WGS84') Once you have created a Proj instance, you can use it to convert a latitude and longitude to an (x, y) coordinate using the given projection. You can also use it to do an inverse projection—that is, converting from an (x, y) coordinate back into a latitude and longitude value again. The helpful transform() function can be used to directly convert coordinates from one projection to another. You simply provide the starting coordinates, the Proj object that describes the starting coordinates' projection, and the desired ending projection. This can be very useful when converting coordinates, either singly or en masse. Geod Geod is a geodetic computation class, which allows you to perform various Great Circle calculations. We looked at Great Circle calculations earlier, when considering how to accurately calculate the distance between two points on the Earth's surface. The Geod class, however, can do more than this: The fwd() method takes a starting point, an azimuth (angular direction) and a distance, and returns the ending point and the back azimuth (the angle from the end point back to the start point again):   The inv() method takes two coordinates and returns the forward and back azimuth as well as the distance between them:   The npts() method calculates the coordinates of a number of points spaced equidistantly along a geodesic line running from the start to the end point:   When you create a new Geod object, you specify the ellipsoid to use when performing the geodetic calculations. The ellipsoid can be selected from a number of predefined ellipsoids, or you can enter the parameters for the ellipsoid (equatorial radius, polar radius, and so on) directly. Example code The following example starts with a location specified using UTM zone 17 coordinates. 
Using two Proj objects to define the UTM zone 17 and lat/long projections, it translates this location's coordinates into latitude and longitude values:

import pyproj

UTM_X = 565718.5235
UTM_Y = 3980998.9244

srcProj = pyproj.Proj(proj="utm", zone="17", ellps="clrk66", units="m")
dstProj = pyproj.Proj(proj="longlat", ellps="WGS84", datum="WGS84")

long,lat = pyproj.transform(srcProj, dstProj, UTM_X, UTM_Y)

print "UTM zone 17 coordinate (%0.4f, %0.4f) = %0.4f, %0.4f" % (UTM_X, UTM_Y, lat, long)

Continuing on with this example, let's take the calculated lat/long values and, using a Geod object, calculate another point 10 kilometers northeast of that location:

angle = 45 # 45 degrees = northeast.
distance = 10000

geod = pyproj.Geod(ellps="WGS84")
long2,lat2,invAngle = geod.fwd(long, lat, angle, distance)

print "%0.4f, %0.4f is 10km northeast of %0.4f, %0.4f" % (lat2, long2, lat, long)

Documentation

The documentation available on the pyproj website, and in the docs directory provided with the source code, is excellent as far as it goes. It describes how to use the various classes and methods, what they do, and what parameters are required. However, the documentation is rather sparse when it comes to the parameters used when creating a new Proj object. As the documentation says:

A Proj class instance is initialized with proj map projection control parameter key/value pairs. The key/value pairs can either be passed in a dictionary, or as keyword arguments, or as a proj4 string (compatible with the proj command).

The documentation does provide a link to a website listing a number of standard map projections and their associated parameters, but understanding what these parameters mean generally requires you to delve into the PROJ documentation itself. The documentation for PROJ is dense and confusing, even more so because the main manual is written for PROJ Version 3, with addendums for later versions. Attempting to make sense of all this can be quite challenging.

Fortunately, in most cases you won't need to refer to the PROJ documentation at all. When working with geospatial data using GDAL or OGR, you can easily extract the projection as a "proj4 string" which can be passed directly to the Proj initializer. If you want to hardwire the projection, you can generally choose a projection and ellipsoid using the proj="..." and ellps="..." parameters, respectively. If you want to do more than this, though, you will need to refer to the PROJ documentation for more details.

To find out more about PROJ, and to read the original documentation, you can find everything you need at: http://trac.osgeo.org/proj
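To round out the Geod discussion above, here is a further sketch (not part of the article's example; the coordinates are merely illustrative values for London and Paris) showing the inv() method, which returns the forward azimuth, back azimuth, and distance between two points:

import pyproj

geod = pyproj.Geod(ellps="WGS84")

# Approximate longitude/latitude for London and Paris (illustrative only).
lon1, lat1 = -0.1278, 51.5074
lon2, lat2 = 2.3522, 48.8566

fwdAzimuth, backAzimuth, distance = geod.inv(lon1, lat1, lon2, lat2)
print "Distance = %0.1f km, forward azimuth = %0.1f degrees" % (distance / 1000, fwdAzimuth)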

Creating a pop-up menu

Packt
14 Jun 2013
7 min read
(For more resources related to this topic, see here.)

How to do it...

Open the application model (Application.e4xmi) and go to Application | Windows | Trimmed Window | Controls | Perspective Stack | Perspective | Controls | PartSashContainer | Part (Code Snippets).

Expand the Code Snippet part and right-click on the Menus node. Select Add child | Popup Menu.

Set the ID of the pop-up menu to codesnippetapp.snippetlist.popupmenu.

Right-click on the newly added pop-up menu and select Add Child | DirectMenuItem.

Set Label of the menu item as New Snippet.

Click on the Class URI link. This opens the New Handler wizard.

Click on the Browse button next to the Package textbox and select codesnippetapp.handlers from the list of packages displayed.

Set Name as NewSnippetMenuHandler and click on the Finish button. The new class file is opened in the editor. Go back to the application model. Refer to the following screenshot:

Right-click on the Popup Menu node and add another pop-up menu item with the Delete label and the DeleteSnippetMenuHandler class.

Now we need to register this pop-up menu with the TableViewer class in the Code Snippets part. Open the class SnippetListView (you can find this class in the codesnippetapp.views package). We will have to register the pop-up menu using the menu service. Add the EMenuService argument to the postConstruct method:

@PostConstruct
public void postConstruct(Composite parent, IEclipseContext ctx, EMenuService menuService)

Append the following code to the postConstruct method:

menuService.registerContextMenu(snippetsList.getTable(),
    "codesnippetapp.snippetlist.popupmenu");

Run the application. Right-click in the TableViewer on the left-hand side. You should see a pop-up menu with two options: New Snippet and Delete.

How it works...

To add a pop-up menu, you first need to create a menu in the application model for the part in which you want to display the menu. In this recipe, we added a menu to the Code Snippets part. Then, you add menu items. In this recipe, we added two DirectMenuItems. For the main menu bar in the task of adding menu and toolbar buttons, we added HandledMenuItems, because we wanted to share the handler between the toolbar button and the menu item. However, in this case, we need only one implementation of the options in the pop-up menu, so we created DirectMenuItems. But, if you want to add keyboard shortcuts for the menu options, then you may want to create HandledMenuItems instead of DirectMenuItems.

For each menu item, you set a class URI that is a handler class for the menu item. The next step is to register this pop-up menu with a UI control. In our application, we want to associate this menu with the TableViewer class that displays a list of snippets. To register a menu with any UI control, you need to get an instance of EMenuService. We obtained this instance in the postConstruct method of SnippetListView using DI—we added the EMenuService argument to the postConstruct method. Then, we used registerContextMenu of EMenuService to associate the pop-up menu with the TableViewer class. registerContextMenu takes the UI control instance and the menu ID as arguments.
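For reference, a handler class such as the NewSnippetMenuHandler generated by the wizard in the steps above is a plain class with an @Execute-annotated method. The body shown here is only a placeholder sketch, since the actual snippet-creation logic is not part of this recipe:

package codesnippetapp.handlers;

import org.eclipse.e4.core.di.annotations.Execute;

public class NewSnippetMenuHandler {

    @Execute
    public void execute() {
        // Invoked when the "New Snippet" pop-up menu item is selected.
        // The real implementation would create a new snippet in the
        // application's snippet model (not shown in this recipe).
    }
}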
There's more...

The Delete option in our pop-up menu makes sense only when you click on a snippet. So, when you right-click on an area of the TableViewer that does not have any snippet at that location, the Delete option should not be displayed, only the New Snippet option. This can be done using core expressions. You can find more information about core expressions at http://wiki.eclipse.org/Platform_Expression_Framework, and http://wiki.eclipse.org/Command_Core_Expressions.

We will use a core expression to decide if the Delete menu option should be displayed. We will add a mouse listener to the TableViewer class. If the mouse was clicked on a snippet, then we will add the SnippetData to the IEclipseContext with the snippet_at_mouse_click key. If there is no snippet at the location, then we will remove this key from the IEclipseContext. Then, we will add a core expression to check if the snippet_at_mouse_click variable is of type codesnippetapp.data.SnippetData. We will then associate this core expression with the Delete menu item in the application model.

Adding a mouse listener to the TableViewer class

Create a static field in the SnippetListView class:

private static String SNIPPET_AT_MOUSE_CLICK = "snippet_at_mouse_click";

Make the ctx argument of the postConstruct method final. Append the following code in the postConstruct method:

// Add mouse listener to check if there is a snippet at mouse click
snippetsList.getTable().addMouseListener(new MouseAdapter() {
    @Override
    public void mouseDown(MouseEvent e) {
        if (e.button == 1)
            // Ignore the left mouse button
            return;
        // Get snippet at the location of mouse click
        TableItem itemAtClick = snippetsList.getTable().getItem(new Point(e.x, e.y));
        if (itemAtClick != null) {
            // Add selected snippet to the context
            ctx.set(SNIPPET_AT_MOUSE_CLICK, itemAtClick.getData());
        } else {
            // No snippet at the mouse click. Remove the variable
            ctx.remove(SNIPPET_AT_MOUSE_CLICK);
        }
    }
});

Creating the core expression

Carry out the following steps:

Open plugin.xml and go to the Dependencies tab. Add org.eclipse.core.expressions as a required plugin.

Go to the Extensions tab. Add the org.eclipse.core.expressions.definitions extension. This will add a new definition.

Change the ID of the definition to CodeSnippetApp.delete.snippet.expression.

Right-click on the definition and select New | With. Change the name of the variable to snippet_at_mouse_click. This is the same variable name we set in the SnippetListView class.

Right-click on the With node, and select the New | instanceof option. Set the value to codesnippetapp.data.SnippetData. This core expression will be true when the type of (instanceof) the snippet_at_mouse_click variable is codesnippetapp.data.SnippetData.

Click on plugin.xml and verify that the core expression definition is as follows:

<extension point="org.eclipse.core.expressions.definitions">
    <definition id="CodeSnippetApp.delete.snippet.expression">
        <with variable="snippet_at_mouse_click">
            <instanceof value="codesnippetapp.data.SnippetData">
            </instanceof>
        </with>
    </definition>
</extension>

Setting the core expression for the menu item

Open the application model (Application.e4xmi) and go to the DirectMenuItem for the Delete pop-up menu. Right-click on the menu item and select Add child | VisibleWhen Core Expression. This will add a Core Expression child node.

Click on the Core Expression node and then on the Find button next to the Expression Id textbox, and select CodeSnippetApp.delete.snippet.expression from the list. This is the ID of the core expression definition we added in plugin.xml.

Run the application. When you right-click on the Snippets List view, which does not have any snippet at this point, you should see only the New Snippet menu option.

Summary

In this task, we created a pop-up menu that is displayed when you right-click in the snippets list.
If no snippet is selected at a location where you right-click, then it displays a pop-up menu with a single option to add a snippet. If there is a snippet at the location, then we display a menu that has options to delete the snippet and add a snippet. Resources for Article : Further resources on this subject: Installing Alfresco Software Development Kit (SDK) [Article] JBoss AS plug-in and the Eclipse Web Tools Platform [Article] Deployment of Reports with BIRT [Article]

Top features you'll want to know about

Packt
12 Jun 2013
10 min read
(For more resources related to this topic, see here.) 1 – Track changes and production revisions (for Adobe Story Plus only) It is important to keep track of any changes you or someone else may make to a document. It's easy to save over the previous version with the new one, but what if you want to compare the previous and current versions to one another? You are able to track any and all revisions through this feature. Called revision styles, all revisions become associated with a unique style for easier identification. Track changes Before moving to revisions, we need to be able to know how to insert and track changes made to a document. This is how it is done: When in the AUTHORING view, in the document, go to the Review tab in the top tool bar. Check Start Tracking Changes to enable it, and uncheck it to disable: When it is checked, any new content you add will be in red text and highlighted: There is a speech bubble on the right-hand side of the addition, which allows for the person making the change to add a comment. Click on the icon to open the comment window: When you place the cursor over the inserted change, a new bubble will appear telling you who made the change and when. On the far right-hand side, you can either accept or reject the change: Production revisions You have to be in the Authoring view in order to make a revision. Production revisions highlight certain pages where changes have been made. The script becomes locked and all changes are highlighted in the revision style you choose. On the title page, a note is inserted on the bottomright corner giving the date of the last revision. This is also done on the footer of every page where there is a change. The color changes and borders will not be exported in a PDF. Before starting a revision, make sure that you have done the following: Act on all tracked changes in your document by accepting or rejecting them. Disable track changes after completing accepting/rejecting tracked changes. Now, after completing the preceding steps, follow these steps: Select Production | Start Revision. In the Active Revision drop-down, choose a revision style: This style will be used for the markups in the revision. Make sure that you haven't already used the chosen style for a previous revision document. Click Start Revision. Creating a revision style Follow these steps to create your revision style: Select Production | Manage Revisions. Click on the + icon. Enter a name for the style The following options can be tailored according to your needs: Revision Color: Used to choose a color from the color menu. This color will then be applied to all the revised text and the border of the individual pages that contain the revisions. The border color will not be displayed in a printed or exported document. Mark: The default mark is displayed on the right of the revised content. You can change this mark by choosing any symbol of your liking. Date: The revision date. Revision Text Style: The chosen formatting option is used to display revised text. Click Done and your new style will be available from now on. Deleting or modifying existing revisions Let's take a look at how we can delete or modify already existing revisions: Select the style that you want to delete or modify. 
You can do either of the following: Click on the - sign to delete the style To modify, simply edit its values and click Done Display options for revisions Adobe Story also provides some display options for revisions, here's how we can set them up: Select Production | Manage Revisions. In Viewing Options, the following display options can be personalized according to your needs: Show Markup For: The options are Select All or Active. This will let you choose whether you want to have all the markups shown for all revisions or just the active ones. Mark Position: The mark you set in Revision Style is set to the right-hand side by default; you can also change its position. Show Date In Script Header and Footer: If you do not want to display the date, disable this option. Locking or unlocking scene numbers When you lock scene numbers, you prevent the renumbering of existing scenes whenever a new scene is added during production revisions. When you do insert a new scene, Adobe Story will apply a number to the scene preceding it. For example, if you add a new scene in between scene 4 and 5, it will be numbered 4A. Here's how we can lock and unlock scene numbers: Select Production | Manage Scene Numbers. Select the Keep Existing Scene Number option to lock all current scenes. To unlock, deselect Keep Existing Scene Number. Omitting or unomitting scenes Adobe Story allows you to remove a scene without affecting the scene numbers remaining in the script. The word OMITTED will appear at the location of the scene you've chosen to omit. You can, at a later date, unomit the scene if you chose and recover the content. To omit a scene, simply place your cursor on the scene and then select Production | Omit Scene. To unomit a scene, place your cursor on the omitted scene and then select Production | Unomit Scene. Printing production revisions If you want to print your revisions, it is easy to do so; just follow these steps: Select File | Print. Choose any one of the following option: Entire Script All ChangedPages Revision To print in color, select the Print Revised Text In Color option. Identifying the total number of revised pages Here's how we can identify the total number of revised pages: Select Production | Manage Revisions. In Viewing Options, select All and click Done. 2 – Tagging Along with the advent of the "cloud" concept, tagging individual words to content has become something of a norm in today's online society. Adobe Story has incorporated a similar system. With tagging, you can tag words and phrases in your scripts automatically, or manually by using the Tagging Panel option. For example, "boom" can be tagged as "sound effect". Tagging panel To open the panel, you must be first in the AUTHORING view. Select View | Tagging Panel. The panel will open on the right-hand side of the document. To add tags to the panel, enter the name of the tag in the field next to the Create button: To delete a tag from the tagging panel, select the tag and then click on the Delete this Tag link: Tagging automatically You must be in the online mode for the Autotagging feature to work. It will not work in the offline mode. The Autotagging feature is only available for English scripts. This is how it's done: Select File | Tagging | Start Autotagging. Or select it from the drop-down menu option in the Tagging panel: Once you enable Autotagging, the script will be locked. 
You will have to wait until the process has completed before being able to edit the document; the following screenshot shows the message being displayed: Tagging manually Select View | Tagging Panel. Choose the word or phrase you would like to tag. If what you're choosing has already been tagged, it will be appended to the tag list for the word or phrase. Select a tag from Taglist in the Tagging panel. Do either of the following: Select the Show In Bold option if you want the tagged words or phrases to be displayed in bold. Select the Show Color option if you would prefer Story to display the selected color (you can choose a color for each tag with a color palette on the righthand of the tag in the Taglist panel) to the tag: Finding words or phrases by their specific tag Follow these steps to search for words or phrases with a specific tag: Disable visibility for all tags. Enable visibility for the tag that you want to search. To do this, simply click on the eye icon on the left-hand side of the tagged word: Use the arrow icons in the Tagging panel in order to navigate through the tags in the script. Only the visible tags will be shown. Viewing tags associated with a word or phrase To view tags associated with a word or a phrase, you can do either of the following: Select the word/phrase. The tags associated with the word/phrase will be highlighted in the Tagging panel. Scroll through the panel in order to view the tags associated with it: Move your mouse over the word/phrase. The information will be displayed in the tool tip: Hold Ctrl (Cmd on Mac) and double-click to view the associated tags: Removing tags Over the word you wish to edit, hold Ctrl (Cmd on Mac) and double-click to bring up the Applied Tags panel. Click on the Remove This Tag icon for the chosen tag. Click Close. To remove all the tags, select File | Tagging | Remove All Tags. To remove all the manual tags, select File | Tagging | Remove All Manual Tags. To remove all the auto tags, select File | Tagging | Remove Auto Tags. 3 – Application for iOS-based devices Adobe Story has an application for iOS-based devices. This application is available currently only in English. It allows you to read and review Adobe Story scripts and documents. It does not support AV (Audio Visual) scripts, Multicolumn scripts, and TV scripts as of yet. Logging in Before you start, make sure you have registered yourself with Adobe Story using the web or desktop application. Use the same combination of e-mail address and password used on the full application with the iOS version. Accept the TOU before attempting to log in. If you want to log out, select Account and then select Log Out. Viewing documents, scene outline, and scenes The ten most recently read files will be displayed upon logging in to the Adobe Story application. To view all the documents, click Categories. To view the scene outline, select the script in the Recent Files or Categories view. To view the contents of a scene, select the scene in the scene outline. Use the arrow icons to move among the scenes. To view Notifications, in the Recent Files view, select Notifications. A list of notifications is displayed. Highlighted notifications are for new ones. Reviewing scripts As long as you have author, co-author, or reviewer permissions, you will be able to review a script. Open the script and navigate to the scene. Do one of the following: Double-click to select the content that you want to comment on. Click on Comment, or on the Add Comment button. 
To comment on content that has already been commented on, enter your comment in the Write New Comment textbox. To navigate comments, use the arrow icons. Click Post.

Viewing or deleting comments

In the scene containing the comments, select Comments. The comment list is displayed. The paragraph containing the comment is highlighted when you select a comment in the list. Select Delete after clicking on the desired comment.

Summary

In this article we learned about three of Adobe Story's key features. We learned about track changes and production revisions, about tagging, and about using Adobe Story on iOS-based devices. There is a whole lot more to learn as far as the features of Adobe Story are concerned.

Resources for Article :

Further resources on this subject: Integrating Scala, Groovy, and Flex Development with Apache Maven [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article] An Introduction to Flash Builder 4-Network Monitor [Article]

Top features you need to know about

Packt
03 Jun 2013
3 min read
(For more resources related to this topic, see here.)

1 – Minimap

The minimap is an innovative feature of Sublime Text 2 that gives you a bird's-eye view of the document you are editing. Always present at the right-hand side of the editor, it allows you to quickly look at a live, updated, zoomed-out version of your current document. While the text will rarely be distinguishable, it allows for a topographical view of your document structure.

The minimap is also very useful for navigating a large document, as it can behave similarly to a scroll bar. When clicked on, the minimap can be used to scroll the document to a different portion. However, should you find yourself not needing the minimap, or needing the screen real estate it inhabits, it can easily be hidden by using the Menu bar to select View | Hide Minimap.

2 – Multiple cursors

Another way Sublime Text 2 differentiates itself from the crowded text editor market is by including functionality that allows the user to edit a document in multiple places at the same time. This can be very useful when making an identical change in multiple places. It is especially useful when the change that needs to occur cannot be easily accomplished with find and replace.

By pressing command + left-click on OS X, or Ctrl + left-click on other platforms, an additional cursor will be placed at the location of the click. Each additional cursor will mirror the original cursor. The following screenshot shows a demo of this functionality. First, I created cursors on each of my three lines of text. Then I proceeded to type test without quotes.

Now, as shown in the following screenshot, anything typed will be typed identically on the three lines where the cursors are placed. In this case I typed a space followed by the word test. This addition was simultaneous and I only had to make the change once, after creating the additional cursors.

To return to a single cursor, simply press Esc or left-click anywhere on the document.

Summary

This article covered a few of Sublime Text 2's features, including the minimap and multiple cursors.

Resources for Article :

Further resources on this subject: Building a Flex Type-Ahead Text Input [Article] Introduction to Data Binding [Article] Working with Binding data and UI elements in Silverlight 4 [Article]

Building WinRT components to be consumed from any language (Become an expert)

Packt
30 May 2013
5 min read
(For more resources related to this topic, see here.)

Getting ready

Please refer to the WinRTCalculator project for the full working code to create a WinRT component and consume it in JavaScript.

How to do it...

Perform the following steps to create a WinRT component and consume it in JavaScript:

Launch Visual Studio 2012 and create a new project. Expand Visual C++ from the left pane and then select the node for Windows Store apps. Select the Windows Runtime component and name the project WinRTCalculator.

Open Class1.h and add the following method declarations:

double ComputeAddition(double num1, double num2);
double ComputeSubstraction(double num1, double num2);
double ComputeMultiplication(double num1, double num2);
double ComputeDivision(double num1, double num2);

Open Class1.cpp and add the following method implementations:

double Class1::ComputeAddition(double num1, double num2)
{
    return num1 + num2;
}

double Class1::ComputeSubstraction(double num1, double num2)
{
    if (num1 > num2)
        return num1 - num2;
    else
        return num2 - num1;
}

double Class1::ComputeMultiplication(double num1, double num2)
{
    return num1 * num2;
}

double Class1::ComputeDivision(double num1, double num2)
{
    if (num2 != 0)
    {
        return num1 / num2;
    }
    else
        return 0;
}

Now save the project and build it.

Now we need to create a JavaScript project where the preceding WinRTCalculator component will be consumed. To create the JavaScript project, follow these steps:

Right-click on Solution Explorer and go to Add | New Project. Expand JavaScript from the left pane, and choose Blank App. Name the project ConsumeWinRTCalculator. Right-click on ConsumeWinRTCalculator and set it as Startup Project.

Add a project reference to WinRTCalculator, as follows: Right-click on the ConsumeWinRTCalculator project and choose Add Reference. Go to Solution | Projects from the left pane of the Reference Manager dialog box. Select WinRTCalculator from the center pane and then click on the OK button.

Open the default.html file and add the following HTML code in the body:

<p>Calculator from javascript</p>
<div id="inputDiv">
  <br /><br />
  <span id="inputNum1Div">Input Number - 1 : </span>
  <input id="num1" />
  <br /><br />
  <span id="inputNum2Div">Input Number - 2 : </span>
  <input id="num2" />
  <br /><br />
  <p id="status"></p>
</div>
<br /><br />
<div id="addButtonDiv">
  <button id="addButton" onclick="AdditionButton_Click()">Addition of Two Numbers</button>
</div>
<div id="addResultDiv"><p id="addResult"></p></div>
<br /><br />
<div id="subButtonDiv">
  <button id="subButton" onclick="SubsctractionButton_Click()">Substraction of two numbers</button>
</div>
<div id="subResultDiv"><p id="subResult"></p></div>
<br /><br />
<div id="mulButtonDiv">
  <button id="mulButton" onclick="MultiplicationButton_Click()">Multiplication of two numbers</button>
</div>
<div id="mulResultDiv"><p id="mulResult"></p></div>
<br /><br />
<div id="divButtonDiv">
  <button id="divButton" onclick="DivisionButton_Click()">Division of two numbers</button>
</div>
<div id="divResultDiv"><p id="divResult"></p></div>

Open the default.css style file from 5725OT_08_Code\WinRTCalculator\ConsumeWinRTCalculator\css\default.css and copy-paste the styles to your default.css style file.

Add JavaScript event handlers that will call the WinRTCalculator component DLL.
Add the following code at the end of the default.js file:

var nativeObject = new WinRTCalculator.Class1();

function AdditionButton_Click() {
    var num1 = document.getElementById('num1').value;
    var num2 = document.getElementById('num2').value;
    if (num1 == '' || num2 == '') {
        document.getElementById('status').innerHTML = 'Enter input numbers to continue';
    } else {
        var result = nativeObject.computeAddition(num1, num2);
        document.getElementById('status').innerHTML = '';
        document.getElementById('addResult').innerHTML = result;
    }
}

function SubsctractionButton_Click() {
    var num1 = document.getElementById('num1').value;
    var num2 = document.getElementById('num2').value;
    if (num1 == '' || num2 == '') {
        document.getElementById('status').innerHTML = 'Enter input numbers to continue';
    } else {
        var result = nativeObject.computeSubstraction(num1, num2);
        document.getElementById('status').innerHTML = '';
        document.getElementById('subResult').innerHTML = result;
    }
}

function MultiplicationButton_Click() {
    var num1 = document.getElementById('num1').value;
    var num2 = document.getElementById('num2').value;
    if (num1 == '' || num2 == '') {
        document.getElementById('status').innerHTML = 'Enter input numbers to continue';
    } else {
        var result = nativeObject.computeMultiplication(num1, num2);
        document.getElementById('status').innerHTML = '';
        document.getElementById('mulResult').innerHTML = result;
    }
}

Now press the F5 key to run the application. Enter the two numbers and click on the Addition of Two Numbers button, or on any of the other buttons, to display the computation.

How it works...

The Class1.h and Class1.cpp files have a public ref class. It's an activatable class that JavaScript can create by using a new expression. JavaScript activates the C++ class Class1 and then calls its methods, and the returned values are populated into the HTML div elements.

There's more...

While debugging a JavaScript project that has a reference to a WinRT component DLL, the debugger is set to step through either the script or the component's native code. To change this setting, right-click on the JavaScript project and go to Properties | Debugging | Debugger Type.

If a C++ Windows Runtime component project is removed from a solution, the corresponding project reference in the JavaScript project must also be removed manually.

Summary

In this article, we learned how to create a WinRT component and call it from JavaScript.

Resources for Article :

Further resources on this subject: Installation and basic features of EnterpriseDB [Article] Editing DataGrids with Popup Windows in Flex [Article] Monitoring Windows with Zabbix 1.8 [Article]

Deploying HTML5 Applications with GNOME

Packt
28 May 2013
10 min read
(For more resources related to this topic, see here.)

Before we start

Most of the discussions in this article require a moderate knowledge of HTML5, JSON, and common client-side JavaScript programming. One particular exercise uses jQuery and jQuery Mobile to show how a real HTML5 application would be implemented.

Embedding WebKit

What we need to learn first is how to embed a WebKit layout engine inside our GTK+ application. Embedding WebKit means we can use HTML and CSS as our user interface instead of GTK+ or Clutter.

Time for action – embedding WebKit

With WebKitGTK+, this is a very easy task to do; just follow these steps:

Create an empty Vala project without GtkBuilder and no license. Name it hello-webkit.

Modify configure.ac to include WebKitGTK+ into the project. Find the following line of code in the file:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0])

Remove the previous line and replace it with the following one:

PKG_CHECK_MODULES(HELLO_WEBKIT, [gtk+-3.0 webkitgtk-3.0])

Modify Makefile.am inside the src folder to include WebKitGTK+ into the Vala compilation pipeline. Find the following line of code in the file:

hello_webkit_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:

hello_webkit_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4

Fill the hello_webkit.vala file inside the src folder with the following lines:

using GLib;
using Gtk;
using WebKit;

public class Main : WebView
{
    public Main ()
    {
        load_html_string("<h1>Hello</h1>","/");
    }

    static int main (string[] args)
    {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

Copy the accompanying webkit-1.0.vapi file into the src folder. We need to do this, unfortunately, because the webkit-1.0.vapi file distributed with many distributions is still using GTK+ Version 2.

Run it, and you will see a window with the message Hello, as shown in the following screenshot:

What just happened?

What we need to do first is include WebKit into our namespace, so we can use all the functions and classes from it:

using WebKit;

Our class is derived from the WebView widget. It is an important widget in WebKit, which is capable of showing a web page. Showing it means not only parsing and displaying the DOM properly, but also being capable of running the scripts and handling the styles referred to by the document. The derivation declaration is put in the class declaration as shown next:

public class Main : WebView

In our constructor, we only load a string and parse it as an HTML document. The string is Hello, styled with a level 1 heading. After the execution of the following line, WebKit will parse and display the presentation of the HTML5 code inside its body:

public Main ()
{
    load_html_string("<h1>Hello</h1>","/");
}

In our main function, what we need to do is create a window to put our WebView widget into. After adding the widget, we need to call the show_all() function in order to display both the window and the widget.

static int main (string[] args)
{
    Gtk.init (ref args);
    var webView = new Main ();
    var window = new Gtk.Window();
    window.add(webView);

The window content now only has a WebView widget as its sole displaying widget. At this point, we no longer use GTK+ to show our UI; it is all written in HTML5.

Runtime with JavaScriptCore

An HTML5 application is, most of the time, accompanied by client-side scripts written in JavaScript and a set of styling definitions written in CSS3.
WebKit already provides the feature of running client-side JavaScript (running the script inside the web page) with a component called JavaScriptCore, so we don't need to worry about it. But how about the connection with the GNOME platform? How do we make the client-side script access the GNOME objects?

One approach is to expose our objects, which are written in Vala, so that they can be used by the client-side JavaScript. This is where we will utilize JavaScriptCore.

We can think of this as a frontend and backend architecture pattern. All of the business process code that touches GNOME resides in the backend. It is all written in Vala and run by the main process. On the opposite side, the frontend code is written in JavaScript and HTML5, and is run by WebKit internally. The frontend is what the user sees, while the backend is what is going on behind the scenes.

Consider the following diagram of our application. The backend part is grouped inside a grey bordered box and runs in the main process. The frontend is outside the box and is run and displayed by WebKit. From the diagram, we can see that the frontend creates an object and calls a function in the created object. The object we create is not defined on the client side, but is actually created at the backend. We ask JavaScriptCore to act as a bridge so that the object created at the backend is made accessible to the frontend code. To do this, we wrap the backend objects with JavaScriptCore class and function definitions. For each object we want to make available to the frontend, we need to create a mapping on the JavaScriptCore side. In the following diagram, we first map the MyClass object, then the helloFromVala function, then the intFromVala, and so on:

Time for action – calling the Vala object from the frontend

Now let's try to create a simple piece of client-side JavaScript code and call an object defined at the backend:

Create an empty Vala project, without GtkBuilder and no license. Name it hello-jscore.

Modify configure.ac to include WebKitGTK+ exactly like our previous experiment.

Modify Makefile.am inside the src folder to include WebKitGTK+ and JSCore into the Vala compilation pipeline. Find the following line of code in the file:

hello_jscore_VALAFLAGS = --pkg gtk+-3.0

Remove it and replace it completely with the following line:
hello_jscore_VALAFLAGS = --vapidir . --pkg gtk+-3.0 --pkg webkit-1.0 --pkg libsoup-2.4 --pkg javascriptcore

Fill the hello_jscore.vala file inside the src folder with the following lines of code:

using GLib;
using Gtk;
using WebKit;
using JSCore;

public class Main : WebView
{
    public Main ()
    {
        load_html_string("<h1>Hello</h1>" +
            "<script>alert(HelloJSCore.hello())</script>","/");
        window_object_cleared.connect ((frame, context) => {
            setup_js_class ((JSCore.GlobalContext) context);
        });
    }

    public static JSCore.Value helloFromVala (Context ctx,
        JSCore.Object function,
        JSCore.Object thisObject,
        JSCore.Value[] arguments,
        out JSCore.Value exception) {
        exception = null;
        var text = new String.with_utf8_c_string ("Hello from JSCore");
        return new JSCore.Value.string (ctx, text);
    }

    static const JSCore.StaticFunction[] js_funcs = {
        { "hello", helloFromVala, PropertyAttribute.ReadOnly },
        { null, null, 0 }
    };

    static const ClassDefinition js_class = {
        0,                   // version
        ClassAttribute.None, // attribute
        "HelloJSCore",       // className
        null,                // parentClass
        null,                // static values
        js_funcs,            // static functions
        null,                // initialize
        null,                // finalize
        null,                // hasProperty
        null,                // getProperty
        null,                // setProperty
        null,                // deleteProperty
        null,                // getPropertyNames
        null,                // callAsFunction
        null,                // callAsConstructor
        null,                // hasInstance
        null                 // convertToType
    };

    void setup_js_class (GlobalContext context) {
        var theClass = new Class (js_class);
        var theObject = new JSCore.Object (context, theClass, context);
        var theGlobal = context.get_global_object ();
        var id = new String.with_utf8_c_string ("HelloJSCore");
        theGlobal.set_property (context, id, theObject, PropertyAttribute.None, null);
    }

    static int main (string[] args)
    {
        Gtk.init (ref args);
        var webView = new Main ();
        var window = new Gtk.Window();
        window.add(webView);
        window.show_all ();
        Gtk.main ();
        return 0;
    }
}

Copy the accompanying webkit-1.0.vapi and javascriptcore.vapi files into the src folder. The javascriptcore.vapi file is needed because some distributions do not have this .vapi file in their repositories.

Run the application. The following output will be displayed:

What just happened?

The first thing we do is include the WebKit and JavaScriptCore namespaces. Note, in the following code snippet, that the JavaScriptCore namespace is abbreviated as JSCore:

using WebKit;
using JSCore;

In the Main function, we load HTML content into the WebView widget. We display a level 1 heading and then call the alert function. The alert function displays a string returned by the hello function inside the HelloJSCore class, as shown in the following code:

public Main ()
{
    load_html_string("<h1>Hello</h1>" +
        "<script>alert(HelloJSCore.hello())</script>","/");
    window_object_cleared.connect ((frame, context) => {
        setup_js_class ((JSCore.GlobalContext) context);
    });
}

In the preceding code snippet, we can see that the client-side JavaScript code is as follows:

alert(HelloJSCore.hello())

And we can also see that we call the hello function from the HelloJSCore class as a static function. It means that we don't instantiate a HelloJSCore object before calling the hello function.

In WebView, we initialize the class defined in the Vala code when we get the window_object_cleared signal. This signal is emitted whenever a page is cleared. The initialization is done in setup_js_class, and this is also where we pass the JSCore global context in. The global context is where JSCore keeps the global variables and functions. It is accessible by all code.

window_object_cleared.connect ((frame, context) => {
    setup_js_class ((JSCore.GlobalContext) context);
});

The following snippet of code contains the function which we want to expose to the client-side JavaScript.
The function just returns a Hello from JSCore string message:

public static JSCore.Value helloFromVala (Context ctx,
    JSCore.Object function,
    JSCore.Object thisObject,
    JSCore.Value[] arguments,
    out JSCore.Value exception) {
    exception = null;
    var text = new String.with_utf8_c_string ("Hello from JSCore");
    return new JSCore.Value.string (ctx, text);
}

Then we need to add the boilerplate code needed to expose the function and other members of the class. The first part of the code is the static function index. This is the mapping between the exposed function and the name of the function defined in the wrapper. In the following example, we map the hello function, which can be used on the client side, to the helloFromVala function defined in the code. The index is then ended with null to mark the end of the array:

static const JSCore.StaticFunction[] js_funcs = {
    { "hello", helloFromVala, PropertyAttribute.ReadOnly },
    { null, null, 0 }
};

The next part of the code is the class definition. It is a structure that we have to fill so that JSCore knows about the class. All of the fields are filled with null, except for those we want to make use of. In this example, we use the static functions field for the hello function. So we fill the static functions field with js_funcs, which we defined in the preceding code snippet:

static const ClassDefinition js_class = {
    0,                   // version
    ClassAttribute.None, // attribute
    "HelloJSCore",       // className
    null,                // parentClass
    null,                // static values
    js_funcs,            // static functions
    null,                // initialize
    null,                // finalize
    null,                // hasProperty
    null,                // getProperty
    null,                // setProperty
    null,                // deleteProperty
    null,                // getPropertyNames
    null,                // callAsFunction
    null,                // callAsConstructor
    null,                // hasInstance
    null                 // convertToType
};

After that, in the setup_js_class function, we set up the class to be made available in the JSCore global context. First, we create a JSCore.Class with the class definition structure we filled previously. Then, we create an object of the class, which is created in the global context. Last but not least, we assign the object a string identifier, which is HelloJSCore. After executing the following code, we will be able to refer to HelloJSCore on the client side:

void setup_js_class (GlobalContext context) {
    var theClass = new Class (js_class);
    var theObject = new JSCore.Object (context, theClass, context);
    var theGlobal = context.get_global_object ();
    var id = new String.with_utf8_c_string ("HelloJSCore");
    theGlobal.set_property (context, id, theObject, PropertyAttribute.None, null);
}

Developing a Web Project for JasperReports

Packt
27 May 2013
11 min read
(For more resources related to this topic, see here.)

Setting the environment

First, we need to install the required software, Oracle Enterprise Pack for Eclipse 12c, from http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html using Installers with Oracle WebLogic Server, Oracle Coherence and Oracle Enterprise Pack for Eclipse, and download the Oracle Database 11g Express Edition from http://www.oracle.com/technetwork/products/express-edition/overview/index.html.

Setting the environment requires the following tasks:

Creating database tables

Configuring a data source in WebLogic Server 12c

Copying JasperReports required JAR files to the server classpath

First create a database table, which shall be the data source for creating the reports, with the following SQL script. If a database table has already been created, the table may be used for this article too.

CREATE TABLE OE.Catalog(CatalogId INTEGER PRIMARY KEY,
    Journal VARCHAR(25), Publisher VARCHAR(25),
    Edition VARCHAR(25), Title Varchar(45), Author Varchar(25));
INSERT INTO OE.Catalog VALUES('1', 'Oracle Magazine', 'Oracle Publishing',
    'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('2', 'Oracle Magazine', 'Oracle Publishing',
    'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');
INSERT INTO OE.Catalog VALUES('3', 'Oracle Magazine', 'Oracle Publishing',
    'March-April 2005', 'Starting with Oracle ADF ', 'Steve Muench');

Next, configure a data source in WebLogic server with the JNDI name jdbc/OracleDS.

Next, we need to download some JasperReports JAR files, including dependencies. Download the JAR/ZIP files listed below and extract the zip/tar.gz to a directory, c:/jasperreports for example.

JAR/ZIP — Download URL

jasperreports-4.7.0.jar — http://sourceforge.net/projects/jasperreports/files/jasperreports/JasperReports%204.7.0/

itext-2.1.0 — http://mirrors.ibiblio.org/pub/mirrors/maven2/com/lowagie/itext/2.1.0/itext-2.1.0.jar

commons-beanutils-1.8.3-bin.zip — http://commons.apache.org/beanutils/download_beanutils.cgi

commons-digester-2.1.jar — http://commons.apache.org/digester/download_digester.cgi

commons-logging-1.1.1-bin — http://commons.apache.org/logging/download_logging.cgi

poi-bin-3.8-20120326 zip or tar.gz — http://poi.apache.org/download.html#POI-3.8

All the JasperReports libraries are open source. We shall be using the following JAR files to create a JasperReports report:

commons-beanutils-1.8.3.jar — JavaBeans utility classes

commons-beanutils-bean-collections-1.8.3.jar — Collections framework extension classes

commons-beanutils-core-1.8.3.jar — JavaBeans utility core classes

commons-digester-2.1.jar — Classes for processing XML documents

commons-logging-1.1.1.jar — Logging classes

iText-2.1.0.jar — PDF library

jasperreports-4.7.0.jar — JasperReports API

poi-3.8-20120326.jar, poi-excelant-3.8-20120326.jar, poi-ooxml-3.8-20120326.jar, poi-ooxml-schemas-3.8-20120326.jar, poi-scratchpad-3.8-20120326.jar — Apache Jakarta POI classes and dependencies
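As a quick orientation (this snippet is not part of the article's listings, and the class name and surrounding usage are assumptions for illustration), the jdbc/OracleDS data source configured above would typically be obtained in the web application through a JNDI lookup before a JDBC connection is handed to JasperReports for filling the report:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {

    // Returns a JDBC connection from the WebLogic-managed jdbc/OracleDS data source.
    public static Connection getConnection() throws Exception {
        InitialContext initialContext = new InitialContext();
        DataSource dataSource = (DataSource) initialContext.lookup("jdbc/OracleDS");
        return dataSource.getConnection();
    }
}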
Add the Jasper Reports required by the JAR files to the user_projectsdomains base_domainbinstartWebLogic.bat script's CLASSPATH variable: set SAVE_CLASSPATH=%CLASSPATH%;C:jasperreportscommonsbeanutils- 1.8.3commons-beanutils-1.8.3.jar;C:jasperreportscommonsbeanutils- 1.8.3commons-beanutils-bean-collections-1.8.3.jar;C: jasperreportscommons-beanutils-1.8.3commons-beanutils-core- 1.8.3.jar;C:jasperreportscommons-digester-2.1.jar;C:jasperreports commons-logging-1.1.1commons-logging-1.1.1.jar;C:jasperreports itext-2.1.0.jar;C:jasperreportsjasperreports-4.7.0.jar;C: jasperreportspoi-3.8poi-3.8-20120326.jar;C:jasperreportspoi- 3.8poi-scratchpad-3.8-20120326.jar;C:jasperreportspoi-3.8poiooxml- 3.8-20120326.jar;C:jasperreportspoi-3.8.jar;C:jasperreports poi-3.8poi-excelant-3.8-20120326.jar;C:jasperreportspoi-3.8poiooxml- schemas-3.8-20120326.jar Creating a Dynamic Web project in Eclipse First, we need to create a web project for generating JasperReports reports. Select File | New | Other. In New wizard select Web | Dynamic Web Project. In Dynamic Web Project configuration specify a Project name (PDFExcelReports for example), select the Target Runtime as Oracle WebLogic Server 11g R1 ( 10.3.5). Click on Next. Select the default Java settings; that is, Default output folder as build/classes, and then click on Next. In WebModule, specify ContextRoot as PDFExcelReports and Content Directory as WebContent. Click on Finish. A web project for PDFExcelReports gets generated. Right-click on the project node in ProjectExplorer and select Project Properties. In Properties, select Project Facets. The Dynamic Web Module project facet should be selected by default as shown in the following screenshot: Next, create a User Library for JasperReports JAR files and dependencies. Select Java Build Path in Properties. Click on Add Library. In Add Library, select User Library and click on Next. In User Library, click on User Libraries. In User Libraries, click on New. In New User Library, specify a User library name (JasperReports) and click on OK. A new user library gets added to User Libraries. Click on Add JARs to add JAR files to the library. The following screenshot shows the JasperReports that are added: Creating the configuration file We require a JasperReports configuration file for generating reports. JasperReports XML configuration files are based on the jasperreport.dtd DTD, with a root element of jasperReport. We shall specify the JasperReports report design in an XML configuration bin file, which we have called config.xml. Create an XML file config.xml in the webContent folder by selecting XML | XML File in the New wizard. Some of the other elements (with commonly used subelements and attributes) in a JasperReports configuration XML file are listed in the following table: XML Element Description Sub-Elements Attributes jasperReport Root Element reportFont, parameter, queryString, field, variable, group, title, pageHeader, columnHeader, detail, columnFooter, pageFooter. name, columnCount, pageWidth, pageHeight, orientation, columnWidth, columnSpacing, leftMargin, rightMargin, topMargin, bottomMargin. reportFont Report level font definitions - name, isDefault, fontName, size, isBold, isItalic, isUnderline, isStrikeThrough, pdfFontName, pdfEncoding, isPdfEmbedded parameter Object references used in generating a report. Referenced with P${name} parameterDescription, defaultValueExpression name, class queryString Specifies the SQL query for retrieving data from a database. 
field: Database table columns included in the report, referenced with $F{name}. Subelement: fieldDescription. Attributes: name, class.
variable: A variable used in the report XML file, referenced with $V{name}. Subelements: variableExpression, initialValueExpression. Attributes: name, class.
title: The report title. Subelement: band.
pageHeader: The page header. Subelement: band.
columnHeader: Specifies the different columns in the generated report. Subelement: band.
detail: Specifies the column values. Subelement: band.
columnFooter: The column footer. Subelement: band.

A report section is represented with the band element. A band element includes staticText and textElement elements. A staticText element is used to add static text to a report (for example, column headers), and a textElement element is used to add dynamically generated text to a report (for example, column values retrieved from a database table). We won't be using all or even most of these elements and attributes.

Specify the page width with the pageWidth attribute in the root element jasperReport. Specify the report fonts using the reportFont element; the reportFont elements define the Arial_Normal, Arial_Bold, and Arial_Italic fonts used in the report. Specify a ReportTitle parameter using the parameter element. The queryString of the example JasperReports configuration file config.xml specifies the SQL query to retrieve the data for the report:

<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]></queryString>

The PDF report has the columns CatalogId, Journal, Publisher, Edition, Title, and Author. Specify a report band for the report title. The ReportTitle parameter is invoked using the $P{ReportTitle} expression. Specify a column header using the columnHeader element. Specify static text with the staticText element. Specify the report detail with the detail element. A column text field is defined using the textField element. The dynamic value of a text field is defined using the textFieldExpression element:

<textField>
  <reportElement x="0" y="0" width="100" height="20"/>
  <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
</textField>

Specify a page footer with the pageFooter element. Report parameters are defined using $P{}, report fields using $F{}, and report variables using $V{}. The config.xml file is listed as follows:

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE jasperReport PUBLIC "-//JasperReports//DTD Report Design//EN" "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">
<jasperReport name="PDFReport" pageWidth="975">

The following code snippet specifies the report fonts:

<reportFont name="Arial_Normal" isDefault="true" fontName="Arial" size="15" isBold="false" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Bold" isDefault="false" fontName="Arial" size="15" isBold="true" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Bold" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
<reportFont name="Arial_Italic" isDefault="false" fontName="Arial" size="12" isBold="false" isItalic="true" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Oblique" pdfEncoding="Cp1252" isPdfEmbedded="false"/>

The following code snippet specifies the parameter for the report title, the SQL query to generate the report with, and the report fields. The resultset from the SQL query gets bound to the fields.
<parameter name="ReportTitle" class="java.lang.String"/>
<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]></queryString>
<field name="CatalogId" class="java.lang.String"/>
<field name="Journal" class="java.lang.String"/>
<field name="Publisher" class="java.lang.String"/>
<field name="Edition" class="java.lang.String"/>
<field name="Title" class="java.lang.String"/>
<field name="Author" class="java.lang.String"/>

Add the report title to the report as follows:

<title>
  <band height="50">
    <textField>
      <reportElement x="350" y="0" width="200" height="50"/>
      <textFieldExpression class="java.lang.String">$P{ReportTitle}</textFieldExpression>
    </textField>
  </band>
</title>
<pageHeader>
  <band>
  </band>
</pageHeader>

Add the column headers as follows:

<columnHeader>
  <band height="20">
    <staticText>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[CATALOG ID]]></text>
    </staticText>
    <staticText>
      <reportElement x="125" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[JOURNAL]]></text>
    </staticText>
    <staticText>
      <reportElement x="250" y="0" width="150" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[PUBLISHER]]></text>
    </staticText>
    <staticText>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[EDITION]]></text>
    </staticText>
    <staticText>
      <reportElement x="550" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[TITLE]]></text>
    </staticText>
    <staticText>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Bold"/>
      </textElement>
      <text><![CDATA[AUTHOR]]></text>
    </staticText>
  </band>
</columnHeader>

The following code snippet shows how to add the report detail, which consists of the values retrieved from the Oracle database with the SQL query:

<detail>
  <band height="20">
    <textField>
      <reportElement x="0" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="125" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Journal}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="250" y="0" width="150" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Publisher}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="425" y="0" width="100" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Edition}]]></textFieldExpression>
    </textField>
    <textField pattern="0.00">
      <reportElement x="550" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Title}]]></textFieldExpression>
    </textField>
    <textField>
      <reportElement x="775" y="0" width="200" height="20"/>
      <textFieldExpression class="java.lang.String"><![CDATA[$F{Author}]]></textFieldExpression>
    </textField>
  </band>
</detail>

Add the column and page footers, including the page number, as follows:

<columnFooter>
  <band>
  </band>
</columnFooter>
<pageFooter>
  <band height="15">
    <staticText>
      <reportElement x="0" y="0" width="40" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <text><![CDATA[Page #]]></text>
    </staticText>
    <textField>
      <reportElement x="40" y="0" width="100" height="15"/>
      <textElement>
        <font isUnderline="false" reportFont="Arial_Italic"/>
      </textElement>
      <textFieldExpression class="java.lang.Integer"><![CDATA[$V{PAGE_NUMBER}]]></textFieldExpression>
    </textField>
  </band>
</pageFooter>
<summary>
  <band>
  </band>
</summary>
</jasperReport>

We need to create a JAR file for the config.xml file and add the JAR file to the WebLogic Server domain's lib directory. Create the JAR file with the following command, run from the directory containing config.xml:

>jar cf config.jar config.xml

Add the config.jar file to the user_projects\domains\base_domain\lib directory, which is in the classpath of the server.
Quick start

Packt
22 May 2013
8 min read
(For more resources related to this topic, see here.)

Common issues in Google Map Maker

Before we get started, it's worth taking into consideration some of the known issues with Google Map Maker:

The Map Maker interface does not usually come fully translated into all languages at the same time. UI translations are usually rolled out gradually and are part of a separate community-driven effort. This project is accessible at http://www.google.com/transconsole/giyl/chooseProject. Note that some languages, for example Urdu, are still not available in the Map Maker UI despite being translated completely.
Map Maker has not been verified for compatibility with Internet Explorer 7 and earlier versions of IE.

Google Map Maker is accessed by visiting the URL http://www.google.com/mapmaker. To access and get started with Map Maker, you must have a Google Account in order to make and submit edits. A Google Account is a unified sign-in system that provides access to a variety of free Google consumer products such as Gmail, Google Groups, Google Maps, Google Wallet, AdWords, AdSense, and so on. Think of a Google Account as a single Google sign-in, made up of an e-mail address (any e-mail address; it does not have to be a Gmail address) and a password of your choice, that gives you access to all the Google products under your own profile. Create your Google Account by visiting https://accounts.google.com/SignUp if you would like to use another e-mail address. If you already have a Gmail account, sign in from the left pane when you visit http://www.google.com/mapmaker, as shown here:

The Map Maker interface during the first visit

The Map Maker interface

The Google Map Maker interface is simple, intuitive, and easy to use. It has standard graphical icons that help you navigate around the tools and functionalities. Let us take a closer look at it: a first-time login to Map Maker starts by displaying a tutorial that quickly takes you through the key features of Google Map Maker. You can navigate your way through the quick tutorial by going back and forth using the forward and back arrows. You can close the quick tutorial and get started with making edits right away by clicking on the X icon at the top. Don't worry, you can always access the tutorial later, as will be explained later in this book.

The Map Maker UI

Let us take a detailed look at the Map Maker interface. I have tried to subdivide it based on the main functionalities and purposes of the tools. The key tools/sections that you need to know are highlighted and clearly labeled as well. I have named them based strictly on their functionality, and this is by no means the conventional way of doing so. Let us take a quick dive into the tools and see what each section serves:

Search

The search area allows you to search and fly to places you want to go to in Map Maker in an instant. It works just like Google Search, except that it returns a map zoomed to the area or business you queried. Try it: type the name of your city and hit Enter. This comes in handy when, on visiting Google Map Maker, the map does not default to your current location as it should, when you want to make edits and/or reviews in some other area you are familiar with, or simply when you want to view and visit places. Take a look at the following search query:

Review area

This is the area that displays your own recent edits, as well as edits happening within your neighborhood, that is, edits that you created or that are based on your location.
You can switch between the tabs based on the functionality that you want; the different tabs are explained as follows:

Everything: This tab is like a channel stream or timeline. It shows the recent activities in terms of new edits, reviews, or comments by you and other mappers within the neighborhood view of the map, that is, the current location of the map that is in view. See the following example:

An Everything view

To Review: This area only highlights the edits whose reviews are pending.

Recently Published: Streams all the recent edits that have been approved and published. You can, however, still contest these edits or correct them if they are incorrect.

Filter by Category: Just next to the Recently Published tab, you will find a three-dot tab that allows you to expand this section. This is the filter section, and it gives you the power to filter the actions, places, and edits you would like to work with by category. For instance, you may just be interested in (re)viewing road and line features, or the chronological order of the edits being made in the locale.

Filter by Category

Map view area

This is the area where the Google Map loads in order to allow you to perform the operations and edits that you want. The map view usually defaults to your current location when you visit http://www.google.com/mapmaker.

Map controls

These tools allow you to control the view of the map. They allow you to pan, zoom, and view Street View for supported cities. Let's take a look at how each of the tools comes in handy:

Map controls

Edit control

This is the area that allows you to make new edits in Map Maker and correct existing ones as well. You can create new point, line, and polygon features by exploring the Add New tab. Note that the tools will change according to the main tool selected. You can also edit existing point, line, polygon, and direction features by exploring the Edit tab. We will take a deep dive into this section a little later in this book.

Personal/User area

I call this the personal area because it allows you to personalize your Map Maker through custom settings and by adding labs (experimental features that are still under testing and development). Labs allow you to extend the normal functionality of Map Maker. This section also allows you to share your edits, directions, and maps with your friends by generating a unique URL for them. Use these tools to create and make changes to your Map Maker profile, access Help and the discussion forums, report a bug, and submit feedback to the Google Map Maker team.

Personal user area

View

The View section allows you to switch between the different layers of Google Map Maker: Satellite and Map. In the Map view, you only get to see the map details created by users, whereas in the Satellite view, you can see the map elements overlaying the satellite imagery provided to Google by various satellite imagery providers and partners. This is the best layer to use when making edits, as it allows you to draw or trace over the satellite imagery to create the features, in a process called digitization in cartography terms. It is actually the backbone of this community-driven project. Users have to align everything, from point features to line features and polygon features, with the satellite imagery for better accuracy; otherwise their edits may be denied or delayed in the reviewing process. You can add more layers, such as photos, which will display edits/features alongside the uploaded photos, among other features.
To switch between and add layers, simply click on a layer, and the map view will be populated with the layer(s) of your selection.

Different views in Map Maker

Contributors

The Contributors segment displays all the contributors who have made a substantial number of edits in the area of the map view. It displays the contributors' preferred nicknames (set during the signing-up stage). If you click on any nickname, it takes you to their respective Map Maker profiles showing their edits and the badges they have earned.

Scale

This section shows the display scale of the map as we zoom in and out.

Summary

This article explained in detail how we can use the different features of Map Maker to our benefit. It also explained the different interfaces used in Google Map Maker.

Resources for Article:

Further resources on this subject:

Moodle 2.0 Multimedia: Working with 2D and 3D Maps [Article]
Google Earth, Google Maps and Your Photos: a Tutorial [Article]
Google Earth, Google Maps and Your Photos: a Tutorial Part II [Article]