
How-To Tutorials - Application Development


Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
(For more resources related to this topic, see here.)

Roles

In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects with their associated privileges; it allows us, as developers, to define self-contained units of authorization. In the same way that the attribute view we created at the start of this book gave us a coherent, reusable view of our customer data, authorization roles allow us to create coherent, reusable units of authorization which we can then assign to users at will, making sure that users who are supposed to have the same rights always do have the same rights.

If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later we would forget someone in a department, and they would not be able to access the data they need for their everyday work. Worse, we might not give quite the same authorizations to one person as to their colleagues, and have to spend valuable time correcting our error when they couldn't see the data they needed (or, more dangerous and less obvious to us as developers, when they could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users than to assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges, we can simply remove the first role and assign a second one. Since we're just starting out with authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on.

Creating a role

Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective.

In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article.

In the Navigator panel you will see a Security folder, as we can see here:

Please find and expand the Security folder. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which opens, you will see a number of tabs representing the different authorization objects we can create, as we can see here:

We'll be looking at each of these in turn in the following sections, so for the moment just give your role a name (BOOKUSER might be appropriate, if not very original).

Granted roles

Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. Suppose, for example, we had a company with two teams:

- Sales
- Purchasing

and two countries, say:

- France
- Germany

We could create one role giving access to sales analytic views, one giving access to purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which contain no authorization objects themselves, but only the Sales and France roles. The role definition is much simpler to understand and to maintain than if we had directly created a Sales-France role and a Sales-Germany role with all the underlying objects.
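If you prefer the SQL console to the Studio, the same nesting idea can be sketched in SAP HANA SQL. This is a hypothetical illustration only; the role names are invented for the example:

```sql
-- Building-block roles (their own privileges are granted separately)
CREATE ROLE SALES;
CREATE ROLE FR;

-- The combined role contains no privileges of its own,
-- only the two building-block roles.
CREATE ROLE SALES_FRANCE;
GRANT SALES TO SALES_FRANCE;
GRANT FR TO SALES_FRANCE;
```

Removing a team or country from the combined role is then a single REVOKE, rather than a privilege-by-privilege cleanup.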
Once again, as with other development objects, creating small self-contained roles and reusing them where possible will make your (maintenance) life easier. In the Granted Roles tab we can see the list of subroles this main role contains. Note that this list is only a pointer; you cannot modify the actual authorizations of the other roles given here. You would need to open the individual role and make changes there.

Part of roles

The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab: it lists all other roles of which this role is a subrole. It is very useful for tracking authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such; it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole.

SQL privileges

An SQL privilege is the lowest level at which we can define restrictions on the use of database objects. SQL privileges apply to the simplest objects in the database, such as schemas and tables. No attribute, analytic, or calculation view is handled by SQL privileges. This is not strictly true, though you can treat it as such: what we have seen as an analytic view, for example (the graphical definition, the drag and drop, the checkboxes), is transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but doing so is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this.

An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to the object itself, but have no effect on the object's contents. For example, we can decide that one of our users may access the CUSTOMER table, but we cannot use an SQL privilege to restrict their access to only those CUSTOMER rows where the COUNTRY is USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel.

Let's add some authorizations for our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Click on this button to get the Select Catalog Object dialog, shown here:

As you can see in the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and list all database objects whose names contain those letters. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book in Chapter 2, SAP HANA Studio - Installation and First Look. Please select the BOOK item, and then click on OK to add it to our new role.

The first thing to notice is the warning icon on the SQL Privileges tab itself. This means that your role definition is incomplete, and the role cannot yet be activated and used. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via a role, you need to decide which of these options to include in the role.

The individual authorization names are self-explanatory. For example, the CREATE ANY authorization allows creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as a schema is not an object which can support such instructions. However, the usage is actually quite elegant: if a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit the existing authorizations you have defined.
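In SQL terms, what these checkboxes configure corresponds roughly to GRANT statements on the schema. A hypothetical sketch, using the BOOKUSER role we are building:

```sql
-- Rights granted on the BOOK schema cascade to all objects inside it,
-- including objects created later.
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA BOOK TO BOOKUSER;
```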
On the far right of the screen, alongside each authorization, is a radio button which grants an additional privilege: the possibility for a given user to pass the right on to a second user in turn. This is an option which should not be given to all users, and so should not be present in all roles you create; the right to grant privileges to other users should be limited to your administrators. If you give just any user the right to pass on their authorizations, you will soon find that you are no longer able to determine who can do what in your database. For the moment we are creating a simple role to show the workings of the authorization concept in SAP HANA, so we will check all the checkboxes and leave the radio buttons at No.

There are some SQL privileges which are necessary for any user to be able to work in SAP HANA. They give access to the system objects describing the development models we create, and if a user does not have these privileges, nothing will work at all; the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are:

- The SELECT privilege on the _SYS_BI schema
- The SELECT privilege on the _SYS_REPO schema
- The EXECUTE privilege on the REPOSITORY_REST procedure

Please add these SQL privileges to your role now, in order to obtain the following result:

As you can see, with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on that object. They do not, however, allow us to define particular authorizations on the contents of the object. For such fine-grained rights, we need to create an analytic privilege and then add it to our role, so let's do that now.

Analytic privileges

An analytic privilege is an artifact unique to SAP HANA; it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region; for example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA.

An analytic privilege is created through the Quick Launch panel of the Modeler, so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open; we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege, and then Create.
As usual with SAP HANA, we are asked to give a Name and Description, and to select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New) or copying an existing privilege (Copy From). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard, shown here:

On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the previous screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name, and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step. You will be presented with the analytic privilege development panel, reproduced here:

This page allows us to define our analytic privilege completely. On the left we have the list of database views included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions on individual values, or sets of values. In the top box, Associated Attributes Restrictions, we define which attributes we want to restrict on (country code or region, maybe). In the bottom box, Assign Restrictions, we define the individual values to restrict on (for example, for company code we could restrict to value 0001 or US22; for region, we could limit access to EU or USA).

Let's add a restriction on the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box, to see the Select Object dialog:

As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog: we cannot restrict access to a view according to numeric values. We cannot, therefore, restrict to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now.

Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined using the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal. In the Value column, we can either type the appropriate value directly, or use the value help button, and the familiar Value Help Dialog which appears, to select the value from those available in the view.
Please add the EU value now, either by typing it or by having SAP HANA find it for us.

There is one more field which needs to be added to our analytic privilege, and the reason behind it might at first seem a little strange. This point is valid for SAP HANA SP5, up to and including (at least) revision 50 of the software; if it turns out to be a bug, it might not be necessary in later versions. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV thanks to the included attribute view CUST_ATTR. In its current state, the analytic privilege will not work, because no fields native to the analytic view are present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to define any restriction on that field; it just needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors:

"Only if a view is included in one of the cube restrictions and at least one of its attribute is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege."

Not an explicit description of the workings of the authorization concept, but close. Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished:

The Count column lists the number of restrictions in effect for the associated field. For the CURRENCY field, no restrictions are defined.

We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one we have used up to now to activate the modeling views: the round green button with the right-facing white arrow at the top right of the panel, which you can see in the preceding screenshot. Please activate the analytic privilege now. Once that has been done, we can add it to our role.

Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which opens. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed, as we can see here:

Click on OK to add the privilege to the role.
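As an aside, the same assignment can be approximated in SQL. This is a hypothetical sketch; on many revisions, privileges created in the repository carry a package-qualified catalog name, so check the exact name on your system before trying it:

```sql
-- Hypothetical sketch; the activated privilege may be named "book/AP_EU".
GRANT STRUCTURED PRIVILEGE "book/AP_EU" TO BOOKUSER;
```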
We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges.

System privileges

As its name suggests, a system privilege gives a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view. These are particular rights which should not be given to just any user, but should be reserved for those users who need to perform a particular task. We'll not be adding any of these privileges to our role; however, we'll take a look at the available options and what they are used for. Click on the green plus sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog searches on all available values; there are only fifteen or so, but you can as usual filter them using the filter box at the top of the dialog:

For a full list of the available system privileges and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html.

Package privileges

The last tab in the role definition screen concerns Package Privileges. These allow a given user to access the objects in a package. In our example, the package is called book, so if we add the book package to our role in the Package Privileges tab, we will see the following result:

Assigning package privileges is similar to assigning the SQL privileges we saw earlier. We first add the required object (here our book package), then we indicate exactly which rights we are giving to the role. As we can see in the preceding screenshot, we have a series of checkboxes on the right-hand side of the window; at least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory: REPO.READ gives read access to the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example. The role we are creating is destined for an end user who will need to see the data in a view, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege on our book package to our role. Again we can decide whether the end user can in turn assign this privilege to others, and again, we don't need this feature in our role.

At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always), we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now.

Users

Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place. In technical terms, a user is just another database object. Users are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence what the person who connects with that user can do.

Up until now we have been using the SYSTEM user (or the user your database administrator assigned to you). This user is defined by SAP and has, basically, the authorization to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments.
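If you prefer SQL, creating a more restricted user and granting it our role is hypothetically just two statements (we will do the same through the Studio in a moment; the password shown is illustrative and must satisfy your system's password policy):

```sql
-- Hypothetical sketch of a least-privilege development user.
CREATE USER BOOKU PASSWORD Initial1234;
GRANT BOOKUSER TO BOOKU;
```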
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If, however, you use a development user with fewer authorizations, you would not have been allowed to do the deletion, saving a lot of tears. Of course, the question then arises: why have we been using the SYSTEM user for the last couple of hundred pages of development? The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10.

Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on Users, and select New User from the menu to obtain the user creation screen, as shown in the following screenshot:

Defining a user requires remarkably little information:

- User Name: The login that the user will use. Your company might have a naming convention for users; users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.

- Authentication: How will SAP HANA know that the user connecting with the name ANNE really is Anne? There are currently three ways of authenticating a user with SAP HANA:

  Password: This is the most common authentication system. SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format: by default, one capital, one lowercase letter, one number, and at least eight characters. You can see and change the password policy in the system configuration. Double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy. The password format in force on your system is listed as password_layout. By default this is A1a, meaning capitals, numbers, and lowercase letters are expected. The value can also contain the # character, meaning that special characters must also be contained in the password; the only special characters currently allowed by SAP HANA are the underscore, the dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force you to change your password).

  Kerberos and SAML: These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.

- Session Client: As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to the partitioning system of the SAP ERP database. In the SAP ERP, different users can work in different clients. In our development, we filtered on client 100.
A much better way of handling this filtering is to define a default client for the user when we define their account. The Session Client field can be filled with the ERP client in which the user works. That way we do not need to filter in the analytic models; we can leave the client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again, this makes maintenance of our developments a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.

We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login.

At the bottom of the user definition screen, as we can see from the preceding screenshot, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than to assign authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles.

The Granted Roles tab lists the roles assigned to the user, and allows adding roles to and removing roles from that list. By default, when we create a user they have no roles assigned, and hence no authorizations at all in the system. They will be able to log in to SAP HANA, but will be able to do no development work and will see no data.

Please click on the green plus sign button in the Granted Roles tab of the user definition screen, to add a role to the user account. You will be presented with the Select Role dialog, shown in part here:

This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Our role was called BOOKUSER, so please search for it, select it in the list, and click on OK to add it to the user account. Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top right of the screen. Please do this now.

Testing our user and role

The only real way to check whether the authorizations we have defined are appropriate to the business requirements is to create a user and then try out the role, to see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio, so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right-click on the SAP HANA system name, and select Add Additional User from the menu which appears.
This will give you the Add additional user dialog, shown in the following screenshot:

Enter the name of the user you just created (BOOKU) and the password you assigned. You will be required to change the password immediately:

Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user or our BOOKU user:

We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the tree for the BOOKU user. Let's try to do something with our BOOKU user and see how the system reacts. The way the Studio lets you handle multiple users is very elegant: since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview of the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will show the data according to SYSTEM's authorizations.

Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference:

As we can see, 12 rows of data are retrieved, with data from both the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we see much less data:

BOOKU can see only nine of the 12 data rows in our view, as no data from the NAR region is visible to this user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user.

Summary

In this article, we took a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system: SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization objects to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentication available and the assignment of roles to users. Finally, we logged into the Studio with our new user account, and saw first-hand the effect our authorizations had on what the user could see and do.

In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them.

Resources for Article:

Further resources on this subject:
- SAP NetWeaver: Accessing the MDM System [Article]
- SAP HANA integration with Microsoft Excel [Article]
- Exporting SAP BusinessObjects Dashboards into Different Environments [Article]

Using Web API to Extend Your Application

Packt
08 Sep 2016
14 min read
In this article by Shahed Chowdhuri, author of the book ASP.NET Core Essentials, we will work through a working sample of a web API project. During this lesson, we will cover the following:

- Web API
- Web API configuration
- Web API routes
- Consuming Web API applications

(For more resources related to this topic, see here.)

Understanding a web API

Building web applications can be a rewarding experience. The satisfaction of reaching a broad set of potential users can trump the frustrating nights spent fine-tuning an application and fixing bugs. But some mobile users demand a more streamlined experience that only a native mobile app can provide. Mobile browsers may experience performance issues in low-bandwidth situations, where HTML5 applications can only go so far with a heavy server-side back-end. Enter the web API, with its RESTful endpoints, built with mobile-friendly server-side code.

The case for web APIs

In order to create a piece of software, years of wisdom tell us that we should build it with users in mind: without use cases, its features are literally useless. By designing features around user stories, it makes sense to expose public endpoints that relate directly to user actions. As a result, you will end up with a leaner web application that works for more users. If you need more convincing, here's a recap of the features and benefits:

- It lets you build modern lightweight web services, which are a great choice for your application, as long as you don't need SOAP
- It's easier to work with than any past work you may have done with ASP.NET Windows Communication Foundation (WCF) services
- It supports RESTful endpoints
- It's great for a variety of clients, both mobile and web
- It's unified with ASP.NET MVC and can be included with or without your web application

Creating a new web API project from scratch

Let's build a sample web application named Patient Records. In this application, we will create a web API from scratch to allow the following tasks:

- Add a new patient
- Edit an existing patient
- Delete an existing patient
- View a specific patient or a list of patients

These four actions make up the so-called CRUD operations of our system: Create, Read, Update, and Delete patient records. Following the steps below, we will create a new project in Visual Studio 2015:

1. Create a new web API project.
2. Add an API controller.
3. Add methods for CRUD operations.

The preceding steps have been expanded into detailed instructions with the following screenshots:

1. In Visual Studio 2015, click File | New | Project. You can also press Ctrl+Shift+N on your keyboard.
2. On the left panel, locate the Web node below Visual C#, then select ASP.NET Core Web Application (.NET Core), as shown in the following screenshot:
3. With this project template selected, type in a name for your project, for example PatientRecordsApi, and choose a location on your computer, as shown in the following screenshot:
4. Optionally, you may select the checkboxes on the lower right to create a directory for your solution file and/or add your new project to source control. Click OK to proceed.
5. In the dialog that follows, select Empty from the list of ASP.NET Core templates, then click OK, as shown in the following screenshot:
6. Optionally, you can check the checkbox for Microsoft Azure to host your project in the cloud. Click OK to proceed.

Building your web API project

In the Solution Explorer, you may observe that your References are being restored.
This occurs every time you create a new project or add new references that have to be restored through NuGet, as shown in the following screenshot:

Follow these steps to fix your references and build your web API project:

1. Right-click on your project, and click Add | New Folder to add a new folder, as shown in the following screenshot:
2. Perform the preceding step three times to create new folders for your Controllers, Models, and Views, as shown in the following screenshot:
3. Right-click on your Controllers folder, then click Add | New Item to create a new API controller for patient records on your system, as shown in the following screenshot:
4. In the dialog box that appears, choose Web API Controller Class from the list of options under .NET Core, as shown in the following screenshot:
5. Name your new API controller, for example PatientController.cs, then click Add to proceed.

In your new PatientController, you will most likely have several areas highlighted with red squiggly lines due to a lack of necessary dependencies, as shown in the following screenshot. As a result, you won't be able to build your project/solution at this time.

In the next section, we will learn how to configure your web API so that it has the proper references and dependencies in its configuration files.

Configuring the web API in your web application

How does the web server know what to send to the browser when a specific URL is requested? The answer lies in the configuration of your web API project.

Setting up dependencies

In this section, we will learn how to set up your dependencies automatically using the IDE, or manually by editing your project's configuration file. To pull in the necessary dependencies, you may right-click on the using statement for Microsoft.AspNet.Mvc and select Quick Actions and Refactorings…. This can also be triggered by pressing Ctrl+. (period) on your keyboard, or simply by hovering over the underlined term, as shown in the following screenshot:

Visual Studio should offer you several possible options, from which you can select the one that adds the package Microsoft.AspNetCore.Mvc.Core for the namespace Microsoft.AspNetCore.Mvc. For the Controller class, add a reference for the Microsoft.AspNetCore.Mvc.ViewFeatures package, as shown in the following screenshot:

Fig 12: Adding the Microsoft.AspNetCore.Mvc.Core 1.0.0 package

If you select the latest version that's available, this should update your references and remove the red squiggly lines, as shown in the following screenshot:

Fig 13: Updating your references and removing the red squiggly lines

The preceding step should automatically update your project.json file with the correct dependencies for Microsoft.AspNetCore.Mvc.Core and Microsoft.AspNetCore.Mvc.ViewFeatures, as shown in the following screenshot. The "frameworks" section of the project.json file identifies the type and version of the .NET Framework that your web app is using, for example netcoreapp1.0 for the 1.0 version of .NET Core. You will see something similar in your project, as shown in the following screenshot:

Click the Build Solution button from the top menu/toolbar. Depending on how you have your shortcuts set up, you may press Ctrl+Shift+B or press F6 on your keyboard to build the solution.
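For reference, the relevant parts of project.json should now look something like the following. This is a hypothetical excerpt; the exact package versions and other entries on your machine may differ:

```json
{
  "dependencies": {
    "Microsoft.AspNetCore.Mvc.Core": "1.0.0",
    "Microsoft.AspNetCore.Mvc.ViewFeatures": "1.0.0"
  },
  "frameworks": {
    "netcoreapp1.0": {}
  }
}
```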
You should now be able to build your project/solution without errors, as shown in the following screenshot:

Before running the web API project, open the Startup.cs class file and, in the Configure() method, replace the app.Run() statement/block (along with its contents) with a call to app.UseMvc(). To add MVC to the project, add a call to services.AddMvcCore() in the ConfigureServices() method. To allow this code to compile, add a reference to Microsoft.AspNetCore.Mvc.

Parts of a web API project

Let's take a closer look at the PatientController class. The auto-generated class has the following methods:

```csharp
public IEnumerable<string> Get()
public string Get(int id)
public void Post([FromBody]string value)
public void Put(int id, [FromBody]string value)
public void Delete(int id)
```

The Get() method simply returns a JSON object as an enumerable string of values, while the Get(int id) method is an overloaded variant that gets a particular value for a specified ID. The Post() and Put() methods can be used for creating and updating entities. Note that the Put() method takes in an ID value as the first parameter so that it knows which entity to update. Finally, we have the Delete() method, which can be used to delete an entity using the specified ID.

Running the web API project

You may run the web API project in a web browser that can display JSON data. If you use Google Chrome, I would suggest using the JSONView extension (or another similar extension) to properly display JSON data. The aforementioned extension is also available on GitHub at the following URL: https://github.com/gildas-lormeau/JSONView-for-Chrome. If you use Microsoft Edge, you can view the raw JSON data directly in the browser.

Once your browser is ready, you can select your browser of choice from the top toolbar of Visual Studio. Click on the tiny triangle icon next to the Debug button, then select a browser, as shown in the following screenshot:

In the preceding screenshot, you can see that multiple installed browsers are available, including Firefox, Google Chrome, Internet Explorer, and Edge. To choose a different browser, simply click on Browse With… in the menu. Now, click the Debug button (that is, the green play button) to see the web API project in action in your web browser, as shown in the following screenshot. If you don't have a web application set up, you won't be able to browse the site from the root URL:

Don't worry if you see this error; you can update the URL to include a path to your API controller, for example http://localhost:12345/api/Patient. Note that your port number may vary. Now, you should be able to see a list of values returned by your API controller, as shown in the following screenshot:
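With the default scaffolding, a request to this endpoint should produce something like the following. This is a hypothetical trace; your host and port will differ:

```
GET http://localhost:12345/api/Patient

["value1","value2"]
```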
Adding routes to handle anticipated URL paths

Back in the days of classic ASP, application URL paths typically reflected physical file paths. This continued with ASP.NET Web Forms, even though the concept of custom URL routing was introduced. With ASP.NET MVC, routes were designed to cater to functionality rather than physical paths. ASP.NET Web API continues this newer tradition, with the ability to set up custom routes from within your code. You can create routes for your application using fluent configuration in your startup code, or with declarative attributes surrounded by square brackets.

Understanding routes

To understand the purpose of having routes, let's focus on their features and benefits. This applies to both ASP.NET MVC and ASP.NET Web API:

- By defining routes, you can introduce predictable patterns for URL access
- This gives you more control over how URLs are mapped to your controllers
- Human-readable route paths are also SEO-friendly, which is great for Search Engine Optimization
- It provides some level of obscurity when it comes to revealing the underlying web technology and physical file names in your system

Setting up routes

Let's start with this simple class-level attribute that specifies a route for your API controller, as follows:

```csharp
[Route("api/[controller]")]
public class PatientController : Controller
{
    // ...
}
```

Here, we can dissect the attribute (seen in square brackets, used to affect the class below it) and its parameter to understand what's going on. The Route attribute indicates that we are going to define a route for this controller. Within the parentheses that follow, the route path is defined in double quotes. The first part of this path is the string literal api/, which declares that the path to an API method call will begin with the term api followed by a forward slash. The rest of the path is the word controller in square brackets, which refers to the controller name. By convention, the controller's name is the part of the controller's class name that precedes the term Controller. For the class PatientController, the controller name is just the word Patient. This means that all API methods for this controller can be accessed using the following syntax, where MyApplicationServer should be replaced with your own server or domain name:

http://MyApplicationServer/api/Patient

For method calls, you can define a route with or without parameters. The following two examples illustrate both types of route definitions:

```csharp
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1", "value2" };
}
```

In this example, the Get() method performs an action related to the HTTP verb HttpGet, which is declared in the attribute directly above the method. This identifies the default method for accessing the controller through a browser without any parameters, which means that this API method can be accessed using the following syntax:

http://MyApplicationServer/api/Patient

To include parameters, we can use the following syntax:

```csharp
[HttpGet("{id}")]
public string Get(int id)
{
    return "value";
}
```

Here, the HttpGet attribute is coupled with an "{id}" parameter, enclosed in curly braces within double quotes. The overloaded version of the Get() method also includes an integer parameter named id to correspond with the expected route parameter. If no parameter is specified, the value of id is equal to default(int), which is zero. The method can be called without any parameters using the following syntax:

http://MyApplicationServer/api/Patient/Get

In order to pass a parameter, you can add any integer value right after the controller name, with the following syntax:

http://MyApplicationServer/api/Patient/1

This will assign the number 1 to the integer variable id.

Testing routes

To test the aforementioned routes, simply run the application from Visual Studio and access the specified URLs. The preceding screenshot shows the results of accessing the following path:

http://MyApplicationServer/api/Patient/1
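Concretely, with the scaffolded code you would expect something like the following exchange (a hypothetical trace; replace the server name with your own):

```
GET http://MyApplicationServer/api/Patient/1

"value"
```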
Consuming a web API from a client application

If a web API exposes public endpoints, but there is no client application there to consume it, does it really exist? Without getting too philosophical, let's go over the possible ways you can consume a web API. You can do any of the following:

- Consume the web API using external tools
- Consume the web API with a mobile app
- Consume the web API with a web client

Testing with external tools

If you don't have a client application set up, you can use an external tool such as Fiddler. Fiddler is a free tool, now available from Telerik at http://www.telerik.com/download/fiddler, as shown in the following screenshot:

You can use Fiddler to inspect URLs that are being retrieved and submitted on your machine. You can also use it to trigger any URL and change the request type (Get, Post, and others).

Consuming a web API from a mobile app

Since this article is primarily about the ASP.NET Core web API, we won't go into detail about mobile application development. However, it's important to note that a web API can provide a backend for your mobile app projects. Mobile apps may include Windows Mobile apps, iOS apps, Android apps, and any modern app that you can build for today's smartphones and tablets. You may consult the documentation for your particular platform of choice to determine what is needed to call a RESTful API.

Consuming a web API from a web client

A web client, in this case, refers to any HTML/JavaScript application that has the ability to call a RESTful API. At the least, you can build a complete client-side solution with straight JavaScript to perform the necessary actions. For a better experience, you may use jQuery or one of the many popular JavaScript frameworks. A web client can also be part of a larger ASP.NET MVC application or a Single-Page Application (SPA). As long as your application emits JavaScript contained in HTML pages, you can build a frontend that works with your backend web API.

Summary

In this article, we took a look at the basic structure of an ASP.NET web API project, and observed the unification of web API with MVC in ASP.NET Core. We also learned how to use a web API as a backend to provide support for various frontend applications.

Resources for Article:

Further resources on this subject:
- Introducing IoT with Particle's Photon and Electron [article]
- Schema Validation with Oracle JDeveloper - XDK 11g [article]
- Getting Started with Spring Security [article]

Python Multimedia: Animation Examples using Pyglet

Packt
31 Aug 2010
7 min read
(For more resources on Python, see here.)

Single image animation

Imagine that you are creating a cartoon movie in which you want to animate the motion of an arrow or a bullet hitting a target. In such cases, typically there is just a single image, and the desired animation effect is accomplished by performing appropriate translations or rotations of that image.

Time for action – bouncing ball animation

Let's create a simple animation of a bouncing ball. We will use a single image file, ball.png, which can be downloaded from the Packt website. The dimensions of this image are 200x200 pixels, and it was created on a transparent background. The following screenshot shows this image opened in the GIMP image editor. The three dots on the ball identify its side; we will see later why this is needed. Imagine this as a ball used in a bowling game.

The image of a ball opened in GIMP appears as shown in the preceding image. The ball size in pixels is 200x200.

Download the files SingleImageAnimation.py and ball.png from the Packt website. Place the ball.png file in a subdirectory 'images' within the directory in which SingleImageAnimation.py is saved.

The following code snippet shows the overall structure of the code:

```python
1  import pyglet
2  import time
3
4  class SingleImageAnimation(pyglet.window.Window):
5      def __init__(self, width=600, height=600):
6          pass
7      def createDrawableObjects(self):
8          pass
9      def adjustWindowSize(self):
10         pass
11     def moveObjects(self, t):
12         pass
13     def on_draw(self):
14         pass
15 win = SingleImageAnimation()
16 # Set window background color to gray.
17 pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)
18
19 pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
20
21 pyglet.app.run()
```

Although it is not required, we will encapsulate event handling and other functionality within a class SingleImageAnimation. The program to be developed is short, but in general this is good coding practice, and it will also help any future extension of the code. An instance of SingleImageAnimation is created on line 15. This class is inherited from pyglet.window.Window and encapsulates the functionality we need here. The API method on_draw is overridden by the class; on_draw is called whenever the window needs to be redrawn. Note that we no longer need a decorator statement such as @win.event above the on_draw method, because the window's API method is simply overridden by this inherited class.

The constructor of the class SingleImageAnimation is as follows:

```python
1  def __init__(self, width=None, height=None):
2      pyglet.window.Window.__init__(self,
3                                    width=width,
4                                    height=height,
5                                    resizable=True)
6      self.drawableObjects = []
7      self.rising = False
8      self.ballSprite = None
9      self.createDrawableObjects()
10     self.adjustWindowSize()
```

As mentioned earlier, the class SingleImageAnimation inherits from pyglet.window.Window. However, its constructor doesn't take all the arguments supported by its superclass, because we don't need to change most of the default argument values. If you want to extend this application further and need those arguments, you can do so by adding them as __init__ arguments. The constructor initializes some instance variables and then calls methods to create the animation sprite and resize the window, respectively.

The method createDrawableObjects creates a sprite instance using the ball.png image.
```python
1  def createDrawableObjects(self):
2      """
3      Create sprite objects that will be drawn within the
4      window.
5      """
6      ball_img = pyglet.image.load('images/ball.png')
7      ball_img.anchor_x = ball_img.width / 2
8      ball_img.anchor_y = ball_img.height / 2
9
10     self.ballSprite = pyglet.sprite.Sprite(ball_img)
11     self.ballSprite.position = (
12         self.ballSprite.width + 100,
13         self.ballSprite.height*2 - 50)
14     self.drawableObjects.append(self.ballSprite)
```

The anchor_x and anchor_y properties of the image instance are set so that the image has its anchor exactly at its center. This will be useful when rotating the image later. On line 10, the sprite instance self.ballSprite is created. Later, we will set the width and height of the pyglet window to three times the sprite width and height. The position of the image within the window is set on line 11; the initial position is chosen as shown in the next screenshot. In this case, there is only one Sprite instance; however, to make the program more general, a list of drawable objects called self.drawableObjects is maintained.

To continue the discussion from the previous step, we will now review the on_draw method:

```python
def on_draw(self):
    self.clear()
    for d in self.drawableObjects:
        d.draw()
```

As mentioned previously, on_draw is an API method of the class pyglet.window.Window that is called when the window needs to be redrawn. This method is overridden here. The self.clear() call clears the previously drawn contents of the window; then all the Sprite objects in the list self.drawableObjects are drawn in the for loop.

The preceding image illustrates the initial ball position in the animation.

The method adjustWindowSize sets the width and height parameters of the pyglet window. The code is self-explanatory:

```python
def adjustWindowSize(self):
    w = self.ballSprite.width * 3
    h = self.ballSprite.height * 3
    self.width = w
    self.height = h
```

So far, we have set up everything for the animation to play. Now comes the fun part. We will change the position of the sprite representing the image to achieve the animation effect. During the animation, the image will also be rotated, to give it the natural feel of a bouncing ball.

```python
1  def moveObjects(self, t):
2      if self.ballSprite.y - 100 < 0:
3          self.rising = True
4      elif self.ballSprite.y > self.ballSprite.height*2 - 50:
5          self.rising = False
6
7      if not self.rising:
8          self.ballSprite.y -= 5
9          self.ballSprite.rotation -= 6
10     else:
11         self.ballSprite.y += 5
12         self.ballSprite.rotation += 5
```

This method is scheduled to be called 20 times per second using the following code in the program:

```python
pyglet.clock.schedule_interval(win.moveObjects, 1.0/20)
```

To start with, the ball is placed near the top. The animation should be such that it gradually falls down, hits the bottom, and bounces back. After this, it continues its upward journey to hit a boundary somewhere near the top, and then begins its downward journey again. The code block from lines 2 to 5 checks the current y position of self.ballSprite. If it has hit the upper limit, the flag self.rising is set to False. Likewise, when the lower limit is hit, the flag is set to True. The flag is then used by the next code block to increment or decrement the y position of self.ballSprite. The rotation lines (9 and 12) rotate the Sprite instance: the current rotation angle is incremented or decremented by the given value. This is the reason why we set the image anchors, anchor_x and anchor_y, at the center of the image; the Sprite object honors these image anchors.
If the anchors are not set this way, the ball will be seen wobbling in the resulting animation. Once all the pieces are in place, run the program from the command line as:

$ python SingleImageAnimation.py

This will pop up a window that plays the bouncing ball animation. The next illustration shows some intermediate frames from the animation while the ball is falling.

What just happened?

We learned how to create an animation using just a single image. The image of a ball was represented by a sprite instance. This sprite was then translated and rotated on the screen to accomplish a bouncing ball animation. The whole functionality, including the event handling, was encapsulated in the class SingleImageAnimation.

Data Access Layer

Packt
09 Nov 2016
13 min read
In this article by Alexander Zaytsev, author of NHibernate 4.0 Cookbook, we will cover the following topics:

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository
- Using Named Queries in the data access layer

(For more resources related to this topic, see here.)

Introduction

There are two styles of data access layer common in today's applications: repositories and Data Access Objects. In reality, the distinction between these two has become quite blurred, but in theory it's something like this:

- A repository should act like an in-memory collection. Entities are added to and removed from the collection, and its contents can be enumerated. Queries are typically handled by sending query specifications to the repository.
- A DAO (Data Access Object) is simply an abstraction of an application's data access. Its purpose is to hide the implementation details of the database access from the consuming code.

The first recipe shows the beginnings of a typical data access object. The remaining recipes show how to set up a repository-based data access layer with NHibernate's various APIs.

Transaction Auto-wrapping for the data access layer

In this recipe, we'll show you how to set up the data access layer to wrap all data access in NHibernate transactions automatically.

How to do it...

1. Create a new class library named Eg.Core.Data.
2. Install NHibernate to Eg.Core.Data using the NuGet Package Manager Console.
3. Add the following two DAO classes:

```csharp
public class DataAccessObject<T, TId>
    where T : Entity<TId>
{
    private readonly ISessionFactory _sessionFactory;

    private ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public DataAccessObject(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public T Get(TId id)
    {
        return WithinTransaction(() => session.Get<T>(id));
    }

    public T Load(TId id)
    {
        return WithinTransaction(() => session.Load<T>(id));
    }

    public void Save(T entity)
    {
        WithinTransaction(() => session.SaveOrUpdate(entity));
    }

    public void Delete(T entity)
    {
        WithinTransaction(() => session.Delete(entity));
    }

    private TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap in transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // Don't wrap;
        return func.Invoke();
    }

    private void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

public class DataAccessObject<T> : DataAccessObject<T, Guid>
    where T : Entity
{
}
```

How it works...

NHibernate requires that all data access occur inside an NHibernate transaction. (Remember, the ambient transaction created by TransactionScope is not a substitute for an NHibernate transaction.) This recipe shows a more explicit approach to meeting that requirement. To ensure that at least all our data access layer calls are wrapped in transactions, we create a private WithinTransaction method that accepts a delegate consisting of some data access methods, such as session.Save or session.Get. This WithinTransaction method first checks whether the session has an active transaction. If it does, the delegate is invoked immediately. If it doesn't, a new NHibernate transaction is created, the delegate is invoked, and finally the transaction is committed. If the data access method throws an exception, the transaction is rolled back automatically as the exception bubbles up through the using block.
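A hypothetical usage sketch follows. Product is assumed to be an Entity subclass from the Eg.Core model, and sessionFactory a configured ISessionFactory, both from earlier chapters:

```csharp
// Hypothetical usage; Product and sessionFactory come from earlier chapters.
var dao = new DataAccessObject<Product>(sessionFactory);

// Each call is wrapped in its own transaction if none is already active.
var product = dao.Get(productId);
product.Name = "New and improved name";
dao.Save(product);
```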
There's more...

This transactional auto-wrapping can also be set up using SessionWrapper from the unofficial NHibernate AddIns project at https://bitbucket.org/fabiomaulo/unhaddins. This class wraps a standard NHibernate session. By default, it will throw an exception when the session is used without an NHibernate transaction. However, it can be configured to check for and create a transaction automatically, much in the same way shown here.

See also

- Setting up an NHibernate repository

Setting up an NHibernate repository

Many developers prefer the repository pattern over data access objects. In this recipe, we'll show you how to set up the repository pattern with NHibernate.

How to do it...

1. Create a new, empty class library project named Eg.Core.Data.
2. Add a reference to the Eg.Core project.
3. Add the following IRepository interface:

public interface IRepository<T> : IEnumerable<T>
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

4. Create a new, empty class library project named Eg.Core.Data.Impl.
5. Add references to the Eg.Core and Eg.Core.Data projects.
6. Add a new abstract class named NHibernateBase using the following code:

public abstract class NHibernateBase
{
    protected readonly ISessionFactory _sessionFactory;

    protected virtual ISession session
    {
        get { return _sessionFactory.GetCurrentSession(); }
    }

    public NHibernateBase(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    protected virtual TResult WithinTransaction<TResult>(Func<TResult> func)
    {
        if (!session.Transaction.IsActive)
        {
            // Wrap the call in a new transaction
            TResult result;
            using (var tx = session.BeginTransaction())
            {
                result = func.Invoke();
                tx.Commit();
            }
            return result;
        }
        // A transaction is already active; don't wrap
        return func.Invoke();
    }

    protected virtual void WithinTransaction(Action action)
    {
        WithinTransaction<bool>(() =>
        {
            action.Invoke();
            return false;
        });
    }
}

7. Add a new class named NHibernateRepository using the following code:

public class NHibernateRepository<T> : NHibernateBase, IRepository<T>
    where T : Entity
{
    public NHibernateRepository(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public void Add(T item)
    {
        WithinTransaction(() => session.Save(item));
    }

    public bool Contains(T item)
    {
        if (item.Id == default(Guid))
            return false;
        return WithinTransaction(() => session.Get<T>(item.Id)) != null;
    }

    public int Count
    {
        get { return WithinTransaction(() => session.Query<T>().Count()); }
    }

    public bool Remove(T item)
    {
        WithinTransaction(() => session.Delete(item));
        return true;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return WithinTransaction(() => session.Query<T>()
            .Take(1000).GetEnumerator());
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return WithinTransaction(() => GetEnumerator());
    }
}

How it works...

The repository pattern, as explained at http://martinfowler.com/eaaCatalog/repository.html, has two key features:

- It behaves as an in-memory collection.
- Query specifications are submitted to the repository for satisfaction.

In this recipe, we are concerned only with the first feature: behaving as an in-memory collection. The remaining recipes in this article will build on this base, and show various methods for satisfying the second point. Because our repository should act like an in-memory collection, it makes sense that our IRepository<T> interface should resemble ICollection<T>. Our NHibernateBase class provides both contextual session management and the automatic transaction wrapping explained in the previous recipe. NHibernateRepository simply implements the members of IRepository<T>.
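Using the repository then looks like working with any other collection. Here is a minimal sketch, again assuming a Product entity (with a Name property, used for illustration only) and a contextual session bound as in the previous recipe:

// Product is an assumed Entity subclass; sessionFactory is configured elsewhere.
IRepository<Product> repository = new NHibernateRepository<Product>(sessionFactory);

var product = new Product { Name = "NHibernate 4.0 Cookbook" };
repository.Add(product);                   // session.Save, inside a transaction

bool known = repository.Contains(product); // Get by id, inside a transaction
int total = repository.Count;              // LINQ Count, inside a transaction

foreach (var p in repository)              // enumerates at most 1000 entities
    Console.WriteLine(p.Name);

Note the Take(1000) in GetEnumerator: it acts as a safety limit, so enumerating the repository will never pull more than 1000 entities from the database.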
There's more...

The repository pattern reduces data access to its absolute simplest form, but this simplification comes with a price. We lose much of the power of NHibernate behind an abstraction layer. Our application must either do without even basic session methods, such as Merge, Refresh, and Load, or allow them to leak through the abstraction.

See also

- Transaction Auto-wrapping for the data access layer
- Using Named Queries in the data access layer

Using Named Queries in the data access layer

Named Queries encapsulated in query objects is a powerful combination. In this recipe, we'll show you how to use Named Queries with your data access layer.

Getting ready

To complete this recipe, you will need the Common Service Locator from Microsoft Patterns & Practices. The documentation and source code can be found at http://commonservicelocator.codeplex.com. Complete the previous recipe, Setting up an NHibernate repository. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.config with the following XML:

<mapping assembly="Eg.Core.Data.Impl"/>

How to do it...

1. In the Eg.Core.Data project, add a folder for the Queries namespace.
2. Add the following IQuery interfaces:

public interface IQuery
{
}

public interface IQuery<TResult> : IQuery
{
    TResult Execute();
}

3. Add the following IQueryFactory interface:

public interface IQueryFactory
{
    TQuery CreateQuery<TQuery>() where TQuery : IQuery;
}

4. Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code:

public interface IRepository<T> : IEnumerable<T>, IQueryFactory
    where T : Entity
{
    void Add(T item);
    bool Contains(T item);
    int Count { get; }
    bool Remove(T item);
}

5. In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code:

private readonly IQueryFactory _queryFactory;

public NHibernateRepository(
    ISessionFactory sessionFactory,
    IQueryFactory queryFactory)
    : base(sessionFactory)
{
    _queryFactory = queryFactory;
}

6. Add the following method to NHibernateRepository:

public TQuery CreateQuery<TQuery>() where TQuery : IQuery
{
    return _queryFactory.CreateQuery<TQuery>();
}

7. In the Eg.Core.Data.Impl project, add a folder for the Queries namespace.
8. Install Common Service Locator using the NuGet Package Manager Console, using the following command:
Install-Package CommonServiceLocator

9. To the Queries namespace, add this QueryFactory class:

public class QueryFactory : IQueryFactory
{
    private readonly IServiceLocator _serviceLocator;

    public QueryFactory(IServiceLocator serviceLocator)
    {
        _serviceLocator = serviceLocator;
    }

    public TQuery CreateQuery<TQuery>() where TQuery : IQuery
    {
        return _serviceLocator.GetInstance<TQuery>();
    }
}

10. Add the following NHibernateQueryBase class:

public abstract class NHibernateQueryBase<TResult>
    : NHibernateBase, IQuery<TResult>
{
    protected NHibernateQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public abstract TResult Execute();
}

11. Add an empty INamedQuery interface, as shown in the following code:

public interface INamedQuery
{
    string QueryName { get; }
}

12. Add a NamedQueryBase class, as shown in the following code (NHibernate's IQuery is fully qualified to avoid a collision with our own IQuery marker interface):

public abstract class NamedQueryBase<TResult>
    : NHibernateQueryBase<TResult>, INamedQuery
{
    protected NamedQueryBase(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public override TResult Execute()
    {
        var nhQuery = GetNamedQuery();
        return WithinTransaction(() => Execute(nhQuery));
    }

    protected abstract TResult Execute(NHibernate.IQuery query);

    protected virtual NHibernate.IQuery GetNamedQuery()
    {
        var nhQuery = session.GetNamedQuery(QueryName);
        SetParameters(nhQuery);
        return nhQuery;
    }

    protected abstract void SetParameters(NHibernate.IQuery nhQuery);

    public virtual string QueryName
    {
        get { return GetType().Name; }
    }
}

13. In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture.
14. Add the following test and three helper methods:

[Test]
public void NamedQueryCheck()
{
    var errors = new StringBuilder();

    var queryObjectTypes = GetNamedQueryObjectTypes();
    var mappedQueries = GetNamedQueryNames();

    foreach (var queryType in queryObjectTypes)
    {
        var query = GetQuery(queryType);
        if (!mappedQueries.Contains(query.QueryName))
        {
            errors.AppendFormat(
                "Query object {0} references non-existent " +
                "named query {1}.",
                queryType, query.QueryName);
            errors.AppendLine();
        }
    }

    if (errors.Length != 0)
        Assert.Fail(errors.ToString());
}

private IEnumerable<Type> GetNamedQueryObjectTypes()
{
    var namedQueryType = typeof(INamedQuery);
    var queryImplAssembly = typeof(BookWithISBN).Assembly;

    var types = from t in queryImplAssembly.GetTypes()
                where namedQueryType.IsAssignableFrom(t)
                      && t.IsClass
                      && !t.IsAbstract
                select t;
    return types;
}

private IEnumerable<string> GetNamedQueryNames()
{
    var nhCfg = NHConfigurator.Configuration;
    var mappedQueries = nhCfg.NamedQueries.Keys
        .Union(nhCfg.NamedSQLQueries.Keys);
    return mappedQueries;
}

private INamedQuery GetQuery(Type queryType)
{
    return (INamedQuery) Activator.CreateInstance(
        queryType,
        new object[] { SessionFactory });
}

15. For our example query, in the Queries namespace of Eg.Core.Data, add the following interface:

public interface IBookWithISBN : IQuery<Book>
{
    string ISBN { get; set; }
}

16. Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code:

public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN
{
    public BookWithISBN(ISessionFactory sessionFactory)
        : base(sessionFactory)
    {
    }

    public string ISBN { get; set; }

    protected override void SetParameters(NHibernate.IQuery nhQuery)
    {
        nhQuery.SetParameter("isbn", ISBN);
    }

    protected override Book Execute(NHibernate.IQuery query)
    {
        return query.UniqueResult<Book>();
    }
}

17. Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following XML code:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping>
  <query name="BookWithISBN">
    <![CDATA[
    from Book b where b.ISBN = :isbn
    ]]>
  </query>
</hibernate-mapping>

How it works...

As we learned in the previous recipe, according to the repository pattern, the repository is responsible for fulfilling queries based on the specifications submitted to it. These specifications are limiting. They only concern themselves with whether a particular item matches the given criteria. They don't care about other necessary technical details, such as eager loading of children, batching, query caching, and so on. We need something more powerful than simple where clauses. We lose too much to the abstraction.

The query object pattern defines a query object as a group of criteria that can self-organize into a SQL query. The query object is not responsible for the execution of this SQL. This is handled elsewhere, by some generic query runner, perhaps inside the repository. While a query object can better express the different technical requirements, such as eager loading, batching, and query caching, a generic query runner can't easily implement those concerns for every possible query, especially across the half-dozen query APIs provided by NHibernate. These details about the execution are specific to each query, and should be handled by the query object. This enhanced query object pattern, as Fabio Maulo has named it, not only self-organizes into SQL but also executes the query, returning the results. In this way, the technical concerns of a query's execution are defined and cared for with the query itself, rather than spreading into some highly complex, generic query runner.

According to the abstraction we've built, the repository represents the collection of entities that we are querying. Since the two are already logically linked, if we allow the repository to build the query objects, we can add some context to our code. For example, suppose we have an application service that runs product queries. When we inject dependencies, we could specify IQueryFactory directly. This doesn't give us much information beyond "this service runs queries." If, however, we inject IRepository<Product>, we have a much better idea about what data the service is using.

The IQuery interface is simply a marker interface for our query objects. Besides advertising the purpose of our query objects, it allows us to easily identify them with reflection. The IQuery<TResult> interface is implemented by each query object. It specifies only the return type and a single method to execute the query. The IQueryFactory interface defines a service to create query objects. For the purpose of explanation, the implementation of this service, QueryFactory, is a simple service locator. IQueryFactory is used internally by the repository to instantiate query objects.
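QueryFactory will work with any container that provides a Common Service Locator adapter (Unity, Castle Windsor, and others ship such adapters). Purely for illustration, and not as production code, the following naive locator derived from ServiceLocatorImplBase sketches the idea: it maps a query interface to an implementation type and constructs it with the session factory. The type map and class name are assumptions of this sketch:

using System;
using System.Collections.Generic;
using Microsoft.Practices.ServiceLocation;
using NHibernate;

// A deliberately simple locator for demonstration: resolves query
// interfaces to implementations constructed with the session factory.
public class SimpleQueryLocator : ServiceLocatorImplBase
{
    private readonly ISessionFactory _sessionFactory;
    private readonly IDictionary<Type, Type> _map;

    public SimpleQueryLocator(ISessionFactory sessionFactory,
                              IDictionary<Type, Type> map)
    {
        _sessionFactory = sessionFactory;
        _map = map;
    }

    protected override object DoGetInstance(Type serviceType, string key)
    {
        return Activator.CreateInstance(_map[serviceType], _sessionFactory);
    }

    protected override IEnumerable<object> DoGetAllInstances(Type serviceType)
    {
        yield return DoGetInstance(serviceType, null);
    }
}

Wired up, the repository might then be constructed like this:

var locator = new SimpleQueryLocator(sessionFactory,
    new Dictionary<Type, Type> { { typeof(IBookWithISBN), typeof(BookWithISBN) } });
var books = new NHibernateRepository<Book>(sessionFactory, new QueryFactory(locator));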
The NamedQueryBase class handles most of the plumbing for query objects based on named HQL and SQL queries. As a convention, the name of the query is the name of the query object type. That is, the underlying named query for BookWithISBN is also named BookWithISBN. Each individual query object must simply implement SetParameters and Execute(NHibernate.IQuery query), which usually consists of a simple call to query.List<SomeEntity>() or query.UniqueResult<SomeEntity>(). The INamedQuery interface both identifies the query objects based on Named Queries and provides access to the query name. The NamedQueryCheck test uses this to verify that each INamedQuery query object has a matching named query.

Each query has an interface. This interface is used to request the query object from the repository. It also defines any parameters used in the query. In this example, IBookWithISBN has a single string parameter, ISBN. The implementation of this query object sets the :isbn parameter on the internal NHibernate query, executes it, and returns the matching Book object. Finally, we also create a mapping containing the named query BookWithISBN, which is loaded into the configuration with the rest of our mappings.

The code used in the query object setup would look like the following:

var query = bookRepository.CreateQuery<IBookWithISBN>();
query.ISBN = "12345";
var book = query.Execute();

See also

- Transaction Auto-wrapping for the data access layer
- Setting up an NHibernate repository

Summary

In this article, we learned how to set up transaction auto-wrapping for the data access layer, how to set up an NHibernate repository, and how to use Named Queries in the data access layer.

Resources for Article:

Further resources on this subject:
- Memory Management [article]
- Getting Started with Spring Security [article]
- Design with Spring AOP [article]


Working with Entity Client and Entity SQL

Packt
21 Aug 2015
11 min read
In this article by Joydip Kanjilal, author of the book Entity Framework Tutorial - Second Edition, we will see how Entity Framework contains a powerful client-side query engine that allows you to execute queries against the conceptual model of data, irrespective of the underlying data store in use. This query engine works with a rich functional language called Entity SQL (or E-SQL for short), a derivative of Transact SQL (T-SQL), that enables you to query entities or a collection of entities.

(For more resources related to this topic, see here.)

An overview of the E-SQL language

Entity Framework allows you to write programs against the EDM and also adds a level of abstraction on top of the relational model. This isolation of the logical view of data from the Object Model is accomplished by expressing queries in terms of abstractions using an enhanced query language called E-SQL. This language is specially designed to query data from the EDM. E-SQL was designed to address the need for a language that can query data from its conceptual view, rather than its logical view.

From T-SQL to E-SQL

SQL is the primary language that has been in use for years for querying databases. Remember, SQL is a standard and not owned by any particular database vendor. SQL-92 is the most popular SQL standard currently in use; it was released in 1992, and the 92 in the name reflects this fact. Different database vendors implemented their own flavors of the SQL-92 standard. The T-SQL language was designed by Microsoft as the SQL Server implementation of the SQL-92 standard. Similarly, the E-SQL language is Entity Framework's implementation of the SQL-92 standard that can be used to query data from the EDM. E-SQL is a text-based, provider-independent query language used by Entity Framework to express queries in terms of EDM abstractions and to query data from the conceptual layer of the EDM.

One of the major differences between E-SQL and T-SQL is in nested queries. Note that you should always enclose your nested queries in E-SQL using parentheses, as seen here:

SELECT d, (SELECT DEREF (e)
           FROM NAVIGATE (d, PayrollEntities.FK_Employee_Department) AS e)
           AS Employees
FROM PayrollEntities.Department AS d;

The SELECT VALUE... statement is used to retrieve singleton values. It is also used to retrieve values that don't have any column names. However, the SELECT ROW... statement is used to select one or more rows. As an example, if you want a value as a collection from an entity without the column name, you can use the VALUE keyword in the SELECT statement, as shown here:

SELECT VALUE emp.EmployeeName FROM PayrollEntities.Employee as emp

The preceding statement will return the employee names from the Employee entity as a collection of strings.

In T-SQL, you can place the ORDER BY clause at the end of the last query when using UNION ALL:

SELECT EmployeeID FROM Employee
UNION ALL
SELECT EmployeeID FROM Salary
ORDER BY EmployeeID

On the contrary, you do not have the ORDER BY clause in the UNION ALL operator in E-SQL.

Why E-SQL when I already have LINQ to Entities?

LINQ to Entities is a new version of LINQ, well suited for Entity Framework. But why do you need E-SQL when you already have LINQ to Entities available to you? LINQ to Entities queries are verified at the time of compilation. Therefore, it is not at all suited for building and executing dynamic queries. On the contrary, E-SQL queries are verified at runtime, so they can be used for building and executing dynamic queries. You now have a new ADO.NET provider in E-SQL, which is a sophisticated query engine that can be used to query your data from the conceptual model. It should be noted, however, that both LINQ and E-SQL queries are converted into canonical command trees that are in turn translated into database-specific query statements, based on the underlying database provider in use.
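Because E-SQL queries are just strings composed at runtime, they can be executed directly through the Entity Client provider. The following is a minimal sketch; the connection string name PayrollEntities and the projected columns are assumptions based on the model used in this article, and note that EntityConnection and EntityCommand live in System.Data.EntityClient (System.Data.Entity.Core.EntityClient in EF6). Entity Client data readers must be opened with CommandBehavior.SequentialAccess:

using System;
using System.Data;
using System.Data.EntityClient;

class ESqlExample
{
    static void Main()
    {
        // "name=PayrollEntities" is assumed to be defined in App.config.
        using (var conn = new EntityConnection("name=PayrollEntities"))
        {
            conn.Open();
            using (EntityCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "SELECT e.EmployeeID, e.FirstName " +
                    "FROM PayrollEntities.Employee AS e " +
                    "WHERE e.EmployeeID = @id";
                cmd.Parameters.AddWithValue("id", 1);

                // SequentialAccess is required by the Entity Client reader.
                using (var reader =
                    cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}",
                            reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}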
We will now take a quick look at the features of E-SQL before we delve deep into this language.

Features of E-SQL

These are the features of E-SQL:

- Provider neutrality: E-SQL is independent of the underlying ADO.NET data provider in use because it works on top of the conceptual model.
- SQL-like: The syntax of E-SQL statements resembles T-SQL.
- Expressive with support for entities and types: You can write your E-SQL queries in terms of EDM abstractions.
- Composable and orthogonal: You can use a subquery wherever you have support for an expression of that type. The subqueries are all treated uniformly, regardless of where they have been used.

In the sections that follow, we will take a look at the E-SQL language in depth. We will discuss operators, expressions, identifiers, variables, parameters, and canonical functions.

Operators in E-SQL

An operator operates on one or more operands to perform an operation. Operators in E-SQL can broadly be classified into the following categories:

- Arithmetic operators: These are used to perform arithmetic operations.
- Comparison operators: You can use these to compare the values of two operands.
- Logical operators: These are used to perform logical operations.
- Reference operators: These act as logical pointers to a particular entity belonging to a particular entity set.
- Type operators: These can operate on the type of an expression.
- Case operators: These operate on a set of Boolean expressions.
- Set operators: These operate on set operations.

Arithmetic operators

Here is an example of an arithmetic operator:

SELECT VALUE s FROM PayrollEntities.Salary AS s WHERE s.Basic = 5000 + 1000

The following arithmetic operators are available in E-SQL: + (add), - (subtract), / (divide), % (modulo), and * (multiply).

Comparison operators

Here is an example of a comparison operator:

SELECT VALUE e FROM PayrollEntities.Employee AS e WHERE e.EmployeeID = 1

The following comparison operators are available in E-SQL: = (equals), != (not equal to), <> (not equal to), > (greater than), < (less than), >= (greater than or equal to), and <= (less than or equal to).

Logical operators

Here is an example of using logical operators in E-SQL:

SELECT VALUE s FROM PayrollEntities.Salary AS s
WHERE s.Basic > 5000 && s.Allowances > 3000

The logical operators available in E-SQL are && (and), ! (not), and || (or).

Reference operators

The following is an example of how you can use a reference operator in E-SQL:

SELECT VALUE REF(e).FirstName FROM PayrollEntities.Employee as e

The reference operators available in E-SQL are Key, Ref, CreateRef, and DeRef.

Type operators

Here is an example of a type operator that returns a collection of employees from a collection of persons:

SELECT VALUE e FROM OFTYPE(PayrollEntities.Person, PayrollEntities.Employee) AS e

The type operators available in E-SQL are OfType, Cast, Is [Not] Of, and Treat.

Set operators

This is an example of how you can use a set operator in E-SQL:

(SELECT VALUE e FROM PayrollEntities.Employee AS e WHERE e.FirstName LIKE 'J%')
UNION ALL
(SELECT VALUE s FROM PayrollEntities.Employee AS s WHERE s.DepartmentID = 1)

The set operators available in E-SQL are Set, Union, Element, AnyElement, Except, [Not] Exists, [Not] In, Overlaps, and Intersect.

Operator precedence

When you have multiple operators operating in a sequence, the order in which the operators will be executed is determined by operator precedence. The following table shows the operators, their type, and their precedence levels in the E-SQL language:

Operators      Operator type      Precedence level
. , [] ()      Primary            Level 1
! not          Unary              Level 2
* / %          Multiplicative     Level 3
+ -            Additive           Level 4
< > <= >=      Relational         Level 5
= != <>        Equality           Level 6
&&             Conditional AND    Level 7
||             Conditional OR     Level 8

Expressions in E-SQL

Expressions are the building blocks of the E-SQL language. Here are some examples of how expressions are represented:

1;          //This represents one scalar item
{2};        //This represents a collection of one element
{3, 4, 5}   //This represents a collection of multiple elements

Query expressions in E-SQL

Query expressions are used in conjunction with query operators to perform a certain operation and return a result set. Query expressions in E-SQL are actually a series of clauses that are represented using one or more of the following:

- SELECT: This clause is used to specify or limit the number of elements that are returned when a query is executed in E-SQL.
- FROM: This clause is used to specify the source or collection for retrieval of the elements in a query.
- WHERE: This clause is used to specify a particular expression.
- HAVING: This clause is used to specify a filter condition for retrieval of the result set.
- GROUP BY: This clause is used to group the elements returned by a query.
- ORDER BY: This clause is used to order the elements returned in either ascending or descending order.

Here is the complete syntax of query expressions in E-SQL:

SELECT VALUE [ ALL | DISTINCT ]
FROM expression [ ,...n ] as C
[ WHERE expression ]
[ GROUP BY expression [ ,...n ] ]
[ HAVING search_condition ]
[ ORDER BY expression ]

And here is an example of a typical E-SQL query with all clause types being used:

SELECT emp.FirstName
FROM PayrollEntities.Employee emp, PayrollEntities.Department dept
WHERE emp.DepartmentID = dept.DepartmentID
GROUP BY dept.DepartmentName
HAVING emp.EmployeeID > 5

Identifiers, variables, parameters, and types in E-SQL

Identifiers in E-SQL are of the following two types:

- Simple identifiers
- Quoted identifiers

Simple identifiers are a sequence of alphanumeric or underscore characters. Note that an identifier should always begin with an alphabetical character.
As an example, the following are valid identifiers:

a12_ab
M_09cd
W0001m

However, the following are invalid identifiers:

9abcd
_xyz
0_pqr

Quoted identifiers are those that are enclosed within square brackets ([]). Here are some examples of quoted identifiers:

SELECT emp.EmployeeName AS [Employee Name] FROM Employee as emp
SELECT dept.DepartmentName AS [Department Name] FROM Department as dept

Quoted identifiers cannot contain newline, tab, backspace, or carriage return characters. In E-SQL, a variable is a reference to a named expression. Note that the naming conventions for variables follow the same rules as for identifiers. In other words, a valid variable reference to a named expression in E-SQL should be a valid identifier too. Here is an example:

SELECT emp FROM Employee as emp;

In the preceding example, emp is a variable reference.

Types can be of three versions:

- Primitive types, such as integers and strings
- Nominal types, such as entity types, entity sets, and relationships
- Transient types, such as rows, collections, and references

The E-SQL language supports the following type categories: rows, collections, and references.

Row

A row, which is also known as a tuple, has no identity or behavior and cannot be inherited. The following statement returns one row that contains two elements:

ROW (1, 'Joydip');

Collections

Collections represent zero or more instances of other instances. You can use SET() to retrieve unique values from a collection of values. Here is an example:

SET({1,1,2,2,3,3,4,4,5,5,6,6})

The preceding example will return the unique values from the set; specifically, 1, 2, 3, 4, 5, and 6. This is equivalent to the following statement:

SELECT VALUE DISTINCT x FROM {1,1,2,2,3,3,4,4,5,5,6,6} AS x;

You can create collections using MULTISET(), or even using {}, as shown in the following examples:

MULTISET (1, 2, 3, 4, 5, 6)

The following represents the same as the preceding example:

{1, 2, 3, 4, 5, 6}

Here is how you can return a collection of 10 identical rows, each with two elements in them:

SELECT ROW(1,'Joydip') FROM {1,2,3,4,5,6,7,8,9,10}

To return a collection of all rows from the Employee set, you can use the following:

SELECT emp FROM PayrollEntities.Employee AS emp;

Similarly, to select all rows from the Department set, you use the following:

SELECT dept FROM PayrollEntities.Department AS dept;

Reference

A reference denotes a logical pointer, or reference, to a particular entity. In essence, it is a foreign key to a specific entity set. Operators are used to perform operations on one or more operands. In E-SQL, the following operators are available to construct, deconstruct, and navigate through references: KEY, REF, CREATEREF, and DEREF.

To create a reference to an instance of Employee, you can use REF() as shown here:

SELECT REF (emp) FROM PayrollEntities.Employee AS emp

Once you have created a reference to an entity using REF(), you can also dereference the entity using DEREF(), as shown:

DEREF (CREATEREF(PayrollEntities.Employee, ROW(@EmployeeID)))

Summary

In this article, we explored E-SQL and how it can be used with the Entity Client provider to perform CRUD operations in our applications. We discussed the differences between E-SQL and T-SQL, and between E-SQL and LINQ. We also discussed when one should choose E-SQL instead of LINQ to query data in applications.
Resources for Article:

Further resources on this subject:
- Hosting the service in IIS using the TCP protocol [article]
- Entity Framework Code-First: Accessing Database Views and Stored Procedures [article]
- Entity Framework DB First – Inheritance Relationships between Entities [article]


Getting Places

Packt
13 Oct 2015
8 min read
In this article by Nafiul Islam, the author of Mastering PyCharm, we'll learn all about navigation. It is divided into three parts. The first part is called Omni, which deals with getting to anywhere from any place. The second is called Macro, which deals with navigating to places of significance. The third and final part is about moving within a file, and it is called Micro. By the end of this article, you should be able to navigate freely and quickly within PyCharm, and use the right tool for the job to do so. Veteran PyCharm users may not find their favorite navigation tool mentioned or explained. This is because the methods of navigation described throughout this article will lead readers to discover their own preferred tools.

(For more resources related to this topic, see here.)

Omni

In this section, we will discuss the tools that PyCharm provides for a user to go from anywhere to any place. You could be in your project directory one second; the next, you could be inside the Python standard library or a class in your file. These tools are generally slow, or at least slower than the more precise navigation tools provided.

Back and Forward

The Back and Forward actions allow you to move your cursor back to the place where it was previously for more than a few seconds, or where you've made edits. This information persists throughout sessions, so even if you exit the IDE, you can still get back to the positions that you were in before you quit. This falls into the Omni category because these two actions could potentially take you from any place within a file to any place within any file in your directory (that you have been to), and even to parts of the standard library and third-party Python packages that you've looked into. The Back and Forward actions are perhaps two of my most used navigation actions, and you can find their shortcuts in Keymap. Or, one can simply click on the Navigate menu to see the keyboard shortcuts:

Macro

The difference between Macro and Omni is subtle. Omni allows you to go to the exact location of a place, even a place of no particular significance (say, the third line of a documentation string), in any file. Macro, on the other hand, allows you to navigate anywhere of significance, such as a function definition, class declaration, or particular class method.

Go to definition or navigate to declaration

Go to definition is the old name for Navigate to Declaration in PyCharm. This action, like the one previously discussed, could lead you anywhere: a class inside your project or a third-party library function. What this action does is allow you to go to the source file declaration of a module, package, class, function, and so on. Keymap is once again useful in finding the shortcut for this particular action. Using this action will move your cursor to the file where the class or function is declared, be it in your project or elsewhere. Just place your cursor on the function or class and invoke the action. Your cursor will now be directly where the function or class was declared.

There is, however, a slight problem with this. If one tries to go to the declaration of a .so object, such as the datetime module or the select module, what one will encounter is a stub file (discussed in detail later). These are helper files that allow PyCharm to give you the code completion that it does. Modules that are .so files are indicated by a terminal icon, as shown here:

Search Everywhere

The action speaks for itself. You can search for classes, files, methods, and even actions.
Universally invoked using double Shift (pressing Shift twice in quick succession), this nifty action looks like any other search bar. Search Everywhere searches only inside your project by default; however, it can also be used to search non-project items. Not using that option leads to faster searches and a lower memory footprint. Search Everywhere is a gateway to the other search actions available in PyCharm. In the preceding screenshot, one can see that Search Everywhere has separate parts, such as Recent Files and Classes. Each of these parts has a shortcut next to its section name. If you find yourself using Search Everywhere for Classes all the time, you might start using the Navigate Class action instead, which is much faster.

The Switcher tool

The Switcher tool allows you to quickly navigate through your currently open tabs, recently opened files, as well as all of your panels. This tool is essential, since you always navigate between tabs. A star to the left indicates open tabs; everything else is a recently opened or edited file. If you have just one file open, Switcher will show more of your recently opened files. It's really handy this way, since almost always the files that you want to go to are options in Switcher.

The Project panel

The Project panel is what I use to see the structure of my project, as well as to search for files that I can't find with Switcher. This panel is by far the most used panel of all, and for good reason. The Project panel also supports search; just open it up and start typing to find your file. However, the Project panel can give you an even better understanding of what your code looks like if you have Show Members enabled. Once this is enabled, you can see the classes as well as the declared methods inside your files. Note that search works just like before, meaning that your search is limited to only the files/objects that you can see; if you collapse everything, you won't be able to search either your files or the classes and methods in them.

Micro

Micro deals with getting to places within a file. These tools are perhaps what I end up using the most in my development.

The Structure panel

The Structure panel gives you a bird's eye view of the file that you currently have your cursor on. This panel is indispensable when trying to understand a project that one is not familiar with. The yellow arrow indicates the option to show inherited fields and methods. The red arrow indicates the option to show field names, meaning that if it is turned off, you will only see properties and methods. The orange arrow indicates the option to scroll to and from the source. If both are turned on (scroll to and scroll from), the position of your cursor is synchronized with the method, field, or property highlighted in the Structure panel. Inherited fields are grayed out in the display.

Ace Jump

This is my favorite navigation plugin, and it was made by John Lindquist, a developer at JetBrains (the creators of PyCharm). Ace Jump is inspired by the Emacs mode of the same name. It allows you to jump from one place to another within the same file. Before one can use Ace Jump, one has to install the plugin for it. Ace Jump is usually invoked using Ctrl or command + ; (semicolon). You can search for it in Keymap as well, where it is called Ace Jump. Once invoked, you get a small box in which you can input a letter. Choose a letter from the word that you want to navigate to, and you will see letters pop up on that letter immediately.
If we were to hit D, the cursor would move to the position indicated by D. This might seem long-winded, but it actually leads to really fast navigation. If we wanted to select the word indicated by the letter, we'd invoke Ace Jump twice before entering a letter. This turns the Ace Jump box red. Upon hitting B, the named parameter rounding will be selected. Often, we don't want to go to a word, but rather to the beginning or the end of a line. To do this, just invoke Ace Jump and then hit the left arrow for line beginnings or the right arrow for line endings. In this case, we'd just hit V to jump to the beginning of the line that starts with num_type. Had we hit the right arrow instead of the left one, we would get line-ending options.

Summary

In this article, I discussed some of the best tools for navigation. This is by no means an exhaustive list. However, these tools will serve as a gateway to the more precise navigation tools available in PyCharm. I generally use Ace Jump, Back, Forward, and Switcher the most when I write code. The Project panel is always open for me, with the most used files having their classes and methods expanded for quick search.

Resources for Article:

Further resources on this subject:
- Enhancing Your Blog with Advanced Features [article]
- Adding a developer with Django forms [article]
- Deployment and Post Deployment [article]

An Overview of Tomcat 6 Servlet Container: Part 1

Packt
18 Jan 2010
11 min read
In practice, it is highly unlikely that you will interface an EJB container from WebSphere and a JMS implementation from WebLogic with the Tomcat servlet container from the Apache foundation, but it is at least theoretically possible. Note that the term 'interface', as it is used here, also encompasses abstract classes. The specification's API might provide a template implementation whose operations are defined in terms of some basic set of primitives that are kept abstract for the service provider to implement. A service provider is required to make available concrete implementations of these interfaces and abstract classes. For example, the HttpSession interface is implemented by Tomcat in the form of org.apache.catalina.session.StandardSession.

Let's examine the image of the Tomcat container. The objective of this article is to cover the primary request processing components that are present in this image. Advanced topics, such as clustering and security, are shown as shaded in this image and are not covered. In this image, the '+' symbol after the Service, Host, Context, and Wrapper instances indicates that there can be one or more of these elements. For instance, a Service may have a single Engine, but an Engine can contain one or more Hosts. In addition, the whirling circle represents a pool of request processor threads. Here, we will fly over the architecture of Tomcat from a 10,000-foot perspective, taking in the sights as we go.

Component taxonomy

Tomcat's architecture follows the construction of a Matrushka doll from Russia. In other words, it is all about containment, where one entity contains another, and that entity in turn contains yet another. In Tomcat, a 'container' is a generic term that refers to any component that can contain another, such as a Server, Service, Engine, Host, or Context. Of these, the Server and Service components are special containers, designated as Top Level Elements, as they represent aspects of the running Tomcat instance. All the other Tomcat components are subordinate to these top level elements.

The Engine, Host, and Context components are officially termed Containers, and refer to components that process incoming requests and generate an appropriate outgoing response.

Nested Components can be thought of as sub-elements that can be nested inside either Top Level Elements or other Containers to configure how they function. Examples of nested components include the Valve, which represents a reusable unit of work; the Pipeline, which represents a chain of Valves strung together; and a Realm, which helps set up container-managed security for a particular container. Other nested components include the Loader, which is used to enforce the specification's guidelines for servlet class loading; the Manager, which supports session management for each web application; the Resources component, which represents the web application's static resources and a mechanism to access these resources; and the Listener, which allows you to insert custom processing at important points in a container's life cycle, such as when a component is being started or stopped. Not all nested components can be nested within every container.

A final major component, which falls into its own category, is the Connector. It represents the connection end point that an external client (such as a web browser) can use to connect to the Tomcat container. Before we go on to examine these components, let's take a quick look at how they are organized structurally.
Note that this diagram only shows the key properties of each container. When Tomcat is started, the Java Virtual Machine (JVM) instance in which it runs will contain a singleton Server top level element, which represents the entire Tomcat server. A Server will usually contain just one Service object, which is a structural element that combines one or more Connectors (for example, an HTTP and an HTTPS connector) that funnel incoming requests through to a single Catalina servlet Engine. The Engine represents the core request processing code within Tomcat and supports the definition of multiple Virtual Hosts within it. A virtual host allows a single running Tomcat engine to make it seem to the outside world that there are multiple separate domains (for example, www.my-site.com and www.your-site.com) being hosted on a single machine.

Each virtual host can, in turn, support multiple web applications known as Contexts that are deployed to it. A context is represented using the web application format specified by the servlet specification, either as a single compressed WAR (Web Application Archive) file or as an uncompressed directory. In addition, a context is configured using a web.xml file, as defined by the servlet specification. A context can, in turn, contain multiple servlets that are deployed into it, each of which is wrapped in a Wrapper component.

The Server, Service, Connector, Engine, Host, and Context elements that will be present in a particular running Tomcat instance are configured using the server.xml configuration file.
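That containment hierarchy maps directly onto the structure of server.xml. A stripped-down file, with values shown purely for illustration, might look similar to the following:

<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"/>
    <Connector port="8009" protocol="AJP/1.3"/>
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps">
        <Context path="/myapp" docBase="myapp"/>
      </Host>
    </Engine>
  </Service>
</Server>

Reading from the outside in, you can see the Server and Service top level elements, two Connectors feeding a single Engine, a virtual Host, and a deployed Context, exactly as described above.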
Architectural benefits

This architecture has a couple of useful features. It not only makes it easy to manage component life cycles (each component manages the life cycle notifications for its children), but also to dynamically assemble a running Tomcat server instance based on the information that has been read from configuration files at startup. In particular, the server.xml file is parsed at startup, and its contents are used to instantiate and configure the defined elements, which are then assembled into a running Tomcat instance. The server.xml file is read only once, and edits to it will not be picked up until Tomcat is restarted.

This architecture also eases the configuration burden by allowing child containers to inherit the configuration of their parent containers. For instance, a Realm defines a data store that can be used for authentication and authorization of users who are attempting to access protected resources within a web application. For ease of configuration, a realm that is defined for an engine applies to all its children hosts and contexts. At the same time, a particular child, such as a given context, may override its inherited realm by specifying its own realm to be used in place of its parent's realm.

Top Level Components

The Server and Service container components exist largely as structural conveniences. A Server represents the running instance of Tomcat and contains one or more Service children, each of which represents a collection of request processing components.

Server

A Server represents the entire Tomcat instance and is a singleton within a Java Virtual Machine, and is responsible for managing the life cycle of its contained services. The following image depicts the key aspects of the Server component. As shown, a Server instance is configured using the server.xml configuration file. The root element of this file is <Server> and represents the Tomcat instance. Its default implementation is provided using org.apache.catalina.core.StandardServer, but you can specify your own custom implementation through the className attribute of the <Server> element.

A key aspect of the Server is that it opens a server socket on port 8005 (the default) to listen for a shutdown command (by default, this command is the text string SHUTDOWN). When this shutdown command is received, the server gracefully shuts itself down. For security reasons, the connection requesting the shutdown must be initiated from the same machine that is running this instance of Tomcat.

A Server also provides an implementation of the Java Naming and Directory Interface (JNDI) service, allowing you to register arbitrary objects (such as data sources) or environment variables, by name. At runtime, individual components (such as servlets) can retrieve this information by looking up the desired object name in the server's JNDI bindings. While a JNDI implementation is not integral to the functioning of a servlet container, it is part of the Java EE specification and is a service that servlets have a right to expect from their application servers or servlet containers. Implementing this service makes for easy portability of web applications across containers.

While there is always just one server instance within a JVM, it is entirely possible to have multiple server instances running on a single physical machine, each encased in its own JVM. Doing so insulates web applications that are running on one VM from errors in applications that are running on others, and simplifies maintenance by allowing a JVM to be restarted independently of the others. This is one of the mechanisms used in a shared hosting environment (the other is virtual hosting, which we will see shortly) where you need isolation from other web applications that are running on the same physical server.

Service

While the Server represents the Tomcat instance itself, a Service represents the set of request processing components within Tomcat. A Server can contain more than one Service, where each service associates a group of Connector components with a single Engine. Requests from clients are received on a connector, which in turn funnels them through into the engine, which is the key request processing component within Tomcat. The image shows connectors for HTTP, HTTPS, and the Apache JServ Protocol (AJP). There is very little reason to modify this element, and the default Service instance is usually sufficient.

A hint as to when you might need more than one Service instance can be found in the above image. As shown, a service aggregates connectors, each of which monitors a given IP address and port, and responds in a given protocol. An example use case for having multiple services, therefore, is when you want to partition your services (and their contained engines, hosts, and web applications) by IP address and/or port number. For instance, you might configure your firewall to expose the connectors for one service to an external audience, while restricting your other service to hosting intranet applications that are visible only to internal users. This would ensure that an external user could never access your intranet application, as that access would be blocked by the firewall. The Service, therefore, is nothing more than a grouping construct. It does not currently add any other value to the proceedings.
Connectors

A Connector is a service endpoint on which a client connects to the Tomcat container. It serves to insulate the engine from the various communication protocols that are used by clients, such as HTTP, HTTPS, or the Apache JServ Protocol (AJP).

Tomcat can be configured to work in two modes: standalone, or in conjunction with a separate web server. In standalone mode, Tomcat is configured with HTTP and HTTPS connectors, which make it act like a full-fledged web server by serving up static content when requested, as well as by delegating to the Catalina engine for dynamic content. Out of the box, Tomcat provides three possible implementations of the HTTP/1.1 and HTTPS connectors for this mode of operation. The most common are the standard connectors, known as Coyote, which are implemented using standard Java I/O mechanisms. You may also make use of a couple of newer implementations: one which uses the non-blocking NIO features of Java 1.4, and another which takes advantage of native code that is optimized for a particular operating system through the Apache Portable Runtime (APR). Note that both the Connector and the Engine run in the same JVM. In fact, they run within the same Server instance.

In conjunction mode, Tomcat plays a supporting role to a web server, such as Apache httpd or Microsoft's IIS. The client here is the web server, communicating with Tomcat either through an Apache module or an ISAPI DLL. When this module determines that a request must be routed to Tomcat for processing, it will communicate this request to Tomcat using AJP, a binary protocol that is designed to be more efficient than the text-based HTTP when communicating between a web server and Tomcat. On the Tomcat side, an AJP connector accepts this communication and translates it into a form that the Catalina engine can process. In this mode, Tomcat runs in its own JVM as a separate process from the web server.

In either mode, the primary attributes of a Connector are the IP address and port on which it will listen for incoming requests, and the protocol that it supports. Another key attribute is the maximum number of request processing threads that can be created to concurrently handle incoming requests. Once all these threads are busy, any incoming request will be ignored until a thread becomes available. By default, a connector listens on all the IP addresses for the given physical machine (its address attribute defaults to 0.0.0.0). However, a connector can be configured to listen on just one of the IP addresses for a machine. This will constrain it to accept connections from only that specified IP address.

Any request that is received by any one of a service's connectors is passed on to the service's single engine. This engine, known as Catalina, is responsible for the processing of the request and the generation of the response. The engine returns the response to the connector, which then transmits it back to the client using the appropriate communication protocol.
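Pulling those attributes together, a standalone HTTP connector that is pinned to a single address and sized for heavier concurrency could be declared as follows; the values shown are illustrative only:

<Connector port="8080" protocol="HTTP/1.1"
           address="192.168.1.10"
           maxThreads="250"
           connectionTimeout="20000"
           redirectPort="8443"/>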


Running Your Applications with AWS

Cheryl Adams
17 Aug 2016
4 min read
If you've ever been told not to run with scissors, you should not have the same concern when running with AWS. It is neither dangerous nor unsafe when you know what you are doing, and where to look when you don't. Amazon's current service offering, AWS (Amazon Web Services), is a collection of services, applications, and tools that can be used to deploy your infrastructure and application environment to the cloud. Amazon gives you the option to start their service offerings with a 'free tier' and then move toward a pay-as-you-go model. We will highlight a few of the features you'll encounter when you open your account with AWS.

One of the first things you will notice is that Amazon offers a wealth of information regarding cloud computing right up front. Whether you are a novice, an amateur, or an expert in cloud computing, Amazon offers documented information before you create your account. This type of information is essential if you are exploring this tool for a project or doing some self-study on your own. If you are a pre-existing Amazon customer, you can use your same account to get started with AWS. If you want to keep your personal account separate from your development or business, it would be best to create a separate account.

Amazon Web Services Landing Page

The Free Tier is one of the most attractive features of AWS. As a new account, you are entitled to twelve months within the Free Tier. In addition to this span of time, there are services that can continue after the free tier is over. This gives the user ample time to explore the offerings within this free-tier period. The caution is not to exceed the free service limitations, as doing so will incur charges. Setting up the free tier still requires a credit card. Fee-based services will be offered throughout the free tier, so it is important not to select a fee-based service unless you are ready to start paying for it. Actual paid use will vary based on what you have selected.

AWS Service and Offerings (shown on an open account)

AWS overview of services available on the landing page

Amazon's service list is very robust. If you are already considering AWS, hopefully this means you are aware of what you need, or at least what you would like to use. If not, this would be a good time to press pause and look at some resource-based materials. Before the clock starts ticking on your free tier, I would recommend a slow walk through the introductory information on this site to ensure that you are selecting the right mix of services before creating your account. Amazon's technical resources include a 10-minute tutorial that gives you a complete overview of the services. Topics like 'AWS Training and Introduction' and 'Get Started with AWS' include a list of 10-minute videos, as well as a short list of 'how to' instructions for some of the more commonly used features. If you are a techie by trade or hobby, this may be something you want to dive into immediately.

In a company, there is generally a predefined need or issue that the organization feels can be resolved by the cloud. If it is a team initiative, it would be good to review the resources mentioned in this article so that everyone is on the same page as to what this solution can do. It's recommended, before you start any trial, subscription, or new service, that you have a set goal or expectation of why you are doing it. Simply stated, a cloud solution is not the perfect solution for everyone. There is so much information here on the AWS site.
It's also great if you are comparing competing cloud service vendors in the same space. You will be able to do a complete assessment of most services within the free tier. You can map use case scenarios to determine whether AWS is the right fit for your project. AWS First Project is a great place to get started if you are new to AWS. If you are wondering how to get started, these technical resources will set you in the right direction. By reviewing this information during your setup, or before you start, you will be able to make good use of your first few months and your introduction to AWS.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.


Why MyBatis

Packt
10 Jul 2013
8 min read
(For more resources related to this topic, see here.)

Eliminates a lot of JDBC boilerplate code

Java has a Java DataBase Connectivity (JDBC) API to work with relational databases. But JDBC is a very low-level API, and we need to write a lot of code to perform database operations. Let us examine how we can implement simple insert and select operations on a STUDENTS table using plain JDBC. Assume that the STUDENTS table has STUD_ID, NAME, EMAIL, and DOB columns. The corresponding Student JavaBean is as follows:

package com.mybatis3.domain;

import java.util.Date;

public class Student {
    private Integer studId;
    private String name;
    private String email;
    private Date dob;
    // setters and getters
}

The following StudentService.java program implements the SELECT and INSERT operations on the STUDENTS table using JDBC:

public Student findStudentById(int studId) {
    Student student = null;
    Connection conn = null;
    try {
        //obtain connection
        conn = getDatabaseConnection();
        String sql = "SELECT * FROM STUDENTS WHERE STUD_ID=?";
        //create PreparedStatement
        PreparedStatement pstmt = conn.prepareStatement(sql);
        //set input parameters
        pstmt.setInt(1, studId);
        ResultSet rs = pstmt.executeQuery();
        //fetch results from database and populate into Java objects
        if (rs.next()) {
            student = new Student();
            student.setStudId(rs.getInt("stud_id"));
            student.setName(rs.getString("name"));
            student.setEmail(rs.getString("email"));
            student.setDob(rs.getDate("dob"));
        }
    } catch (SQLException e) {
        throw new RuntimeException(e);
    } finally {
        //close connection
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
            }
        }
    }
    return student;
}

public void createStudent(Student student) {
    Connection conn = null;
    try {
        //obtain connection
        conn = getDatabaseConnection();
        String sql = "INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB) VALUES(?,?,?,?)";
        //create a PreparedStatement
        PreparedStatement pstmt = conn.prepareStatement(sql);
        //set input parameters
        pstmt.setInt(1, student.getStudId());
        pstmt.setString(2, student.getName());
        pstmt.setString(3, student.getEmail());
        pstmt.setDate(4, new java.sql.Date(student.getDob().getTime()));
        pstmt.executeUpdate();
    } catch (SQLException e) {
        throw new RuntimeException(e);
    } finally {
        //close connection
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {
            }
        }
    }
}

protected Connection getDatabaseConnection() throws SQLException {
    try {
        Class.forName("com.mysql.jdbc.Driver");
        return DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test", "root", "admin");
    } catch (SQLException e) {
        throw e;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

There is a lot of duplicate code in each of the preceding methods: creating a connection, creating a statement, setting input parameters, and closing the resources, such as the connection, statement, and result set. MyBatis abstracts all these common tasks so that the developer can focus on the really important aspects, such as preparing the SQL statement that needs to be executed and passing the input data as Java objects. In addition to this, MyBatis automates the process of setting the query parameters from the input Java object properties, and populates the Java objects with the SQL query results as well.

Now let us see how we can implement the preceding methods using MyBatis:

1. Configure the queries in a SQL Mapper config file, say StudentMapper.xml:
<select id="findStudentById" parameterType="int" resultType="Student">
    SELECT STUD_ID AS studId, NAME, EMAIL, DOB
    FROM STUDENTS
    WHERE STUD_ID=#{id}
</select>

<insert id="insertStudent" parameterType="Student">
    INSERT INTO STUDENTS(STUD_ID,NAME,EMAIL,DOB)
    VALUES(#{studId},#{name},#{email},#{dob})
</insert>

Create a StudentMapper interface.

public interface StudentMapper {
    Student findStudentById(Integer id);
    void insertStudent(Student student);
}

In Java code, you can invoke these statements as follows:

SqlSession session = getSqlSessionFactory().openSession();
StudentMapper mapper = session.getMapper(StudentMapper.class);
// Select Student by Id
Student student = mapper.findStudentById(1);
// To insert a Student record
mapper.insertStudent(student);

That's it! You don't need to create the Connection and PreparedStatement, extract and set parameters, and close the connection yourself for every database operation. Just configure the database connection properties and SQL statements, and MyBatis will take care of all the groundwork. Don't worry about what SqlSessionFactory, SqlSession, and Mapper XML files are. Along with these, MyBatis provides many other features that simplify the implementation of persistence logic:

It supports the mapping of complex SQL result set data to nested object graph structures
It supports the mapping of one-to-one and one-to-many results to Java objects
It supports building dynamic SQL queries based on the input data

Low learning curve

One of the primary reasons for MyBatis' popularity is that it is very simple to learn and use because it depends on your knowledge of Java and SQL. If developers are familiar with Java and SQL, they will find it fairly easy to get started with MyBatis.

Works well with legacy databases

Sometimes we may need to work with legacy databases that are not in a normalized form. It is possible, but difficult, to work with these kinds of legacy databases with fully-fledged ORM frameworks such as Hibernate, because they attempt to statically map Java objects to database tables. MyBatis works by mapping query results to Java objects; this makes it easy for MyBatis to work with legacy databases. You can create Java domain objects following the object-oriented model, execute queries against the legacy schema, and map the results onto those objects.

Embraces SQL

Fully-fledged ORM frameworks such as Hibernate encourage working with entity objects and generate SQL queries under the hood. Because of this SQL generation, we may not be able to take advantage of database-specific features. Hibernate allows you to execute native SQL, but that might defeat the promise of database-independent persistence. The MyBatis framework embraces SQL instead of hiding it from developers. As MyBatis won't generate any SQL and developers are responsible for preparing the queries, you can take advantage of database-specific features and prepare optimized SQL queries. Also, working with stored procedures is supported by MyBatis.

Supports integration with Spring and Guice frameworks

MyBatis provides out-of-the-box integration support for the popular dependency injection frameworks Spring and Guice; this further simplifies working with MyBatis.

Supports integration with third-party cache libraries

MyBatis has inbuilt support for caching SELECT query results within the scope of SqlSession-level ResultSets. In addition to this, MyBatis also provides integration support for various third-party cache libraries, such as EHCache, OSCache, and Hazelcast.
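The snippets above call getSqlSessionFactory() and refer to "database connection properties" without showing them. As a rough sketch, and assuming a MySQL setup like the earlier JDBC example (the file name, mapper path, and property values are illustrative, not from this article), a minimal mybatis-config.xml could look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
  "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
  <typeAliases>
    <!-- Lets the mapper refer to com.mybatis3.domain.Student simply as "Student" -->
    <typeAlias alias="Student" type="com.mybatis3.domain.Student"/>
  </typeAliases>
  <environments default="development">
    <environment id="development">
      <transactionManager type="JDBC"/>
      <!-- POOLED reuses connections instead of opening one per request -->
      <dataSource type="POOLED">
        <property name="driver" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/test"/>
        <property name="username" value="root"/>
        <property name="password" value="admin"/>
      </dataSource>
    </environment>
  </environments>
  <mappers>
    <mapper resource="com/mybatis3/mappers/StudentMapper.xml"/>
  </mappers>
</configuration>

Note how the typeAlias is what allows resultType="Student" in StudentMapper.xml to resolve without a fully qualified class name.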
Better performance

Performance is one of the key factors for the success of any software application. There are lots of things to consider for better performance, but for many applications, the persistence layer is key to overall system performance. MyBatis supports database connection pooling, which eliminates the cost of creating a database connection on demand for every request. MyBatis has an in-built cache mechanism that caches the results of SQL queries at the SqlSession level; that is, if you invoke the same mapped select query, MyBatis returns the cached result instead of querying the database again. MyBatis doesn't use proxying heavily and hence yields better performance compared to other ORM frameworks that use proxies extensively.

There are no one-size-fits-all solutions in software development. Each application has a different set of requirements, and we should choose our tools and frameworks based on application needs. In the previous section, we have seen various advantages of using MyBatis. But there will be cases where MyBatis may not be the ideal or best solution. If your application is driven by an object model and you want to generate SQL dynamically, MyBatis may not be a good fit for you. Also, if you want to have a transitive persistence mechanism (saving the parent object should persist associated child objects as well) for your application, Hibernate will be better suited for it.

Installing and configuring MyBatis

We are assuming that JDK 1.6+ and a MySQL 5 database server have been installed on your system. The installation process of JDK and MySQL is outside the scope of this article. At the time of writing this article, the latest version of MyBatis is MyBatis 3.2.2. Even though it is not mandatory to use IDEs, such as Eclipse, NetBeans IDE, or IntelliJ IDEA, for coding, they greatly simplify development with features such as handy autocompletion, refactoring, and debugging. You can use any of your favorite IDEs for this purpose. This section explains how to develop a simple Java project using MyBatis:

By creating a STUDENTS table and inserting sample data
By creating a Java project and adding mybatis-3.2.2.jar to the classpath
By creating the mybatis-config.xml and StudentMapper.xml configuration files
By creating the MyBatisSqlSessionFactory singleton class (see the sketch below)
By creating the StudentMapper interface and the StudentService classes
By creating a JUnit test for testing StudentService

Summary

In this article, we discussed MyBatis and the advantages of using MyBatis instead of plain JDBC for database access.

Resources for Article :

Further resources on this subject:
Building an EJB 3.0 Persistence Model with Oracle JDeveloper [Article]
New Features in JPA 2.0 [Article]
An Introduction to Hibernate and Spring: Part 1 [Article]
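The step list above mentions a MyBatisSqlSessionFactory singleton without showing it. A minimal sketch, assuming the mybatis-config.xml shown earlier sits on the classpath (the class name and lazy-initialization style are an assumption, not the book's exact code), might look like this:

import java.io.IOException;
import java.io.InputStream;

import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class MyBatisSqlSessionFactory {

    private static SqlSessionFactory sqlSessionFactory;

    public static synchronized SqlSessionFactory getSqlSessionFactory() {
        if (sqlSessionFactory == null) {
            try {
                // Build the factory once; creating it is expensive
                InputStream inputStream =
                        Resources.getResourceAsStream("mybatis-config.xml");
                sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
        return sqlSessionFactory;
    }

    public static SqlSession openSession() {
        // Each unit of work gets its own SqlSession from the shared factory
        return getSqlSessionFactory().openSession();
    }
}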

Applying Themes to Sails Applications, Part 2

Luis Lobo
14 Oct 2016
4 min read
In Part 1 of this series covering themes in the Sails Framework, we bootstrapped our sample Sails app (step 1). Here in Part 2, we will complete steps 2 and 3: compiling our theme’s CSS and the necessary Less files, and setting up the theme Sails hook to complete our application.

Step 2 – Adding a task for compiling our theme's CSS and the necessary Less files

Let’s pick things back up where we left off in Part 1. We now want to customize our page to have our burrito style. We need to add a task that compiles our themes. Edit your /tasks/config/less.js so that it looks like this one:

module.exports = function (grunt) {
  grunt.config.set('less', {
    dev: {
      files: [{
        expand: true,
        cwd: 'assets/styles/',
        src: ['importer.less'],
        dest: '.tmp/public/styles/',
        ext: '.css'
      }, {
        expand: true,
        cwd: 'assets/themes/export',
        src: ['*.less'],
        dest: '.tmp/public/themes/',
        ext: '.css'
      }]
    }
  });

  grunt.loadNpmTasks('grunt-contrib-less');
};

Basically, we added a second object to the files section, which tells the Less compiler task to look for any Less file in assets/themes/export, compile it, and put the resulting CSS in the .tmp/public/themes folder. In case you were not aware of it, the .tmp/public folder is the one Sails uses to publish its assets. We now create two themes: one is default.less and the other is burrito.less, which is based on default.less. We also have two other Less files, each one holding the variables for one theme. This technique allows you to have one base theme and many other themes based on the default.

/assets/themes/variables.less

@app-navbar-background-color: red;
@app-navbar-brand-color: white;

/assets/themes/variablesBurrito.less

@app-navbar-background-color: green;
@app-navbar-brand-color: yellow;

/assets/themes/export/default.less

@import "../variables.less";

.navbar-inverse {
  background-color: @app-navbar-background-color;
  .navbar-brand {
    color: @app-navbar-brand-color;
  }
}

/assets/themes/export/burrito.less

@import "default.less";
@import "../variablesBurrito.less";

So, burrito.less just inherits from default.less but overrides the variables with its own, creating a new theme based on the default. If you lift Sails now, you will notice that the navigation bar has a red background with white brand text.

Step 3 – Setting up the theme Sails hook

The last step involves creating a hook, a Node module that adds functionality to the Sails core, which catches the hostname and, if it has burrito in it, sets the new theme. First, let's create the folder for the hook:

mkdir -p ./api/hooks/theme

Now create a file named index.js in that folder with this content:

/**
 * theme hook - Sets the correct CSS to be displayed
 */
module.exports = function (sails) {
  return {
    routes: {
      before: {
        'all /*': function (req, res, next) {
          if (!req.isSocket) {
            // makes the theme variable available in views
            res.locals.theme = sails.hooks.theme.getTheme(req);
          }
          return next();
        }
      }
    },

    /**
     * getTheme defines which CSS needs to be used for this request.
     * In this case, we select the theme by pattern matching certain
     * words from the hostname.
     */
    getTheme: function (req) {
      var hostname = 'default';
      var theme = 'default';
      try {
        hostname = req.get('host').toLowerCase();
      } catch (e) {
        // host may not always be available (that is, socket calls;
        // if you need that, add a Host header in your Sails socket
        // configuration)
      }
      // if burrito is found in the hostname, change the theme
      if (hostname.indexOf('burrito') > -1) {
        theme = 'burrito';
      }
      return theme;
    }
  };
};
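The hook exposes theme to the views, but this part of the series does not show the view side. As a minimal sketch, assuming the default EJS layout (the file name and markup below are illustrative, not from the article), the compiled theme stylesheet could be pulled in like this:

<!-- views/layout.ejs (sketch): load the CSS that Grunt compiled to .tmp/public/themes -->
<link rel="stylesheet" href="/themes/<%= theme %>.css">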
Finally, to test our configuration, we need to add a host entry to our OS hosts file. In Linux/Unix-based operating systems, you have to edit /etc/hosts (with sudo or root). Add the following line:

127.0.0.1 burrito.smartdelivery.local www.smartdelivery.local

Now navigate using those host names, first to www.smartdelivery.local, and lastly to burrito.smartdelivery.local. You now have your Burrito Smart Delivery, and a themed Sails application! I hope you have enjoyed this series. You can get the source code from here. Enjoy!

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, a mentor and advisor, an independent software engineer consultant, and a conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products, solutions, frameworks, and platforms for several kinds of industries. In the last few years, he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.

Mono to Micro-Services: Splitting that fat application

Xavier Bruhiere
16 Oct 2015
7 min read
As articles state everywhere, we're living in a fast-paced digital age. Project complexity, or business growth, challenges existing development patterns. That's why many developers are evolving from the monolithic application toward micro-services. Facebook is moving away from its big blue app. Soundcloud is embracing microservices. Yet this can be a daunting process, so what for?

Scale: it is easier to plug in new components than to dig into an ocean of code.
Splitting a complex problem into smaller ones, which are easier to solve and maintain.
Distributing work through independent teams.
Friendliness to open technologies: isolating a service into a container makes it straightforward to distribute and use, and it also allows different, loosely coupled stacks to communicate.

Once upon a time, there was a fat code block called Intuition, my algorithmic trading platform. In this post, we will engineer a simplified version, divided into well-defined components.

Code Components

First, we're going to write the business logic, following the single responsibility principle, and one of my favorite code mantras: Prefer composition over inheritance. The point is to identify the key components of the problem, and code a specific solution for each of them. It will articulate our application around the collaboration of clear abstractions. As an illustration, let's start with the RandomAlgo class. Python tends to be the go-to language for data analysis and rapid prototyping, and it is a great fit for our purpose.

class RandomAlgo(object):
    """ Represent the algorithm flow.
    Heavily inspired from quantopian.com and processing.org """

    def initialize(self, params):
        """ Called once to prepare the algo. """
        self.threshold = params.get('threshold', 0.5)
        # As we will see later, we return here the data channels we're interested in
        return ['quotes']

    def event(self, data):
        """ This method is called every time a new batch of data is ready.
        :param data: {'sid': 'GOOG', 'quote': '345'}
        """
        # randomly choose to invest or not
        if random.random() > self.threshold:
            print('buying {0} of {1}'.format(data['quote'], data['sid']))

This implementation focuses on a single thing: detecting buy signals. But once you get such a signal, how do you invest your portfolio? This is the responsibility of a new component.

class Portfolio(object):

    def __init__(self, amount):
        """ Starting amount of cash we have. """
        self.cash = amount

    def optimize(self, data):
        """ We have a buy signal on this data. Tell us how much cash we should bet. """
        # We're still baby traders and we randomly choose what fraction
        # of our available cash to invest
        to_invest = random.random() * self.cash
        self.cash = self.cash - to_invest
        return to_invest

Then we can improve our previous algorithm's event method, taking advantage of composition.

def initialize(self, params):
    # ...
    self.portfolio = Portfolio(params.get('starting_cash', 10000))

def event(self, data):
    # ...
    print('buying {0} of {1}'.format(self.portfolio.optimize(data), data['sid']))

Here are two simple components that produce readable and efficient code. Now we can develop more sophisticated portfolio optimizations without touching the algorithm internals. This is also a huge gain early in a project, when we're not sure how things will evolve. Developers should only focus on this core logic. In the next section, we're going to unfold a separate part of the system. The communication layer will solve one question: how do we produce and consume events?

Inter-component messaging

Let's state the problem.
We want each algorithm to receive the events it is interested in and publish its own data, the kind of challenge the Internet of Things (IoT) is tackling. We will find empirically that our modular approach allows us to pick the right tool, even within a-priori unrelated fields. The code below leverages MQTT to bring M2M messaging to the application. Notice that we're diversifying our stack with Node.js; indeed, it's one of the most convenient languages for dealing with event-oriented systems (JavaScript, in general, is gaining some traction in the IoT space).

var mqtt = require('mqtt');

// connect to the broker, responsible for routing messages
// (thanks mosquitto)
var conn = mqtt.connect('mqtt://test.mosquitto.org');

conn.on('connect', function () {
  // we're up! Time to initialize the algorithm
  // and subscribe to interesting messages
});

// triggered on the topics we're listening to
conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // Here, pass it to the algo for processing
});

That's neat! But we still need to connect this messaging layer with the actual Python algorithm. The RPC (Remote Procedure Call) protocol comes in handy for the task, especially with zerorpc. Here is the full implementation with more explanations.

// command-line interfaces made easy
var program = require('commander');
// the MQTT client for Node.js and the browser
var mqtt = require('mqtt');
// a communication layer for distributed systems
var zerorpc = require('zerorpc');
// import project properties
var pkg = require('./package.json');

// define the cli
program
  .version(pkg.version)
  .description(pkg.description)
  .option('-m, --mqtt [url]', 'mqtt broker address', 'mqtt://test.mosquitto.org')
  .option('-r, --rpc [url]', 'rpc server address', 'tcp://127.0.0.1:4242')
  .parse(process.argv);

// connect to mqtt broker
var conn = mqtt.connect(program.mqtt);
// connect to rpc peer, the actual python algorithm
var algo = new zerorpc.Client();
algo.connect(program.rpc);

conn.on('connect', function () {
  // connections are ready, initialize the algorithm
  var conf = { cash: 50000 };
  algo.invoke('initialize', conf, function (err, channels, more) {
    // the method returns an array of data channels the algorithm needs
    for (var i = 0; i < channels.length; i++) {
      console.log('subscribing to channel', channels[i]);
      conn.subscribe(channels[i]);
    }
  });
});

conn.on('message', function (topic, message) {
  console.log('received data:', message.toString());
  // make the algorithm process the incoming data
  algo.invoke('event', JSON.parse(message.toString()), function (err, res, more) {
    console.log('algo output:', res);
    // we're done
    algo.close();
    conn.end();
  });
});

The code above calls our algorithm's methods. Here is how to expose them over RPC.

import click, zerorpc

# ... algo code ...

@click.command()
@click.option('--addr', default='tcp://127.0.0.1:4242', help='address to bind rpc server')
def serve(addr):
    server = zerorpc.Server(RandomAlgo())
    server.bind(addr)
    click.echo(click.style('serving on {} ...'.format(addr), bold=True, fg='cyan'))
    # listen and serve
    server.run()

if __name__ == '__main__':
    serve()

At this point, we are ready to run the app. Let's fire up three terminals, install the requirements, and make the machines trade.
sudo apt-get install curl libpython-dev libzmq-dev
# Install pip
curl https://bootstrap.pypa.io/get-pip.py | python
# Algorithm requirements
pip install zerorpc click
# Messaging requirements
npm init
npm install --save commander mqtt zerorpc

# Activate backend
python ma.py --addr tcp://127.0.0.1:4242
# Manipulate algorithm and serve messaging system
node app.js --rpc tcp://127.0.0.1:4242
# Publish messages
node_modules/.bin/mqtt pub -t 'quotes' -h 'test.mosquitto.org' -m '{"goog": 3.45}'

In this state, our implementation is over-engineered. But we designed a sustainable architecture to wire up small components, and from here we can extend the system. One can focus on algorithms without worrying about event plumbing. The corollary: switching to a new messaging technology won't affect the way we develop algorithms. We can even swap algorithms by changing the RPC address. A service discovery component could expose which backends are available and how to reach them. A project like octoblu adds device authentication, data sharing, and more. We could implement data sources that connect to live markets or databases, compute indicators like moving averages, and publish them to algorithms.

Conclusion

Given our API definition, a contributor can hack on any component without breaking the project as a whole. In a fast-paced environment with constant iterations, this architecture can make or break products. This is especially true in the rising container world. Assuming we package each component into specialized containers, we smooth the way to a scalable infrastructure that we can test, distribute, deploy, and grow. Not sure where to start when it comes to containers and microservices? Visit our Docker page!

About the Author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker, and Leap Motion. In his spare time he enjoys playing tennis, the violin, and the guitar. You can reach him at @XavierBruhiere.

Hello, Small World!

Packt
07 Sep 2016
20 min read
In this article by Stefan Björnander, the author of the book C++ Windows Programming, we will see how to create Windows applications using C++. This article introduces Small Windows by presenting two small applications:

The first application writes "Hello, Small Windows!" in a window
The second application handles circles of different colors in a document window

(For more resources related to this topic, see here.)

Hello, Small Windows!

In The C Programming Language by Brian Kernighan and Dennis Ritchie, the hello-world example was introduced. It was a small program that wrote hello, world on the screen. In this section, we shall write a similar program for Small Windows. In regular C++, the execution of the application starts with the main function. In Small Windows, however, main is hidden in the framework and has been replaced by MainWindow, whose task is to define the application name and create the main window object. The argumentList parameter corresponds to argc and argv in main. The windowShow parameter forwards the system's request regarding the window's appearance.

MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Hello");
  Application::MainWindowPtr() = new HelloWindow(windowShow);
}

In C++, there are two character types: char and wchar_t. char holds a regular character of one byte, and wchar_t holds a wide character of larger size, usually two bytes. There is also the string class that holds a string of char values and the wstring class that holds a string of wchar_t values. However, in Windows there is also the generic character type TCHAR, which is char or wchar_t depending on system settings. There is also the String class, which holds a string of TCHAR values. Moreover, TEXT is a macro that translates a character value to TCHAR and a text value to an array of TCHAR values. To sum it up, the following is a table with the character types and string classes:

Regular character    Wide character    Generic character
char                 wchar_t           TCHAR
string               wstring           String

In the applications of this book, we always use the TCHAR type, the String class, and the TEXT macro. The only exception to that rule is the clipboard handling. Our version of the hello-world program writes Hello, Small Windows! in the center of the client area. The client area of the window is the part of the window where it is possible to draw graphical objects. In the following window, the client area is the white area. The HelloWindow class extends the Small Windows Window class. It holds a constructor and the OnDraw method. The constructor calls the Window constructor with suitable information regarding the appearance of the window. OnDraw is called every time the client area of the window needs to be redrawn.

HelloWindow.h

class HelloWindow : public Window {
  public:
    HelloWindow(WindowShow windowShow);
    void OnDraw(Graphics& graphics, DrawMode drawMode);
};

The constructor of HelloWindow calls the constructor of Window with the following parameters: The first parameter of the HelloWindow constructor is the coordinate system. LogicalWithScroll indicates that each logical unit is one hundredth of a millimeter, regardless of the physical resolution of the screen; the current scroll bar settings are taken into consideration. The second parameter of the Window constructor is the preferred size of the window. ZeroSize indicates that a default size shall be used.
The third parameter is a pointer to the parent window; it is null since the window has no parent window. The fourth and fifth parameters set the window's style, in this case overlapped windows. The last parameter is windowShow, given by the surrounding system to MainWindow, which decides the window's initial appearance (minimized, normal, or maximized). Finally, the constructor sets the header of the window by calling the Window method SetHeader.

HelloWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "HelloWindow.h"

HelloWindow::HelloWindow(WindowShow windowShow)
 :Window(LogicalWithScroll, ZeroSize, nullptr,
         OverlappedWindow, NoStyle, windowShow) {
  SetHeader(TEXT("Hello Window"));
}

The OnDraw method is called every time the client area of the window needs to be redrawn. It obtains the size of the client area and draws the text in its center, using the textFont font defined in the method, with black text on a white background. The Small Windows Color class holds the constants Black and White. Point holds a two-dimensional point. Size holds a width and a height. The Rect class holds a rectangle; more specifically, it holds the four corners of a rectangle.

void HelloWindow::OnDraw(Graphics& graphics, DrawMode /* drawMode */) {
  Size clientSize = GetClientSize();
  Rect clientRect(Point(0, 0), clientSize);
  Font textFont("New Times Roman", 12, true);
  graphics.DrawText(clientRect, TEXT("Hello, Small Windows!"),
                    textFont, Black, White);
}

The Circle application

In this section, we look into a simple circle application. As the name implies, it provides the user with the possibility to handle circles in a graphical application. The user can add a new circle by clicking the left mouse button. They can also move an existing circle by dragging it. Moreover, the user can change the color of a circle as well as save and open the document.

The main window

As we will see throughout this book, MainWindow always does the same thing: it sets the application name and creates the main window of the application. The name is used by the Save and Open standard dialogs, the About menu item, and the registry. The difference between the main window and the other windows of the application is that when the user closes the main window, the application exits. Moreover, when the user selects the Exit menu item, the main window is closed and its destructor is called.

MainWindow.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

void MainWindow(vector<String> /* argumentList */, WindowShow windowShow) {
  Application::ApplicationName() = TEXT("Circle");
  Application::MainWindowPtr() = new CircleDocument(windowShow);
}

The CircleDocument class

The CircleDocument class extends the Small Windows class StandardDocument, which in turn extends Document and Window. In fact, StandardDocument constitutes a framework; that is, a base class with a set of virtual methods with functionality we can override and further specify. The OnMouseDown and OnMouseUp methods are overridden from Window and are called when the user presses or releases one of the mouse buttons. OnMouseMove is called when the user moves the mouse. The OnDraw method is also overridden from Window and is called every time the window needs to be redrawn. The ClearDocument, ReadDocumentFromStream, and WriteDocumentToStream methods are overridden from StandardDocument and are called when the user creates a new file, opens a file, or saves a file.
CircleDocument.h

class CircleDocument : public StandardDocument {
  public:
    CircleDocument(WindowShow windowShow);
    ~CircleDocument();

    void OnMouseDown(MouseButton mouseButtons, Point mousePoint,
                     bool shiftPressed, bool controlPressed);
    void OnMouseUp(MouseButton mouseButtons, Point mousePoint,
                   bool shiftPressed, bool controlPressed);
    void OnMouseMove(MouseButton mouseButtons, Point mousePoint,
                     bool shiftPressed, bool controlPressed);

    void OnDraw(Graphics& graphics, DrawMode drawMode);

    bool ReadDocumentFromStream(String name, istream& inStream);
    bool WriteDocumentToStream(String name, ostream& outStream) const;

    void ClearDocument();

The DEFINE_BOOL_LISTENER and DEFINE_VOID_LISTENER macros define listeners: methods without parameters that are called when the user selects a menu item. The only difference between the macros is the return type of the defined methods: bool or void. In the applications of this book, we use the common standard that listeners called in response to user actions are prefixed with On, for instance, OnRed. The methods that decide whether a menu item shall be enabled are suffixed with Enable, and the methods that decide whether a menu item shall be marked with a check mark or a radio button are suffixed with Check or Radio. In this application, we define menu items for the red, green, and blue colors. We also define a menu item for the Color standard dialog.

    DEFINE_VOID_LISTENER(CircleDocument, OnRed);
    DEFINE_VOID_LISTENER(CircleDocument, OnGreen);
    DEFINE_VOID_LISTENER(CircleDocument, OnBlue);
    DEFINE_VOID_LISTENER(CircleDocument, OnColorDialog);

When the user has chosen one of the colors red, green, or blue, its corresponding menu item shall be checked with a radio button. RedRadio, GreenRadio, and BlueRadio are called before the menu items become visible and return a Boolean value indicating whether the menu item shall be marked with a radio button.

    DEFINE_BOOL_LISTENER(CircleDocument, RedRadio);
    DEFINE_BOOL_LISTENER(CircleDocument, GreenRadio);
    DEFINE_BOOL_LISTENER(CircleDocument, BlueRadio);

The circle radius is always 500 units, which corresponds to 5 millimeters.

    static const int CircleRadius = 500;

The circleList field holds the circles, where the topmost circle is located at the beginning of the list. The nextColor field holds the color of the next circle to be added by the user. The moveIndex and movePoint fields are used by OnMouseDown and OnMouseMove to keep track of the circle being moved by the user; moveIndex is initialized to minus one to indicate that no circle is being moved at the beginning.

  private:
    vector<Circle> circleList;
    Color nextColor;
    int moveIndex = -1;
    Point movePoint;
};

In the StandardDocument constructor call, the first two parameters are LogicalWithScroll and USLetterPortrait. They indicate that the logical size is hundredths of millimeters and that the client area holds the logical size of a US letter: 215.9 * 279.4 millimeters (8.5 * 11 inches). If the window is resized so that the client area becomes smaller than a US letter, scroll bars are added to the window. The third parameter sets the file information used by the standard Save and Open dialogs; the text description is set to Circle Files and the file suffix is set to cle. The null pointer parameter indicates that the window does not have a parent window.
The OverlappedWindow constant parameter indicates that the window shall overlap other windows, and the windowShow parameter is the window's initial appearance, passed on from the surrounding system by MainWindow.

CircleDocument.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"
#include "CircleDocument.h"

CircleDocument::CircleDocument(WindowShow windowShow)
 :StandardDocument(LogicalWithScroll, USLetterPortrait,
                   TEXT("Circle Files, cle"), nullptr,
                   OverlappedWindow, windowShow) {

The StandardDocument framework adds the standard File, Edit, and Help menus to the window menu bar. The File menu holds the New, Open, Save, Save As, Page Setup, Print Preview, and Exit items. The Page Setup and Print Preview items are optional; the seventh parameter of the StandardDocument constructor (default false) indicates their presence. The Edit menu holds the Cut, Copy, Paste, and Delete items. They are disabled by default; we will not use them in this application. The Help menu holds the About item; the application name set in MainWindow is used to display a message box with a standard message: Circle, version 1.0. We add the standard File and Edit menus to the menu bar. Then we add the Color menu, which is the application-specific menu of this application. Finally, we add the standard Help menu and set the menu bar of the document. The Color menu holds the menu items used to set the circle colors. The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item, and RedRadio, GreenRadio, and BlueRadio are called before the user opens the Color menu in order to decide whether the items shall be marked with a radio button. OnColorDialog opens a standard color dialog. In the text &Red\tCtrl+R, the ampersand (&) indicates that the menu item has a mnemonic; that is, the letter R will be underlined and it is possible to select the menu item by pressing R after the menu has been opened. The tabulator character (\t) indicates that the second part of the text defines an accelerator; that is, the text Ctrl+R will occur right-justified in the menu item and the item can be selected by pressing Ctrl+R.

  Menu menuBar(this);
  menuBar.AddMenu(StandardFileMenu(false));

The AddItem method in the Menu class also takes two more parameters, for enabling the menu item and setting a check box. However, we do not use them in this application. Therefore, we send null pointers.

  Menu colorMenu(this, TEXT("&Color"));
  colorMenu.AddItem(TEXT("&Red\tCtrl+R"), OnRed, nullptr, nullptr, RedRadio);
  colorMenu.AddItem(TEXT("&Green\tCtrl+G"), OnGreen, nullptr, nullptr, GreenRadio);
  colorMenu.AddItem(TEXT("&Blue\tCtrl+B"), OnBlue, nullptr, nullptr, BlueRadio);
  colorMenu.AddSeparator();
  colorMenu.AddItem(TEXT("&Dialog ..."), OnColorDialog);
  menuBar.AddMenu(colorMenu);

  menuBar.AddMenu(StandardHelpMenu());
  SetMenuBar(menuBar);

Finally, we read the current color (the color of the next circle to be added) from the registry; red is the default color in case there is no color stored in the registry.

  nextColor.ReadColorFromRegistry(TEXT("NextColor"), Red);
}

The destructor saves the current color in the registry. In this application, we do not need to perform the destructor's normal tasks, such as deallocating memory or closing files.

CircleDocument::~CircleDocument() {
  nextColor.WriteColorToRegistry(TEXT("NextColor"));
}

The ClearDocument method is called when the user selects the New menu item. In this case, we just clear the circle list.
Every other action, such as redrawing the window or changing its title, is taken care of by StandardDocument.

void CircleDocument::ClearDocument() {
  circleList.clear();
}

The WriteDocumentToStream method is called by StandardDocument when the user saves a file (by selecting Save or Save As). It writes the number of circles (the size of the circle list) to the output stream and calls WriteCircle for each circle in order to write their states to the stream.

bool CircleDocument::WriteDocumentToStream(String name, ostream& outStream) const {
  int size = circleList.size();
  outStream.write((char*) &size, sizeof size);

  for (Circle circle : circleList) {
    circle.WriteCircle(outStream);
  }

  return ((bool) outStream);
}

The ReadDocumentFromStream method is called by StandardDocument when the user opens a file by selecting the Open menu item. It reads the number of circles (the size of the circle list), and for each circle it creates a new object of the Circle class, calls ReadCircle in order to read the state of the circle, and adds the circle object to circleList.

bool CircleDocument::ReadDocumentFromStream(String name, istream& inStream) {
  int size;
  inStream.read((char*) &size, sizeof size);

  for (int count = 0; count < size; ++count) {
    Circle circle;
    circle.ReadCircle(inStream);
    circleList.push_back(circle);
  }

  return ((bool) inStream);
}

The OnMouseDown method is called when the user presses one of the mouse buttons. First we need to check that they have pressed the left mouse button. If they have, we loop through the circle list and call IsClick for each circle in order to decide whether they have clicked at a circle. Note that the topmost circle is located at the beginning of the list; therefore, we loop from the beginning of the list. If we find a clicked circle, we break the loop. If the user has clicked at a circle, we store its index in moveIndex and the current mouse position in movePoint. Both values are needed by the OnMouseMove method, which will be called when the user moves the mouse.

void CircleDocument::OnMouseDown(MouseButton mouseButtons, Point mousePoint,
                                 bool shiftPressed /* = false */,
                                 bool controlPressed /* = false */) {
  if (mouseButtons == LeftButton) {
    moveIndex = -1;
    int size = circleList.size();

    for (int index = 0; index < size; ++index) {
      if (circleList[index].IsClick(mousePoint)) {
        moveIndex = index;
        movePoint = mousePoint;
        break;
      }
    }

However, if the user has not clicked at a circle, we add a new circle. A circle is defined by its center position (mousePoint), radius (CircleRadius), and color (nextColor). An invalidated area is a part of the client area that needs to be redrawn. Remember that in Windows we normally do not draw figures directly. Instead, we call Invalidate to tell the system that an area needs to be redrawn and force the actual redrawing by calling UpdateWindow, which eventually results in a call to OnDraw. The invalidated area is always a rectangle. Invalidate has a second parameter (default true) indicating that the invalidated area shall be cleared. Technically, it is painted in the window's client color, which in this case is white. In this way, the previous location of the circle becomes cleared and the circle is drawn at its new location. The SetDirty method tells the framework that the document has been altered (the document has become dirty), which causes the Save menu item to be enabled and the user to be warned if they try to close the window without saving it.
    if (moveIndex == -1) {
      Circle newCircle(mousePoint, CircleRadius, nextColor);
      circleList.push_back(newCircle);
      Invalidate(newCircle.Area());
      UpdateWindow();
      SetDirty(true);
    }
  }
}

The OnMouseMove method is called every time the user moves the mouse with at least one mouse button pressed. We first need to check whether the user is pressing the left mouse button and has clicked at a circle (whether moveIndex does not equal minus one). If they have, we calculate the distance from the previous mouse event (OnMouseDown or OnMouseMove) by comparing the previous mouse position movePoint with the current mouse position mousePoint. We update the circle position, invalidate both the old and new areas, force a redrawing of the invalidated areas with UpdateWindow, and set the dirty flag.

void CircleDocument::OnMouseMove(MouseButton mouseButtons, Point mousePoint,
                                 bool shiftPressed /* = false */,
                                 bool controlPressed /* = false */) {
  if ((mouseButtons == LeftButton) && (moveIndex != -1)) {
    Size distanceSize = mousePoint - movePoint;
    movePoint = mousePoint;

    Circle& movedCircle = circleList[moveIndex];
    Invalidate(movedCircle.Area());
    movedCircle.Center() += distanceSize;
    Invalidate(movedCircle.Area());

    UpdateWindow();
    SetDirty(true);
  }
}

Strictly speaking, OnMouseUp could be excluded, since moveIndex is set to minus one in OnMouseDown, which is always called before OnMouseMove. However, it has been included for the sake of completeness.

void CircleDocument::OnMouseUp(MouseButton mouseButtons, Point mousePoint,
                               bool shiftPressed /* = false */,
                               bool controlPressed /* = false */) {
  moveIndex = -1;
}

The OnDraw method is called every time the window needs to be (partly or completely) redrawn. The call can have been initiated by the system as a response to an event (for instance, the window has been resized) or by an earlier call to UpdateWindow. The Graphics reference parameter has been created by the framework and can be considered a toolbox for drawing lines, painting areas, and writing text. However, in this application we do not write text. We iterate through the circle list and, for each circle, call the Draw method. Note that we do not care about which circles are to be physically redrawn. We simply redraw all circles. However, only the circles located in an area that has been invalidated by a previous call to Invalidate will be physically redrawn. The Draw method has a second parameter indicating the draw mode, which can be Paint or Print. Paint indicates that OnDraw is called by OnPaint in Window and that the painting is performed in the window's client area. Print indicates that OnDraw is called by OnPrint and that the painting is sent to a printer. However, in this application we do not use that parameter.

void CircleDocument::OnDraw(Graphics& graphics, DrawMode /* drawMode */) {
  for (Circle circle : circleList) {
    circle.Draw(graphics);
  }
}

The RedRadio, GreenRadio, and BlueRadio methods are called before the menu items are shown, and the items will be marked with a radio button in case they return true. The Red, Green, and Blue constants are defined in the Color class.

bool CircleDocument::RedRadio() const {
  return (nextColor == Red);
}

bool CircleDocument::GreenRadio() const {
  return (nextColor == Green);
}

bool CircleDocument::BlueRadio() const {
  return (nextColor == Blue);
}

The OnRed, OnGreen, and OnBlue methods are called when the user selects the corresponding menu item. They all set the nextColor field to an appropriate value.
void CircleDocument::OnRed() {
  nextColor = Red;
}

void CircleDocument::OnGreen() {
  nextColor = Green;
}

void CircleDocument::OnBlue() {
  nextColor = Blue;
}

The OnColorDialog method is called when the user selects the Color dialog menu item and displays the standard Color dialog. If the user chooses a new color, nextColor will be given the chosen color value.

void CircleDocument::OnColorDialog() {
  ColorDialog(this, nextColor);
}

The Circle class

The Circle class holds the information about a single circle. The default constructor is used when reading a circle from a file. The second constructor is used when creating a new circle. The IsClick method returns true if the given point is located inside the circle (to check whether the user has clicked in the circle), Area returns the circle's surrounding rectangle (for invalidating), and Draw is called to redraw the circle.

Circle.h

class Circle {
  public:
    Circle();
    Circle(Point center, int radius, Color color);

    bool WriteCircle(ostream& outStream) const;
    bool ReadCircle(istream& inStream);

    bool IsClick(Point point) const;
    Rect Area() const;
    void Draw(Graphics& graphics) const;

    Point Center() const {return center;}
    Point& Center() {return center;}
    Color GetColor() {return color;}

As mentioned in the previous section, a circle is defined by its center position (center), radius (radius), and color (color).

  private:
    Point center;
    int radius;
    Color color;
};

The default constructor does not need to initialize the fields, since it is called when the user opens a file and the values are read from the file. The second constructor, however, initializes the center point, radius, and color of the circle.

Circle.cpp

#include "..\SmallWindows\SmallWindows.h"
#include "Circle.h"

Circle::Circle() {
  // Empty.
}

Circle::Circle(Point center, int radius, Color color)
 :color(color), center(center), radius(radius) {
  // Empty.
}

The WriteCircle method writes the color, center point, and radius to the stream. Since the radius is a regular integer, we simply use the C standard function write, while Color and Point have their own methods to write their values to a stream. In ReadCircle we read the color, center point, and radius from the stream in a similar manner.

bool Circle::WriteCircle(ostream& outStream) const {
  color.WriteColorToStream(outStream);
  center.WritePointToStream(outStream);
  outStream.write((char*) &radius, sizeof radius);
  return ((bool) outStream);
}

bool Circle::ReadCircle(istream& inStream) {
  color.ReadColorFromStream(inStream);
  center.ReadPointFromStream(inStream);
  inStream.read((char*) &radius, sizeof radius);
  return ((bool) inStream);
}

The IsClick method uses the Pythagorean theorem to calculate the distance between the given point and the circle's center point, and returns true if the point is located inside the circle (if the distance is less than or equal to the circle radius).

bool Circle::IsClick(Point point) const {
  int width = point.X() - center.X(),
      height = point.Y() - center.Y();
  int distance = (int) sqrt((width * width) + (height * height));
  return (distance <= radius);
}

The top-left corner of the resulting rectangle is the center point minus the radius, and the bottom-right corner is the center point plus the radius.

Rect Circle::Area() const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  return Rect(topLeft, bottomRight);
}

We use the FillEllipse method (there is no FillCircle method) of the Small Windows Graphics class to draw the circle.
The circle's border is always black, while its interior color is given by the color field.

void Circle::Draw(Graphics& graphics) const {
  Point topLeft = center - radius,
        bottomRight = center + radius;
  Rect circleRect(topLeft, bottomRight);
  graphics.FillEllipse(circleRect, Black, color);
}

Summary

In this article, we have looked into two applications in Small Windows: a simple hello-world application and a slightly more advanced circle application, which has introduced the framework. We have looked into menus, circle drawing, and mouse handling.

Resources for Article:

Further resources on this subject:
C++, SFML, Visual Studio, and Starting the first game [article]
Game Development Using C++ [article]
Boost.Asio C++ Network Programming [article]

Python Multimedia: Fun with Animations using Pyglet

Packt
31 Aug 2010
8 min read
(For more resources on Python, see here.) So let's get on with it.

Installation prerequisites

We will cover the prerequisites for the installation of Pyglet in this section.

Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library, which works on multiple platforms. It is primarily used for developing gaming applications and other graphically-rich applications. Pyglet can be downloaded from http://www.pyglet.org/download.html. Install Pyglet version 1.1.4 or later. The Pyglet installation is pretty straightforward.

Windows platform

For Windows users, the Pyglet installation is straightforward: use the binary distribution Pyglet 1.1.4.msi or later. You should have Python 2.6 installed. For Python 2.4, there are some more dependencies. We won't discuss them in this article, because we are using Python 2.6 to build multimedia applications. If you install Pyglet from the source, see the instructions under the next sub-section, Other platforms.

Other platforms

The Pyglet website provides a binary distribution file for Mac OS X. Download and install pyglet-1.1.4.dmg or later. On Linux, install Pyglet 1.1.4 or later if it is available in the package repository of your operating system. Otherwise, it can be installed from the source tarball as follows:

Download and extract the tarball pyglet-1.1.4.tar.gz or a later version.
Make sure that python is a recognizable command in the shell. Otherwise, set the PYTHONPATH environment variable to the correct Python executable path.
In a shell window, change to the mentioned extracted directory and then run the following command:

python setup.py install

Review the succeeding installation instructions using the readme/install instruction files in the Pyglet source tarball. If you have the setuptools package (http://pypi.python.org/pypi/setuptools), the Pyglet installation should be very easy. However, for this, you will need a runtime egg of Pyglet. But the egg file for Pyglet is not available at http://pypi.python.org. If you get hold of a Pyglet egg file, it can be installed by running the following command on Linux or Mac OS X. You will need administrator access to install the package:

$ sudo easy_install -U pyglet

Summary of installation prerequisites

Package: Python
Download location: http://python.org/download/releases/
Version: 2.6.4 (or any 2.6.x)
Windows platform: Install using the binary distribution.
Linux/Unix/OS X platforms: Install from binary; also install additional developer packages (for example, with python-devel in the package name in an rpm-based Linux distribution), or build and install from the source tarball.

Package: Pyglet
Download location: http://www.pyglet.org/download.html
Version: 1.1.4 or later
Windows platform: Install using the binary distribution (the .msi file).
Linux/Unix/OS X platforms: Mac: install using the disk image file (.dmg file). Linux: build and install using the source tarball.

Testing the installation

Before proceeding further, ensure that Pyglet is installed properly. To test this, just start Python from the command line and type the following:

>>> import pyglet

If this import is successful, we are all set to go!

A primer on Pyglet

Pyglet provides an API for multimedia application development using Python. It is an OpenGL-based library that works on multiple platforms. It is primarily used for developing gaming and other graphically-rich applications. We will cover some important aspects of the Pyglet framework.

Important components

We will briefly discuss some of the important modules and packages of Pyglet that we will use.
Note that this is just a tiny chunk of the Pyglet framework. Please review the Pyglet documentation to know more about its capabilities, as this is beyond the scope of this article.

Window

The pyglet.window.Window module provides the user interface. It is used to create a window with an OpenGL context. The Window class has API methods to handle various events, such as mouse and keyboard events. The window can be viewed in normal or full-screen mode. Here is a simple example of creating a Window instance. You can define a size by specifying width and height arguments in the constructor.

win = pyglet.window.Window()

The background color for the window can be set using the OpenGL call glClearColor, as follows:

pyglet.gl.glClearColor(1, 1, 1, 1)

This sets a white background color. The first three arguments are the red, green, and blue color values, whereas the last value represents the alpha. The following code will set up a gray background color.

pyglet.gl.glClearColor(0.5, 0.5, 0.5, 1)

The following illustration shows a screenshot of an empty window with a gray background color.

Image

The pyglet.image module enables the drawing of images on the screen. The following code snippet shows a way to create an image and display it at a specified position within the Pyglet window.

img = pyglet.image.load('my_image.bmp')
x, y, z = 0, 0, 0
img.blit(x, y, z)

A later section will cover some important operations supported by the pyglet.image module.

Sprite

This is another important module. It is used to display an image or an animation frame within a Pyglet window, as discussed earlier. It is an image instance that allows us to position an image anywhere within the Pyglet window. A sprite can also be rotated and scaled. It is possible to create multiple sprites of the same image and place them at different locations and with different orientations inside the window.

Animation

The Animation module is a part of the pyglet.image package. As the name indicates, pyglet.image.Animation is used to create an animation from one or more image frames. There are different ways to create an animation. For example, it can be created from a sequence of images or using AnimationFrame objects. An animation sprite can be created and displayed within the Pyglet window.

AnimationFrame

This creates a single frame of an animation from a given image. An animation can be created from such AnimationFrame objects. The following line of code shows an example.

animation = pyglet.image.Animation(anim_frames)

anim_frames is a list containing instances of AnimationFrame.

Clock

Among many other things, this module is used for scheduling functions to be called at a specified time. For example, the following code calls a method moveObjects ten times every second.

pyglet.clock.schedule_interval(moveObjects, 1.0/10)

Displaying an image

In the Image sub-section, we learned how to load an image using image.blit. However, image blitting is a less efficient way of drawing images. There is a better and preferred way to display an image: by creating an instance of Sprite. Multiple Sprite objects can be created for drawing the same image. For example, the same image might need to be displayed at various locations within the window. Each of these images should be represented by a separate Sprite instance. The following simple program just loads an image and displays the Sprite instance representing this image on the screen.
1  import pyglet
2
3  car_img = pyglet.image.load('images/car.png')
4  carSprite = pyglet.sprite.Sprite(car_img)
5  window = pyglet.window.Window()
6  pyglet.gl.glClearColor(1, 1, 1, 1)
7
8  @window.event
9  def on_draw():
10     window.clear()
11     carSprite.draw()
12
13 pyglet.app.run()

On line 3, the image is opened using the pyglet.image.load call. A Sprite instance corresponding to this image is created on line 4. The code on line 6 sets a white background for the window. The on_draw is an API method that is called when the window needs to be redrawn. Here, the image sprite is drawn on the screen. The next illustration shows a loaded image within a Pyglet window.

In various examples in this article, the file path strings are hardcoded. We have used forward slashes for the file path. Although this works on the Windows platform, the convention is to use backward slashes; for example, images/car.png is represented as images\car.png. Additionally, you can also specify a complete path to the file by using the os.path.join method in Python. Regardless of what slashes you use, os.path.normpath will make sure it modifies the slashes to fit the ones used for the platform. The use of os.path.normpath is illustrated in the following snippet:

import os
original_path = 'C:/images/car.png'
new_path = os.path.normpath(original_path)

The preceding image illustrates a Pyglet window showing a still image.

Mouse and keyboard controls

The Window module of Pyglet implements some API methods that enable user input to a playing animation. The API methods such as on_mouse_press and on_key_press are used to capture mouse and keyboard events during the animation. These methods can be overridden to perform a specific operation.

Adding sound effects

The media module of Pyglet supports audio and video playback. The following code loads a media file and plays it during the animation.

1 background_sound = pyglet.media.load(
2     'C:/AudioFiles/background.mp3',
3     streaming=False)
4 background_sound.play()

The second optional argument, provided on line 3, decodes the media file completely in memory at the time the media is loaded. This is important if the media needs to be played several times during the animation. The API method play() starts streaming the specified media file.
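Tying the pieces together, here is a minimal sketch (not from the original article) of the event-handler style just described, assuming an image exists at images/car.png; clicking anywhere in the window moves the sprite to the clicked position:

import pyglet

window = pyglet.window.Window()
car_img = pyglet.image.load('images/car.png')
car = pyglet.sprite.Sprite(car_img)

@window.event
def on_mouse_press(x, y, button, modifiers):
    # Reposition the sprite at the point the user clicked
    car.x, car.y = x, y

@window.event
def on_draw():
    # Redraw the sprite at its current position
    window.clear()
    car.draw()

pyglet.app.run()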

Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button to your application.

(For more resources related to this topic, see here.)

Introduction

You know how to start your adventure by developing universal applications for smartphones, tablets, and desktops running on the Windows 10 operating system. In the next step, it is crucial to get to know how to design particular pages within the application to provide the user with a convenient user interface that works smoothly on screens with various resolutions. Fortunately, designing the user interface is really simple using the XAML language, as well as Microsoft Visual Studio Community 2015. A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, one can easily arrange controls in various layouts, either vertically, horizontally, or in a grid. This is not all; developers can prepare their own controls as well. Such controls can be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops. You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control as well as how to programmatically handle controls. Thus, some controls can change their appearance, or new controls can be added to the page, when specific conditions are met. Another important question is how to provide the user with a consistent user interface within the whole application. While developing solutions for the Windows 10 operating system, such a task can be easily accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that can be applied to either particular controls or to all the controls of a given type. At the end, you could ask yourself a simple question, "Why should I restrict access to my new awesome application only to people who know the particular language in which the user interface is prepared?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization will use additional resource files, so translations can be prepared not by a developer, but by a specialist who knows the given language well.

Adding a button

When developing applications, you can use a set of predefined controls, among which a button exists. It allows you to handle the event of pressing the button by a user. Of course, the appearance of the button can be easily adjusted, for instance, by choosing a proper background or border, as you will see in this recipe. The button can present textual content; however, it can also be adjusted to the user's needs, for instance, by choosing a proper color or font size. This is not all, because the content shown on the button need not be only textual. For instance, you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to a small icon that visually informs about the operation. Such modifications are presented in the following part of this recipe as well.

Getting ready

To step through this recipe, you only need the automatically generated project.
How to do it…

1. Add a button to the page by modifying the content of the MainPage.xaml file, as follows:

<Page (...)>
  <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Button Content="Click me!"
            Foreground="#0a0a0a"
            FontWeight="SemiBold"
            FontSize="20"
            FontStyle="Italic"
            Background="LightBlue"
            BorderBrush="RoyalBlue"
            BorderThickness="5"
            Padding="20 10"
            VerticalAlignment="Center"
            HorizontalAlignment="Center" />
  </Grid>
</Page>

2. Generate a method for handling the event of clicking the button by selecting the button (either in the graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window, with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows:

private void Button_Click(object sender, RoutedEventArgs e)
{
}

How it works…

In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties, which are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness. The first chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand sides and 10 pixels at the top and bottom.

The button presents the Click me! text, defined as the value of the Content property. The text is shown in the color #0a0a0a with a semi-bold italic font of size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively. If you run the application on a local machine, you should see the button centered on the page.

It is worth mentioning that the IDE supports a live preview of the designed page, so you can modify the values of particular properties and get real-time feedback regarding the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see the impact of each introduced change.
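The generated handler is empty by default. As a hedged illustration of the programmatic handling mentioned in the introduction — the new label text and brush color below are our own assumptions, not part of the recipe — the handler could change the button's appearance each time it is pressed:

// Requires: using Windows.UI; using Windows.UI.Xaml;
// using Windows.UI.Xaml.Controls; using Windows.UI.Xaml.Media;
private void Button_Click(object sender, RoutedEventArgs e)
{
    // The sender is the Button instance that raised the event,
    // so the same handler works for any number of buttons.
    var button = (Button)sender;
    button.Content = "Clicked!";                                  // assumed text
    button.Background = new SolidColorBrush(Colors.LightGreen);   // assumed color
}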
There's more…

As already mentioned, even the Button control has many advanced features. For example, you could place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now.

First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
  <Image Source="/Assets/Image.jpg" />
</Button>

Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window. In the Add Existing Item window, choose the Image.jpg file and click on the Add button.

As you can see, the previous example uses the Image control. No more information about this control is presented in this recipe, because it is the topic of one of the next recipes, namely Adding an image. If you run the application now, the button presents the image instead of a text.

The second additional example presents a button with a text over an image. To do so, let's modify the XAML code as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
  <Grid>
    <Image Source="/Assets/Image.jpg" />
    <TextBlock Text="Click me!"
               Foreground="White"
               FontWeight="Bold"
               FontSize="28"
               VerticalAlignment="Bottom"
               HorizontalAlignment="Center"
               Margin="10" />
  </Grid>
</Button>

You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of these controls is not explained in the current recipe. If you run the application now, the button presents the text shown over the image.

As the last example, you will see a button that contains both a textual label and an icon. Such a solution can be accomplished using the StackPanel, TextBlock, and Image controls, as you can see in the following code snippet:

<Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20">
  <StackPanel Orientation="Horizontal">
    <Image Source="/Assets/Icon.png" MaxHeight="32" />
    <TextBlock Text="Accept" Foreground="White" FontSize="28" Margin="20 0 0 0" />
  </StackPanel>
</Button>

Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe. The result is a button presenting the icon next to the Accept label.
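Finally, it is worth noting that everything shown above in XAML can also be done from code-behind, which is useful when new controls should be added to the page only when specific conditions are met, as mentioned in the introduction. A minimal sketch — assuming the page's root Grid has been given x:Name="RootGrid", which is our naming choice, not something the project template provides — could look as follows:

// Requires: using Windows.UI.Xaml; using Windows.UI.Xaml.Controls;
var button = new Button
{
    Content = "Accept",
    Padding = new Thickness(20),
    VerticalAlignment = VerticalAlignment.Center,
    HorizontalAlignment = HorizontalAlignment.Center
};
button.Click += Button_Click;    // reuse the handler generated earlier
RootGrid.Children.Add(button);   // the button appears on the page at runtime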

Getting Started with ZeroMQ

Packt
04 Apr 2013
5 min read
(For more resources related to this topic, see here.)

The message queue

A message queue, or technically a FIFO (First In First Out) queue, is a fundamental and well-studied data structure. There are different queue implementations, such as priority queues or double-ended queues, that have different features, but the general idea is that the data is added to a queue and fetched when the data or the caller is ready.

Imagine we are using a basic in-memory queue. In case of an issue, such as a power outage or a hardware failure, the entire queue could be lost, and a program that expects to receive a message would never receive it. Adopting a message queue server, by contrast, guarantees that messages will be delivered to the destination no matter what happens. Message queuing enables asynchronous communication between loosely-coupled components and also provides solid queuing consistency. In case of insufficient resources, which prevent you from immediately processing the data that is sent, you can queue the data up in the message queue server, which stores it until the destination is ready to accept the messages.

Message queuing plays an important role in large-scale distributed systems and enables asynchronous communication. Let's have a quick overview of the difference between synchronous and asynchronous systems.

In ordinary synchronous systems, tasks are processed one at a time. A task is not processed until the task in progress is finished. This is the simplest way to get the job done.

(Figure: Synchronous system)

We could also implement this system with threads; in this case, threads process each task in parallel.

(Figure: Threaded synchronous system)

In the threading model, threads are managed by the operating system itself, on a single processor or on multiple processors/cores. Asynchronous Input/Output (AIO) allows a program to continue its execution while processing input/output requests. AIO is mandatory in real-time applications. By using AIO, we can map several tasks to a single thread.

(Figure: Asynchronous system)

The traditional way of programming is to start a process and wait for it to complete. The downside of this approach is that it blocks the execution of the program while there is a task in progress. AIO takes a different approach: a task that does not depend on the process can still continue.

You may wonder why you would use a message queue instead of handling all processes with a single-threaded or multi-threaded queue approach. Consider a scenario where you have a web application, similar to Google Images, in which users type some URLs. Once they submit the form, your application fetches all the images from the given URLs. However:

- If you use a single-threaded queue, your application would not be able to process all the given URLs if there are too many users.
- If you use a multi-threaded queue approach, your application would be vulnerable to a distributed denial of service (DDoS) attack.
- You would lose all the given URLs in case of a hardware failure.

In this scenario, you know that you need to add the given URLs to a queue and process them, so you need a message queuing system.

Introduction to ZeroMQ

Until now we have covered what a message queue is, which brings us to the purpose of this article: ZeroMQ. The community describes ZeroMQ as "sockets on steroids". More formally, ZeroMQ is a messaging library that helps developers design distributed and concurrent applications.
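To give a first taste of this socket-like API before going further: ZeroMQ has bindings and ports for many languages, and the hedged sketch below uses NetMQ, the native C# port of ZeroMQ. The TCP endpoint and the message texts are illustrative choices of ours, not anything prescribed by the library. It shows a minimal request-reply exchange:

// Requires the NetMQ package. Both sockets live in one process here
// only to keep the sketch self-contained; normally the client and the
// server are separate programs, possibly on separate machines.
using System;
using NetMQ;
using NetMQ.Sockets;

using (var server = new ResponseSocket())   // REP socket: replies to requests
using (var client = new RequestSocket())    // REQ socket: sends requests
{
    server.Bind("tcp://127.0.0.1:5555");    // illustrative endpoint
    client.Connect("tcp://127.0.0.1:5555");

    client.SendFrame("Hello");                        // request
    Console.WriteLine(server.ReceiveFrameString());   // prints: Hello
    server.SendFrame("World");                        // reply
    Console.WriteLine(client.ReceiveFrameString());   // prints: World
}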
The first thing we need to know about ZeroMQ is that it is not a traditional message queuing system, such as ActiveMQ, WebSphere MQ, or RabbitMQ. ZeroMQ is different: it gives us the tools to build our own message queuing system. It is a library. It runs on different architectures, from ARM to Itanium, and has support for more than 20 programming languages.

Simplicity

ZeroMQ is simple. We can do asynchronous I/O operations, and ZeroMQ queues messages in an I/O thread. ZeroMQ's I/O threads handle network traffic asynchronously, so the library can do the rest of the job for us. If you have worked with raw sockets before, you will know that they are quite painful to work with; ZeroMQ makes working with sockets easy.

Performance

ZeroMQ is fast. The website Second Life managed to get 13.4 microseconds end-to-end latencies and up to 4,100,000 messages per second. ZeroMQ can use the multicast transport protocol, which is an efficient method to transmit data to multiple destinations.

The brokerless design

Unlike traditional message queuing systems, ZeroMQ is brokerless. In a traditional message queuing system, there is a central message server (broker) in the middle of the network; every node is connected to this central node, and the nodes communicate with each other only via the central broker, never directly. In ZeroMQ's brokerless design, by contrast, applications can communicate directly with each other without any broker in the middle.

ZeroMQ does not store messages on disk. Please do not even think about it. However, it is possible to use a local swap file to store messages if you set zmq.SWAP.

Summary

This article explained what a message queuing system is, discussed the importance of message queuing, and introduced ZeroMQ to the reader.