
How-To Tutorials - Data

1210 Articles

Pentaho Reporting: Building Interactive Reports in HTML

Packt
26 Oct 2009
1 min read
Interactive HTML report properties

All reporting elements share a common set of HTML-related properties that may be used to create a dynamic report. Below is a list of properties and their uses:

- class: Sets the class attribute of the current HTML entity to the specified value.
- name: Sets the name attribute of the current HTML entity to the specified value.
- title: Sets the title attribute of the current HTML entity to the specified value.
- xml-id: Allows the naming of the current HTML entity by setting its id attribute, making it possible to reference it in outside scripts.
- append-body: Allows the placement of raw HTML within the body of the HTML document, prior to the rendering of the current element.
- append-body-footer: Allows the placement of raw HTML within the body of the HTML document, after the rendering of the current element.
- append-header: Defined only at the master report level, this property allows the inclusion of raw HTML within the header of the generated HTML document. This location is traditionally used to load additional CSS files, as well as external JavaScript files.
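As a rough sketch of where these properties end up in the generated output (all element names, attribute values, and the stylesheet filename below are invented for illustration, not taken from the article):

```html
<html>
<head>
  <!-- content supplied via append-header (master report only), e.g. extra CSS or JS -->
  <link rel="stylesheet" href="report-extras.css">
</head>
<body>
  <!-- raw HTML supplied via append-body renders here, before the element -->
  <div class="totals-row" id="grandTotal" name="grandTotal" title="Grand total">
    1,234.56
  </div>
  <!-- raw HTML supplied via append-body-footer renders here, after the element -->
</body>
</html>
```

The xml-id value (here, grandTotal) is what an outside script would use to reference the element, for example via document.getElementById('grandTotal').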


Graphical Report Design with iReport: Part 2

Packt
26 Oct 2009
4 min read
By the end of this article, you should be able to:

- Add multiple columns to a report
- Group report data
- Add images and charts to a report

Creating more elaborate reports

In this section, we will modify the simple report we created in the previous article to illustrate how to add images, charts, and multiple columns to a report. We will also see how to group report data. We will perform all of these tasks graphically with iReport.

Adding images to a report

Adding static images to a report is very simple with iReport. Just drag the Image component from the Palette to the band where it will be rendered in the report. When we drop the image component into the appropriate band, a window pops up asking us to specify the location of the image file to display. After we select the image, we can drag it to the exact location where it will be rendered. As we can see, adding images to a report using iReport couldn't be any simpler.

Adding multiple columns to a report

The report we've been creating so far in this article (continued from Graphical Report Design with iReport: Part 1; for database details, refer to the Database for our reports section of Creating Dynamic Reports from Databases) contains over 11,000 records and spans over 300 pages. As we can see, there is a lot of space between the text fields. Perhaps it would be a good idea to place the text fields closer together and add an additional column; this would cut the number of pages in the report by half. To change the number of columns in the report, we simply need to select the root report node in the Report Inspector window at the top left and then modify its Columns property in the Properties window at the bottom right. When we modify the Columns property, iReport automatically modifies the Column Width property to an appropriate value. We are free, of course, to modify this value if it doesn't meet our needs.
As our report now contains more than one column, it makes sense to re-add the Column Header band we deleted earlier. This can be done by right-clicking on the band in the Report Inspector window and selecting Add Band. Next, we need to move the static text in the page header to the Column Header band. To move any element from one band to another, all we need to do is drag it to the appropriate band in the Report Inspector window. We then need to resize and reposition the text fields in the Detail band and the static text elements in the Column Header band so that they fit in the new, narrower width of the columns. We should also resize the Column Header band to avoid having too much whitespace between the elements of the Column Header and Detail bands. We can see the resulting report by clicking on Preview.

Grouping report data

Suppose we are asked to modify our report so that data is divided by the state where the aircraft is registered. This is a perfect situation in which to apply report groups. Report groups allow us to divide report data when a report expression changes. To define a report group, we need to right-click on the root report node in the Report Inspector window and then select Add Report Group. Then, we enter the group name and indicate whether we want to group by a field or by a report expression. In our case, we want to group the data by the state field. After clicking on Next>, we need to indicate whether we want to add a group header and/or footer to our report. For aesthetic purposes, we move the static text fields in the Column Header band to the Group Header band, remove the Column Header and Page Header bands, and add additional information to the Group Header band. After making all of these changes, we can preview the report by clicking on Preview.


Graphical Report Design with iReport: Part 1

Packt
26 Oct 2009
7 min read
In 2008, iReport was rewritten to take advantage of the NetBeans platform. It is freely available both as a standalone product and as a plugin to the NetBeans IDE. In this article, we will be covering the standalone version of iReport; however, the material is also applicable to the iReport NetBeans plugin. By the end of this article, you will be able to:

- Obtain and set up iReport
- Quickly create database reports by taking advantage of iReport's Report Wizard
- Design reports graphically with iReport

Obtaining iReport

iReport can be downloaded from its home page at http://jasperforge.org/projects/ireport by clicking on the Download iReport image slightly above the center of the page. Once we click on the image, we are directed to an intermediate page where we can either log in with our JasperForge account or go straight to the download page. Either logging in or clicking on the No Thanks, Download Now button takes us to the iReport download page. The standalone iReport product is in the first row of the table on the page. To download it, we simply click on the Download link in the last column. The other downloads on the page are for older versions of JasperReports, the iReport NetBeans plugin, and other JasperSoft products. iReport can be downloaded as a DMG file for Macintosh computers, as a Windows installer for Windows PCs, as a source file, as a ZIP file, or as a gzipped TAR file. To install iReport, simply follow the usual application installation method for your platform. If you chose to download the ZIP or gzipped TAR file, simply extract it into any directory. A subdirectory called something like iReport-nb-3.5.1 will be created. (The exact name will depend on the version of iReport that was downloaded.) Inside this directory, you will find a bin subdirectory containing an executable shell script called ireport and a couple of Windows executables, ireport.exe and ireport_w.exe. On Windows systems, either EXE file will start iReport.
The difference between the two Windows executables is that ireport.exe displays a command-line window when iReport is executed, while ireport_w.exe does not. Both versions provide exactly the same functionality. On Unix and Unix-like systems, such as Linux and Mac OS, iReport can be started by executing the ireport shell script. The following screenshot illustrates how iReport looks when it is opened for the first time:

Setting up iReport

iReport can help us quickly generate database reports. To do so, we need to provide it with the JDBC driver and connection information for our database. iReport comes bundled with JDBC drivers for several open source relational database systems, such as MySQL, PostgreSQL, HSQLDB, and others. If we want to connect to a different database, we need to add the JDBC driver to iReport's CLASSPATH. This can be done by clicking on Tools | Options and then selecting the Classpath tab. To add the JDBC driver to the CLASSPATH, click on the Add JAR button, and then navigate to the location of the JAR file containing the JDBC driver. Select the JAR file and click on the OK button at the bottom of the window. We won't actually add a JDBC driver, as we are using MySQL for our examples, which is one of the RDBMSs supported out of the box by iReport; the information just provided is for the benefit of readers using an RDBMS that is not supported out of the box. Before we can create reports that use an RDBMS as a datasource, we need to create a database connection. To do so, we need to click on the Report Datasources icon in the toolbar. After doing so, the Connections / Datasources configuration window should pop up. To add the connection, we need to click on the New button, select Database JDBC connection, and then click on the Next> button. We then need to select the appropriate JDBC driver, fill in the connection information, and click on the Save button.
Before saving the database connection properties, it is a good idea to click on the Test button to make sure we can connect to the database. If we can, we should see a pop-up window like the following: After verifying that we can successfully connect to the database, we are ready to create some database reports.

Creating a database report in record time

iReport contains a wizard that allows us to quickly generate database reports (very useful if the boss asks for a report 15 minutes before quitting time on a Friday!). The wizard allows us to use one of the predefined templates that are included with iReport. The included report templates are divided into two groups: templates laid out in a "columnar" manner and templates laid out in a "tabular" manner. Columnar templates generate reports that are laid out in columns, and tabular templates generate reports that are laid out like a table. In this section, we will create a report displaying all the aircraft with a horsepower of 1000 or more. To quickly create a database report, we need to go to File | New | Report Wizard. We should then enter an appropriate name and location for our report and click on Next>. Next, we need to select the datasource or database connection to use for our report. For our example, we will use the JDBC connection we configured in the previous section. We can then enter the database query we will use to create the report. Alternatively, we can use the iReport query designer to design the query. For individuals with SQL experience, in many cases it is easier to come up with the database query in a separate database client tool and then paste it into the Query text area than to use the query designer.
The complete query for the report is:

    SELECT a.tail_num, a.aircraft_serial,
           am.model AS aircraft_model,
           ae.model AS engine_model
    FROM aircraft a, aircraft_models am, aircraft_engines ae
    WHERE a.aircraft_model_code = am.aircraft_model_code
      AND a.aircraft_engine_code = ae.aircraft_engine_code
      AND ae.horsepower >= 1000

The following window shows a list of all the columns selected in the query, allowing us to select which ones we would like to use as report fields. In this case, we want the data for all columns in the query to be displayed in the report; therefore, we select all columns by clicking on the second button. We then select how we want to group the data and click on Next>. This creates a report group. In this example, we will not group the report data. The screenshot illustrates how the drop-down box contains the report fields selected in the previous step. We then select the report layout (Columnar or Tabular); in this example, we will use the Tabular Layout. After selecting the layout, we click on Next> to be presented with the last step. We then click on Finish to generate the report's JRXML template. While the template is automatically saved when it is created, the report generated by the Preview button is not automatically saved. We can preview our report by clicking on Preview. That's it! We have created a report by simply entering a query and selecting a few options from a wizard.
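The report query above can also be exercised outside iReport. The following sketch runs it against a throwaway in-memory SQLite database; the schema mirrors the query's FROM clause, but all table contents are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal stand-ins for the aircraft registry tables; all values are made up.
cur.executescript("""
    CREATE TABLE aircraft_models (aircraft_model_code TEXT, model TEXT);
    CREATE TABLE aircraft_engines (aircraft_engine_code TEXT, model TEXT, horsepower INT);
    CREATE TABLE aircraft (tail_num TEXT, aircraft_serial TEXT,
                           aircraft_model_code TEXT, aircraft_engine_code TEXT);
    INSERT INTO aircraft_models VALUES ('M1', 'Model-A');
    INSERT INTO aircraft_engines VALUES ('E1', 'BigEngine', 1500);
    INSERT INTO aircraft_engines VALUES ('E2', 'SmallEngine', 150);
    INSERT INTO aircraft VALUES ('N100', 'S-001', 'M1', 'E1');
    INSERT INTO aircraft VALUES ('N200', 'S-002', 'M1', 'E2');
""")

# The report query: only aircraft whose engine has 1000+ horsepower qualify.
rows = cur.execute("""
    SELECT a.tail_num, a.aircraft_serial,
           am.model AS aircraft_model, ae.model AS engine_model
    FROM aircraft a, aircraft_models am, aircraft_engines ae
    WHERE a.aircraft_model_code = am.aircraft_model_code
      AND a.aircraft_engine_code = ae.aircraft_engine_code
      AND ae.horsepower >= 1000
""").fetchall()
print(rows)  # [('N100', 'S-001', 'Model-A', 'BigEngine')]
```

Only N100 survives the horsepower filter; N200's 150-horsepower engine is excluded, which is exactly the row selection the wizard-generated report will display.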


Installation and basic features of EnterpriseDB

Packt
24 Oct 2009
3 min read
Installing EnterpriseDB

Download Postgres Plus Advanced Server 8.3 (pgplus-advsvr-windows-83012b.exe, 120 MB) from http://www.enterprisedb.com/products/download.do. After downloading, double-click the executable file. You may need to choose a language from a list; here, English has been chosen. The welcome window is displayed as shown. Click Next. Choose the option you need, reading the notes on the page to guide your choice; here, Oracle compatibility has been chosen. Click Next. Accept the default installation location, or browse to choose a different one; here, the default location is accepted. Click Next. The window that appears displays all the features that are available; pick and choose among them. Here, all features are chosen. Click Next. The next window shows the links from which the JDBC drivers for connecting to Oracle and MySQL can be downloaded. Click Next. In the window that appears, you need to choose the password for the operating system user account. Read the cautionary remarks on this page, then click Next. At this point, your anti-virus program may ask for permission to run the program (McAfee is the anti-virus program on this computer). In the window that is displayed, you may need to enter the administrator's login credentials; you may also browse and select the data destination directory. Here, the default is accepted. Click Next. In the window that is displayed, you may choose the type of environment in which the server will be used, as well as the workload for which you may be using the server. The dynamic tuning options available are:

Server Utilization
- Development: This is a development machine and many other applications will be running on it. Stress testing should not be performed with this configuration. EnterpriseDB will use a minimal amount of memory.
- Mixed: Several applications will be running on this machine. Choose this option for web/application servers.
- Dedicated: This machine is dedicated to running EnterpriseDB and will use available memory to optimize performance.

Workload Profile
- Transaction Processing: The running application is transaction intensive.
- General Purpose: The database will be used for transaction processing as well as complex queries and reporting.
- Reporting: The database will be used for reporting applications.

For this tutorial, the Mixed option for Server Utilization and General Purpose for Workload Profile were chosen. Click on Next. The Summary page is displayed, showing all the options chosen. Click on the Install button. A window with a progress bar is displayed. You may get a warning from the anti-virus program on your computer asking whether to allow the file to be executed; click OK to allow the installation.


Migrating MS Access 2003 Data using the Oracle SQL Developer 1.2

Packt
24 Oct 2009
7 min read
Introduction

Business needs often necessitate data migration from a smaller, less secure database to a higher-end, faster database server with more reliable availability. A typical scenario is the migration of data from a desktop-sized database such as MS Access or FoxPro to a higher-end database server such as MS SQL Server, Oracle, DB2, or SQL Anywhere Server. Most database vendors provide tools to migrate from third-party databases to their own database servers. In three previous articles, the author described the built-in tools to migrate from MS Access to SQL 2000 Server and SQL Anywhere Server, and from Oracle 10G XE to SQL Anywhere Server. In an earlier article on this site, the author showed how you may connect to an MS Access 2003 database and execute SQL statements using the Oracle SQL Developer 1.2 tool. In this tutorial, the author shows you how to migrate an MS Access database to an Oracle 10G XE server, delineating all the steps involved in the migration process. This latest version of Oracle SQL Developer 1.2 is sometimes called the Migration version, as it supports migrating data from three vendors (MySQL, SQL Server, and MS Access) to an Oracle database. In fact, it has been designed to migrate from more than one version of MS Access, a feature that was not available in version 1.1 of the tool.

Overview of this Tutorial

As in the earlier article, a simple MS Access 2003 database file will be created with just one table, a query, and a linked table. This database file, about 292 KB, will be migrated to an Oracle 10G XE database. Oracle 10G XE, by design, can have just one database on a computer; however, you can have separate applications by having different user schemas. Oracle 10G XE comes bundled with a sample database schema and data which can be accessed by using the username hr with the password hr.
For the purposes of this example, a new user will be created, and his authentication will be used for creating the necessary migration-related schemas to be stored in a repository. This will become clear as you follow the various details and steps. Once the 'Repository' is created, you begin by capturing the metadata of the source, followed by converting the captured source information into an Oracle-specific model, where a mapping between the source data and Oracle will be accomplished. After this process, you generate the data definition language script, which will create the Oracle objects such as tables, views, etc. In the final step, these tables will be populated by transferring the data from the source to Oracle 10G XE.

MS Access 2003 Source

An empty MS Access database file, TestMigration.mdb, is created in the default directory, My Documents. An Employees table will be imported, an Orders table will be linked, and a TestQuery based on selecting a few columns of the Employees table will be created. The Employees table and the Orders table may be found in the Northwind database that ships with most MS Access versions.

Creating a New User in Oracle 10G XE

As described in the overview, the MS Access database will be migrated to a user schema in Oracle 10G XE, but this requires creating the schema first. Only a user with DBA privileges can create a new user. Open the home page of the Oracle 10G XE server. Log in with the credentials you supplied while installing the software, where the user is system and the password is what you chose at that time, as shown in the next figure. This gives you access to several of the tools that you can use to administer as well as work with database objects. Click on the icon for Administration and follow the drop-downs till you get to the menu item Create User, as shown in the next figure. Create a new user MigrateAccess with a password that you choose and confirm. Keep the account status unlocked.
This uses the default tablespace called USERS. The default user privilege does not include the DBA role, but for this example, DBA is also included by placing a check mark in this selection. Several other system-wide privileges are also granted; please follow the steps described in the earlier article for the details. The next figure shows all the details filled in. When you click the Create button, you will have created the user MigrateAccess, and you will notice that the 'bread crumb' changes to Manage Database Users. The new user MigrateAccess has been added to the list of users, as shown in the next figure. As no expiry was set for this user in the previous screen, no expiry is shown in the following screen. Now, if you log out (remember, you logged in as SYSTEM) and log in with the new credentials, MigrateAccess/[chosen password], you can access all the tools on the database. Of course, all the objects (tables, views, etc.) will be empty.

Creating the Repository to Store Schemas

Migration using this tool requires an Oracle database schema to store the metadata it collects about the source. You will create a connection from Oracle SQL Developer to the Oracle 10G XE instance in which you just finished creating a new user schema. This user's schema is where the repository contents will be stored.

Making a connection to Oracle

Right-click on the Connections node, and from the drop-down menu select New Connection. This brings up the New / Select Database Connection window (described in the earlier referenced article). It comes up with the default connection to an Oracle database; it even recognizes the local Oracle 10G XE, capturing all its details as shown. You need to provide a connection name, a username, and a password.
The connection name is your choice (herein called conMigrate), and the username and password are the same as those used while creating the new user MigrateAccess. When you click on the Test button, after a little while (preceded by a little progress window), a success status message is posted to the form above the Help button, as shown in the next figure. Now click on the OK button on the New / Select Database Connection window. This adds the conMigrate connection to the list of connections, as shown in the next figure. Notice that the objects are all empty, as we discussed earlier.

Create Repository

Click on the main menu item Migrate. From the drop-down, click on Repository Management -> Create Repository, as shown in the next figure. This brings up the Create Repository window showing the connection conMigrate, as shown in the next figure. You may connect or disconnect this from the tool as long as the authentication information is available. Now click on the Create button. This brings up the Installing Repository window, which reports the various objects installed and finally shows the message "Repository Built Successfully", as shown in the next figure. Click on the Close button on this window. Now log in to Oracle 10G XE with the credentials for the user MigrateAccess and click on the object browser; you will see all the tables, views, etc. in the repository, as shown. You will also notice that two more windows, named Captured Models and Converted Models, appear below the Connections node in Oracle SQL Developer; if they are not found there, you may find them in the submenus of the main View menu. The next figure shows the submenus of the View menu.

Connect to the Source Database

Right-click on the Connections node and establish a new connection to the source database, conTestMigration, as shown in the next figure.
When you click the Test button, you will see a message posted to the screen indicating that the connection was a success. Click on the Connect button. This adds the conTestMigration connection to the list of connections in the navigator window.
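For readers who prefer the command line over the web UI, roughly equivalent SQL for the user-creation step looks like the following. This is a sketch only: the password is a placeholder, and the exact set of grants shown here is an assumption; the UI steps described in the article remain the authoritative path.

```sql
-- Sketch: create the migration user and grant the roles the article selects.
CREATE USER MigrateAccess IDENTIFIED BY your_password
    DEFAULT TABLESPACE USERS
    ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE, DBA TO MigrateAccess;
```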


Building Queries Visually in MySQL Query Browser

Packt
23 Oct 2009
3 min read
MySQL Query Browser, one of the open source MySQL GUI tools from MySQL AB, is used for building MySQL database queries visually. In MySQL Query Browser, you build database queries using just your mouse: click, drag, and drop! MySQL Query Browser has plenty of visual query-building functions and features. This article shows two examples, building join and master-detail queries, which demonstrate some of these functions and features.

Join Query

A pop-up query toolbar appears when you drag a table or column from the Object Browser's Schemata tab to the Query Area. You drop the table or column on one of the pop-up query toolbar's buttons to build your query. The following example demonstrates the use of the pop-up query toolbar to build a join query that involves three tables and two types of join (equi and left outer).

1. Drag and drop the product table from the Schemata to the Add Table(s) button. A SELECT query on the product table is written in the Query Area.
2. Drag and drop the item table from the Schemata to the JOIN Table(s) button on the pop-up query toolbar. The two tables are joined on the foreign key, product_code. If no foreign-key relationship exists, the drag and drop has no effect.
3. Drag and drop the order table from the Schemata to the LEFT OUTER JOIN button on the pop-up query toolbar.
4. Maximize the query area by pressing F11. You get a larger query area, and your lines are sequentially numbered (for easier identification).
5. Move the FROM clause to its own line by putting your cursor just before the FROM keyword and pressing Enter. Similarly, move the ON clause to its own line. Now you can see all lines completely, and you can see that the item table is left-joined to the order table on their foreign-key relationship column, the order_number column.

As of now, our query is SELECT *, i.e., it selects all columns from all tables. Let's now select the columns we'd like to show in the query's output.
For example, drag and drop order_number from the item table, product_name from the product table, and then quantity from the item table (if necessary, expand the table folders to see their columns). The sequence of selecting the columns is reflected in the SELECT clause (from left to right). Note that you can't select columns from the left-joined order table; if you try, nothing will happen. Next, add an additional condition: drag and drop the amount column on the WHERE button in the pop-up query toolbar. The column is added, with an AND, in the WHERE clause of the query. Type in its condition value, for example, > 1000. To finalize our query, drag and drop product_name on the ORDER button, and then order_number (from the item table, not the order table) on the GROUP button. You'll see that the GROUP BY and ORDER BY clauses are ordered correctly, i.e., the GROUP BY clause comes before the ORDER BY clause, regardless of your drag-and-drop sequence. To test your query, click the Execute button. Your query should run without any error and display its output in the query area (below the query).
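The drag-and-drop session above produces SQL roughly like the query in the sketch below, which runs it against a throwaway SQLite database. The column-to-table assignment for amount (assumed here to live on the order table), the exact join syntax, and all row values are my assumptions for illustration; MySQL Query Browser's generated text will differ in formatting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Tiny stand-ins for the three tables named in the article; data is invented.
# "order" must be quoted because it is a reserved word.
cur.executescript("""
    CREATE TABLE product (product_code INT, product_name TEXT);
    CREATE TABLE item (order_number INT, product_code INT, quantity INT);
    CREATE TABLE "order" (order_number INT, amount DEC);
    INSERT INTO product VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO item VALUES (10, 1, 5), (11, 2, 2);
    INSERT INTO "order" VALUES (10, 1500), (11, 800);
""")

# Roughly the query the drag-and-drop session builds: equi-join product/item,
# left outer join to "order", filter on amount, then group and order the output.
rows = cur.execute("""
    SELECT i.order_number, p.product_name, i.quantity
    FROM product p
         JOIN item i ON p.product_code = i.product_code
         LEFT OUTER JOIN "order" o ON i.order_number = o.order_number
    WHERE o.amount > 1000
    GROUP BY i.order_number
    ORDER BY p.product_name
""").fetchall()
print(rows)  # [(10, 'Widget', 5)]
```

Only order 10 (amount 1500) passes the > 1000 filter, so a single row comes back.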

ADO.NET Entity Framework

Packt
23 Oct 2009
6 min read
Creating an Entity Data Model

You can create an ADO.NET Entity Data Model in one of two ways:

- Use the ADO.NET Entity Data Model Designer
- Use the command-line Entity Data Model generator, EdmGen.exe

We will first take a look at how we can design an Entity Data Model using the ADO.NET Entity Data Model Designer, a Visual Studio wizard that is enabled after you install the ADO.NET Entity Framework and its tools. It provides a graphical interface that you can use to generate an Entity Data Model.

Creating the Payroll Entity Data Model using the ADO.NET Entity Data Model Designer

Here are the tables of the Payroll database that we will use to generate the data model:

- Employee
- Designation
- Department
- Salary
- ProvidentFund

To create an entity data model using the ADO.NET Entity Data Model Designer, follow these simple steps: Open Visual Studio.NET and create a solution for a new web application project, as seen below, and save it with a name. Switch to the Solution Explorer, right-click, and click on Add New Item, as seen in the following screenshot. Next, select ADO.NET Entity Data Model from the list of templates displayed, as shown in the following screenshot. Name the Entity Data Model PayrollModel and click on Add. Select Generate from database in the Entity Data Model Wizard, as shown in the following screenshot. Note that you can also use the Empty model template to create the Entity Data Model yourself: if you select the Empty Data Model template and click on Next, a screen appears in which you can create the entity types and their relationships manually by dragging items from the toolbox. We will not use this template in our discussion here, so let's get to the next step. Click on Next in the Entity Data Model Wizard window shown earlier.
A modal dialog box will now appear, prompting you to choose your connection, as shown in the following figure. Click on New Connection. Now you will need to specify the connection properties and parameters, as shown in the following figure. We will use a dot to specify the database server name; this implies that we will be using the database server of the localhost, the current system in use. After you specify the necessary user name, password, and server name, you can test your connection using the Test Connection button. When you do so, the message Test connection succeeded is displayed in a message box, as shown in the previous figure. When you click on OK in the Test connection dialog box, the next screen of the wizard appears. Note the entity connection string generated automatically. This connection string will be saved in the connectionStrings section of your application's web.config file. This is how it will look:

<connectionStrings>
  <add name="PayrollEntities"
       connectionString="metadata=res://*;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=Payroll;User ID=sa;Password=joydip1@3;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>

When you click on Next in the previous figure, the following screen appears. Expand the Tables node and specify the database objects that you require in the Entity Data Model to be generated, as shown in the following figure. Click on Finish to generate the Entity Data Model. The Output window displays progress while the Entity Data Model is being generated. Your Entity Data Model has been generated and saved in a file named PayrollModel.edmx. We are done creating our first Entity Data Model using the ADO.NET Entity Data Model Designer tool.
When you open the Payroll Entity Data Model that we just created in the designer view, it appears as shown in the following figure. Note how the entity types in the model are related to one another; these relationships have been generated automatically by the Entity Data Model Designer based on the relationships between the tables of the Payroll database. In the next section, we will learn how we can create an Entity Data Model using the EdmGen.exe command-line tool.

Creating the Payroll Data Model Using the EdmGen Tool

We will now take a look at how to create a data model using the Entity Data Model generation tool called EdmGen. The EdmGen.exe command-line tool can be used to do one or more of the following:

- Generate the .csdl, .msl, and .ssdl files as part of the Entity Data Model
- Generate object classes from a .csdl file
- Validate an Entity Data Model

The EdmGen.exe command-line tool generates the Entity Data Model as a set of three files: .csdl, .msl, and .ssdl. If you have used the ADO.NET Entity Data Model Designer to generate your Entity Data Model, the generated .edmx file will contain the CSDL, MSL, and SSDL sections; you will have a single .edmx file that bundles all of these sections into it. On the other hand, if you use the EdmGen.exe tool to generate the Entity Data Model, you will find three separate files with .csdl, .msl, or .ssdl extensions. Here is a list of the major options of the EdmGen.exe command-line tool:

/help — Displays help on all the possible options of this tool. The short form is /?.
/language:CSharp — Generates code using the C# language.
/language:VB — Generates code using the VB language.
/provider:<string> — Specifies the name of the ADO.NET data provider to use.
/connectionstring:<connection string> — Specifies the connection string to be used to connect to the database.
/namespace:<string> — Specifies the name of the namespace.
/mode:FullGeneration — Generates the CSDL, MSL, and SSDL objects from the database schema.
/mode:EntityClassGeneration — Generates entity classes from a given CSDL file.
/mode:FromSsdlGeneration — Generates MSL, CSDL, and entity classes from a given SSDL file.
/mode:ValidateArtifacts — Validates the CSDL, SSDL, and MSL files.
/mode:ViewGeneration — Generates mapping views from the CSDL, SSDL, and MSL files.

Comparing Cursor and Set Approaches in Processing Relational Data

Packt
23 Oct 2009
5 min read
To give you an idea about cursors, the following DECLARE statement creates a cursor (named zero_bal_crs), which gives you access to the rows of a payment table, specifically rows that have balances of 100 or less.

DECLARE zero_bal_crs CURSOR FOR
  SELECT * FROM payment WHERE payment_bal < 100

You then fetch and process each of the rows sequentially; the process can be, for example, summing up rows by product group and loading the sums into a summary table. You develop the process outside of the cursor, for example, in a stored procedure. (You'll see in the examples that the cursor-based process is procedural.) Instead of processing row-by-row sequentially, you can process relational data set-by-set, without a cursor. For example, to sum all payment rows with balances of 100 or less and load the sum into a table (named payment_100orless_bal in the following SQL statement), you can use the following SQL statement. This single SQL statement completely processes the rows that meet the condition, all at once, as a set.

INSERT INTO payment_100orless_bal
  SELECT SUM(payment_bal)
  FROM payment
  WHERE payment_bal < 100

The following three examples, which are among those I most frequently encountered, further compare cursor-based processing with set processing.

Example 1: Sequential Loop cf. One SQL Statement

Our process in the first example is to summarize sales transactions by product. Listing 1.1 shows the DDLs for creating the sales transaction table and the sales product table that stores the summary.

Listing 1.1: DDLs of the Example 1 Tables

CREATE TABLE sales_transactions (
  product_code INT,
  sales_amount DEC(8,2),
  discount INT);

CREATE TABLE sales_product (
  product_code INT,
  sales_amount DEC(8,2));

Listing 1.2 shows a cursor-based stored procedure that implements the process. In this example, what the cursor (sales_crs) does is simply put all rows in ascending order by their product codes.
As you might have expected, this stored procedure applies a loop with an if-then-else programming construct to process the data row-by-row.

Listing 1.2: Cursor (Procedural) Solution

DELIMITER $$
DROP PROCEDURE IF EXISTS ex1_cursor $$
USE sales $$
CREATE PROCEDURE ex1_cursor()
BEGIN
DECLARE p INT DEFAULT 0;
DECLARE s DECIMAL(8,2) DEFAULT 0;
DECLARE d INT DEFAULT 0;
DECLARE done INT DEFAULT 0;
DECLARE first_row INT DEFAULT 0;
DECLARE px INT DEFAULT 0;
DECLARE sx DECIMAL(8,2) DEFAULT 0;
DECLARE sales_crs CURSOR FOR
  SELECT * FROM sales_transactions ORDER BY product_code;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN sales_crs;
REPEAT
  FETCH sales_crs INTO p, s, d;
  IF first_row = 0 THEN
    SET first_row = 1;
    SET px = p;
    SET sx = s * (100 - d)/100;
  ELSEIF NOT done THEN
    IF p <> px THEN
      INSERT INTO sales_product VALUES(px, sx);
      SET sx = s * (100 - d)/100;
      SET px = p;
    ELSE
      SET sx = sx + s * (100 - d)/100;
    END IF;
  ELSE
    INSERT INTO sales_product VALUES(px, sx);
  END IF;
UNTIL done END REPEAT;
CLOSE sales_crs;
END $$
DELIMITER ;

The stored procedure in Listing 1.3 implements set processing to accomplish the same purpose as the foregoing cursor-based processing. You can see that this stored procedure is simpler than its cursor-based counterpart.

Listing 1.3: Set Operation Solution

DELIMITER //
USE sales //
DROP PROCEDURE IF EXISTS ex1_sql //
CREATE PROCEDURE ex1_sql()
BEGIN
  INSERT INTO sales_product
  SELECT product_code,
         SUM(sales_amount * (100 - discount)/100)
  FROM sales_transactions
  GROUP BY product_code;
END //
DELIMITER ;

Example 2: Nested Cursor cf. Join

Our second example is to consolidate orders by customer. While example 1 has one table, example 2 has two: order and item. The DDLs of the two tables are shown in Listing 2.1. To process two tables, our cursor implementation uses a nested loop (Listing 2.2), which makes it even more complex than example 1.
Listing 2.1: DDLs of the Example 2 Tables

CREATE TABLE sales_order (
  order_number INT,
  order_date DATE,
  customer_number INT);

CREATE TABLE item (
  order_number INT,
  product_code INT,
  quantity INT,
  unit_price DEC(8,2));

Listing 2.2: Nested Cursor

DELIMITER $$
DROP PROCEDURE IF EXISTS ex2_cursor $$
CREATE PROCEDURE ex2_cursor()
BEGIN
DECLARE so INT;
DECLARE sc INT;
DECLARE io INT;
DECLARE iq INT;
DECLARE iu DEC(10,2);
DECLARE done1 VARCHAR(5) DEFAULT 'START';
DECLARE done2 VARCHAR(5) DEFAULT 'START';
DECLARE sales_crs CURSOR FOR
  SELECT customer_number, order_number
  FROM sales_order
  ORDER BY customer_number, order_number;
DECLARE order_crs CURSOR FOR
  SELECT order_number, quantity, unit_price
  FROM item
  WHERE order_number = so
  ORDER BY order_number;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done1 = 'END';
OPEN sales_crs;
WHILE done1 <> 'END' DO
  FETCH sales_crs INTO sc, so;
  IF done1 <> 'END' THEN
    /* inner cursor */
    OPEN order_crs;
    SET done2 = done1;
    WHILE done1 <> 'END' DO
      FETCH order_crs INTO io, iq, iu;
      IF done1 <> 'END' THEN
        INSERT INTO customer_order VALUES (sc, iq * iu);
      END IF;
    END WHILE;
    CLOSE order_crs;
    SET done1 = done2;
  END IF;
END WHILE;
CLOSE sales_crs;
END $$
DELIMITER ;

Listing 2.3 is the set solution. We apply a join in this second example, which reduces it to just a single SQL statement. You can see that this set stored procedure is again simpler than its cursor-based equivalent.
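The cursor-versus-set contrast in Example 1 can also be sketched outside SQL. The following Python sketch is purely illustrative (the in-memory rows stand in for the sales_transactions table; it is not part of the original listings): it puts the row-by-row cursor-style loop next to a one-shot set-style aggregation.

```python
from collections import defaultdict

# Hypothetical in-memory stand-in for the sales_transactions table:
# (product_code, sales_amount, discount) per row.
rows = [(1, 100.0, 10), (1, 50.0, 0), (2, 200.0, 25)]

def summarize_cursor(rows):
    """Cursor style: fetch each row in order and maintain running state."""
    totals = defaultdict(float)
    for product_code, sales_amount, discount in sorted(rows):
        totals[product_code] += sales_amount * (100 - discount) / 100
    return dict(totals)

def summarize_set(rows):
    """Set style: one declarative expression over the whole set,
    mirroring INSERT ... SELECT ... GROUP BY product_code."""
    return {p: sum(s * (100 - d) / 100 for q, s, d in rows if q == p)
            for p in {r[0] for r in rows}}

print(summarize_cursor(rows))  # {1: 140.0, 2: 150.0}
print(summarize_cursor(rows) == summarize_set(rows))  # True
```

Both produce the same totals; the set-style version, like Listing 1.3, states what to compute rather than how to walk the rows.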

Creating a Simple Report using BIRT

Packt
23 Oct 2009
5 min read
Setting up a Simple Project The first thing we want to do when setting up our simple report project is to define what the project is going to be, and what our first simple report will be. Our first report will be a simple dump of the employees who work for Classic Cars. So, the first thing we need to do is set up a project. To do this, we will use the Navigator. Make sure you have the BIRT report perspective open. Use the following steps to create our project: Open up the Navigator by single-clicking on the Navigator tab. Right-click anywhere in the white-space in the Navigator. Select New from the menu, and under New select Project. From the Dialog screen, select Business Intelligence and Reporting Tools from the list of folders; expand that view, and select Report Project. Then click on the Next button. For the Project name, enter Class_Cars_BIRT_Reports. You can either leave the Use Default Location checkbox checked, or uncheck it and enter a location on your local drive to store this report project. Now, we have a very simple report project in which to store our BIRT reports starting with the first that we are about to create. Creating a Simple Report Now that we have our first project open, we will look at creating our first report. As mentioned earlier, we will create a basic listing report that will display all the information in the employees table. In order to do this, we will use the following steps: Right-click on the Class_Cars_BIRT_Reports project under the Navigator, and choose New and Report. Make sure the Class_Cars_BIRT_Reports project is highlighted in the new report Dialog, and enter in the name as EmployeeList.rptdesign. I chose this name as it is somewhat descriptive of the purpose of the report, which is to display a list of employees. As a rule of thumb, always try to name your reports after the expected output, such as QuarterlyEarningReport.rptdesign, weeklyPayStub.rptdesign, or accountsPayable.rptdesign. 
On the next screen is a list of different report templates that we can use. We will select  Simple Listing and then click on the Finish button. Go to the Data Explorer, right-click on Data Sources, and choose New Data Source. From the New Data Source Dialog box, select Classic Models Inc. Sample Database and click on the Next button. On the next screen, it will inform you of the driver information. You can ignore this for now and click Finish. Under the Data Explorer, right-click on Data Sets and choose New Data Set. On the next screen, enter the Data Set Name as dsetEmployees, and make sure that our created Data Source is selected in the list of Data sources. You can click Next when this is finished. On the Query Dialog, enter the following query and click Finish: On the next screen, just click OK. This screen is used to edit information about Data Sets, and we will ignore it for now. Now, from the Outline select Data Sets and expand it to show all of the fields. Drag the EMPLOYEENUMBER element over to the Report Designer, and drop it on the cell with the label of Detail Row. This will be the second row and the first column. You will notice that when you do this, the header row also gets an element placed in it called EMPLOYEENUMBER. This is the Header label. Double- click on this cell and it will become highlighted. We can now edit it. Type in "Employee ID". Drag and drop the LASTNAME, FIRSTNAME, and JOBTITLE to the detail cells to the right of the EMPLOYEENUMBER cell. Now, we want to put the header row in bold. Under the Outline, select the Row element located under Body/Table/Header. This will change the Property Editor. Click on Font, and then click on the Bold button. That's it! We have created our first basic report. To see what this report looks like, under the Report Designer pane, click on the Preview tab. This will allow you to get a good idea of what this report will look like. 
Alternatively, you can actually run the report and get an idea of what it will look like in the BIRT Report Viewer application by going up to File/View Report/View Report in Web Viewer. This option is also available by right-clicking on the report design file under the Navigator and choosing Report, followed by Run. Although it may be a simple report, this exercise demonstrated how a report developer can get around the BIRT environment, and how the different elements of the BIRT perspective work together.

Summary

For a very simple report design, we utilized all of the major areas of the BIRT perspective. We used the Navigator to create a new report project and a new report design, the Data Explorer to create our data connection and Data Set, dragged elements from the Outline to the Report Designer to get the data elements into the right place, and used the Property Editor and Outline cooperatively to bold the text in the table header.

PostgreSQL's Transaction Model

Packt
23 Oct 2009
7 min read
On Databases

Databases come in many forms. The simplest definition of a database is any system of storing, organizing, and retrieving data. With this definition, things like memory, hard drives, file systems, files on those file systems (stored in plain text, tab-delimited, XML, JSON, or even BDB formats), and even applications like MySQL, PostgreSQL, and Oracle are considered databases.

Databases allow users to:

Store Data
Organize Data
Retrieve Data

It is important to keep a broad perspective on what data and databases really are so that you can always choose the best solution for your particular problem. The SQL databases (MySQL, PostgreSQL, Oracle, and others) are remarkable because of the flexibility and performance they provide. In my work, I look to them first when developing an application, with an eye towards getting the data model right before optimization. Once the application is solid, and once I fully understand what parts of the data system are too slow or fast enough, then I can start building my own database on top of the file system or other existing technologies that will give me the kind of performance I need.

PostgreSQL: Free, BSD-licensed popular database. http://postgresql.org/
MySQL: Free, GPL-licensed popular database. http://mysql.org
Oracle: Commercial industrial database. http://oracle.com
SQL Server: Microsoft's commercial database. http://www.microsoft.com/SQL/default.mspx

Among the SQL databases, which one is best? There are many criteria I use to evaluate SQL databases, and the one I pay attention to most is how they comply (if at all) with the ACID model. And given the technical merits of the various SQL databases, I consistently choose PostgreSQL above all other SQL databases when given a choice. Allow me to explain why.

The ACID Model

ACID is an acronym, standing for the four words Atomicity, Consistency, Isolation, and Durability. These are fancy words for some very basic and essential concepts.
Atomicity means that you either do all of the changes you want, or none of them, without leaving the database in some weird in-between state. When you take into account catastrophes like power failures or corruption, atomicity isn't as simple as it first seems.

Consistency means that any state of the database will be internally consistent with the rules that constrain the data. That is, if you have a table with a primary key, then that table will not contain any violations of the primary key constraints after any transaction.

Isolation means that you can be modifying many different parts of the database at the same time without affecting each other. (As a higher feature, there is Serialization, which requires that transactions occur one after the other, or at least that their results appear as if they had.)

Durability means that once a transaction completes, it is never lost, ever.

Atomicity: All or nothing
Consistency: Rules kept
Isolation: No partials seen
Durability: Doesn't disappear

ACID compliance isn't rocket science, but it isn't trivial either. These requirements form a minimum standard absolutely necessary to provide a database for a reasonable application. That is, if you can't guarantee these things, then the users of your application are going to be frustrated since they assume, naturally, that the ACID model is followed. And if the users of the application get frustrated, then the developers of the application will get frustrated as they try to comply with the users' expectations. A lot of frustration can be avoided if the database simply complies with the principles of the ACID model. If the database gets it right, then the rest of the application will have no problem getting it right as well. Our users will be happy since their expectations of ACID compliance will be met. Remember: Users expect ACID!

What Violating the ACID Model Looks Like

To consider the importance of the ACID model, let's examine, briefly, what happens when the model is violated.
When Atomicity isn't adhered to, users will see their data partially committed. For instance, they might find their online profile only partially modified, or their bank transfer partially transferred. This is, of course, devastating to the unwary user.

When Consistency is violated, the rules that the data should follow aren't adhered to. Perhaps the number of friends shown doesn't match the friends they actually have in a social networking application. Or perhaps they see that their bank balance doesn't match what the numbers add up to. Or worse, perhaps your order system is counting orders that don't even exist and not counting orders that do.

When Isolation isn't guaranteed, they will either have to use a system where only one person can change something at a time, locking out all others, or they will see inconsistencies throughout the world of data, inconsistencies resulting from transactions that are in progress elsewhere. This will make the data unreliable, just like violating Atomicity or Consistency. A bank user, for instance, will believe their transfer of funds was successful when in reality their money was simultaneously being withdrawn by another transaction.

When Durability is lost, users will never know if their transaction really went through, or whether it will mysteriously disappear down the road, with all the trouble that entails.

I am sure we have all had experiences dealing with data systems that didn't follow the ACID model. I remember the days when you had to save your files frequently, and even then you still weren't ensured that all of your data would be properly saved. I also recall applications that would make partial changes, or incomplete changes, and expose these inconsistent states to the user. In today's world, writing applications with faults like the above is simply inexcusable. There are too many tools out there that are readily available that make writing ACID-compliant systems easy.
One of those tools, probably the most popular of all, is the SQL database.

Satisfying ACID with Transactions

The principal way that databases comply with ACID requirements is through the concept of transactions. Ideally, each transaction would occur in an instant, updating the database according to the state of the database at that moment. In reality, this isn't possible. It takes time to accumulate the data and apply the changes.

Typical transaction SQL commands:

BEGIN: Start a new transaction
COMMIT: Commit the transaction
ROLLBACK: Roll back the transaction in progress

Since multiple sessions can each be creating and applying a transaction simultaneously, special precautions have to be taken to ensure that the data that each transaction "sees" is consistent, and that the effects of each transaction appear all together or not at all. Special care is also taken to ensure that when a transaction is committed, the database will be put in a state where catastrophic events will not leave the transaction partially committed. Contrary to popular belief, there are a variety of ways that databases support transactions. It is well worth the time to read and understand PostgreSQL's two levels of transaction isolation and the four possible isolation levels in Section 12.2 of the PostgreSQL documentation. Note that some of the inferior levels of transaction isolation violate some extreme cases of ACID compliance for the sake of performance. These edge cases can be properly handled with appropriate use of row-locking techniques. Row-locking is an issue beyond this article. Keep in mind that the levels of transaction isolation are only what appear to users of the database. Inside the database, there is a remarkable variety of methods for actually implementing transactions. Consider that while you are in a transaction, making changes to the database, every other transaction has to see one version of the database while you see another.
In effect, you have to have copies of some of the data lying around somewhere. Queries to that data have to know which version of the data to retrieve: the copy, the original, or the modified version (and which modified version?). Changes to the data have to go somewhere: the original, a copy, or some modified version (again, which?). Answering these questions leads to the various implementations of transactions in ACID-compliant databases. For the purposes of this article, I will examine only two: Oracle's and PostgreSQL's implementations. If you are only familiar with Oracle, then hopefully you will learn something new and fascinating as you investigate PostgreSQL's method.
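The all-or-nothing behavior that BEGIN, COMMIT, and ROLLBACK provide is easy to see in miniature. The sketch below uses Python's bundled sqlite3 module (not PostgreSQL, and the account table is invented for the demo), but the transaction semantics it shows are the same: a failure mid-transfer rolls the whole transaction back, so no half-finished state is ever visible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# A transfer that fails halfway: the BEGIN ... ROLLBACK pair undoes the
# debit, so the database never exposes the half-finished state (Atomicity).
try:
    with conn:  # opens a transaction; rolls back if the block raises
        conn.execute("UPDATE account SET balance = balance - 60 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated crash before crediting bob")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'alice': 100, 'bob': 0}
```

Alice's debit disappears along with the failed transaction, which is exactly what a user of an ACID-compliant database expects.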
The ASP.NET Repeater Control

Packt
23 Oct 2009
6 min read
The Repeater control in ASP.NET is a data-bound container control that can be used to automate the display of a collection of repeated list items. These items can be bound to either of the following data sources:

Database Table
XML File

In a Repeater control, the data is rendered as DataItems that are defined using one or more templates. You can even use HTML tags such as <li>, <ul>, or <div> if required. Similar to the DataGrid, DataList, or GridView controls, the Repeater control has a DataSource property that is used to set the DataSource of this control to any ICollection, IEnumerable, or IListSource instance. Once this is set, the data from one of these types of data sources can be easily bound to the Repeater control using its DataBind() method. However, the Repeater control by itself does not support paging or editing of data. The Repeater control is lightweight and does not contain as many features as the DataGrid. However, it enables you to place HTML code in its templates. It is great in situations where you need to display the data quickly and format the data to be displayed easily.

Using the Repeater Control

The Repeater control is a data-bound control that uses templates to display data. It does not have any built-in support for paging, editing, or sorting of the data that is rendered through one or more of its templates. The Repeater control works by looping through the records in your data source and then repeating the rendering of one of its templates, called the ItemTemplate, the one that contains the records that the control needs to render. To use this control, drag and drop the control from the toolbox onto the design view of the web form. Refer to the following screenshot: You can also drag and drop the Repeater control from the toolbox onto the source view directly. This is shown in the following screenshot: For customizing the behavior of this control, you have to use the built-in templates that this control comes with.
These templates are actually blocks of HTML code. The Repeater control contains the following five templates:

HeaderTemplate
ItemTemplate
AlternatingItemTemplate
SeparatorTemplate
FooterTemplate

The following screenshot shows how a Repeater control looks when populated with data. Note that the templates (Header, Item, Footer, Alternate, and Separator) have all been used. The following code snippet is an example of the order in which the templates of the Repeater control are used.

<asp:Repeater id="repEmployee" runat="server">
<HeaderTemplate>...</HeaderTemplate>
<ItemTemplate></ItemTemplate>
<FooterTemplate>...</FooterTemplate>
</asp:Repeater>

When the Repeater control is bound to a data source, the data from the data source is displayed using the ItemTemplate element and any other optional elements, if used. Note that the contents of the HeaderTemplate and the FooterTemplate are rendered once for each Repeater control. The contents of the ItemTemplate are rendered for each record in the control. You can also use the additional AlternatingItemTemplate element after the ItemTemplate element for specifying the appearance of each alternate record. You can also use the SeparatorTemplate element between each record for specifying the separators for the records.

Displaying Data Using the Repeater Control

This section discusses how we can display data using the Repeater control. As discussed earlier, the Repeater control uses templates for formatting the data that it displays. The following code snippet displays the code in an .aspx file that contains a Repeater control. Note that we would be making use of templates and that the data would be bound to the control from the code-behind file using the DataManager class. The Repeater control is populated with data in the Page_Load event by reusing the DataManager(). Note how the SeparatorTemplate and the AlternatingItemTemplate have been used in the previous code example.
Further, the DataBinder.Eval() method has been used to display the values of the corresponding fields from the data container (in our case, the DataSet instance) in the Repeater control. The FooterTemplate uses the Total Records variable and substitutes its value to display the total number of records displayed by the control. The following is the output on execution.

The Header and the Footer templates of the Repeater control are still rendered even if the data source does not contain any data. If you want to suppress their display, you can use the Visible property of the Repeater control to suppress the display of these templates with some simple logic. Here is how you specify the Visible property of this control in your .aspx file to achieve this:

Visible="<%# Repeater1.Items.Count > 0 %>"

When you specify the Visible property as shown here, the Repeater is made visible only if there are records in your data source.

Displaying Checkboxes in a Repeater Control

Let us now understand how we can display checkboxes in a Repeater control and retrieve the number of checked items. We will use a Button control and a Label control in our page. When you click on the Button control, the number of checked items in the Repeater control will be displayed in the Label control. The output on execution is similar to what is shown in the following screenshot: Here is the code that we will use in the .aspx file to display checkboxes in a Repeater control. The data is bound to the Repeater control in the Page_Load event as follows: Note that we have used Page.IsPostBack to check whether the page has posted back in the Page_Load method. If you don't bind data by checking whether the page has posted back, the Repeater control will be rebound to data once again after a postback, and all the checkboxes in your web page will be reset to the unchecked state.
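The rendering loop the control performs can be pictured in a short language-neutral sketch. The following is plain Python, not ASP.NET, and render_repeater is a hypothetical helper invented here purely to illustrate the header-once, item-per-record, footer-once pattern and the Items.Count visibility check:

```python
def render_repeater(records, header, item, footer):
    # Mirror of Visible="<%# Repeater1.Items.Count > 0 %>": emit nothing
    # when the data source is empty, so header and footer are suppressed too.
    if not records:
        return ""
    parts = [header]                                   # HeaderTemplate, once
    parts += [item.format(**r) for r in records]       # ItemTemplate, per record
    parts.append(footer.format(total=len(records)))    # FooterTemplate, once
    return "\n".join(parts)

rows = [{"id": 1, "name": "Smith"}, {"id": 2, "name": "Jones"}]
print(render_repeater(rows, "Employee ID | Name", "{id} | {name}",
                      "Total Records: {total}"))
print(render_repeater([], "h", "{id}", "f") == "")  # True
```

The first call prints a header line, one formatted line per record, and a footer carrying the record count, which is the shape of the Repeater's output.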
The source code for the click event of the Button control that we have used is as follows: When you execute the application, the Repeater control is displayed with records from the employee table. Now check one or more of the checkboxes and then click on the Button control just beneath the Repeater control. Note that the number of checked records is displayed in the Label control.

Summary

This article discussed the Repeater control and how we can use it in our ASP.NET applications. Though this control does not support all the functionality of other data controls, like DataGrid and GridView, it is still a good choice if you want faster rendering of data, as it is lightweight and very flexible.

Creating a View with MySQL Query Browser

Packt
23 Oct 2009
2 min read
Please refer to an earlier article by the author to learn how to build queries visually.

Creating a View from an Existing Query

To create a view from a query, you must have executed the query successfully. To be more precise, the view is created from the latest successfully executed query, not necessarily from the query currently in the Query Area. To further clarify, the following three examples are cases where the view is not created from the current query:

Your current query fails, and immediately after, you create a view from the query. The view created is not from the failed query. If the failed query is the first query in your MySQL Query Browser session, you can't create any view.
You have just moved forward or backward through the queries in the Query Area without executing the current one, and so your current query is not the latest successfully executed.
You open a saved query that you have never executed successfully in your active Resultset.

Additionally, if you're changing your Resultset, the view created is from the latest successfully executed query that uses the currently active Resultset to display its output. To make sure your view is from the query you want: select the query, confirm it as written in the Query Area, execute the query, and then immediately create its view. You create a view from an existing query by selecting Query | Create View from Select from the Menu bar. Type in the name you want to give to the view, and then click Create View. MySQL Query Browser creates the view. When successfully created, you can see the view in the Schemata. You can modify a view by editing it: right-click the view and select Edit View. The CREATE view statement opens in its Script tab. When you finish editing, you can execute the modified view. If successful, the existing view is replaced with the modified one.
The generated script contains a DROP VIEW statement followed by the CREATE VIEW statement, so executing it as-is replaces the view you're editing with the modified one. If you want to keep the view you're editing and save your changes as a new view instead, change the name of the view and remove the DROP VIEW statement before you execute it.
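Behind the tool, a view is just a named, stored SELECT. A minimal sketch using Python's bundled sqlite3 (syntax close to, though not identical with, MySQL's; the payment table and low_bal view are invented for the demo) shows both creating a view and the drop-and-recreate cycle that editing one amounts to:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payment (payment_id INTEGER, payment_bal REAL)")
conn.executemany("INSERT INTO payment VALUES (?, ?)",
                 [(1, 50.0), (2, 150.0), (3, 99.9)])

# A view is a named, stored SELECT; querying it re-runs the SELECT.
conn.execute("CREATE VIEW low_bal AS "
             "SELECT * FROM payment WHERE payment_bal < 100")
print(conn.execute("SELECT COUNT(*) FROM low_bal").fetchone()[0])  # 2

# "Editing" a view amounts to DROP VIEW plus a fresh CREATE VIEW,
# which is exactly what the generated script does for you.
conn.execute("DROP VIEW low_bal")
conn.execute("CREATE VIEW low_bal AS "
             "SELECT * FROM payment WHERE payment_bal < 200")
print(conn.execute("SELECT COUNT(*) FROM low_bal").fetchone()[0])  # 3
```

The second count changes because the view stores the SELECT, not its results.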

Visual MySQL Database Design in MySQL Workbench

Packt
21 Oct 2009
3 min read
MySQL Workbench is a visual database design tool recently released by MySQL AB. The tool is specifically for designing MySQL databases. What you build in MySQL Workbench is called a physical data model. A physical data model is a data model for a specific RDBMS product; the model in this article will have some MySQL-specific features. We can generate (forward-engineer) the database objects from the physical model, which in addition to tables and their columns can also include other objects such as views. MySQL Workbench has many functions and features; this article by Djoni Darmawikarta shows some of them by way of an example. We'll build a physical data model for an order system where an order can be a sales order or a purchase order, and then forward-engineer our model into a MySQL database. The physical model of our example in EER diagram form will look like the following MySQL Workbench screenshot.

Creating the ORDER Schema

Let's first create a schema where we want to store our order physical model. Click the + button (circled in red). Change the new schema's default name to ORDER. Notice that when you're typing in the schema name, its tab name on the Physical Schemata also changes accordingly, a nice feature. The order schema is added to the Catalog (I circled the order schema and its objects in red). Close the schema window. Confirm to rename the schema when prompted.

Creating the Order Tables

We'll now create three tables that model the order: the ORDER table and its two subtype tables, SALES_ORDER and PURCHASE_ORDER, in the ORDER schema. First of all, make sure you select the ORDER schema tab, so that the tables we create will be in this schema. We'll create our tables in an EER diagram (EER = Enhanced Entity Relationship), so double-click the Add Diagram button. Select (click) the Table icon, and then move your mouse onto the EER Diagram canvas and click on the location where you want to place the first table. Repeat for the other two tables.
You can move the tables around by dragging and dropping. Next, we'll work on table1, using the Workbench's table editor. We start the table editor by right-clicking table1 and selecting Edit Table. Rename the table by typing in ORDER over table1. We'll next add its columns, so select the Columns tab. Replace the idORDER column name with ORDER_NO. Select INT as the data type from the drop-down list. We'd like this ORDER_NO column to be valued incrementally by the MySQL database, so we specify it as an AI column (Auto Increment). AI is a specific feature of the MySQL database. You can also specify other physical attributes of the table, such as its Collation, as well as other advanced options, such as its trigger and partitioning (the Trigger and Partitioning tabs). Notice that on the diagram our table1 has changed to ORDER, and it has its first column, ORDER_NO. In the Catalog you can also see the three tables. The black dots on the right of the tables indicate that they've been included in a diagram.
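The supertype/subtype shape we are drawing forward-engineers to DDL along the following lines. This is a hedged sketch run through Python's sqlite3 rather than MySQL (MySQL's generated script would use AUTO_INCREMENT and engine options instead, and the subtype columns CUSTOMER_NUMBER and SUPPLIER_NUMBER are assumptions for illustration, not columns from the article's model):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Supertype: one row per order; INTEGER PRIMARY KEY plays the role
-- of the AI (Auto Increment) setting we gave ORDER_NO.
CREATE TABLE "ORDER" (
  ORDER_NO   INTEGER PRIMARY KEY,
  ORDER_DATE TEXT
);
-- Subtypes share the supertype's key, one-to-one.
CREATE TABLE SALES_ORDER (
  ORDER_NO        INTEGER PRIMARY KEY REFERENCES "ORDER"(ORDER_NO),
  CUSTOMER_NUMBER INTEGER
);
CREATE TABLE PURCHASE_ORDER (
  ORDER_NO        INTEGER PRIMARY KEY REFERENCES "ORDER"(ORDER_NO),
  SUPPLIER_NUMBER INTEGER
);
""")
cur = conn.execute('INSERT INTO "ORDER" (ORDER_DATE) VALUES (?)',
                   ("2009-10-21",))
order_no = cur.lastrowid  # the database-assigned ORDER_NO
print(order_no)  # 1
```

Note that ORDER is a reserved word, which is why it must be quoted; the auto-assigned key is what the AI checkbox buys you in the Workbench model.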
Packt
21 Oct 2009
6 min read

Setting up the most Popular Journal Articles in your Personalized Community in Liferay Portal

Personal community is a dynamic feature of Liferay portal. By default, the personal community is a portal-wide setting that affects all of the users. It would be nice to have more features in the personal community, such as showing the most popular journal articles. This article by Jonas Yuan addresses how to set up the most popular journal articles in your personalized community and how to view the counter for other assets.

In a web site, we will have a lot of journal articles (that is, web content) for a given article type. For example, for the article type Article Content, we will have articles talking about a product family. We may want to know how many times the end users read each article. Meanwhile, it would be nice if we could show the most popular articles (for example, the TOP 10 articles) for this given article type.

As shown in the following screenshot, a journal article My EDI Product I is shown via a portlet Ext Web Content Display. Ratings and comments on this article are also exhibited. At the same time, the medium-size image, polls, and related content of this article are listed too. A view counter for this article is displayed under the ratings. Moreover, the most popular articles are exhibited with article title and number of views under the related content. All these articles belong to the article type article-content. That is, the article in the current portlet Ext Web Content Display has the most popular articles only for the article type article-content.

Of course, you can customize the portlet Web Content Display directly by changing its JSP files. For demo purposes, we will implement the view counter in the portlet Ext Web Content Display. Meanwhile, we will implement the most popular articles via VM services and article templates. In addition, we will analyze the view counter for other assets such as Image Gallery images, Document Library documents, Wiki articles, Blog entries, Message Boards threads, and so on.
Adding a view counter in the Web Content Display portlet

First of all, let's add a view counter in the Ext Web Content Display portlet. As the view counter function for assets (including journal articles) is provided in the model TagsAssetModel of the com.liferay.portlet.tags.model package in the /portal/portal-service/src folder, we can use this feature in the portlet directly. To do so, use the following steps:

1. Create a folder journal_content in the folder /ext/ext-web/docroot/html/portlet/.
2. Copy the JSP file view.jsp from the folder /portal/portal-web/docroot/html/portlet/ to the folder /ext/ext-web/docroot/html/portlet/journal_content and open it.
3. Add the line <%@ page import="com.liferay.portlet.tags.model.TagsAsset" %> after the line <%@ include file="/html/portlet/journal_content/init.jsp" %>, and check the following lines:

    JournalArticleDisplay articleDisplay = (JournalArticleDisplay)
        request.getAttribute(WebKeys.JOURNAL_ARTICLE_DISPLAY);

    if (articleDisplay != null) {
        TagsAssetLocalServiceUtil.incrementViewCounter(
            JournalArticle.class.getName(),
            articleDisplay.getResourcePrimKey());
    }

4. Then add the following lines after the line <c:if test="<%=enableComments %>"> and save it:

    <span class="view-count">
    <%
    TagsAsset asset = TagsAssetLocalServiceUtil.getAsset(
        JournalArticle.class.getName(),
        articleDisplay.getResourcePrimKey());
    %>
    <c:choose>
        <c:when test="<%= asset.getViewCount() == 1 %>">
            <%= asset.getViewCount() %> <liferay-ui:message key="view" />,
        </c:when>
        <c:when test="<%= asset.getViewCount() > 1 %>">
            <%= asset.getViewCount() %> <liferay-ui:message key="views" />,
        </c:when>
    </c:choose>
    </span>

The code above shows a way to increase the view counter via the TagsAssetLocalServiceUtil.incrementViewCounter method. This method takes two parameters, className and classPK, as inputs. For the current journal article, the two parameters are JournalArticle.class.getName() and articleDisplay.getResourcePrimKey().
The code then displays the view count through the TagsAssetLocalServiceUtil.getAsset method. Similarly, this method also takes the two parameters className and classPK as inputs. This approach is useful for other assets as well, as the className parameter could refer to Image Gallery, Document Library, Wiki, Blogs, Message Boards, Bookmark, and so on.

Setting up VM service

We can set up the VM service to exhibit the most popular articles by adding a getMostPopularArticles method to the custom velocity tool ExtVelocityToolUtil. To do so, first add the following method in the ExtVelocityToolService interface:

    public List<TagsAsset> getMostPopularArticles(String companyId,
        String groupId, String type, int limit);

And then add an implementation of the getMostPopularArticles method in the ExtVelocityToolServiceImpl class as follows:

    public List<TagsAsset> getMostPopularArticles(String companyId,
            String groupId, String type, int limit) {

        List<TagsAsset> results =
            Collections.synchronizedList(new ArrayList<TagsAsset>());

        DynamicQuery dq0 = DynamicQueryFactoryUtil.forClass(
                JournalArticle.class, "journalarticle")
            .setProjection(ProjectionFactoryUtil.property("resourcePrimKey"))
            .add(PropertyFactoryUtil.forName("journalarticle.companyId")
                .eqProperty("tagsasset.companyId"))
            .add(PropertyFactoryUtil.forName("journalarticle.groupId")
                .eqProperty("tagsasset.groupId"))
            .add(PropertyFactoryUtil.forName("journalarticle.type")
                .eq(type));

        DynamicQuery query = DynamicQueryFactoryUtil.forClass(
                TagsAsset.class, "tagsasset")
            .add(PropertyFactoryUtil.forName("tagsasset.classPK").in(dq0))
            .addOrder(OrderFactoryUtil.desc("tagsasset.viewCount"));

        try {
            List<Object> assets =
                TagsAssetLocalServiceUtil.dynamicQuery(query);

            int index = 0;

            for (Object obj : assets) {
                TagsAsset asset = (TagsAsset) obj;

                results.add(asset);

                index++;

                if (index == limit) {
                    break;
                }
            }
        }
        catch (Exception e) {
            return results;
        }

        return results;
    }

The preceding code shows a way to get the most popular articles by company ID, group ID, and article type, limiting the number of articles to be returned. The DynamicQuery API allows us to leverage the existing mapping definitions through access to the Hibernate session. For example, DynamicQuery dq0 selects the journal articles by companyId, groupId, and type; DynamicQuery query selects tagsassets whose classPK exists in DynamicQuery dq0; and the tagsassets are ordered by viewCount as well.

Finally, add the following method to register the above method in ExtVelocityToolUtil:

    public List<TagsAsset> getMostPopularArticles(String companyId,
            String groupId, String type, int limit) {
        return _extVelocityToolService.getMostPopularArticles(
            companyId, groupId, type, limit);
    }

The code above shows a generic approach to get the TOP 10 articles for any article type. Of course, you can extend this approach to find the TOP 10 assets of other kinds. This can include Image Gallery images, Document Library documents, Wiki articles, Blog entries, Message Boards threads, Bookmark entries, slideshows, videos, games, video queues, video lists, playlists, and so on. You may practice this TOP 10 assets feature yourself.

Building article template for the most popular journal articles

We have added a view counter on journal articles, and we have already built the VM service for the most popular articles too. Now let's build an article template for them.

Setting up the default article type

As mentioned earlier, there is a set of journal article types, for example, announcements, blogs, general, news, press-release, updates, article-tout, article-content, and so on. In a real case, only some of these types will require a view counter, for example, article-content. Let's configure the default article type for the most popular articles.
We can add the following line at the end of portal-ext.properties:

    ext.most_popular_articles.article_type=article-content

The line above specifies that the default article type for the most popular articles is article-content.
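To make the selection logic concrete, here is a plain-Java sketch of what getMostPopularArticles computes: restrict to one article type, order by view count descending, and keep at most limit entries. This is only an illustration, independent of Liferay's DynamicQuery API; the ArticleAsset class is a hypothetical stand-in for TagsAsset joined with its journal article:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class TopArticles {

    // Hypothetical stand-in for Liferay's TagsAsset joined with its article.
    public static class ArticleAsset {
        public final String title;
        public final String type;
        public final long viewCount;

        public ArticleAsset(String title, String type, long viewCount) {
            this.title = title;
            this.type = type;
            this.viewCount = viewCount;
        }
    }

    // Same selection logic as getMostPopularArticles: filter by article type,
    // order by view count descending, keep at most 'limit' entries.
    public static List<ArticleAsset> mostPopular(List<ArticleAsset> assets,
                                                 String type, int limit) {
        return assets.stream()
            .filter(a -> a.type.equals(type))
            .sorted(Comparator.comparingLong(
                (ArticleAsset a) -> a.viewCount).reversed())
            .limit(limit)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ArticleAsset> assets = new ArrayList<ArticleAsset>();
        assets.add(new ArticleAsset("My EDI Product I", "article-content", 120));
        assets.add(new ArticleAsset("Press note", "press-release", 300));
        assets.add(new ArticleAsset("My EDI Product II", "article-content", 80));

        for (ArticleAsset a : mostPopular(assets, "article-content", 10)) {
            System.out.println(a.title + ": " + a.viewCount + " views");
        }
    }
}
```

Note that the press-release entry is excluded even though it has the highest view count, because the type filter is applied before the ordering, just as dq0 restricts the articles before the tagsasset query orders them.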

Packt
20 Oct 2009
5 min read

Customizing Page Management in Liferay Portal 5.2 Systems Development

Customizing page management with more features

The Ext Manage Pages portlet not only clones the out-of-the-box Manage Pages portlet, but also extends the model and service to support customized data, for example, Keywords. We can make these Keywords localized too.

Adding localized feature

Liferay portal is designed to handle as many languages as you want to support. By default, it supports up to 22 languages. When a page is loading, the portal detects the language, pulls up the corresponding language file, and displays the text in the correct language.

We want the Keywords to be localized too. For example, the default language is English (United States) and the localized language is Deutsch (Deutschland). Thus, you have the ability to enter not only the Name and HTML Title in German, but also the Keywords in German. As shown in the following screenshot, when you change the language of the page to German using the language portlet, you will see the entire web site change to German, including the portlet title and input fields. For example, the title of the portlet now has the value Ext Seiteneinstellungen, and Keywords now becomes Schlüsselwörter.

How do we implement this feature? In other words, how do we customize the language display in page management? Let's add the localized feature for the Ext Manage Pages portlet.

Extending model for locale

First of all, we need to extend the model and implement it in order to support the localized feature. For the ExtLayout model, let's add the locale methods first. Locate the ExtLayout.java file in the com.ext.portlet.layout.model package in the /ext/ext-service/src folder, and open it.
Add the following lines before the line } in ExtLayout.java and save it:

    public String getKeywords(Locale locale);
    public String getKeywords(String localeLanguageId);
    public String getKeywords(Locale locale, boolean useDefault);
    public String getKeywords(String localeLanguageId, boolean useDefault);
    public void setKeywords(String keywords, Locale locale);

As shown in the code above, this adds getter and setter methods for the Keywords field with locale support. Now let's add the implementation for the ExtLayout model:

1. Locate the ExtLayoutImpl.java file in the com.ext.portlet.layout.model.impl package in the /ext/ext-impl/src folder and open it.
2. Add the following lines before the last } in the ExtLayoutImpl.java file and save it:

    public String getKeywords(Locale locale) {
        String localeLanguageId = LocaleUtil.toLanguageId(locale);

        return getKeywords(localeLanguageId);
    }

    public String getKeywords(String localeLanguageId) {
        return LocalizationUtil.getLocalization(
            getKeywords(), localeLanguageId);
    }

    public String getKeywords(Locale locale, boolean useDefault) {
        String localeLanguageId = LocaleUtil.toLanguageId(locale);

        return getKeywords(localeLanguageId, useDefault);
    }

    public String getKeywords(String localeLanguageId, boolean useDefault) {
        return LocalizationUtil.getLocalization(
            getKeywords(), localeLanguageId, useDefault);
    }

    public void setKeywords(String keywords, Locale locale) {
        String localeLanguageId = LocaleUtil.toLanguageId(locale);

        if (Validator.isNotNull(keywords)) {
            setKeywords(LocalizationUtil.updateLocalization(
                getKeywords(), "keywords", keywords, localeLanguageId));
        }
        else {
            setKeywords(LocalizationUtil.removeLocalization(
                getKeywords(), "keywords", localeLanguageId));
        }
    }

As shown in the code above, this adds the implementation for the get and set methods of the ExtLayout model.

Customizing language properties

Language files have locale-specific definitions.
By default, Language.properties (at /portal/portal-impl/src/content) contains English phrase variations further defined for the United States, while Language_de.properties (at /portal/portal-impl/src/content) contains German phrase variations further defined for Germany. In Ext, Language-ext.properties (available at /ext/ext-impl/src/content) contains English phrase variations further defined for the United States, while Language-ext_de.properties (which should be available at /ext/ext-impl/src/content) contains German phrase variations further defined for Germany.

First, let's add a message in Language-ext.properties:

1. Locate the Language-ext.properties file in the /ext/ext-impl/src/content folder and open it.
2. Add the following line after the line view-reports=View Reports for Books and save it:

    keywords=Keywords

This specifies the keywords message key with the value Keywords in English.

Then we need to add the German language feature in Language-ext_de.properties as follows:

1. Create a language file Language-ext_de.properties in the /ext/ext-impl/src/content folder and open it.
2. Add the following lines at the beginning and save it:

    ## Portlet names
    javax.portlet.title.EXT_1=Berichte
    javax.portlet.title.jsp_portlet=JSP Portlet
    javax.portlet.title.book_reports=Berichte für das Buch
    javax.portlet.title.extLayoutManagement=Ext Seiteneinstellungen
    javax.portlet.title.extCommunities=Ext Communities

    ## Messages
    view-reports=Ansicht-Berichte für Bücher
    keywords=Schlüsselwörter

    ## Category titles
    category.book=Buch

    ## Model resources
    model.resource.com.ext.portlet.reports.model.ReportsEntry=Buch

    ## Action names
    action.ADD_BOOK=Fügen Sie Buch hinzu

As shown in the code above, this specifies the same keys as Language-ext.properties, but all the key values are given in German instead of English. For example, the keywords message has the value Schlüsselwörter in German.
In addition, you can set German as the default language and Germany as the default country if required. Here are the simple steps to do so:

1. Locate the system-ext.properties file in the /ext/ext-impl/src folder and open it.
2. Add the following lines at the end of system-ext.properties and save it:

    user.country=DE
    user.language=de

The code above sets the default locale: the language German (Deutsch) and the country Germany (Deutschland).

In general, there are many language files, for example Language-ext.properties and Language-ext_de.properties, and some language files overwrite others at runtime loading. For example, Language-ext_de.properties will overwrite Language-ext.properties when the language is set to German. These are the three simple rules which indicate the priorities of these language files:

1. The ext versions take precedence over the non-ext versions.
2. The language-specific versions, for example _de, take precedence over the non-language-specific versions.
3. The location-specific versions, such as -ext_de, take precedence over the non-location-specific versions.

For instance, the following is a ranking from highest to lowest priority for the German language:

1. Language-ext_de.properties
2. Language_de.properties
3. Language-ext.properties
4. Language.properties
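The precedence rules above can be sketched in plain Java: resolve a message key by consulting the candidate bundles from most specific to least specific, taking the first hit. This is only an illustration of the lookup order, not Liferay's actual implementation; the maps below are hypothetical in-memory stand-ins for the four property files:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MessageResolver {

    // Walk the bundles from most specific (Language-ext_de) to least
    // specific (Language) and return the first value found for the key.
    public static String resolve(String key, List<Map<String, String>> bundles) {
        for (Map<String, String> bundle : bundles) {
            if (bundle.containsKey(key)) {
                return bundle.get(key);
            }
        }
        return key; // fall back to the key itself when nothing matches
    }

    public static void main(String[] args) {
        Map<String, String> language = new HashMap<>();
        language.put("keywords", "Keywords");
        language.put("save", "Save");

        Map<String, String> languageExt = new HashMap<>();
        languageExt.put("keywords", "Keywords");

        Map<String, String> languageDe = new HashMap<>();
        languageDe.put("save", "Speichern");

        Map<String, String> languageExtDe = new HashMap<>();
        languageExtDe.put("keywords", "Schlüsselwörter");

        // Ranking for the German locale, highest priority first.
        List<Map<String, String>> german =
            Arrays.asList(languageExtDe, languageDe, languageExt, language);

        System.out.println(resolve("keywords", german)); // Schlüsselwörter
        System.out.println(resolve("save", german));     // Speichern
    }
}
```

For the keywords key the ext_de bundle wins; for save, which the ext bundles do not define, the plain German bundle supplies Speichern.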