Oracle's RDBMS SQL Command Dump Block

by April C. Sims | April 2010 | Oracle

Dump block gives you a view of the data at the block level, the smallest unit of storage for the data. Working through this article by April Sims, author of Oracle Database 11g – Underground Advice for Database Administrators, allows you to apply the concepts of how data moves, block by block, through the physical structures of the database, really solidifying how transaction integrity is maintained. You would only use dump block on a production system together with an Oracle Support Engineer, usually during advanced recovery situations. Dump block is often used to determine the level and extent of corruption and what data can be recovered. It can also be used to resolve some tuning and performance issues.

Do not do this in a production database.

Before continuing with this article, you should read the Oracle Database Concepts manual, 11g Release 2 (11.2); it is the book every DBA should start with.

Our examination of data blocks starts in Section 12-6 of the Concepts Manual.

Data block format: "Every Oracle data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data." A block is the smallest unit of logical storage that the Relational Database Management System (RDBMS) can manipulate. Block size is determined by the database parameter DB_BLOCK_SIZE. The logical storage of data blocks, extents, segments, and tablespaces (from smallest to largest) maps to the data files, which are stored in operating system blocks.
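You can check the block size of your own database with a quick query against the v$parameter view (8192 bytes is a common default):

SYS@NEWDB>select value from v$parameter where name = 'db_block_size';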

An undo block stores the undo entry for a transaction: in effect, the SQL command needed to reverse the original transaction statement. This undo is needed for read consistency for all read-only queries until you commit or roll back that transaction.

Read consistency within a changed block (transaction) is maintained for any of the following commands: insert, update, delete, merge, select for update, or lock table. Any of these changes are tracked until the command is issued to either commit or roll back the transaction. This consistency keeps the view of the data the same for each user, whether they are just running queries or actually changing other data.

A point in time, known as the System Change Number (SCN), identifies each transaction, and transaction flags show the state of the transaction. Until a change is committed, the only end user who can see the changed data, no matter which application is used, is the one making the change.

The SCN advances for every change to the database as a sequential counter that identifies a certain point in time. The SCN tracks more than just single transactions by end users. These transactions are in Data Definition Language (DDL) or Data Manipulation Language (DML). DDL statements are associated with creating objects (create table), also called metadata. DML statements are the other commands mentioned earlier (insert, update, delete, among others) that manipulate the data in some way. The RDBMS also advances the SCN when a user logs in, reconnects, or alters their session, and when the Oracle background processes (which constantly check the state of activity inside the database) do their work.
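You can watch the SCN advance on your own system; run the following query (used again later in this article), make any change, and run it again to see the counter move forward:

SYS@NEWDB>SELECT CURRENT_SCN FROM V$DATABASE;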

It is undo that gives everyone a point-in-time consistent view of the data, which is called Read Consistency. There are controls created from business rules within the application, called triggers and integrity constraints, that validate the data entered by the user. Database locks control access to data during changes, giving exclusive access to the end user making the change.

During a delete or update statement:

  • The data block is read, loading it into a memory structure called a buffer cache
  • The redo log buffer will contain the corresponding delete or update statement
  • An entry in the undo segment header block is created for this transaction
  • It also copies the delete or update row into an undo block
  • For a delete, the row is removed from the data block and that block is marked as dirty
  • Locks keep exclusive use of that block until a commit or rollback occurs

Dirty is an internal designation where the block is identified as having changed data that has not been written to disk. The RDBMS needs to track this information for transactional integrity and consistency. The underlying dynamic performance view v$bh indicates when a particular block is dirty, as seen by the following query:

SYS@ORCL11>select file#, block# from v$bh where dirty='Y';
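To tie a dirty block back to the segment it belongs to, one approach (an illustrative query, not part of this article's scripts) is to join v$bh to dba_objects on the data object number:

SYS@ORCL11>select o.owner, o.object_name, b.file#, b.block#
from v$bh b, dba_objects o
where b.objd = o.data_object_id and b.dirty = 'Y';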

When a transaction is committed by the end user:

  • The transaction SCN is updated in the data block and the undo segment header marks that statement as committed in the header section of the undo block.
  • The logwriter process (LGWR) will flush the log buffer to the appropriate online redo log file.
  • SCN is changed on the data block if it is still in the buffer cache (fast commit).

Delayed block cleanout can happen when not all of the changed blocks have been updated with the SCN indicating the commit has occurred. This can cause problems for a transaction that updates large numbers of rows if a rollback needs to occur: the transaction hangs onto an exclusive lock until that rollback is finished, causing end users to wait.

The delayed block cleanout process does occasionally cause problems that would require opening an Oracle Support Request. Delayed block cleanout was implemented to save time by reducing the number of disk reads to update the SCN until the RDBMS needs to access data from that same block again. If the changed block has already been written to the physical disk and the Oracle background process encounters this same block (for any other query, DML, or DDL), it will also record the committed change at the same time. It does this by checking the transaction entry by SCN in the undo header, which indicates the changes that have been committed. That transaction entry is located in the transaction table, which keeps track of all active transactions for that undo segment.

Each transaction is uniquely identified by the assignment of a transaction ID (XID), which is found in the v$transaction view. This XID is written in the undo header block along with the Undo Byte Address (Uba), which consists of the undo file and block numbers (the UBAFIL and UBABLK columns in v$transaction, respectively).
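If you need to see which session owns an active transaction, a query along the following lines (illustrative; v$transaction joins to v$session through the session address) lists the owner alongside the XID and the Uba file and block numbers:

SYS@NEWDB>select s.username, t.xid, t.ubafil, t.ubablk
from v$transaction t, v$session s
where t.ses_addr = s.saddr;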

Please take the time to go through the following demonstration; it will solidify the complex concepts in this article.

Demonstration of data travel path

Dumping a block is one of the methods to show how data is stored. It will show the actual contents of the block, whether it is a table or index block, and an actual address that includes the data file number and block number. Remember from the concepts manual that several blocks together make up an extent, and extents then make up segments. A single segment maps to a particular table or index. Different extents of the same segment can be stored in different physical locations, in different data files, but within the same logical tablespace.

The data in the test case comes from creating a small table (segment) with minimum data in a tablespace with a single data file created just for this demonstration. Automatic Segment Space Management (ASSM) is the default in 11g. If you create a tablespace in 11g with none of the optional storage parameters, the RDBMS by default creates an ASSM segment with locally managed autoallocated extents.

The size of the extents can be defined at tablespace creation time, depending on the type of data to be stored. If all of the data is uniform and you need to maintain strict control over the amount of space used, then uniform extents are desirable.

Allowing the RDBMS to autoallocate extents is typical in situations where the data is not the same size for each extent, reducing the amount of time spent in allocating and maintaining space for database segments. Discussing the details, options, and differences for all of the ways to manage segment space in Oracle Database 11g is beyond the scope of this article.

For this example, we will be using race car track information as the sample data. For this demonstration, you will create a specific user with the minimum amount of privileges needed to complete this exercise; SQL is provided for that step in the script.

There are several key files in the zipped code for this article that you will need for this exercise, listed as follows:

  • dumpblock_sys.sql
  • dumpblock_ttracker.sql
  • dumpblocksys.lst
  • dumpblockttracker.lst
  • NEWDB_ora_8582_SYSDUMP1.rtf
  • NEWDB_ora_8582_SYSDUMP1.txt
  • NEWDB_ora_8621_SYSDUMP2.rtf
  • NEWDB_ora_8621_SYSDUMP2.txt
  • NEWDB_ora_8628_SYSDUMP3.rtf
  • NEWDB_ora_8628_SYSDUMP3.txt
  • NEWDB_ora_8635_SYSDUMP4.rtf
  • NEWDB_ora_8635_SYSDUMP4.txt

You will also need access to a conversion calculator to translate hexadecimal to decimal (the first link below; use hexadecimal input and decimal output). The second link allows you to look up the Hex (hexadecimal) equivalents of characters.
http://calculators.mathwarehouse.com/binary-hexadecimal-calculator.php#hexadecimalBinaryCalculator
http://www.asciitable.com/

Location of trace files

The dump block statement will create a trace file in the user dump (udump) directory on any version prior to 11gR1, which can be viewed with a text editor. On 11gR1 and above, you will find it in the diag directory location. This example will demonstrate how to use the adrci command-line utility to view trace files. First we set the home path where the utility will find the files, then list the trace files with the most recent first; in this case, it is the NEWDB_ora_9980.trc file.
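A typical adrci session looks something like the following; the home path shown is an assumption for this test instance, so substitute the one that show homes reports for your database:

$ adrci
adrci> show homes
adrci> set homepath diag/rdbms/newdb/NEWDB
adrci> show tracefile -rt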

Now that you know the location for the trace files, how do you determine which trace file was produced? The naming convention for trace files includes the actual process number associated with that session. Use the following command to produce trace files with a specific name, making it easier to identify a separate task:

SYS@NEWDB>ALTER SESSION SET TRACEFILE_IDENTIFIER = SYSDUMP_SESSION;

Running dump block SQL statements

Let's run through the test case demonstrating how transaction data moves from one component to the next. The first script, dumpblock_sys.sql, creates the tablespace and a user called TTRACKER with the minimum database privileges needed to perform this exercise. Please edit the script dumpblock_sys.sql to change the location of the data file in your test database. You could also leave it blank if the startup initialization parameter db_create_file_dest is populated with a valid operating system location that Oracle has read and write privileges to. In that case, the database would create the data files in db_create_file_dest and assign system-generated names to them.
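As a rough sketch of the kind of statements dumpblock_sys.sql runs (the data file size, password, and exact privileges here are assumptions; the script in the code download is the authoritative version):

SYS@NEWDB>CREATE TABLESPACE track
DATAFILE '/u01/oradata/NEWDB/track01_NEWDB.dbf' SIZE 10M;
SYS@NEWDB>CREATE USER ttracker IDENTIFIED BY ttracker
DEFAULT TABLESPACE track QUOTA UNLIMITED ON track;
SYS@NEWDB>GRANT CREATE SESSION, CREATE TABLE TO ttracker;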

In this dump block exercise, you are going to need two different SQL*PLUS windows or sessions. You start by logging into the database as someone with sysdba privileges, which is usually the user sys on a small test database. In a production database, more often sysdba privileges are assigned to a DBA account, which can be more accurately tracked for auditing purposes. The SYS account is only used in this scenario to create the TTRACKER user and dump some of the blocks.

Run the dumpblock_sys.sql script first, logged in as SYS; it will create the tablespace and table and then pause:

SYS@NEWDB>@dumpblock_sys.sql

After the pause, log on as ttracker in the second window, as shown by the output below, and run the second script:

TTRACKER@NEWDB>@dumpblock_ttracker.sql

The second script will pause at a point and request that you run the first script again, because the first script needs some information from the ttracker session: the data file number and block number, in this example 8 and 135. Your data file and block numbers will be different from those in the sample code. Return to the first window (sys), enter the correct numbers, and press the Enter key until it says CONTINUE DUMPBLOCK_TTRACKER.SQL IN PREVIOUS SESSION.

When it pauses, return to the second window (ttracker) and hit the Enter key. There will be another pause requesting the undo file number and undo block number, which are found by the select statement just above it on the SQL*Plus command line. Go back and forth between the two windows at each pause until both scripts are done.

What is happening is that you are dumping blocks with the sys login and manipulating data with the ttracker login (that is, insert, update, commit). At the same time, you are looking at how the SCN advances through the different transaction steps as well as matching the transaction with the corresponding undo SQL statement in the undo segment. We could have done this entire demonstration with a single logon, but I wanted you to visualize the read consistency view of a transaction.

What you should have when the script is finished is something similar to the following list in the trace directory. What will be different for your environment is the session number and ORACLE_SID. Notice that there are rtf versions of the following text files in the code section:

  • NEWDB_ora_8582_SYSDUMP1.rtf
  • NEWDB_ora_8621_SYSDUMP2.rtf
  • NEWDB_ora_8628_SYSDUMP3.rtf
  • NEWDB_ora_8635_SYSDUMP4.rtf

You should also have a couple of lst files in the directory you executed SQL*PLUS from, to review in case of an error. I have provided the same lst files from my session for you to review as well:

dumpblocksys.lst
dumpblockttracker.lst

If you choose to rerun these scripts on the same database, you will get errors in the create sections unless you drop the user ttracker, the tablespace track, and the table named tracks. The easiest way to drop both the tablespace and the table is:

sys@NEWDB> drop tablespace track including contents and datafiles;

and then follow this by dropping the user ttracker:

sys@NEWDB> drop user ttracker;

Identifying files and blocks

There is more than one way to identify the file number and block number of a particular segment. One way is the Oracle-supplied package dbms_rowid, which will show the absolute block number and relative file number of a table extent. That information comes from both the data file header and the block header.

sys@NEWDB> select dbms_rowid.rowid_relative_fno(rowid) as filenumber,
dbms_rowid.rowid_block_number(rowid) as blocknumber
from tracks;

The next query will search for the data file where those segments or extents are stored. A tablespace is only a logical structure that actually maps to the data files. It is important to understand that it is the extents of an object that are mapped to a physical data file.

sys@NEWDB> select d.file_name, e.segment_name, d.tablespace_name from
dba_data_files d, dba_extents e
where d.tablespace_name=e.tablespace_name and owner ='TTRACKER';

And finally, the following query will verify the file number for the tablespace TRACK data file. We are interested in manipulating the data in the TRACKS table only, which in this test database is file number 8 and block number 135. 135 is actually the starting block number for this table segment.

sys@NEWDB>select name, file# from v$datafile;

Each time you use the dump block command, it will create a trace file, which lets us track a single transaction across the database structures. The trace contains the following information that we are interested in for this demonstration:

  • Contents of the block for a certain file number and block number
  • List of the actual rows
  • SCN of a particular block
  • Transaction ID for a data block and the corresponding undo block
  • Any locks being held for that transaction
  • Flags indicating the state of the transaction
  • Read Consistency, Fast Commit, and Delayed Block Cleanout behavior

The dump block command can be used in several different ways to extract information. See the following for a few examples as they apply in this test case:

sys@NEWDB> alter system dump datafile '/u01/oradata/NEWDB/track01_NEWDB.dbf' block 135;   -- one block at a time
sys@NEWDB> alter system dump datafile 8 block 135;   -- one block at a time
sys@NEWDB> alter system dump datafile 8 block min 135 block max 142;   -- dumps multiple blocks
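The same command also dumps undo blocks; the values captured from v$transaction can be supplied through SQL*Plus substitution variables (the variable names below are just placeholders):

sys@NEWDB> alter system dump datafile &undofilenumber block &undoblocknumber;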

There is much more information in these trace files than what we can cover in a book. For more information look for reference materials that refer to Oracle Internals. Refer to the following legend for the trace files.

Legend for Trace Files:

  • Xid—Transaction ID
  • Uba—Undo Byte Address
  • Flag—Transaction flag
    • C--- Committed
    • ---- Uncommitted
    • --U- Fast commit; delayed block cleanout has not happened
  • Lck—Number of Rows locked
  • Scn—System Change Number

Not all of the transaction flags have been listed here and further research may be required.

Tracking the SCN through trace files

We will be going through four different trace files showing the transaction state at different points. It will be easier to switch from one to the other if you go ahead and open each one in its own window. Start with the output file called dumpblocksys.lst as that contains the starting database SCN number. That comes from the following query:

SYS@NEWDB>SELECT CURRENT_SCN FROM V$DATABASE;

The database SCN will always be ahead of a transaction SCN because the transaction event has already occurred. The transaction SCN is recorded in the accompanying trace file, while the database SCN is recorded in the lst output files.

Farther down in the file dumpblocksys.lst, there are no active transactions, as shown by no rows being returned the first time v$transaction is queried. The first pause is the point where ttracker starts modifying the table, allowing us to actively track that transaction by dumping the block.

In the next section, different transaction types will be covered, actively demonstrating what is known as Oracle Internals, the undocumented internal structure of Oracle data.

Single row insert

Open up the first trace file called NEWDB_ora_8582_SYSDUMP1_SESSION.rtf, as this contains additional comments. The text version of this file (trc) is provided and contains the original trace data with any identifying or sensitive information removed. Note this transaction SCN number, which is then linked to the data from the insert statement as follows:

INSERT INTO TRACKS VALUES (1, 'ATLANTA'); --from dumpblock_ttracker.sql

The hexadecimal number is converted to decimal format by using the calculator mentioned earlier.

SCN of block insert statement = hex scn: 0x0000.001827be = 1,583,038
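If you would rather not leave SQL*Plus, the database can do the hexadecimal-to-decimal conversion itself using the X format mask of to_number:

SYS@NEWDB>select to_number('001827be','xxxxxxxx') as scn_decimal from dual;

SCN_DECIMAL
-----------
    1583038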

Switch to the dumpblockttracker.lst output file. Note that the database SCN has advanced past the original starting database value and past the first transaction SCN (to 1583039) due to the changes made: the tracks table was created and a single row has been inserted.

Switching back to the file NEWDB_ora_8582_SYSDUMP1_SESSION.rtf, farther down you will see dash marks in the Flag column and a 1 in the Lck column. Referring back to the legend earlier in this article, this indicates that the transaction is uncommitted and holds a single lock. This single lock is important, as the ttracker user has exclusive use of this data block until a commit or rollback statement is issued.

Itl Xid Uba Flag Lck
0x01 0x0007.002.0000040c 0x00c000e6.015d.04 ---- 1
0x02 0x0000.000.00000000 0x00000000.0000.00 ---- 0

Notice the Uba (Undo Byte Address), which ties this transaction to the matching SQL undo statement in an undo segment, should the end user need to roll back this transaction. Also notice the Itl heading in the above listing, which stands for Interested Transaction List; it consists of all transactions currently active for that block. In this case, there is only the current single transaction.

The Xid found in NEWDB_ora_8582_SYSDUMP1_SESSION.rtf should match the query from the v$transaction view, as seen in the dumpblockttracker.lst file. The start_scn also matches the SCN found in the trace file NEWDB_ora_8582_SYSDUMP1_SESSION.rtf. Labeling the columns clearly specifies what each one represents. If the column labeled NOUNDO equals YES, then there would be no undo available. See the query as follows:

SYS@NEWDB>SELECT UBAFIL AS UNDOFILENUMBER,UBABLK AS UNDOBLOCKNUMBER,
STATUS,NOUNDO,XID AS TRANSACTIONID,START_SCN FROM V$TRANSACTION;

Farther down in the trace file is the section that contains the actual data. Look for the keywords block_row_dump. The data found in the trackname column is in hexadecimal format, which is converted by referring to http://www.asciitable.com. Look up the Hex (hexadecimal) value to find the Char (character) equivalent.

block_row_dump:
tab 0, row 0, @0x1f8a
tl: 14 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 02
col 1: [ 7] 41 54 4c 41 4e 54 41 = A T L A N T A
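A single byte can also be converted without leaving the database by combining to_number and chr; this returns A for hex 41 (a whole-string conversion using the utl_raw package appears at the end of this article):

TTRACKER@NEWDB>select chr(to_number('41','xx')) from dual;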

Commit of a single row insert

What happens when we finally issue a commit statement? Looking at the NEWDB_ora_8621_SYSDUMP2_SESSION.rtf file, there are several items of interest. Notice the advancement of the SCN after the commit statement was issued from the previous insert statement:

scn: 0x0000.001827c3 --001827c3 = 1,583,043

The flag shown below now indicates a commit by the fast commit process, which causes delayed block cleanout until this block is visited by another query, DDL, or DML statement.

Also see that the Uba is the same address as before the commit statement in the previous section, as excerpted from NEWDB_ora_8582_SYSDUMP1_SESSION.rtf:

Itl Xid Uba Flag Lck
0x01 0x0007.002.0000040c 0x00c000e6.015d.04 --U- 1
0x02 0x0000.000.00000000 0x00000000.0000.00 ---- 0

The fast commit process is fast because it makes the minimal changes needed to indicate that a commit has occurred. This leaves the lock (see the Lck Flag of 1) even after the commit has finished! Delayed block cleanout happens when the next operation visits this block and refers back to the transaction table to find the details for finishing the cleanup process.

Single row update and corresponding undo block

Continuing with the demonstration, we are modifying the existing Atlanta entry in the tracks table to Daytona by issuing an update statement. In this case, we are dumping both the data block and the undo block in order to compare information across those two database structures. Taken from dumpblocksys.lst, the following information tells us several things.

The sys user still sees the previous data, Atlanta, which shows read consistency for everyone else; that information is coming from the undo segments. Ttracker sees the entry as Daytona and holds a lock (an exclusive view) on the changed data. The select statement below returns the row ATLANTA:

SYS@NEWDB>SELECT * FROM TTRACKER.TRACKS;

There is an active transaction that shows how the undo file number and undo block number can be determined. Just as we needed the file number and block number of a table segment to explore its internals, we will need the file number and block number of the undo segment.

SYS@NEWDB>SELECT UBAFIL AS UNDOFILENUMBER,UBABLK AS UNDOBLOCKNUMBER,
STATUS,NOUNDO,XID AS TRANSACTIONID,START_SCN FROM V$TRANSACTION;

The SCN for this update statement is located in the file NEWDB_ora_8628_SYSDUMP3.rtf.

scn: 0x0000.001827cb -- 001827cb = 1,583,051

Now that we have accessed the same block by issuing an update, the flag for the earlier insert shows as committed in the corresponding trace file. Also notice that its lock (Lck) has been removed:

Itl Xid Uba Flag Lck
0x01 0x0007.002.0000040c 0x00c000e6.015d.04 C--- 0
0x02 0x0000.000.00000000 0x00000000.0000.00 ---- 0
col 0: [ 2] c1 02
col 1: [ 7] 44 41 59 54 4f 4e 41 = D A Y T O N A

Comments in the file NEWDB_ora_8628_SYSDUMP3.rtf indicate where the data block dump stops and the undo block dump starts. Farther down in the same file, find the data section labeled as follows. This undo block dump contains the entry for Atlanta. Now, if the end user (ttracker in this case) rolled back the Daytona update, then the ATLANTA row would be restored as shown in the undo block dump:

col 1: [ 7] 41 54 4c 41 4e 54 41 = ATLANTA
frmt: 0x02 chkval: 0xa058 type: 0x02=KTU UNDO BLOCK

Let's continue with the demonstration to show the internal changes when the commit of an updated row in the tracks table occurs.

Commit of a single row update

The associated SCN for this transaction is listed in the trace file and converted as follows:

scn: 0x0000.001827d1 -- 001827d1 = 1,583,057

Note that both transactions in the Itl are listed for this data block: first the committed insert and then the following update statement. Each is uniquely identified by its respective Uba and SCN. Just as before in Commit of a single row insert, the --U- flag indicates delayed block cleanout until this block is accessed again, and the lock (Lck of 1) is still present.

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0007.002.0000040c 0x00c000e6.015d.04 C--- 0 scn 0x0000.001827c3
0x02 0x0008.010.000004cf 0x00c0056a.0112.14 --U- 1 fsc 0x0000.001827d1
col 0: [ 2] c1 02
col 1: [ 7] 44 41 59 54 4f 4e 41 = D A Y T O N A

There is usually more than one way to accomplish a task, such as converting hexadecimal information to a readable character string. Earlier, this conversion was done with an online hex-to-ASCII calculator. There is an Oracle-supplied package called utl_raw that can do the same conversion, returning the word DAYTONA, as follows:

TTRACKER@NEWDB> SELECT utl_raw.cast_to_varchar2(HEXTORAW('444159544f4e41')) 
FROM dual;
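Going the other direction confirms the mapping: casting the string to raw and rendering it as hexadecimal returns 444159544F4E41, the same bytes shown in the block dump:

TTRACKER@NEWDB> SELECT rawtohex(utl_raw.cast_to_raw('DAYTONA')) FROM dual;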

The last database SCN recorded in this exercise from the dumpblocksys.lst file is 1583062.

We have followed the SCN as it changed from the initial value of 1583007 to the end value mentioned above, 1583062. By altering a small amount of data and looking at the associated changes within the trace files, we have clearly demonstrated database read consistency, locking mechanisms, and delayed block cleanout. While the concepts manual is a good place to start, a DBA needs to be able to prove what they have read (or heard) with an actual test case.

After accomplishing this task successfully, you should be more comfortable with the internal structures of the database. More advanced scenarios would be to work with more data, add indexes (index segments can be dumped), or use multiple schemas. If this type of work really intrigues you, start asking around at conferences. There are lots of people who are used to working with Oracle at the block level. Truly geeky!

Take this exercise a step further and create a database package to make dump blocks more readable. See:
http://dioncho.wordpress.com/2009/07/14/decoding-block-dump-using-utl_raw/.

Summary

This article covered how data moves from one database component to the next; the mechanics, or essential architectural infrastructure, at the block level; what the data in a datafile, redo log, undo segment, or archivelog actually looks like; and how the database keeps up with SCN changes. Dumping database blocks provides the raw hexadecimal values that convert to the characters that the end user sees at the application level.


About the Author


April C. Sims

April Sims is currently the Database Administrator at Southern Utah University and an Oracle Certified Professional: 8i, 9i, and 10g with a master's degree in Business Administration from the University of Texas at Dallas. Involved as a volunteer with the Independent Oracle Users Group for over 7 years, April is currently a Contributing Editor for the IOUG "SELECT" Journal. April is an annual presenter at Oracle OpenWorld, IOUG COLLABORATE, and numerous regional Oracle-related conferences.
