Pinpointing Bottlenecks for Better Database Access in ASP.NET

by Matt Perdeck | October 2010 | .NET Web Development

In many websites, database access is the most expensive part of producing a web page. This article by Matt Perdeck, author of ASP.NET Site Performance Secrets, shows how to identify the most common sources of delays and how to resolve them.

This article shows how to pinpoint and prioritize a number of common bottlenecks, so that you can spend your time where it counts the most. These bottlenecks include:

  • Missing indexes
  • Expensive queries
  • Locking
  • Execution plan reuse
  • Fragmentation
  • Memory
  • Disk
  • CPU


In this section, we'll identify the biggest bottlenecks.

Missing indexes and expensive queries

You can greatly improve the performance of your queries by reducing the number of reads they execute. The more reads you execute, the more you potentially stress the disk, CPU, and memory. Secondly, a query reading a resource normally blocks another query from updating that resource. If the updating query has to wait while holding locks itself, it may then delay a chain of other queries. Finally, unless the entire database fits in memory, each time data is read from disk, other data is evicted from memory. If that data is needed later, it then needs to be read from disk again.

The most effective way to reduce the number of reads is to create sufficient indexes on your tables. Just as the index in a book lets you go straight to a topic, an SQL Server index allows a query to go straight to the table row(s) it needs, rather than having to scan the entire table. Indexes are not a cure-all though. They incur overhead and slow down updates, so they need to be used wisely.
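For example, using the Book table that appears later in this article, an index on its Title column lets SQL Server seek directly to the rows matching a given title instead of scanning the whole table. A minimal, hypothetical sketch:

-- Hypothetical example: supports queries such as
-- SELECT * FROM dbo.Book WHERE Title = 'Some Title'
CREATE NONCLUSTERED INDEX IX_Title ON dbo.Book(Title)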

In this section, we'll see:

  • How to identify missing indexes that would reduce the number of reads in the database
  • How to identify those queries that create the greatest strain, either because they are used very often, or because they are just plain expensive
  • How to identify superfluous indexes that take resources but provide little benefit

Missing indexes

SQL Server allows you to put indexes on table columns, to speed up WHERE clauses and JOIN conditions on those columns. When the query optimizer optimizes a query, it stores information about those indexes it would have liked to have used, but that weren't available. You can access this information with the Dynamic Management View (DMV) dm_db_missing_index_details (indexesqueries.sql in the code bundle):

select d.name AS DatabaseName, mid.*
from sys.dm_db_missing_index_details mid
join sys.databases d ON mid.database_id=d.database_id

The most important columns returned by this query are:

  • DatabaseName: Name of the database this row relates to.
  • equality_columns: Comma-separated list of columns used with the equals operator, such as column=value.
  • inequality_columns: Comma-separated list of columns used with a comparison operator other than the equals operator, such as column>value.
  • included_columns: Comma-separated list of columns that could profitably be included in an index.
  • statement: Name of the table where the index is missing.

This information is not persistent—you will lose it after a server restart.
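As a sketch of how the DMV's columns translate into an actual index, suppose it reported equality_columns = AuthorId, inequality_columns = Price, and included_columns = Title for the Book table (hypothetical column names). The equality columns go first in the index key, followed by the inequality columns, with the included columns in an INCLUDE clause:

-- Hypothetical index derived from one row of dm_db_missing_index_details
CREATE NONCLUSTERED INDEX IX_Book_AuthorId_Price
ON dbo.Book (AuthorId, Price)
INCLUDE (Title)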

An alternative is to use Database Engine Tuning Advisor, which is included with SQL Server 2008 (except for the Express version). This tool analyzes a trace of database operations and identifies an optimal set of indexes that takes the requirements of all queries into account. It even gives you the SQL statements needed to create the missing indexes it identified.

The first step is to get a trace of database operations during a representative period. If your database is the busiest during business hours, then that is probably when you want to run the trace:

  1. Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | SQL Server Profiler.
  2. In SQL Profiler, click on File | New Trace.
  3. Click on the Events Selection tab.
  4. You want to minimize the number of events captured to reduce the load on the server. Deselect every event, except SQL:BatchCompleted and RPC:Completed. It is those events that contain resource information for each batch, and so are used by Database Engine Tuning Advisor to analyze the workload. Make sure that the TextData column is selected for both the events.
  5. To capture events related only to your database, click on the Column Filters button. Click on DatabaseName in the left column, expand Like in the right-hand pane, and enter your database name. Click on OK.


  6. To further cut down the trace and only trace calls from your website, put a filter on ApplicationName, so only events where this equals ".Net SqlClient Data Provider" will be recorded.
  7. Click on the Run button to start the trace. You will see batch completions scrolling through the window. At any stage, you can click on File | Save or press Ctrl + S to save the trace to a file.
  8. Save the template so that you don't have to recreate it next time. Click on File | Save As | Trace Template. Fill in a descriptive name and click on OK. Next time you create a new trace by clicking on File | New Trace, you can retrieve the template from the Use the template drop-down.

    Sending all these events to your screen takes a lot of server resources. You probably won't be looking at it all day anyway. The solution is to save your trace as a script and then use that to run a background trace. You'll also be able to reuse the script later on.

  9. Click on File | Export | Script Trace Definition | For SQL Server 2005 – 2008. Save the file with a .sql extension. You can now close SQL Server Profiler, which will also stop the trace.
  10. In SQL Server Management Studio, open the .sql file you just created. Find the string InsertFileNameHere and replace it with the full path of the file where you want the log stored. Leave off the extension; the script will set it to .trc. Press Ctrl + S to save the .sql file.
  11. To start the trace, press F5 to run the .sql file. It will tell you the trace ID of this trace.
  12. To see the status of this trace and any other traces in the system, execute the following command in a query window:

    select * from ::fn_trace_getinfo(default)

    Find the row with property 5 for your trace ID. If the value column in that row is 1, your trace is running. The trace with trace ID 1 is a system trace.

  13. To stop the trace after it has captured a representative period, assuming your trace ID is two, run the following command:

    exec sp_trace_setstatus 2,0

    To restart it, run:

    exec sp_trace_setstatus 2,1

  14. To stop and close it so that you can access the trace file, run:
    exec sp_trace_setstatus 2,0
    exec sp_trace_setstatus 2,2

Now, run Database Engine Tuning Advisor:

  1. Start Database Engine Tuning Advisor. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | Database Engine Tuning Advisor.
  2. In the Workload area, select your trace file. In the Database for workload analysis drop-down, select the first database you want to be analyzed.
  3. Under Select databases and tables to tune, select the databases for which you want index recommendations.
  4. Especially with a big trace, Database Engine Tuning Advisor may take a long time to do its analysis. On the Tuning Options tab, you can tell it when to stop analyzing. This is just a limit; if it is done sooner, it will produce results as soon as it is done.
  5. To start the analysis, click on the Start Analysis button in the toolbar.

Keep in mind that Database Engine Tuning Advisor is just a computer program. Consider its recommendations, but make up your own mind. Be sure to give it a trace with a representative workload, otherwise its recommendations may make things worse rather than better. For example, if you provide a trace that was captured at night when you process few transactions but execute lots of reporting jobs, its advice is going to be skewed towards optimizing reporting, not transactions.


Expensive queries

If you use SQL Server 2008 or higher, you can use Activity Monitor to find recently executed expensive queries. In SSMS, right-click on your database server (normally in the top-left corner of the window) and choose Activity Monitor.

You can get a lot more information by using the DMV dm_exec_query_stats. When the query optimizer creates the execution plan for a query, it caches the plan for reuse. Each time a plan is used to execute a query, performance statistics are kept. You can access those statistics with dm_exec_query_stats (indexesqueries.sql in the downloaded code bundle):

SELECT
    est.text AS batchtext,
    SUBSTRING(est.text, (eqs.statement_start_offset / 2) + 1,
        ((CASE eqs.statement_end_offset
            WHEN -1 THEN DATALENGTH(est.text)
            ELSE eqs.statement_end_offset
        END - eqs.statement_start_offset) / 2) + 1) AS querytext,
    eqs.creation_time, eqs.last_execution_time, eqs.execution_count,
    eqs.total_worker_time, eqs.last_worker_time,
    eqs.min_worker_time, eqs.max_worker_time,
    eqs.total_physical_reads, eqs.last_physical_reads,
    eqs.min_physical_reads, eqs.max_physical_reads,
    eqs.total_elapsed_time, eqs.last_elapsed_time,
    eqs.min_elapsed_time, eqs.max_elapsed_time,
    eqs.total_logical_writes, eqs.last_logical_writes,
    eqs.min_logical_writes, eqs.max_logical_writes,
    eqs.query_plan_hash
FROM sys.dm_exec_query_stats AS eqs
CROSS APPLY sys.dm_exec_sql_text(eqs.sql_handle) AS est
ORDER BY eqs.total_physical_reads DESC

A limitation of this DMV is that when you run it, not all queries that have run since the last server restart will have a plan in cache. Some plans may have expired due to lack of use. Plans that were very cheap to produce, but not necessarily cheap to run, may not have been stored at all. And if a plan has been recompiled, the statistics only apply for the period since recompilation.

Another limitation is that this query is only suitable for stored procedures. If you use ad hoc queries, the parameters are embedded in the query. This causes the query optimizer to produce a plan for each set of parameters, unless the query has been parameterized.

To get around this, dm_exec_query_stats returns a column query_plan_hash which is the same for each query that has the same execution plan. By aggregating on this column using GROUP BY, you can get aggregate performance data for queries that share the same logic.
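A minimal sketch of such an aggregation follows; which columns you sum is up to you, these are illustrative:

SELECT eqs.query_plan_hash,
    SUM(eqs.execution_count) AS executions,
    SUM(eqs.total_worker_time) AS total_cpu_time,
    SUM(eqs.total_physical_reads) AS total_physical_reads,
    SUM(eqs.total_elapsed_time) AS total_elapsed_time
FROM sys.dm_exec_query_stats eqs
GROUP BY eqs.query_plan_hash
ORDER BY SUM(eqs.total_physical_reads) DESC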

The query returns the following information:

  • batchtext: Text of the entire batch or stored procedure containing the query.
  • querytext: Text of the actual query.
  • creation_time: Time that the execution plan was created.
  • last_execution_time: Last time the plan was executed.
  • execution_count: Number of times the plan was executed after it was created. This is not the number of times the query itself was executed; its plan may have been recompiled at some stage.
  • total_worker_time: Total amount of CPU time in microseconds consumed by executions of this plan since it was created.
  • last_worker_time: CPU time in microseconds consumed the last time the plan was executed.
  • min_worker_time: Minimum CPU time in microseconds that this plan has ever consumed during a single execution.
  • max_worker_time: Maximum CPU time in microseconds that this plan has ever consumed during a single execution.
  • total_physical_reads: Total number of physical reads performed by executions of this plan since it was compiled.
  • last_physical_reads: Number of physical reads performed the last time the plan was executed.
  • min_physical_reads: Minimum number of physical reads that this plan has ever performed during a single execution.
  • max_physical_reads: Maximum number of physical reads that this plan has ever performed during a single execution.
  • total_logical_writes: Total number of logical writes performed by executions of this plan since it was compiled.
  • last_logical_writes: Number of logical writes performed the last time the plan was executed.
  • min_logical_writes: Minimum number of logical writes that this plan has ever performed during a single execution.
  • max_logical_writes: Maximum number of logical writes that this plan has ever performed during a single execution.
  • total_elapsed_time: Total elapsed time in microseconds for completed executions of this plan.
  • last_elapsed_time: Elapsed time in microseconds for the most recently completed execution of this plan.
  • min_elapsed_time: Minimum elapsed time in microseconds for any completed execution of this plan.
  • max_elapsed_time: Maximum elapsed time in microseconds for any completed execution of this plan.

An alternative to using dm_exec_query_stats is to analyze the trace you made with SQL Server Profiler. After all, it contains performance data for every completed batch. A batch corresponds to a stored procedure call or, if you use ad hoc queries, to a single query.

To investigate this a bit further, load the trace file into a table. You can use Profiler to do this:

  1. Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | SQL Server Profiler.
  2. To open the trace file, click on File | Open | Trace File, or press Ctrl + O. If you want, you can now analyze the trace in the profiler.
  3. To save the trace to a table, click on File | Save As | Trace Table. If the table you specify does not yet exist, the profiler will create it.

Alternatively, use fn_trace_gettable, shown as follows (indexesqueries.sql in the downloaded code bundle):

SELECT * INTO newtracetable
FROM ::fn_trace_gettable('c:\trace.trc', default)

The most obvious way to find the most expensive queries or stored procedures is to aggregate the performance data in the table by query or stored procedure, using GROUP BY. However, when you have a look at the TextData column in the table with trace results, you'll find that all queries or stored procedure calls are listed with actual parameter values. To aggregate them, you'll have to filter out those values.

If you send stored procedure calls to the database, good for you. In that case, it isn't too hard to remove the parameters, because they always come after the stored procedure name. The file processtrace.sql in the downloaded code bundle contains an SQL script that does exactly that, and then aggregates the performance data per stored procedure.
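The core of such a script might look like the following minimal sketch (not the actual processtrace.sql). It assumes the trace has been loaded into the newtracetable table shown above, and crudely strips parameters by cutting each call off at the first '@' character:

-- Minimal sketch: aggregate trace rows per stored procedure by
-- removing everything from the first '@' (start of the parameters)
SELECT
    RTRIM(LEFT(t.txt, CHARINDEX('@', t.txt + '@') - 1)) AS sproc,
    COUNT(*) AS executions,
    SUM(t.Duration) AS total_duration,
    SUM(t.CPU) AS total_cpu,
    SUM(t.Reads) AS total_reads
FROM
    (SELECT CAST(TextData AS nvarchar(max)) AS txt, Duration, CPU, Reads
     FROM newtracetable
     WHERE TextData IS NOT NULL) AS t
GROUP BY RTRIM(LEFT(t.txt, CHARINDEX('@', t.txt + '@') - 1))
ORDER BY SUM(t.Reads) DESC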

If you send ad hoc queries, removing the variable bits of the queries will be a lot more difficult, because their locations are different for each query.

Once you've identified the most expensive queries, you can find out whether adding indexes would speed up their execution.

  1. Open a query window in SSMS.
  2. From the Query menu, choose Include Actual Execution Plan or press Ctrl + M.
  3. Copy an expensive query in the query window and execute it. Above the results pane, you will see a tab Execution plan. Click on that tab.
  4. If the query optimizer found that an index was missing, you will see a message in green.
  5. For more information, right-click in the lower pane and choose Show Execution Plan XML. In the XML, look for the MissingIndexes element.
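If you prefer to get the plan without actually executing the query, you can ask SQL Server to return the XML plan instead of running the statement. In the sketch below, the SELECT is a placeholder; substitute one of your own expensive queries:

SET SHOWPLAN_XML ON
GO

-- Placeholder query; the XML plan returned for it will contain
-- a MissingIndexes element if the optimizer wanted an extra index
SELECT Title FROM dbo.Book WHERE Author = 'Some Author'
GO

SET SHOWPLAN_XML OFF
GO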

Unused indexes

A drawback of indexes is that they need to be updated when the data itself is updated, causing delays. They also take storage space. If an index slows down updates but is hardly used for reading, you're better off dropping it.

Use the DMV dm_db_index_usage_stats to get usage information on each index (indexesqueries.sql in downloaded code bundle) as shown:

SELECT d.name, t.name, i.name, ius.*
FROM sys.dm_db_index_usage_stats ius
JOIN sys.databases d ON d.database_id = ius.database_id
JOIN sys.tables t ON t.object_id = ius.object_id
JOIN sys.indexes i ON i.object_id = ius.object_id AND i.index_id = ius.index_id
ORDER BY user_updates DESC

This gives you the name, table, and database of each index that has seen activity since the last server restart, and the number of updates and reads since the last server restart.

Column user_updates, in particular, shows the number of updates caused by INSERT, UPDATE, or DELETE operations. If this is high in relation to the number of reads, consider dropping the index, as shown in the following code:

DROP INDEX IX_Title ON dbo.Book

You may see clustered indexes being updated. Keep in mind that a clustered index is really the table itself, so you can't simply drop it the way you would a normal index.

Locking

In a database with lots of queries executing, some queries may try to access the same resource, such as a table or index. You wouldn't want one query to read a resource while another is updating it; otherwise, you could get inconsistent results.

To stop a query from accessing a resource, SQL Server locks the resource. This will inevitably lead to some delays as queries wait for a lock to be released. To find out whether these delays are excessive, check the following performance counters on the database server with perfmon:

Category: SQLServer:Latches

  • Total Latch Wait Time (ms): Total wait time in milliseconds for latches in the last second.

Category: SQLServer:Locks

  • Lock Timeouts/sec: Number of lock requests per second that timed out. This includes requests for NOWAIT locks.
  • Lock Wait Time (ms): Total wait time in milliseconds for locks in the last second.
  • Number of Deadlocks/sec: Number of lock requests per second that resulted in a deadlock.

A high number for Total Latch Wait Time (ms) indicates that SQL Server is waiting too long for its own synchronization mechanism. Lock Timeouts/sec should be zero during normal operation and Lock Wait Time (ms) very low. If they are not, queries keep waiting for too long for the locks to be released.

Finally, Number of Deadlocks/sec should be zero. If not, you have queries waiting on each other to release a lock, preventing either from moving forward. SQL Server eventually detects this condition and resolves it by rolling back one of the queries, which means wasted time and wasted work.
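Besides perfmon, you can ask SQL Server itself how long it has been waiting on locks since the last restart, using the sys.dm_os_wait_stats DMV. A minimal sketch:

-- Lock-related wait types all start with LCK_M_
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC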

Execution plan reuse

Before a query is executed, the SQL Server query optimizer compiles a cost-effective execution plan. This takes many CPU cycles. Because of this, SQL Server caches the execution plan in memory, in the plan cache. It then tries to match incoming queries with those that have already been cached.

In this section, you'll see how to measure how well the plan cache is being used.

Performance counters

Start by checking the following performance counters on the database server with perfmon:

Category: Processor (_Total)

  • % Processor Time: The percentage of elapsed time that the processor is busy.

Category: SQL Server:SQL Statistics

  • SQL Compilations/sec: Number of batch compiles and statement compiles per second. Expected to be very high initially after server startup.
  • SQL Re-Compilations/sec: Number of recompiles per second.

These counters will show high values at server startup as every incoming query needs to be compiled. The plan cache sits in memory, so doesn't survive a restart. During normal operation, you would expect compilations per second to be less than 100, and re-compilations per second to be close to zero.
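The same counters are exposed inside SQL Server through the sys.dm_os_performance_counters DMV. Keep in mind that the per-second counters in this DMV are cumulative since server startup, so to get an actual rate you need to sample twice and divide the difference by the elapsed time. A minimal sketch (the object_name prefix varies per instance, hence the LIKE):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
    AND counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec')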

dm_exec_query_optimizer_info

Alternatively, you could look at the time the server spends optimizing queries. Because query optimization is heavily CPU-bound, almost all of that time is CPU time.

The Dynamic Management View (DMV) sys.dm_exec_query_optimizer_info gives you the number of query optimizations since the last server restart, and the elapsed time in seconds it took on average to complete them (executionplan.sql in the downloaded code bundle):

SELECT
    occurrence AS [Query optimizations since server restart],
    value AS [Avg time per optimization in seconds],
    occurrence * value AS [Time spent optimizing since server restart in seconds]
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'elapsed time'

Run this query, wait for a while, and then run it again to find the time spent on optimizing in that period. Be sure to measure the time between the runs, so that you can work out what proportion of time the server spends on optimizing queries.
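A sketch that automates the two measurements, sampling one minute apart:

DECLARE @before float, @after float

SELECT @before = occurrence * value
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'elapsed time'

-- Wait 60 seconds between the two samples
WAITFOR DELAY '00:01:00'

SELECT @after = occurrence * value
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'elapsed time'

SELECT @after - @before AS [Seconds spent optimizing in the interval],
    (@after - @before) * 100.0 / 60 AS [Percentage of elapsed time spent optimizing]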

sys.dm_exec_cached_plans

The DMV sys.dm_exec_cached_plans provides information on all execution plans in the plan cache. You can combine this with the DMV sys.dm_exec_sql_text to find out how often the plan for a given query has been reused. If you get little reuse for an otherwise busy query or a stored procedure, you are getting too little benefit out of the plan cache (executionplan.sql in downloaded code bundle):

SELECT ecp.objtype, ecp.usecounts, ecp.size_in_bytes,
REPLACE(REPLACE(est.text, char(13), ''), char(10), ' ') AS querytext
FROM sys.dm_exec_cached_plans ecp
cross apply sys.dm_exec_sql_text(ecp.plan_handle) est
WHERE cacheobjtype='Compiled Plan'

The column objtype is Proc for stored procedures and Adhoc for ad hoc queries, while the field usecounts shows how often a plan has been used. In the Missing indexes and expensive queries section, you saw how to identify busy queries and stored procedures.
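One quick way to gauge how much of the plan cache is being wasted is to count the plans that were compiled but then used only once; many single-use Adhoc plans suggest your queries would benefit from parameterization. A minimal sketch:

-- Plans that were compiled but executed only once
SELECT ecp.objtype, COUNT(*) AS single_use_plans,
    SUM(CAST(ecp.size_in_bytes AS bigint)) AS cache_bytes_used
FROM sys.dm_exec_cached_plans ecp
WHERE ecp.cacheobjtype = 'Compiled Plan' AND ecp.usecounts = 1
GROUP BY ecp.objtype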

Fragmentation

The data and indexes in a database are organized on disk in 8-KB pages. A page is the smallest unit that SQL Server uses to transfer data to or from disk.

When you insert or update data, a page may run out of room. SQL Server then creates another page, and moves half of the contents of the existing page to the new page. That leaves free space not only in the new page, but in the original page as well. That way, if you keep inserting or updating data in the original page, it doesn't split again and again.

This means that after many inserts, updates, and deletes, you'll wind up with lots of pages that are partially empty. This takes more disk space than needed, but more importantly it also slows down reading, because SQL Server now has to read more pages to access the data. The pages may also wind up in a different physical order on disk than the logical order in which SQL Server needs to read them. As a result, instead of simply reading each page sequentially, one right after the other, it needs to wait for the disk head to reach the next page, causing more delays.

To establish the level of fragmentation for each table and index in your database, use the dm_db_index_physical_stats DMV (fragmentation.sql in the downloaded code bundle):

DECLARE @DatabaseName sysname
SET @DatabaseName = 'mydatabase' --use your own database name

SELECT o.name AS TableName, i.name AS IndexName, ips.index_type_desc,
    ips.avg_fragmentation_in_percent, ips.page_count, ips.fragment_count,
    ips.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
    DB_ID(@DatabaseName), NULL, NULL, NULL, 'Sampled') ips
JOIN sys.objects o ON ips.object_id = o.object_id
JOIN sys.indexes i ON (ips.object_id = i.object_id) AND (ips.index_id = i.index_id)
WHERE (ips.page_count >= 7) AND (ips.avg_fragmentation_in_percent > 20)
ORDER BY o.name, i.name

This gives you all the tables and indexes that take over seven pages and that are more than 20 percent fragmented. Tables and indexes taking less than seven pages tend to show high levels of fragmentation because of the way SQL Server organizes them on disk—there is little point in defragmenting them. Also, fragmentation below 20 percent is not really an issue.

When you see index type CLUSTERED INDEX in an entry, it really refers to the actual table, because the table is a part of the clustered index. Index type HEAP refers to a table without a clustered index.

If you find any tables or indexes that are over 20 percent fragmented and take over seven pages, they are candidates for defragmentation; the follow-up article shows how to go about fixing them.

Memory

To see if lack of memory is slowing down the database server, check the following counters in perfmon:

Category: Memory

  • Pages/sec: When the server runs out of memory, it stores information temporarily on disk, and then later reads it back when needed, which is very expensive. This counter indicates how often this happens.

Category: SQL Server:Buffer Manager

  • Page Life Expectancy: Number of seconds a page will stay in the buffer pool without being used. The greater the life expectancy, the greater the chance that SQL Server will be able to get a page from memory instead of having to read it from disk.
  • Buffer cache hit ratio: Percentage of pages that were found in the buffer pool, without having to read from disk.
If Pages/sec is consistently high or Page Life Expectancy is consistently low, say below 300, or Buffer cache hit ratio is consistently low, say below 90 percent, SQL Server may not have enough memory. This will lead to excessive disk I/O, causing a greater stress on the CPU and disk.
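Page Life Expectancy is also available from within SQL Server through the sys.dm_os_performance_counters DMV. A minimal sketch (the object_name prefix varies per instance, hence the LIKE):

SELECT cntr_value AS page_life_expectancy_in_seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
    AND counter_name = 'Page life expectancy'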

Disk usage

SQL Server is heavily disk-bound, so solving disk bottlenecks can make a big difference. If you found memory shortages in the previous section, fix those first, because a memory shortage can lead to excessive disk usage in itself. Otherwise, check the following counters to see whether there is a disk bottleneck for some other reason:

Categories: PhysicalDisk and LogicalDisk

  • % Disk Time: Percentage of elapsed time that the selected disk was busy reading or writing.
  • Avg. Disk Queue Length: Average number of read and write requests queued during the sample interval.
  • Current Disk Queue Length: Current number of requests queued.

If % Disk Time is consistently over 85 percent, the disk system is stressed.

Avg. Disk Queue Length and Current Disk Queue Length refer to the number of tasks that are queued at the disk controller or are being processed. You want to see a counter value of two or less. If you use a RAID array where the controller is attached to several disks, you want to see counter values of two times the number of individual disks or less.
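SQL Server also records how long it has had to wait on each database file, through the sys.dm_io_virtual_file_stats function. A minimal sketch that lists the most stalled files:

-- io_stall_* columns are cumulative milliseconds since server startup
SELECT DB_NAME(database_id) AS database_name, file_id,
    num_of_reads, io_stall_read_ms,
    num_of_writes, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
ORDER BY io_stall_read_ms + io_stall_write_ms DESC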

CPU

If you found memory or disk issues in the previous sections, fix those first because they will stress the CPU as well. Otherwise, check the following counters to see whether the CPU is stressed for another reason:

Category: Processor

  • % Processor Time: Proportion of time that the processor is busy.

Category: System

  • Processor Queue Length: Number of threads waiting to be processed.

If % Processor Time is consistently over 75 percent, or Processor Queue Length is consistently greater than two, the CPU is probably stressed.
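From within SQL Server, CPU pressure shows up as tasks sitting in the schedulers' runnable queues, waiting for a CPU to become available. A minimal sketch using the sys.dm_os_schedulers DMV:

-- runnable_tasks_count consistently above zero suggests CPU pressure
SELECT scheduler_id, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255 -- user schedulers only
ORDER BY runnable_tasks_count DESC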

Summary

In this article we saw how to pinpoint performance bottlenecks associated with the database such as missing indexes, expensive queries, and locking. We also looked at execution plan reuse and fragmentation. In addition to this, the possible hardware-related bottlenecks, including the database server's CPU, memory, and disk usage were discussed.

In the next article, Fixing Bottlenecks for Better Database Access in ASP.NET, we will look at how to actually fix each of the bottlenecks we prioritized in this article.



About the Author


Matt Perdeck

Matt Perdeck has over 20 years' experience developing high-performance software systems, ranging from the largest ATM network in the Netherlands to embedded software in advanced Wide Area Networks. He has a B.Sc. in Computer Science from the University of Technology Twente, the Netherlands, and an M.B.A. from the University of Western Australia and Copenhagen Business School. He has worked in Australia, the Netherlands, Slovakia, and Thailand. He has extensive .NET and SQL Server development experience and has written several .NET articles. As an old-school software engineer, he is passionate about making things faster, simpler, and more efficient.
