Thursday 25 February 2010

.NET Connection Pooling

SQL Server itself does not keep a connection open after the client actually closes it, beyond perhaps a second or two of cleanup; the idle connections you are seeing are being held open on the client side.

When you close a connection in .NET, it does NOT close the physical connection to SQL Server.  It RELEASES it back to the connection pool so the same application can reuse it later.  This is a function of .NET connection pooling.

SQL Server is NOT refusing connections at 100; the .NET connection pool simply defaults to a maximum of 100 connections per pool (the Max Pool Size connection-string setting).
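
As a quick sanity check, you can confirm that the ceiling is not coming from SQL Server itself; the server-side limit defaults to 32,767 simultaneous user connections, far above 100:

   SELECT @@MAX_CONNECTIONS AS max_allowed_connections;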

Please see this write-up on .NET connection pooling:
http://msdn.microsoft.com/en-us/library/aa175863(SQL.80).aspx 

1. Are the processes with status "sleeping" and command "AWAITING COMMAND" still needed by the application even after the operations specific to them are complete?

-- No 

2. Do these processes affect performance?

-- They will if you have left thousands of connections open. Every connection consumes resources.

3. Would killing these processes affect application connectivity?

-- You shouldn't have to go around killing these processes. I don't think it would affect the application; it is just more work for you.

4. Why do we get several processes with status "sleeping" and command "AWAITING COMMAND"? Is this because of front-end programming errors?

-- The web app doesn't close its connections after the work has been done.

5. What should we do to avoid the increasing number of processes with status "sleeping" and command "AWAITING COMMAND"?

-- Your web application needs to be configured or modified so that whenever it needs to do any work on SQL Server it does the following:

1. Opens a connection.

2. Do the SQL work, e.g. execute a stored procedure or select some data back.

3. Close the connection.
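
If you want to see which applications and hosts are holding idle connections open, a rough check against sysprocesses (column names as they appear in SQL Server 2000/2005) looks like this:

   -- Count idle pooled connections per application and host
   SELECT hostname, program_name, COUNT(*) AS idle_connections
   FROM   master..sysprocesses
   WHERE  status = 'sleeping'
     AND  cmd = 'AWAITING COMMAND'
   GROUP BY hostname, program_name
   ORDER BY idle_connections DESC;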

Tuesday 23 February 2010

PCTFREE/PCTUSED

These storage settings can be a serious issue for the Oracle professional unless they remember that locally managed tablespaces with Automatic Segment Space Management (ASSM) ignore any specified values for PCTUSED, NEXT, and FREELISTS.

Before we explore the details of designing with each of these options, it's important to understand the segment storage options and see how they relate to the tablespace options.  Let's start with a review of the segment storage parameters.

Design for Oracle Segment Storage

Since the earliest days of Oracle, we have had the following individual segment parameters to manage the growth of each segment (table, index, IOT) within our Oracle database:

  • PCTFREE—This storage parameter determines when a block can be unlinked from the free list. You must reserve enough room on each data block for existing rows to expand without chaining onto other blocks. The purpose of PCTFREE is to tell Oracle when to remove a block from the object's free list. Since the Oracle default is PCTFREE=10, blocks remain on the free list while they are less than 90 percent full. Once an insert makes the block grow beyond 90 percent full, it is removed from the free list, leaving 10 percent of the block for row expansion. Furthermore, the data block will remain off the free list even after the space drops below 90 percent. Only after subsequent deletes cause the space to fall below the PCTUSED threshold of 40 percent will Oracle put the block back onto the free list. 
     
  • PCTUSED—This storage parameter determines when a block can be re-linked onto the table's free list after DELETE operations. Setting a low value for PCTUSED favors performance, while a higher value of PCTUSED results in more efficient space reuse but slows performance. As rows are deleted from a table, database blocks become eligible to accept new rows again: when the amount of used space in a block falls below PCTUSED, a free-list re-link operation is triggered. For example, with PCTUSED=60, any block whose used space has dropped below 60 percent goes back onto the free list, along with blocks that fell below PCTUSED earlier and have not yet grown back up to PCTFREE. As the table has rows inserted into it, each block fills until the space used exceeds the PCTFREE threshold, at which time the block is unlinked from the free list again.
     
  • FREELISTS—Oracle allows tables and indexes to be defined with multiple free lists. A table's or index's FREELISTS setting should match the high-water mark of concurrent INSERT or UPDATE activity against it; too low a value will cause poor Oracle performance.

There is a direct trade-off between the setting for PCTUSED and efficient use of storage within the Oracle database. For databases where space is tight and storage within the Oracle data files must be reused immediately, the Oracle database administrator will commonly set PCTUSED to a very high value. This ensures the blocks go on the free list before they are completely empty.

However, the downside to this approach is that every time the data block fills, Oracle must unlink the data block from the free list and incur another I/O to get another free data block to insert new rows. In sum, the DBA must strike a balance between efficient space usage and the amount of I/O in the Oracle database.
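
As a rough illustration of where these three parameters are specified (the table, tablespace, and values below are made up for the example, and remember they only matter under manual segment space management):

   CREATE TABLE orders_demo
   (
      order_id    NUMBER,
      order_data  VARCHAR2(4000)
   )
   PCTFREE 10              -- stop inserts once the block is 90 percent full
   PCTUSED 40              -- re-link to the free list once usage drops below 40 percent
   STORAGE (FREELISTS 4)   -- allow up to 4 concurrent inserting processes
   TABLESPACE users;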

Let's begin our discussion by introducing the relationship between object storage parameters and performance. Poor object performance within Oracle occurs in several areas:

  • Slow INSERTs—INSERT operations run slowly and have excessive I/O. This happens when blocks on the free list have room for only a few rows before Oracle is forced to grab another free block. 
     
  • Slow SELECTs—SELECT statements have excessive I/O because of chained rows. This occurs when rows "chain" and fragment onto several data blocks, causing additional I/O to fetch the blocks (a query for spotting chained rows follows this list). 
     
  • Slow UPDATEs—UPDATE statements run very slowly with double the amount of I/O. This happens when updates expand a VARCHAR or BLOB column and Oracle is forced to chain the row contents onto additional data blocks.
     

  • Slow DELETEs—Large DELETE statements run slowly and cause segment header contention. This happens when rows are deleted and the database must re-link the data block onto the free list for the table.
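
To spot the chained rows mentioned under slow SELECTs, one simple approach (the schema and table names here are illustrative) is to ANALYZE the table, which populates CHAIN_CNT in DBA_TABLES:

   ANALYZE TABLE scott.orders_demo COMPUTE STATISTICS;

   SELECT table_name, num_rows, chain_cnt
   FROM   dba_tables
   WHERE  owner = 'SCOTT'
   AND    chain_cnt > 0;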

Friday 19 February 2010

Undo Segments NEEDS RECOVERY

Fixing Corrupt System-Managed Undo Segments

In Oracle9i and later releases, the old rollback segments have been replaced with undo segments, which for the most part are automatically sized and managed. However, what happens when one of these system-created and system-managed undo segments becomes corrupt? For one thing, you will get errors in your alert log similar to the following:

Errors in file /oracle/admin/test/bdump/test2_smon_21466.trc:
ORA-00600: internal error code, arguments: [6006], [1], [], [], [], [], [], []
SMON: mark undo segment 29 as needs recovery

ORACLE Instance test2 (pid = 11) - Error 600 encountered while recovering 
transaction (29, 42) on object 36.

(Note that these errors will show the same undo segment, transaction, and object number each time they occur; if the undo segment and transaction numbers vary, then the problem is with the object displayed rather than the undo segment.)

In fact, you may get many of these over and over again.

So, what can you do about this?

If you attempt to alter the undo segment with the ALTER ROLLBACK SEGMENT command, you will be told in no uncertain terms that the segment is system-managed and can't be altered.
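
For example (using the segment name from the errors above, which the procedure below also refers to as _SYSSMU29$), this is the sort of command that gets rejected:

   SQL> alter rollback segment "_SYSSMU29$" offline;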

So, here is what you need to do:

  1. Create a new system-managed undo tablespace:

           SQL> connect / as sysdba

           SQL> create undo tablespace undotbs2 datafile '/u02/oracle/oradata/test/undotbs2.dbf' size 500m;

  2. Determine the problem undo segment:

           SQL> select SEGMENT_NAME, STATUS from dba_rollback_segs;

 The problem segment will show a "Needs Recovery" status.

  3. Alter the system to use the new undo tablespace:

 SQL> alter system set undo_tablespace=undotbs2 scope=both;  (Note: if you are not using an spfile, omit the SCOPE clause.) 

  4. If you are using an spfile, create a pfile from it:

          SQL> connect / as sysdba

          SQL> create pfile='/u01/oracle/admin/test/pfile/inittest.ora' from spfile; 

  5. Edit the inittest.ora pfile and add (using the undo segment from our example error):

   *._offline_rollback_segments="_SYSSMU29$"

   *._corrupted_rollback_segments="_SYSSMU29$"

  6. Now shut down your instance; this may require a SHUTDOWN ABORT, but try SHUTDOWN IMMEDIATE first.
     
  7. Start the instance using a manual STARTUP command that points at the edited pfile:

    SQL> startup pfile='/u01/oracle/admin/test/pfile/inittest.ora' 

  8. Alter the old undo tablespace offline:

         SQL> alter tablespace undotbs1 offline; 

  9. Drop the offending tablespace:

         SQL> drop tablespace undotbs1 including contents and datafiles; 

  10. Shut down immediate.
     
  11. Edit the inittest.ora file to eliminate the underscore parameters.
     
  12. Restart the instance using the pfile option.
     
  13. Create an spfile from the pfile:

           SQL> create spfile from pfile='/u01/oracle/admin/test/pfile/inittest.ora';

Once step 13 is accomplished, the database should be back up normally. However, it might be wise to take a full backup and then rebuild using an export and import. Since we had to drop an undo segment that may have had active transactions left unapplied, the database may not be fully consistent.
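
Before that backup, a quick sanity check (assuming the tablespace names used in the example above) is to confirm that the instance is now running on the new undo tablespace and that no segment is still flagged as needing recovery:

   SQL> show parameter undo_tablespace

   SQL> select segment_name, tablespace_name, status from dba_rollback_segs;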