Deleting Records from a Large Table
If you need to delete a large number of records based on certain criteria (i.e. you cannot use TRUNCATE TABLE), you may have come across the problem where the transaction log fills up, the disk runs out of space, and the DELETE query consequently fails with nothing deleted.
A DELETE executes as a single transaction that does not commit until the last record is deleted. DELETE physically removes rows one at a time and records each deleted row in the transaction log. If the number of records in the table is small, a straightforward DELETE is fine; on a large table, however, the DELETE will cause the transaction log to grow, degrading system I/O performance. The table will also be locked, which will affect the application. The person executing the DELETE query will usually panic because what seems to be a simple task takes a long time and there is no visible progress in Query Analyzer. If the transaction log runs out of disk space, the DELETE fails, the transaction rolls back, and nothing ends up deleted.
A solution is to combine a WHILE loop with @@ROWCOUNT to delete the records in manageable blocks (or chunks).
When using the Simple recovery model, the transaction log only grows by roughly the number of records you specify as the block size (default 100,000). Once the first block of 100,000 records is deleted, the transaction commits and the log space is freed for reuse (the file is not physically shrunk); the next 100,000 deletions usually fill the log by about the same amount, so the transaction log generally does not physically grow again.
When using the Full recovery model, the transaction log will keep growing, so take care that the amount of data being deleted will not exhaust the space where your transaction log resides. One option is to temporarily switch the database to the Simple recovery model.
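As a sketch, the temporary switch might look like the following (MyDatabase is a placeholder name; substitute your own):

```sql
-- Check the current recovery model first
SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery');

-- Switch to Simple recovery for the duration of the bulk delete
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- ... run the batched delete here ...

-- Switch back to Full recovery when done
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```

Remember that switching out of the Full recovery model breaks the log backup chain, so take a full (or differential) backup after switching back.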
Below is a script that you can use to do bulk deletions.
Just change the record/block size and the FROM and WHERE clauses to fit your requirements.
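A minimal sketch of the WHILE loop and @@ROWCOUNT approach is shown here; the table name (BigTable) and the WHERE clause are placeholders you would replace with your own:

```sql
-- Limit each DELETE to a block of 100,000 rows (the block size)
SET ROWCOUNT 100000

WHILE 1 = 1
BEGIN
    -- Each pass deletes at most 100,000 rows and commits as its own transaction,
    -- so the transaction log is freed for reuse between blocks
    DELETE FROM BigTable
    WHERE CreatedDate < '20050101'   -- your deletion criteria here

    -- @@ROWCOUNT holds the number of rows the last DELETE removed;
    -- when it reaches zero there is nothing left to delete
    IF @@ROWCOUNT = 0 BREAK
END

-- Reset ROWCOUNT so subsequent statements are not limited
SET ROWCOUNT 0
```

SET ROWCOUNT is the classic idiom from the Query Analyzer era; on SQL Server 2005 and later you could instead write DELETE TOP (100000) FROM BigTable WHERE ... to the same effect.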
Please be aware that during the DELETE operation the table will be locked, so any process trying to read from or write to the table will most probably need to wait until all the selected records are deleted. If you are concerned about this, you can use WAITFOR to pause for however long you like before deleting the next batch of records, giving other processes a chance to get in.
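For example, the loop above could pause between batches like this (again, BigTable and the WHERE clause are placeholders):

```sql
SET ROWCOUNT 100000

WHILE 1 = 1
BEGIN
    DELETE FROM BigTable
    WHERE CreatedDate < '20050101'

    IF @@ROWCOUNT = 0 BREAK

    -- Release the table to other processes for 5 seconds between blocks
    WAITFOR DELAY '00:00:05'
END

SET ROWCOUNT 0
```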