Managing the SQL Server Transaction Log

A transaction log filling up the disk is a classic DBA problem. Learn how to diagnose what is preventing log truncation and how to configure your recovery model properly.

The transaction log (LDF) is an essential component of SQL Server. It records every change made to the database to ensure data integrity and enable crash recovery. Without proper configuration, this file can grow until it fills up your storage.

Diagnosing Disk Usage

To check the current state of your database files, SQL Server Management Studio provides built-in visual reports.

Right-click on your database > Reports > Standard Reports > Disk Usage to get two pie charts:

  • Left: Data file (MDF) usage.
  • Right: Transaction log (LDF) usage.

If your log is more than 90% full, it’s time to take action.

Using System Queries

For an overview across all databases on the instance, the classic command is:

DBCC SQLPERF(LOGSPACE);

This command shows the size of each database's log and its usage percentage. For a more detailed view, you can use a more comprehensive script, such as the stored procedure below.

The sp_logspace procedure for a detailed audit
USE master;
GO

-----------------------------------------------------------------
-- sp_logspace, replaces DBCC SQLPERF (LOGSPACE) with more
-- information
--
-- rudi@babaluga.com, go ahead license
-----------------------------------------------------------------
CREATE OR ALTER PROCEDURE sp_logspace
    @database SYSNAME = N'%'
AS BEGIN
    SET NOCOUNT ON;

	;WITH cte_vlf AS (
		SELECT 
			ROW_NUMBER() OVER(PARTITION BY d.database_id ORDER BY li.vlf_begin_offset) AS vlfid,
			CAST(PERCENT_RANK() OVER(PARTITION BY d.database_id ORDER BY li.vlf_begin_offset) * 100 as DECIMAL(5,2)) AS pr,
			d.name AS db, 
			li.vlf_sequence_number, 
			li.vlf_active, 
			li.vlf_begin_offset, 
			li.vlf_size_mb
		FROM sys.databases d 
		CROSS APPLY sys.dm_db_log_info(d.database_id) li ),
	cte_active_vlf AS (
		SELECT db, 
			MAX(pr) as [pos]
		FROM cte_vlf
		WHERE vlf_active = 1
		GROUP BY db)
    SELECT
        d.name as [db],
		CEILING(ls.total_log_size_mb) as log_size_MB,
		CEILING(ls.active_log_size_mb) as log_used_MB,
		CEILING(ls.active_log_size_mb / NULLIF(ls.total_log_size_mb, 0) * 100) as [% used],
        NULLIF(d.log_reuse_wait_desc, N'NOTHING') as log_reuse_wait,
        d.recovery_model_desc as recovery_model,
        NULLIF(CAST(ls.log_backup_time as datetime2(0)), '1900-01-01 00:00:00') as last_translog_backup,
        mf.name,
        mf.physical_name,
        CASE mf.max_size
            WHEN 0 THEN 'Fixed'
            WHEN -1 THEN 'Unlimited'
            WHEN 268435456 THEN '2 TB'
            ELSE CONCAT((mf.max_size * 8) / 1024, ' MB')
        END AS [max],
        CASE mf.growth
            WHEN 0 THEN 'Fixed'
            ELSE
                CASE mf.is_percent_growth
                    WHEN 1 THEN CONCAT(growth, '%')
                    ELSE CONCAT((mf.growth * 8) / 1024, ' MB')
                END
        END AS [growth],
		ls.total_vlf_count as vlf,
		CAST(ls.log_since_last_checkpoint_mb as decimal(38, 2)) as since_last_checkpoint_mb,
		CAST(ls.log_since_last_log_backup_mb as decimal(38, 2)) as since_last_log_backup_mb,
		CAST(ls.log_recovery_size_mb as decimal(38, 2)) as recovery_size_mb,
		av.pos as [% active position]
    FROM sys.databases d
    JOIN sys.master_files mf ON d.database_id = mf.database_id AND mf.[type] = 1 -- log
        AND mf.state <> 6 -- OFFLINE
    --OUTER APPLY (SELECT COUNT(*) as vlf FROM sys.dm_db_log_info ( d.database_id ) ) li
	CROSS APPLY sys.dm_db_log_stats( d.database_id ) ls
	LEFT JOIN cte_active_vlf av ON av.db = d.name
    WHERE d.name LIKE @database
	AND d.name NOT IN (N'master', N'model')
    ORDER BY [db]
    OPTION (MAXDOP 1);

END;
GO
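
Once the procedure exists in master, you can call it for every user database or filter on a name pattern (the database name below is just an example; the @database parameter is matched with LIKE):

-- Audit the log of every user database
EXEC sp_logspace;

-- Audit only the databases matching a pattern (hypothetical name)
EXEC sp_logspace @database = N'MyData%';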

Why Isn’t the Log Being Cleared?

Whether the log gets cleared (truncated) depends mainly on the database's recovery model.

FULL Recovery Model

In the FULL recovery model, SQL Server keeps every transaction in the log until it has been captured by a transaction log backup. If you don't schedule these backups (every 5 or 15 minutes, for instance), the log grows indefinitely, even if you perform full database backups.
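
A minimal sketch of such a log backup, assuming a database named MyDatabase and a local backup path (in practice you would schedule it with a SQL Server Agent job or a maintenance plan):

-- Back up the transaction log so the inactive portion can be reused
BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase_log.trn'
WITH COMPRESSION, CHECKSUM;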

SIMPLE Recovery Model

In the SIMPLE recovery model, SQL Server automatically truncates the log at each checkpoint, once transactions are committed and the modified pages are written to disk. This is the recommended mode for development environments or non-critical databases where losing the changes made since the last full backup is acceptable.

-- Switch a database to Simple mode to stop infinite log growth
ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;

Identifying the Blocking Cause

If the log remains full despite your efforts, query the sys.databases system view and check the log_reuse_wait_desc column. The most common values are listed below; a sample query follows the table.

Value                  Meaning                   Required Action
NOTHING                Everything is fine        None
LOG_BACKUP             Waiting for backup        Run a log backup (in FULL mode)
ACTIVE_TRANSACTION     Open transaction          Identify and terminate the blocking session
REPLICATION            Waiting for replication   Check the state of your replication topology
AVAILABILITY_REPLICA   Always On AG              Check the state of your secondary replica
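
For example, this query (using only documented sys.databases columns) shows the recovery model and the current truncation wait for every database:

-- Why can't each log be truncated right now?
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
ORDER BY name;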

Finding the Culprit Transaction

A “forgotten” transaction (a BEGIN TRAN without a COMMIT) prevents the log from recycling its space. To find the oldest active transaction in the current database, use:

DBCC OPENTRAN;

If you identify a problematic session (e.g., session 54), you can get more details with sp_who2 54 or decide to terminate it:

KILL 54;
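
As a complement to DBCC OPENTRAN and sp_who2, the transaction DMVs can list every session that currently holds an open user transaction; the columns selected here are only one reasonable choice:

-- Oldest open transactions first, with the session that owns them
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       t.transaction_begin_time
FROM sys.dm_tran_active_transactions t
JOIN sys.dm_tran_session_transactions st ON st.transaction_id = t.transaction_id
JOIN sys.dm_exec_sessions s ON s.session_id = st.session_id
WHERE st.is_user_transaction = 1
ORDER BY t.transaction_begin_time;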

Reducing the Physical File Size (Shrink)

Once the cause is resolved (backup performed or model switched to Simple), the log contains free space, but the file on disk doesn’t shrink automatically. To reclaim that space, you need to perform a “Shrink”.

-- Shrink the log file to 100 MB (run in the context of the target database)
USE [MyDatabase];
GO
DBCC SHRINKFILE (N'MyDatabase_Log', 100);
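
The logical file name used above (MyDatabase_Log) is an assumption; you can look up the actual name and current size in the same database context:

-- List the log file(s) of the current database with their logical names
SELECT name AS logical_name,
       physical_name,
       size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';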

Conclusion

In production, make sure you have frequent log backups. Outside of production, prefer Simple mode. And in case of emergency, always keep an eye on log_reuse_wait_desc to understand what is holding your disk space.