Friday, July 29, 2011

SQL Server Table Partitioning – A Round Robin Approach

Over the last few weeks I’ve been working on designing a large, partitioned table using SQL Server 2008 R2. This table needs to hold 30 days’ worth of data, and every day it receives around 15 million new rows, so the total rowcount will be around 30 × 15M = 450 million rows.

The key design requirement for this table is a 30 day sliding window: when a new day starts, the partition holding the 30 day old data must be wiped as quickly as possible, without manual intervention, so that it can be reused for the new day’s rows.

Step One: An Elegant ‘Hash’ Function

CREATE FUNCTION [dbo].[fnGetPartition](@Day SMALLINT)
RETURNS TINYINT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @RetVal TINYINT
    --The number of partitions present
    DECLARE @PartitionCount TINYINT
    SET @PartitionCount = 30
    --Hash the supplied day into one of the @PartitionCount buckets (1 to 30)
    SET @RetVal = (@Day - 1) % @PartitionCount + 1
    RETURN @RetVal
END
GO

Days are represented by monotonically increasing integers in this system. The function above takes the supplied day number, computes (day − 1) modulo 30, adds one, and returns an integer between 1 and 30. This is a really neat way to calculate a round-robin partition number for each day. The function maps input to output like this:

Input   Output
1       1
2       2
...     ...
29      29
30      30
31      1
32      2

So day 31 will overwrite day 1, and so forth.
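Once the function is created, the mapping is easy to sanity-check from a query window (a quick illustrative check, not part of the schema):

```sql
--Spot-check the round-robin mapping
SELECT dbo.fnGetPartition(1)  AS Day1,   --returns 1
       dbo.fnGetPartition(30) AS Day30,  --returns 30
       dbo.fnGetPartition(31) AS Day31   --returns 1, the same bucket as day 1
```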


Step Two: Store the Partition Number as a Persisted Computed Column on the Table

The next step is to persist the partition number as a column on the large table. This is a cut-down, simplified version of the table for illustrative purposes:

--Create a test table
CREATE TABLE [dbo].[BigTable]
(
    [Partition] AS dbo.fnGetPartition([Day]) PERSISTED,
    [Day]       SMALLINT    NOT NULL,
    [Symbol]    CHAR(5)     NOT NULL,
    [Ref]       CHAR(15)    NOT NULL,
    [Open]      BIT         NOT NULL,
    [Closed]    BIT         NOT NULL,
    [IsEmpty]   BIT         NOT NULL
)
ON [PRIMARY]
GO

This builds a table with an integer between 1 and 30 stored in the [Partition] column.
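A quick insert demonstrates the persisted computed column at work (the values here are illustrative only):

```sql
--The [Partition] column is calculated automatically on insert
INSERT INTO dbo.BigTable ([Day], [Symbol], [Ref], [Open], [Closed], [IsEmpty])
VALUES (31, 'ABCDE', 'REF000000000001', 1, 0, 0)

--Day 31 hashes to bucket 1
SELECT [Partition], [Day] FROM dbo.BigTable
```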


Step Three: Create A Partition Function

The next step is to add a partition function. This function defines the boundary values that divide the partitioning column’s domain into partitions. In this scenario, the hard part of figuring out which partition to use has already been completed by the hash function and stored in the [Partition] column, so the partition function only needs to split the values 1 to 30 into separate partitions. I found this was an easier approach, as there are restrictions on what you can and can’t do inside a partition function.

CREATE PARTITION FUNCTION [pfBigTable](TINYINT)
AS RANGE LEFT FOR VALUES
(
    1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,
    16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
)
GO
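Note that n boundary values always produce n + 1 partitions, so this function actually creates 31 of them. With RANGE LEFT each boundary value belongs to the partition on its left, so the values 1 to 30 land neatly in partitions 1 to 30, and the 31st partition (values above 30) simply stays empty. The registered boundaries can be inspected via the catalog views:

```sql
--List the boundary values registered for the partition function
SELECT prv.boundary_id, prv.value
FROM sys.partition_functions pf
JOIN sys.partition_range_values prv
    ON pf.function_id = prv.function_id
WHERE pf.name = 'pfBigTable'
ORDER BY prv.boundary_id
```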


Step Four: Create A Partition Scheme

The partition scheme maps the partitions specified in the partition function to their physical storage locations. For this scenario, I mapped all of the partitions to a single filegroup.

CREATE PARTITION SCHEME psBigTable
AS PARTITION pfBigTable
ALL TO (BigFileGroup)
GO 
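The BigFileGroup filegroup has to exist before the scheme can reference it. A minimal sketch, assuming the database is called BigDB (the file name and path are purely illustrative):

```sql
--Create the filegroup and give it a data file (names and path are illustrative)
ALTER DATABASE BigDB ADD FILEGROUP BigFileGroup
GO
ALTER DATABASE BigDB ADD FILE
(
    NAME = 'BigFileGroup_Data',
    FILENAME = 'D:\Data\BigFileGroup_Data.ndf'
) TO FILEGROUP BigFileGroup
GO
```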

Step Five: Configure the Table to Use the Partition Scheme

At this stage, the table can be recreated on the partition scheme rather than on the PRIMARY filegroup where it is currently sitting. The ON psBigTable([Partition]) clause tells SQL Server that the [Partition] column is what should be supplied to the partition function in order to decide where to store each row.

--Drop the earlier version and recreate the table on the partition scheme
DROP TABLE [dbo].[BigTable]
GO

CREATE TABLE [dbo].[BigTable]
(
    [Partition] AS dbo.fnGetPartition([Day]) PERSISTED,
    [Day]       SMALLINT    NOT NULL,
    [Symbol]    CHAR(5)     NOT NULL,
    [Ref]       CHAR(15)    NOT NULL,
    [Open]      BIT         NOT NULL,
    [Closed]    BIT         NOT NULL,
    [IsEmpty]   BIT         NOT NULL
)
ON psBigTable([Partition])
GO
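With the table in place, SQL Server’s $PARTITION function can confirm that rows are landing where expected (an illustrative check against the partition function defined above):

```sql
--Show the rowcount per partition for the partitioned table
SELECT $PARTITION.pfBigTable([Partition]) AS PartitionNumber,
       COUNT(*) AS [Rows]
FROM dbo.BigTable
GROUP BY $PARTITION.pfBigTable([Partition])
ORDER BY PartitionNumber
```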

Summary

That’s it for part one. The result is a partitioned table spread across 30 partitions, with a simple hash function in place to ensure that data is redirected to the appropriate partition based on the monotonically increasing [Day] column. This ensures that the buckets get ‘recycled’ every 30 days.

In the next post I’ll discuss partition aligned indexes, and an elegant way to do massive data deletes using partition switching.

1 comment:

Madhan Kumar said...

Hi Ben,

Great Article. I have 1 doubt. How the data will get deleted/cleaned say on partition 1 when the data insertion cycle restarts inserting again on partition 1 after a month?

Thanks
Madhan