Temporal Tables are an important and much-anticipated feature of the upcoming SQL Server 2016, and they are definitely worth the series of blog posts and articles that I and other bloggers have written recently. One of the aspects covered on this site in recent weeks is performance. This time I would like to go even further by enlarging the dataset 10x and providing brief execution results.
Changes in benchmarking
The benchmarks are based on the approach used in previous posts, but with some key differences:
- Number of rows in dbo.Product increased from 1 000 000 to 10 000 000
- DataFiller column datatype narrowed from CHAR(1000) to CHAR(100)
- DML Insert/Update batch size increased from 100 000 to 1 000 000 rows. Therefore, the history table holds the rows of two UPDATE batches and two DELETE batches
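The setup described above can be sketched as the following DDL. This is a minimal reconstruction, not the exact benchmark script: only the DataFiller column and table names come from the text, while the key column and period column names are assumptions.

```sql
-- Minimal sketch of the benchmark table as a system-versioned temporal table.
-- ProductID, ValidFrom and ValidTo are assumed names; DataFiller is from the post.
CREATE TABLE dbo.Product
(
    ProductID  INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    DataFiller CHAR(100) NOT NULL,  -- narrowed from CHAR(1000) in this run
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));
```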
The queries remained the same as in the previous benchmarks. They are also available on GitHub for reproducing the tests.
|Scenario|Current table|History table|Total|
|---|---|---|---|
|Raw data (uncompressed rowstore)|2557|1199|3756|
|Scenario 1: rowstore objects (default)|2556|453|3009|
|Scenario 2: rowstore current and columnstore history|2556|100|2656|
|Scenario 3: columnstore objects|223|98|321|
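For readers who want to try Scenario 2, here is a hedged sketch of converting the history table to a clustered columnstore index. The index name is an assumption (it must match the existing clustered index on the history table for DROP_EXISTING), and system versioning has to be switched off for the rebuild.

```sql
-- Sketch for Scenario 2: rowstore current table, columnstore history table.
-- Versioning must be off while the history table's index is rebuilt.
ALTER TABLE dbo.Product SET (SYSTEM_VERSIONING = OFF);

-- Replace the rowstore clustered index with a clustered columnstore index.
-- ix_ProductHistory is an assumed name; it must match the existing index.
CREATE CLUSTERED COLUMNSTORE INDEX ix_ProductHistory
    ON dbo.ProductHistory
    WITH (DROP_EXISTING = ON);

ALTER TABLE dbo.Product
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));
```

The large compression gain in the table above (history shrinking from 453 to 100) is what this rebuild buys: history rows are append-mostly and highly repetitive, which suits columnstore compression well.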