Seems like an interesting way to solve the issue. I'm just really curious why you even went to the effort of implementing it. Firstly, there are many known solutions, which often work much better (as you already mentioned in this article). Secondly, given the existence of Hekaton, I would have expected a massively more elegant solution. Not that many people hit this issue, in my experience, so there doesn't seem to be a real urge to throw together a solution.

Would it not be worth investing the time in something like a Hekaton merge table? Imagine a lock-less, latch-less insert system (oh yeah, it exists). Then every 512 inserts (number pulled out of thin air) you merge those into the main table. Once the merge completes, you drop that merge table; in the meantime you've already created a new one to take fresh inserts. The solution is really quite simple and exists in other database engines (look at WiredTiger, which MongoDB bought — I really wish a better company had bought it). Merge tables aren't new or exciting technology. They do, however, work, and have done for a long time.

Oh, would it also be possible to stop half the data in non-leaf levels moving to new pages during page splits on an identity index, please? I don't see why all non-leaf-level pages have to sit at 50% full until the next index rebuild — especially because I shouldn't ever need to rebuild an identity index. Given that identity keys cover most of my clustered indexes, that would make quite a difference to maintenance windows.

Cool that you're doing things to make SQL Server better. I would just prefer it to be a bit longer-term and more solution-oriented rather than sticking-plaster-oriented.
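To make the merge-table idea concrete, here is a minimal toy sketch of the pattern in Python. This is purely illustrative — the class name, the dict-based "tables", and the threading lock are all my own inventions; a real Hekaton-style implementation would use lock-free, latch-free in-memory structures, not a mutex. The point is only the shape of the algorithm: inserts land in a small staging (merge) table, and once it reaches the batch threshold it is swapped out and folded into the main table while a fresh staging table takes new writes.

```python
import threading

class MergeTableBuffer:
    """Toy model of the delta/merge-table pattern described above.

    Inserts go into a small staging dict ("merge table"); once it holds
    `threshold` rows it is atomically swapped for an empty one, and the
    retired batch is merged into the main dict ("main table").
    """

    def __init__(self, threshold=512):  # 512 = the arbitrary batch size above
        self.threshold = threshold
        self.main = {}    # stands in for the main, disk-based table
        self.delta = {}   # stands in for the in-memory merge table
        # Hekaton would be lock- and latch-free; a plain lock keeps
        # this sketch simple and honest about what it is not.
        self.lock = threading.Lock()

    def insert(self, key, row):
        with self.lock:
            self.delta[key] = row
            if len(self.delta) < self.threshold:
                return
            # Swap in a fresh merge table; retire the full one.
            full_delta, self.delta = self.delta, {}
        # Merge the retired batch outside the lock, so new inserts
        # proceed into the fresh delta concurrently. (A real engine
        # would coordinate concurrent merges; this toy does not.)
        self.main.update(full_delta)

    def lookup(self, key):
        # Newest data is in the merge table; fall back to the main table.
        with self.lock:
            if key in self.delta:
                return self.delta[key]
        return self.main.get(key)
```

Usage is what you would expect: call `insert` for each row, and merges happen automatically every `threshold` inserts; `lookup` checks the staging table first so readers never miss unmerged rows.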