"If the clustered index is not created with the UNIQUE property, the Database Engine automatically adds a
column to the table. When it is required, the Database Engine automatically adds a uniqueifier value to a row to make each key unique.
This column and its values are used internally and cannot be seen or accessed by users
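As a minimal sketch of how this situation arises (the table and index names here are hypothetical, for illustration only), a non-unique clustered index causes the engine to attach a hidden uniqueifier to every row that shares an existing key value:

```sql
-- Hypothetical table clustered on a non-unique key.
CREATE TABLE dbo.Orders
(
    CustomerId INT      NOT NULL,
    OrderDate  DATETIME NOT NULL,
    Amount     MONEY    NOT NULL
);

-- Non-unique clustered index: rows sharing a CustomerId are
-- distinguished internally by a system-generated uniqueifier.
CREATE CLUSTERED INDEX CIX_Orders_CustomerId
    ON dbo.Orders (CustomerId);

INSERT INTO dbo.Orders (CustomerId, OrderDate, Amount)
VALUES (1, GETDATE(), 10.00),
       (1, GETDATE(), 20.00),  -- duplicate key: uniqueifier assigned
       (1, GETDATE(), 30.00);  -- duplicate key: uniqueifier assigned
```

Each duplicate group (here, all rows with CustomerId = 1) consumes uniqueifier values independently, which is why the limit applies per key value rather than per table.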
While it's unlikely that you will face an issue related to uniqueifiers, we have seen rare cases where a customer reaches the uniqueifier limit of 2,147,483,648, generating error 666:
Msg 666, Level 16, State 2, Line 1
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID <PARTITIONID>. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.
The error message is clear about the actions you should take to resolve the issue.
Dropping and recreating the index may resolve the problem if you don't have 2,147,483,648 records with the same key value. Recreating the index allows the uniqueifiers to be reset, giving you some time to review the table design before reaching the limit again.
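Continuing with the hypothetical dbo.Orders table above, the drop-and-recreate step could look like the following sketch. Plan for downtime or blocking while the clustered index is rebuilt:

```sql
-- Drop the non-unique clustered index and recreate it; the
-- uniqueifiers for each duplicate group are reassigned from scratch.
DROP INDEX CIX_Orders_CustomerId ON dbo.Orders;

CREATE CLUSTERED INDEX CIX_Orders_CustomerId
    ON dbo.Orders (CustomerId);
```

Note that this is a stopgap: if the insert pattern that exhausted the uniqueifiers continues, the same key value will eventually hit the limit again.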
As of February 2018, the design goal for the storage engine is not to reset uniqueifiers during rebuilds. Ideally, then, rebuilding the index would not reset the uniqueifiers, and the error would continue to occur when inserting new data with a key value whose uniqueifiers were exhausted. However, the current engine behavior differs in one specific case: the statement
ALTER INDEX ALL ON <TABLE> REBUILD WITH (ONLINE = ON)
does reset the uniqueifiers (in all versions from SQL Server 2005 through SQL Server 2017).
This behavior is not documented and can change in future versions, so our recommendation is to review the table design rather than rely on it.
Related to table design, and based on the second recommendation in the error message, our question is: "Is it a good design to choose a non-unique clustering key that could have several million or billion duplicate key values?"
The short answer is no. Of course, we know that when it comes to table design there is usually no right or wrong answer, but the majority of cases should not rely heavily on uniqueifiers.
While most cases will likely have at most hundreds or thousands of duplicated records for a single value, and it is straightforward to resolve error 666 when you have far fewer duplicate rows than the uniqueifier limit, executing the required steps can still cause downtime. Consequently, the best course of action is to review tables that rely on uniqueifiers and proactively work to improve their design.
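One possible way to approach this review, again using the hypothetical dbo.Orders table and a duplicate-count threshold chosen for illustration, is to measure how many rows share each clustering key value and then introduce a surrogate column so the clustering key can be made unique:

```sql
-- 1) Measure reliance on uniqueifiers: rows per clustering key value.
--    The threshold (1,000,000) is an arbitrary example.
SELECT CustomerId, COUNT(*) AS DuplicateCount
FROM dbo.Orders
GROUP BY CustomerId
HAVING COUNT(*) > 1000000
ORDER BY DuplicateCount DESC;

-- 2) One possible redesign: add an identity column as a tiebreaker
--    and recreate the clustered index as unique, so the engine no
--    longer needs to generate uniqueifiers.
ALTER TABLE dbo.Orders
    ADD OrderId BIGINT IDENTITY(1, 1) NOT NULL;

CREATE UNIQUE CLUSTERED INDEX CIX_Orders_CustomerId
    ON dbo.Orders (CustomerId, OrderId)
    WITH (DROP_EXISTING = ON);
```

Keeping CustomerId as the leading key column preserves the original range-scan behavior, while the identity column guarantees uniqueness instead of the hidden uniqueifier.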
We hope this helps you better understand uniqueifiers and encourages you to review your table design in order to avoid error 666 in your production environment.