Recently we worked on a SQL CE issue where bulk deletion of records from a table was taking longer than expected.
Initial troubleshooting suggested a problem with deletion tracking, but here is what further analysis revealed.
The table from which records were being deleted is the root of multiple multilevel referential constraints. When a record is deleted from the root table, the referential integrity constraint is evaluated for each table that directly refers to it. In other words, in each of these tables, the Storage Engine has to find all the records whose referring column values match those in the referenced table. This process then repeats recursively on those tables as well.
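To make this concrete, here is a minimal sketch of such a relationship graph. The table and column names (Orders, OrderItems, Shipments) are illustrative only, not from the actual case: Orders is the root, OrderItems refers to Orders, and Shipments refers to OrderItems, so a delete from Orders forces constraint checks two levels deep.

```sql
-- Hypothetical multilevel referential chain (names are illustrative).
CREATE TABLE Orders (
    OrderId INT PRIMARY KEY
);

CREATE TABLE OrderItems (
    ItemId  INT PRIMARY KEY,
    OrderId INT,
    CONSTRAINT FK_Items_Orders FOREIGN KEY (OrderId)
        REFERENCES Orders (OrderId)
);

CREATE TABLE Shipments (
    ShipmentId INT PRIMARY KEY,
    ItemId     INT,
    CONSTRAINT FK_Ship_Items FOREIGN KEY (ItemId)
        REFERENCES OrderItems (ItemId)
);
```

Deleting a row from Orders requires the engine to verify that no OrderItems row still refers to it; any check or cascade on OrderItems in turn involves Shipments.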
In order to locate related records, the Storage Engine tries to use indexes if they are present; otherwise it falls back to a full table scan. This is exactly what was happening in our case: appropriate indexes were missing on some of the tables, so the Storage Engine was forced to use table scans.
For example, if a referencing table ‘X’ has ‘n’ records and the delete affects ‘m’ records in the root table, then each deleted record triggers a full scan of all ‘n’ records, so roughly ‘m*n’ records were being read in total, which led to a very high number of page reads. In our case the database was also encrypted, so every page read incurred additional decryption cost, further compounding the issue.
By creating indexes on the referring columns in the referencing tables, the table scans are avoided, significantly reducing the number of pages to be read. And since far fewer pages were being read, the cost incurred by the encryption algorithm went down significantly as well.
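The fix itself is a one-line index per referencing table. Using the same hypothetical names as above (an OrderItems table whose OrderId column refers to the root Orders table, and a Shipments table whose ItemId column refers to OrderItems), the indexes would look like:

```sql
-- Indexes on the referring (foreign key) columns let the Storage
-- Engine seek to matching rows instead of scanning the whole table
-- when the referential constraint is checked.
CREATE INDEX IX_OrderItems_OrderId ON OrderItems (OrderId);
CREATE INDEX IX_Shipments_ItemId   ON Shipments (ItemId);
```

With these in place, locating the rows related to a deleted root record costs an index seek per deleted record rather than a full scan.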
For the best delete performance:
Ideally, indexes on the referring columns should be created throughout the relationship graph.
That said, one should carefully analyze the scenario before creating indexes: more indexes mean increased index maintenance, which might adversely affect inserts and updates.
Author : Lalitesh (MSFT), SQL Developer Engineer, Microsoft
Reviewed by : Syed(MSFT), SQL Developer Technical Lead, Microsoft