SQL Server System Tables' Data should be more durable - by Daniel Adeniji

Status : 

  Won't Fix
  Due to several factors the product team decided to focus its efforts on other items.
  A more detailed explanation for the resolution of this particular item may have been provided in the comments section.

ID: 773815
Status: Closed
Type: Suggestion
Repros: 0
Opened: 12/7/2012 11:51:13 AM
Access Restriction: Public


Errors we are running into:

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x212ae472; actual: 0x6fe2546c). It occurred during a read of page (3:1747008) in database ID 5 at offset 0x00000355080000 in file 'D:\SQLDATA\DBSales.ndf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
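The error message itself points at the usual triage path. A minimal sketch of that triage, assuming the affected database is named DBSales (the name is only inferred from the file path in the error above):

```sql
-- Check which pages SQL Server has already flagged as suspect;
-- msdb.dbo.suspect_pages is maintained automatically by the engine.
SELECT database_id, file_id, page_id,
       event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Run a full consistency check against the affected database.
-- NO_INFOMSGS limits the output to actual errors.
DBCC CHECKDB (N'DBSales') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

If CHECKDB reports errors, restoring the damaged pages or files from a clean backup is generally preferable to the repair options.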

Error Description:

One of the strong design principles employed by Hadoop/HDFS is the concept of a "replication factor".  By default it is set to 3:


For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.

In Microsoft SQL Server, similar design principles are already in place:

a) Database Clustering (Shared Disk Sub-System)
b) Database Mirroring (Unique Disk Sub-Systems)
c) Replication  (Unique Disk Sub-Systems)
d) Log Shipping (Unique Disk Sub-Systems)
e) DB Backup

Database Mirroring goes furthest: it is able to repair corrupt pages automatically by pulling clean copies from the partner node.

On mirrored systems there are strong impediments that toughen the task of using "DBCC CHECKDB" to repair the passive node of a mirrored pair.  One cannot place the partner database in single-user mode, nor access the partner database directly.
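For reference, the automatic page repairs that mirroring performs can be inspected through a dynamic management view (available from SQL Server 2008 R2 onward); this query is a sketch added for illustration, not part of the original report:

```sql
-- List pages that database mirroring has attempted to repair
-- automatically using the partner's copy of the data.
SELECT database_id, file_id, page_id,
       error_type, page_status, modification_time
FROM sys.dm_db_mirroring_auto_page_repair;
```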

Posted by Daniel Adeniji on 5/16/2013 at 11:51 AM
Thanks for looking into this.

I was thinking more along the lines of the Hadoop NameNode and Secondary NameNode. That is, the contents of the system tables should be protected at all costs.

That can include secondary tracking in the same database data files or extending the use of the "Resource" database.

You're right that having the system tables replicated would help, but I think that creates its own problems:

a) Arguably, bad data from the primary can get replicated over
b) It requires more invasive changes to the original database mirroring architecture
c) It will only serve systems that are replicated

The more the change can occur outside of database mirroring and similar features, the cleaner its design will be. And, hopefully, the various use cases can be better supported and tested.

Again, anything the system can do behind the scenes to ensure recoverability and make it nearly transparent (like what DB Mirroring currently offers for free) is a BIG win.


Posted by Microsoft on 5/16/2013 at 10:38 AM
Thanks for your feedback.

If I understand correctly, you would like the SQL Server high availability solutions (e.g. Availability Groups) to be able to also replicate system tables, correct?