
"The query processor ran out of stack space during query optimization" during simple DELETE by FBuchan


Status: Closed as By Design
Votes: 27 up, 1 down
Type: Bug
ID: 172561
Opened: 7/28/2006 6:41:35 AM
Access Restriction: Public
Duplicates: 264460
Workarounds: 0
Users who can reproduce this bug: 5

Description

Attempting to execute a DELETE against a row in a table causes the "The query processor ran out of stack space during query optimization" error to be returned. This appears to occur with any table that has a significant number of integrity impositions, and it is irrelevant whether the subordinate tables contain any data. It is also irrelevant whether those integrity links cascade, set NULL, or do neither. Deleting a single row based on its unique primary key consistently returns the error. The same database in SQL Server 2000 does not have this issue.
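The failing pattern described above can be sketched as follows. This is a hypothetical repro, not the reporter's actual schema: the table names `dbo.Parent` and `dbo.ChildN` and the count of 300 child tables are assumptions chosen to exceed the documented guideline discussed later in the thread.

```sql
-- Hypothetical repro sketch: one parent table referenced by many child tables.
CREATE TABLE dbo.Parent (Id INT NOT NULL PRIMARY KEY);

-- Generate ~300 child tables, each with a FOREIGN KEY back to dbo.Parent.
DECLARE @i INT;
DECLARE @sql NVARCHAR(MAX);
SET @i = 1;
WHILE @i <= 300
BEGIN
    SET @sql = N'CREATE TABLE dbo.Child' + CAST(@i AS NVARCHAR(10))
             + N' (Id INT NOT NULL PRIMARY KEY,'
             + N'  ParentId INT NOT NULL REFERENCES dbo.Parent (Id));';
    EXEC sys.sp_executesql @sql;
    SET @i = @i + 1;
END;

INSERT INTO dbo.Parent (Id) VALUES (1);

-- On SQL Server 2005, a single-row DELETE against the heavily referenced
-- table can fail with "The query processor ran out of stack space during
-- query optimization", because the plan must check every referencing FK.
DELETE FROM dbo.Parent WHERE Id = 1;
```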

What is doubly frustrating about this is that the database, while old and somewhat crusty around the edges, is fairly simple overall, not leaving much room for model changes without essentially starting to drop internal integrity checks. Since these checks work fine for insertion and updates, and the data itself is made much more consistent by the checks, that option isn't of much value.
Details
Posted by Tubasi on 7/12/2011 at 10:45 AM
I'm sure this is a BUG and the Microsoft developers should get on top of it. I hope they can fix it.
Posted by Jason Thorn on 2/20/2008 at 1:37 PM
I don't understand why this issue is closed (by design) ... there are reports of this NOT being a problem under SQL 2000. Is someone trying to tell me that there are now more restrictive rules in SQL 2005 regarding foreign keys than there used to be in SQL 2000? That would make upgrading kind of difficult.

That's exactly the situation I'm in. I have a database that works fine under SQL 2000; however, when the IT dept of one of my users moved it to SQL 2005, they could no longer delete certain records. I refuse to accept that I must change the schema of a database that works fine in SQL 2000, eliminating some foreign keys (and with them some domain integrity) just so SQL 2005 will work. I would rather remain on SQL 2000!!!

Stating that the limitation is by design is bogus.
Posted by Microsoft on 7/31/2006 at 11:16 AM
The SQL Server Query Processor has some internal limitations that affect the compilation of query plans that modify or delete the key of a primary table referenced by a large number of foreign key constraints. These limitations are explained in the "FOREIGN KEY Constraints" topic of Books Online:

"Number of FOREIGN KEY Constraints in a Table
SQL Server does not have a predefined limit on either the number of FOREIGN KEY constraints a table can contain (which reference other tables), or the number of FOREIGN KEY constraints owned by other tables that reference a specific table. Nevertheless, the actual number of FOREIGN KEY constraints is limited by your hardware configuration and by the design of your database and application. We recommend that a table contain no more than 253 FOREIGN KEY constraints, and that it be referenced by no more than 253 FOREIGN KEY constraints. Consider the cost of enforcing FOREIGN KEY constraints when you design your database and applications"

FBuchan clarified in an email conversation we had that this issue is indeed not a regression from SQL 2000; the schema with the large number of foreign key constraints is new. Hence, we agreed to resolve this bug as By Design, and to open another bug to track a suggestion to raise a warning when creating a very large number of foreign key constraints that all reference the same table.

Thanks
Posted by FBuchan on 7/28/2006 at 4:47 PM
I deserve a bit of a tap on the head for this one. The problem was related to the known limitation issue regarding 250 or so FK references on a table; known not only to the community but even to me.

The database in question is one we have been trying to drag kicking and screaming into the future, and part of the ongoing work introduced an opportunity to improve the row-audit stamps process being used. Previously, the system used a formatted field that indicated who created a row and when, and when it was last edited and by whom. In a refactor this was broken out for easier parsing. The audit stamps are a requirement of the system, but almost never accessed.

When examining the length of identifying string data it was decided, by me, that an insertion of the ID of the user making the change would be a better choice than duplicating the identifying information on every row. That in itself was fine, and the split process worked perfectly, but I signed off on a script that also introduced two (2) FK references to the user table for every audit-stamped table in the system (creator id and editor id). These were unnecessary, even for purposes of integrity, because the other requirement of the production system is that no row is ever deleted; rather, it is simply marked for deletion and removed from the active row set by the intervening procedural layer. In our development environment, of course, we often clean up data-sets while testing, and the row deletion error cropped up.
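Backing out a mistake like the one described above means dropping every foreign key that points at the user table. One way to do that in bulk is to generate the DROP statements from the catalog views. A sketch, assuming SQL Server 2005 or later; `dbo.Users` is a hypothetical name for the referenced user table, and the generated script should be reviewed before being executed:

```sql
-- Generate an ALTER TABLE ... DROP CONSTRAINT statement for every
-- FOREIGN KEY constraint that references the (hypothetical) dbo.Users
-- table, e.g. the unnecessary creator-id/editor-id audit references.
SELECT N'ALTER TABLE '
     + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
     + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
     + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';'
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.Users');
```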

So, this is not a bug so much as a limitation that slipped past my brain.

Having corrected the report, though, it might be a handy thing if a future evolution of the engine stored the guidelines for limitations, even if just to report them after a script that might have exceeded them. Had the final output of the script been a warning that the object was referenced in excess of the suggested limitation, it would have been sufficient to entirely avoid mistaking it as a bug.
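Until the engine emits such a warning itself, the check suggested above can be approximated as a post-deployment script. A minimal sketch, assuming SQL Server 2005 or later; the threshold of 253 comes from the Books Online guideline quoted earlier in this thread:

```sql
-- Report any table referenced by more FOREIGN KEY constraints than the
-- documented guideline of 253, as a manual stand-in for the warning
-- suggested in this report. A non-empty result set flags the risk.
SELECT OBJECT_SCHEMA_NAME(referenced_object_id) AS SchemaName,
       OBJECT_NAME(referenced_object_id)        AS TableName,
       COUNT(*)                                 AS ReferencingFKCount
FROM sys.foreign_keys
GROUP BY referenced_object_id
HAVING COUNT(*) > 253;
```

Running this at the end of a schema-change script would have produced exactly the kind of early warning the reporter asked for.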

At least I had an interesting day that reminded me I really shouldn't ever sign off on a big change on a dreary day. ;-)