PowerPivot Out of Memory Exception
Closed as By Design
12/21/2009 7:39:26 PM
User(s) can reproduce this bug
When importing data into PowerPivot I often get the following error:
Memory error: Allocation failure : Not enough storage is available to process this command. .
The operation has been cancelled.
Memory is not an issue on the machine I am using, as I have 8 GB available. How far I get into the dataset import before this happens varies, but I have never been able to get past 10,000,000 rows. Sometimes it happens as early as 1,000,000.
SQL Server 2008 R2 November CTP – “Gemini” add-in
Windows 7 Enterprise
Steps to Reproduce
Open up PowerPivot and select a large table. In my case I have a table with 150,000,000 rows and 190 columns. All I do is try to import it.
Out of memory exception
on 1/8/2010 at 11:10 AM
After talking to Denny, we understand this was resolved after moving to 64-bit Excel and PowerPivot. We are closing this defect but have opened a separate one to make sure the 32-bit limit is properly documented. Thanks again for your feedback.
on 1/7/2010 at 2:39 PM
Thank you for providing this feedback. We are currently investigating the reported functionality and will provide updates as we have them.
Thank you for your assistance,
The Analysis Services Team
on 12/25/2009 at 8:49 AM
First of all, if you're using a 32-bit version of Excel, the maximum amount of memory you can use is between 1 and 2 GB. Moreover, during processing the amount of memory required is larger than the final size needed to store the data. To give an example, I can exhaust memory (on 32-bit Excel) using a 6-million-row table that includes an attribute with a unique value for each row. By simply removing that attribute, I'm able to process that table (with not so many columns).
In my experience it is not easy to predict the amount of memory required to process a dataset. It depends on:
- the number of columns
- the cardinality of columns (the number of unique values in a column)
- the relationships between attributes
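The effect of column cardinality can be sketched with a toy cost model. This is not PowerPivot's actual storage engine; it is a rough illustration of dictionary encoding (each distinct value stored once, plus a small integer index per row), and all the sizing constants below are assumptions made up for the example:

```python
import math

def estimated_column_bytes(rows, cardinality, avg_value_bytes=8):
    """Rough, illustrative size of one dictionary-encoded column, in bytes.

    Assumes each distinct value costs avg_value_bytes in the dictionary
    and each row stores an index of ceil(log2(cardinality)) bits.
    These numbers are hypothetical, not PowerPivot internals.
    """
    cardinality = max(cardinality, 1)
    dictionary = cardinality * avg_value_bytes
    bits_per_row = math.ceil(math.log2(cardinality)) if cardinality > 1 else 1
    indexes = rows * bits_per_row / 8
    return dictionary + indexes

# A 6,000,000-row column with 10 distinct values stays small...
low = estimated_column_bytes(6_000_000, 10)
# ...while a unique value per row makes the dictionary as big as the data.
high = estimated_column_bytes(6_000_000, 6_000_000)
```

Under this toy model the unique-per-row column costs roughly twenty times the low-cardinality one, which matches the behavior I described above: dropping one high-cardinality attribute can make an otherwise unprocessable table fit.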
If you have millions of rows, instead of a single table with a lot of different columns it is better to partially normalize the data, loading related attributes into separate tables. A Kimball star schema is usually a good balance of denormalization.
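The kind of partial normalization I mean can be sketched like this: repeated descriptive attributes move into a dimension table keyed by a surrogate integer, and the fact rows keep only that key. The function and column names here are made up for illustration:

```python
def normalize(rows, dim_columns):
    """Split `rows` (a list of dicts) into fact rows plus one dimension table.

    Each distinct combination of `dim_columns` values is stored once and
    assigned a surrogate key; fact rows reference it via "dim_key".
    """
    dim_key_of = {}  # distinct dimension tuple -> surrogate key
    facts = []
    for row in rows:
        dim_tuple = tuple(row[c] for c in dim_columns)
        key = dim_key_of.setdefault(dim_tuple, len(dim_key_of))
        fact = {c: v for c, v in row.items() if c not in dim_columns}
        fact["dim_key"] = key
        facts.append(fact)
    dimension = {key: dict(zip(dim_columns, t)) for t, key in dim_key_of.items()}
    return facts, dimension

rows = [
    {"product": "A", "category": "toys", "qty": 1},
    {"product": "A", "category": "toys", "qty": 5},
    {"product": "B", "category": "games", "qty": 2},
]
facts, dim = normalize(rows, ["product", "category"])
# The repeated "A"/"toys" values are now stored once in `dim`;
# each fact row carries only a small integer key.
```

In PowerPivot you would do this split at the source (e.g. in your SQL views) and import the fact and dimension tables separately, related by the key column.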
However, I'd like to get feedback from Microsoft about this.