PowerPivot Out of Memory Exception - by Guy Babbitt

Status: By Design

The product team believes this item works according to its intended design. A more detailed explanation for the resolution of this particular item may have been provided in the comments section.


ID: 521608
Status: Closed
Type: Bug
Repros: 2
Opened: 12/21/2009 7:39:26 PM
Access Restriction: Public

Description

When importing data into PowerPivot I often get the following error:

Memory error: Allocation failure : Not enough storage is available to process this command. .

The operation has been cancelled.

Memory is not an issue on the machine I am using, as I have 8 GB available. How far I get into the dataset import before this happens varies; I have never been able to get past 10,000,000 rows before it does, though, and sometimes it happens as early as 1,000,000.
Posted by Julie [MSFT] on 1/8/2010 at 11:10 AM
After talking to Denny, we understand this was resolved after moving to the 64-bit versions of Excel and PowerPivot. We are closing this defect but have opened a separate one to make sure the 32-bit limit is properly documented. Thanks again for your feedback.
Posted by Kathy [MSFT] on 1/7/2010 at 2:39 PM
Thank you for providing this feedback. We are currently investigating the reported functionality and will provide updates as we have them.

Thank you for your assistance,
The Analysis Services Team
Posted by Marco Russo on 12/25/2009 at 8:49 AM
First of all, if you're using a 32-bit version of Excel, the maximum amount of memory you can use is between 1 GB and 2 GB. Moreover, during processing the amount of memory required is larger than the final amount needed to store the data. For example, I can exhaust memory (on 32-bit Excel) with a 6-million-row table that includes an attribute with a unique value for each row. By simply removing that attribute, I am able to process that table (which does not have many columns).
In my experience it is not easy to predict the amount of memory required to process a dataset; as the rough sketch after this list illustrates, it depends on:
- the number of columns
- the cardinality of the columns (the number of unique values in a column)
- the relationships between attributes
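
A rough way to see why a unique-per-row attribute hurts so much: model each column as a dictionary of its distinct values plus a small integer index per row. This is only a simplified stand-in for the actual PowerPivot storage engine, and the row counts and value sizes below are invented, so treat the numbers as order-of-magnitude only.

```python
import math

def estimate_column_bytes(row_count, distinct_count, avg_value_bytes):
    """Rough dictionary-encoding estimate: one dictionary entry per distinct
    value, plus one integer index per row that is just wide enough to address
    the dictionary. A simplified model, not the real storage format."""
    dictionary_bytes = distinct_count * avg_value_bytes
    bits_per_row = max(1, math.ceil(math.log2(max(distinct_count, 2))))
    index_bytes = row_count * bits_per_row / 8
    return dictionary_bytes + index_bytes

rows = 6_000_000

# Low-cardinality column: the dictionary is tiny, the per-row index dominates.
print(round(estimate_column_bytes(rows, 200, 20) / 2**20), "MB")

# Unique-per-row text column: every value must be stored once in the
# dictionary, plus a wide index for every row.
print(round(estimate_column_bytes(rows, rows, 40) / 2**20), "MB")
```

With 6 million rows, the low-cardinality column comes out around 6 MB, while the unique text column is already a couple hundred megabytes before any processing overhead; a handful of such columns can easily exhaust a 1-2 GB 32-bit process during import.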

Instead of having a single table with a lot of different columns, if you have millions of rows it is better to partially normalize the data and load related attributes into separate tables. A Kimball star schema is usually a good balance of denormalization.
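
As a minimal sketch of that kind of pre-import normalization, assuming the raw data is available as a CSV outside Excel and that pandas is an acceptable preprocessing step (the file and column names below are invented for the example):

```python
import pandas as pd

# Hypothetical wide extract with repeated descriptive text on every row.
wide = pd.read_csv("sales_extract.csv")

# Pull the descriptive product attributes into a dimension table,
# keeping one row per distinct combination, and assign a surrogate key.
product_dim = (
    wide[["ProductCode", "ProductName", "Category"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
product_dim["ProductKey"] = product_dim.index + 1

# The fact table keeps only the surrogate key plus the measures, so the
# high-cardinality text columns are not repeated on every row.
fact = wide.merge(product_dim, on=["ProductCode", "ProductName", "Category"])[
    ["ProductKey", "SaleDate", "Quantity", "Amount"]
]

product_dim.to_csv("product_dim.csv", index=False)
fact.to_csv("fact_sales.csv", index=False)
```

The two resulting files can then be imported as separate PowerPivot tables and related on ProductKey, so each product's descriptive text is stored once instead of once per sale.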

However, I'd like to get feedback from Microsoft about this.

Marco