The SQL Server query optimizer often chooses a serial plan (with a significant estimated cost) where a parallel plan would execute much faster. Frequently, the desired parallel plan is costed only very slightly higher than the serial version.
The root cause can usually be traced back to optimizer model limitations, other simplifying assumptions made by the costing component, unavoidable cardinality estimation errors in moderately complex plans,
or the use of operations that do not allow us to give the optimizer good information about the cardinality and distribution of output values (e.g. CLR functions).
To be clear, the motivation for this suggestion is for cases where all reasonable steps have been taken to ensure the information available to the optimizer is accurate
(e.g. computed columns, multi-column statistics, filtered indexes), and yet costing still estimates a high-cost serial plan option to be cheaper than the parallel alternative.
Examples are often decision-support or data-warehouse queries where SQL Server is running on fast storage, or where the working set of the database is generally expected to be in the buffer pool.
These types of scenarios often result in currently unavoidable out-of-model conditions.
Currently, these cases require us to rewrite the query using arcane tricks to artificially raise estimated costs and/or cardinality estimates.
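To illustrate the kind of rewrite involved, here is a sketch of one such trick (table, column, and variable names are hypothetical; this is not a recommendation). A second UNION ALL branch is guarded by a variable predicate the optimizer cannot evaluate at compile time, so the branch contributes estimated rows and cost to the plan while producing no rows at run time:

```sql
-- Sketch only: inflate estimated cost without changing query results.
DECLARE @never bit = 0;

SELECT F.col1, F.col2
FROM dbo.FactTable AS F
WHERE F.filter_col = 42

UNION ALL

SELECT B.col1, B.col2
FROM dbo.BigTable AS B      -- large table: adds estimated rows and cost
WHERE @never = 1;           -- false at run time, unknowable at compile time
```

Rewrites like this obscure the query's intent, are fragile across statistics updates and version upgrades, and must be documented and maintained forever, which is precisely why they are unacceptable as a long-term answer.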
An alternative, though equally unacceptable, current workaround is to use undocumented DBCC commands to artificially raise the costing multiplier for CPU costs, capture the parallel plan produced, and specify that plan in a plan guide.
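The DBCC workaround looks roughly like the following sketch. The command is undocumented and unsupported, the syntax is as described in community write-ups and may vary by version, and the plan guide name and query text are placeholders:

```sql
-- Unsupported sketch; never run against a production instance.
DBCC SETCPUWEIGHT (1000);   -- multiply estimated CPU costs by 1000

-- Compile and run the query now; the inflated CPU costs favour the
-- parallel alternative. Capture the resulting showplan XML (e.g. via
-- sys.dm_exec_query_plan).

DBCC SETCPUWEIGHT (1);      -- restore the default multiplier

-- Pin the captured plan with a plan guide (documented procedure):
EXEC sys.sp_create_plan_guide
    @name            = N'ForceParallelPlan',   -- hypothetical name
    @stmt            = N'SELECT ... ;',        -- exact query text
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (USE PLAN N''<ShowPlanXML ...>'')';
```

Aside from relying on undocumented behaviour, this approach freezes a single plan shape, so it silently loses any benefit from future statistics, schema, or optimizer improvements.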
It is appreciated that future versions of SQL Server may seek to ameliorate these issues through model improvements,
but it seems there will always be cases where the optimizer makes the wrong choice, even if the reasons are understandable.