The repro shows a situation where a stored procedure with a #temp table first gets a plan compiled for the case where the #temp table has 10 rows. It is then invoked a second time with a #temp table cardinality of 990,000.
Despite the massive increase in row count, there is no optimality-based recompile: the second invocation reuses the nested loops plan from the first rather than the merge join it would choose if recompiled. If the primary key is removed from the #temp table definition, I do see statistics-based recompiles.
Reversing the order of the procedure calls, so it is first run with a cardinality of 990,000 and then with a cardinality of 10, does lead to a statistics-based recompile in both the heap and the clustered index cases.
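
A minimal sketch of the kind of repro described above (all object names and the exact join query are my assumptions, not the original script; only the shape matters: a #temp table with a primary key, populated with a parameter-dependent row count, then joined so the optimal plan flips between nested loops and merge join):

```sql
-- Hypothetical repro sketch; names are illustrative.
CREATE TABLE dbo.Big (id INT PRIMARY KEY);
INSERT INTO dbo.Big (id)
SELECT TOP (990000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS c1 CROSS JOIN sys.all_columns AS c2;
GO

CREATE OR ALTER PROCEDURE dbo.Repro @rows INT
AS
BEGIN
    -- #temp declared with a PRIMARY KEY; per the observation above,
    -- removing it makes statistics-based recompiles appear.
    CREATE TABLE #temp (id INT PRIMARY KEY);

    INSERT INTO #temp (id)
    SELECT TOP (@rows) id FROM dbo.Big;

    -- A join whose optimal shape depends on #temp cardinality:
    -- nested loops at 10 rows, merge join at 990,000.
    SELECT COUNT(*)
    FROM #temp AS t
    JOIN dbo.Big AS b ON b.id = t.id;
END;
GO

EXEC dbo.Repro @rows = 10;      -- compiles a nested loops plan for 10 rows
EXEC dbo.Repro @rows = 990000;  -- reuses that plan; no optimality-based recompile
```

With an actual-plans capture (or `sys.dm_exec_query_stats`) one would check whether the second call triggered a recompile or reused the 10-row plan.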