As you already noticed, 20GB is not enough to run that query, but what do you expect when you join a 120 million row fact table to several dimensions without any WHERE condition?
The query needs 112GB of spool, but not all at the same point in time: whenever a step shows "(Last Use)", that specific spool is released after the step. Of course, all those GBs are just an estimate by the optimizer based on the available statistics. And the EXPLAIN shows only low or even no confidence, so there might be some missing stats, leading to an estimate of 300 million rows vs. 120 million actual rows.
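To see which stats the optimizer would like to have, you can ask it directly. A minimal sketch (fact_table and join_col are just placeholders; the actual recommendations show up at the end of the EXPLAIN output):

   -- Append the optimizer's recommended statistics to every EXPLAIN in this session
   DIAGNOSTIC HELPSTATS ON FOR SESSION;

   -- Re-run the EXPLAIN of your query; the recommendations appear after the plan steps
   EXPLAIN SELECT ...;

   -- Then collect the suggested stats, e.g. on the join columns
   COLLECT STATISTICS ON fact_table COLUMN (join_col);

With better stats the estimates (and the confidence levels) in the EXPLAIN should get closer to the actual row counts.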
Dieter