The cost of a disk seek cannot be assumed to be roughly constant across servers. SSDs, SANs, and fully in-memory databases deliver far higher IO rates (IOPS) than servers with only a few spinning drives. We therefore need a per-instance or per-database setting that tells the query optimizer how many disk seeks per second and how many MB per second this server can sustain. Such a setting currently exists only as an undocumented per-session option, which is not usable in production.
This setting could surface as the following SQL:
alter database xyz set sequential_io_rate = 120 --server can read 120mb/s sequentially
alter database xyz set random_io_rate = 10000 --ssd can read 10k small blocks per second
This feature could also be used to tell the query optimizer that the database is expected to be fully in memory at all times:
alter database xyz set sequential_io_rate = 1000 --1gb/s
alter database xyz set random_io_rate = 100000 --ram can probably provide at least 100k "seeks" per second including overhead
The value of this feature is that the optimizer can choose the optimal plan for the actual hardware instead of assuming some kind of "canonical server hardware". On SSDs, plans with many random seeks become far more viable than they are on spinning disks.
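To illustrate the point, here is a minimal cost-model sketch showing how an optimizer could use the two rates to pick between a full sequential scan and an index-driven random-seek plan. The function names, table size, and row counts are illustrative assumptions, not actual engine internals:

```python
# Hypothetical cost model: compare a full scan against an index seek plan
# using the per-database sequential and random IO rates proposed above.

def scan_cost_seconds(table_mb, sequential_io_rate_mb_s):
    """Full table scan: read every page sequentially."""
    return table_mb / sequential_io_rate_mb_s

def seek_cost_seconds(matching_rows, random_io_rate_ops_s):
    """Index plan: roughly one random read per matching row."""
    return matching_rows / random_io_rate_ops_s

table_mb = 10_000        # assumed 10 GB table
matching_rows = 50_000   # assumed rows the index plan would fetch

# Spinning-disk profile: 120 MB/s sequential, ~150 random IOs per second
hdd_scan = scan_cost_seconds(table_mb, 120)       # ~83 s
hdd_seek = seek_cost_seconds(matching_rows, 150)  # ~333 s -> scan wins

# SSD profile: 250 MB/s sequential, 10,000 random IOs per second
ssd_scan = scan_cost_seconds(table_mb, 250)           # 40 s
ssd_seek = seek_cost_seconds(matching_rows, 10_000)   # 5 s -> seek plan wins

print(hdd_scan < hdd_seek)  # on the HDD the scan is cheaper
print(ssd_seek < ssd_scan)  # on the SSD the seek plan is cheaper
```

The same query flips from a scan plan to a seek plan purely because the random IO rate changed, which is exactly the decision the optimizer cannot make correctly today without knowing the hardware.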