I think some "ideas" about the workflow are different when working with GPU-based applications.
The main idea of displacement is an overall low memory footprint while working with the actual scene, with the detail only added for the rendering process, since that will eat up your performance anyway.
However, if 1 mil polys need around 100 MB of memory, that already buys you quite a lot of detail to put where you need it. Sure, it's easy to populate the scene with too many polys using archviz stuff (trees, grass or plants), but this is a rendering app and not sculpting software that needs a quick response on deformations.
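To put rough numbers on that claim (just a sketch; the 32-byte vertex layout and unindexed storage are my assumptions, every renderer lays this out differently):

```cpp
#include <cstdio>
#include <cstddef>

int main() {
    // Assumed layout: position + normal + UV as 4-byte floats.
    const std::size_t bytesPerVertex = (3 + 3 + 2) * 4;    // 32 bytes
    const std::size_t bytesPerTri    = 3 * bytesPerVertex; // unindexed soup: 96 bytes
    const std::size_t tris           = 1000000;

    std::printf("unindexed: %.1f MB\n",
                double(tris * bytesPerTri) / (1024.0 * 1024.0)); // ~91.6 MB

    // Indexed variant: a closed mesh has roughly tris / 2 unique vertices.
    const std::size_t indexedBytes = (tris / 2) * bytesPerVertex // vertex buffer
                                   + tris * 3 * 4;               // 32-bit index buffer
    std::printf("indexed:   %.1f MB\n",
                double(indexedBytes) / (1024.0 * 1024.0));       // ~26.7 MB
    return 0;
}
```

So the 100 MB figure roughly matches plain unindexed triangle soup; with indexing you'd land well under it.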
I always have to work with what the software is capable of, and fortunately that is waaaaay beyond the conditions of 10 years ago ("oh look, it got light support in OpenGL.."). So instead of using the normal map I made in ZBrush, I simply throw in the high-res model itself, can't get better than that!
So the question is: do I need displacement support when I'm capable of loading 10+ mil polygons (without instancing)? Sure, it's neat, since you don't need extra software to create the 10 mil mesh, nor a format capable of storing it easily. But I think technically you will have the exact same memory footprint on the GPU whether you render the displacement on a poly basis (again, without fancy instancing tricks) or load the mesh as-is, because the dicing has to produce the same amount of geometry either way.
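Here's that comparison spelled out (all numbers made up for illustration, same assumed vertex layout as above):

```cpp
#include <cstdio>
#include <cstddef>

// Footprint of an unindexed triangle soup at 32 bytes per vertex (assumed layout).
static double footprintMB(std::size_t tris) {
    return double(tris) * 3 * 32 / (1024.0 * 1024.0);
}

int main() {
    // Assumed scene: a 40k-poly cage, each edge diced 16x by displacement.
    const std::size_t baseTris   = 40000;
    const std::size_t tessFactor = 16;
    const std::size_t dicedTris  = baseTris * tessFactor * tessFactor; // 10.24M micro-tris

    // Baked alternative: the same surface exported as a 10.24M-poly mesh.
    const std::size_t bakedTris  = dicedTris;

    std::printf("displaced + fully diced: %.0f MB\n", footprintMB(dicedTris)); // ~937 MB
    std::printf("baked high-res mesh:     %.0f MB\n", footprintMB(bakedTris)); // ~937 MB
    // Identical -- savings would only come from dicing lazily (per tile / per
    // frame) or from instancing, not from the displacement itself.
    return 0;
}
```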
Correct me if I'm wrong.
Hardware tessellation may adapt the scene dynamically when I move my camera around, but on the other hand I mostly plan shots before actually rendering them, so there is already some optimization going on.
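For what it's worth, the dynamic part usually boils down to a distance-based tessellation factor, something like this sketch (all constants are assumed, not any particular engine's values):

```cpp
#include <algorithm>
#include <cstdio>
#include <initializer_list>

// Pick a subdivision level so edges stay close to a target size on screen:
// more subdivision up close, clamped to the usual hardware limit of 64.
static float tessLevelForEdge(float edgeWorldLength, float distanceToCamera,
                              float targetEdgePixels, float pixelsPerUnitAt1m) {
    const float projectedPixels =
        edgeWorldLength * pixelsPerUnitAt1m / distanceToCamera;
    return std::clamp(projectedPixels / targetEdgePixels, 1.0f, 64.0f);
}

int main() {
    // A 0.5 m edge, 8 px target, ~1080 px per meter at 1 m (assumed values).
    for (float d : {1.0f, 5.0f, 25.0f, 100.0f})
        std::printf("distance %5.1f m -> tess level %5.1f\n",
                    d, tessLevelForEdge(0.5f, d, 8.0f, 1080.0f));
    return 0;
}
```

If the camera barely moves, that level is effectively constant, which is exactly why pre-planned shots don't gain much from it.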
Don't get me wrong, I like having options, but working with what you have is the way you end up working more often than you'd wish for.

Besides, a limitation can also be inspiring!
Just my two cents for the night
