There's a large difference between the workflows, primarily due to the difference between subdividing at render time versus subdividing in c4d. The c4d displacement deformer requires your geometry to already be subdivided to a sufficient degree and doesn't support render-time displacement. For high-detail displacement, that means the performance of your scene (navigating and animating in the viewport) can be drastically impacted once your object has millions of polygons - sometimes to an unworkable degree for photorealistic displacement subjects, such as the procedurally generated limestone bricks in a project I was working on.
If you're subdividing with the SubD generator without baking it down, and you're subdividing a lot, the scene is really painful to work with because the subdivision recalculates every time you navigate the viewport or change something else in the scene. So you're generally forced to bake the subdivisions down, and then the mesh becomes harder to work with from a modelling perspective when tweaks are required - you have to go back to a low-poly copy, make adjustments, recalculate the subdivisions, re-adjust tags, and so on. Not an ideal workflow.
In my particular use case I ran into this problem (32M triangles were needed for my super close-up shots - it wouldn't have been practical to work with that in the viewport). I needed different displacement for the face of my object versus the rest of it. Doing that with c4d subdivision/displacement would have meant the face displacement is all managed through the c4d shaders on the deformer as well, which makes it more difficult to work with from a material/texturing perspective because it's separate from my Octane material. For my project I had to create UVs, use them to paint masks for my displacement in Photoshop, and use vertex displacement mixers to mask out areas. It would have been a lot easier if I could have used a vertex map for the mask instead of having to UV map and create mask images.
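The masking idea above is just a per-vertex linear mix of two displacement fields. Here's a minimal sketch of that math - names, shapes, and the NumPy representation are all illustrative assumptions, not any Octane or c4d API; a painted vertex map would supply the 0..1 weights:

```python
import numpy as np

def blend_displacement(face_disp, body_disp, mask):
    """Linearly mix two per-vertex displacement amounts.

    mask == 1.0 -> fully the face displacement,
    mask == 0.0 -> fully the body displacement.
    (Illustrative only; not an Octane/c4d function.)
    """
    mask = np.clip(mask, 0.0, 1.0)
    return mask * face_disp + (1.0 - mask) * body_disp

# Hypothetical per-vertex values for four vertices:
face = np.array([0.5, 0.5, 0.5, 0.5])   # face displacement height
body = np.array([0.1, 0.1, 0.1, 0.1])   # body displacement height
mask = np.array([1.0, 0.75, 0.25, 0.0]) # e.g. painted vertex map weights

print(blend_displacement(face, body, mask))  # → [0.5 0.4 0.2 0.1]
```

With a vertex map as the mask source, no UV layout or mask image is needed for this step - the weights live directly on the mesh.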
It's also generally faster to use Octane's subdivision, as noted in your documentation:
"Note that the Octane Object tag and the Displacement node subdivision will do the subdivision on the GPU, which will improve the time it takes to load the asset onto the GPU — as opposed to the Cinema SubD generator, which uses the CPU to subdivide and will result in quite a bit more data being sent over to the GPU over the bus, and that will take some time in comparison."
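A rough back-of-envelope estimate shows why the bus transfer matters. The per-vertex layout and vertex-to-triangle ratio below are assumptions for illustration (float32 position/normal/UV, 32-bit indices), not Octane's actual upload format, and the base-mesh count is hypothetical:

```python
# Rough, illustrative estimate of how much mesh data must cross the
# PCIe bus when subdivision happens on the CPU versus on the GPU.
BYTES_PER_VERTEX = (3 + 3 + 2) * 4   # position, normal, UV as float32
BYTES_PER_TRI = 3 * 4                # three 32-bit indices per triangle

def mesh_size_mb(num_tris):
    """Approximate upload size in MB for a triangle mesh.

    Assumes roughly one vertex per two triangles (typical for a
    closed subdivided surface) plus an index buffer.
    """
    num_verts = num_tris // 2
    total_bytes = num_verts * BYTES_PER_VERTEX + num_tris * BYTES_PER_TRI
    return total_bytes / (1024 ** 2)

base_tris = 500_000           # hypothetical low-poly base mesh
subdivided_tris = 32_000_000  # the ~32M-triangle result mentioned above

print(f"base mesh upload:      {mesh_size_mb(base_tris):.0f} MB")
print(f"CPU-subdivided upload: {mesh_size_mb(subdivided_tris):.0f} MB")
```

Under these assumptions the CPU-subdivided mesh is roughly 850 MB versus about 13 MB for the base mesh - subdividing on the GPU means only the small base mesh crosses the bus.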