Recently I tweeted a test still I rendered from Beeple's awesome Zero-Day short film.
Here's the link to the cube map version (for OrbX):
http://bit.ly/1O37R9l
I just wanted to run through some experimentation I've been doing and see if anyone else has been exploring similar processes. One issue I was having: since "post effects" don't render properly into panoramic scenes, I've been looking for a clean way to composite without causing seams to appear in the cube map renders.
At first my approach was to create a complex comp structure in AE to restitch the proper sides of each face as I'm working with it, then reassemble back into the proper strip. This seemed overly complicated and would never let you see everything together at once.
My next thought was to figure out what the 18K resolution actually equates to. A rough estimate got me to about 5K per eye at 16:9. So I created this Beeple render by rendering each eye spherically mapped at 5K, then did glows and custom color correction masked into the right areas. The final step was to bring the comped frames back into Octane for a quick skydome render to convert them into cube maps.
Has anyone done similar experimentation along these lines? Is there a more straightforward way of doing this?
My next challenge is to apply this all to animation. Otoy seems open to helping out with the rendering process if we can use this as a sort of showpiece for ORBX's new animation abilities. That would be so cool!
Compositing w/ Cube Maps + Animation
The 18K cube maps translate to 6144 x 3072 per eye in long/lat (spherical pano) format. If you do top/bottom rendering, you can fit both eyes in a 6144 x 6144 square image and in theory lose no resolution relative to the cube map, and even use post effects (but it will be about 25% more pixels/time to render).
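As a sanity check on the numbers above, here's a small sketch of the pixel arithmetic, using the 6144 x 3072 per-eye equirect size quoted in the post (the ~25% cube-map overhead figure is Otoy's, not derived here):

```python
# Pixel-budget arithmetic for the pano formats discussed above.
# Assumes the 6144 x 3072 per-eye equirectangular size quoted in the post.

EYE_W, EYE_H = 6144, 3072          # one eye, long/lat (equirect) pano

per_eye_px = EYE_W * EYE_H         # pixels for a single eye
both_eyes_px = 2 * per_eye_px      # stereo, each eye rendered separately

# Top/bottom layout: both eyes stacked into one square frame.
tb_w, tb_h = EYE_W, 2 * EYE_H      # 6144 x 6144
tb_px = tb_w * tb_h

# Stacking loses nothing: the combined frame has exactly the same
# pixel count as two separate per-eye renders.
assert tb_px == both_eyes_px

print(per_eye_px, tb_px)           # prints: 18874368 37748736
```

So a top/bottom stereo frame is ~37.7 megapixels, identical in pixel count to two separate 5K-ish per-eye renders.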
While we haven't done a cube map -> spherical pano converter (yet; we'd like to import cube maps into the environment mode in V3), you can easily do the reverse: bring in a spherical pano for one view/eye as an environment node and render it into a cube map from Octane.
Note that the ORBX player can use spherical panos just as well as cube maps, but the quality is not as good, and the file size/memory is 25% more. That is not ideal for animation or navigable scenes.
For the 3.0 ORBX player, Hayssam is exploring fast local post processing in VR similar to Icenhancer (http://icelaglace.com/).
Thanks, that's super helpful! In order for the ORBX player to recognize spherical renders, do I need to adjust the file name or edit the JSON file? I haven't experimented with this yet, but it would be really helpful, as we're often doing lower-res previews before our final animations (until now we've been jumping into Milk VR, which is not ideal).
Actually, rendering each eye separately has worked to our advantage so far, since we're doing local renders and this lets us use double the GPUs (because of the 12-GPU limit). But it's good to know a proper format for combining into one image as well.
The other hurdle in figuring out Beeple's animation is that, last I checked, C4D cameras didn't translate to Octane Standalone very well, so if we were to use ORC at all we would need to reanimate his camera (not too crazy in this case) or figure out a way to bring across the data. I do believe the newest version of the C4D plugin has some kind of cloud render button, which probably means that issue will be resolved soon?
Hi,
The equirectangular pano images are recognised directly by ORBX; no need to edit the JSON file.
About the C4D camera export: for now, export all the cameras (animated or not) via the Alembic C4D exporter. In Standalone, just verify that the scale matches the Alembic one, then use this scripted graph to transform an abc node into a thin lens or panoramic camera. Ciao, beppe
Maybe you guys have some info on this as well. I've heard conflicting numbers on how long we can actually play back a full 18K cube map animation. I was told 250 frames at one point, but then you guys tweeted about doing 15 secs at 60 fps. I assume some of this discrepancy is due to the phone model being used. Is it possible to get a breakdown of what those limitations are, or is that all still being figured out?
Also of interest: does there need to be a pause after that animation plays, and if so, how many seconds? Or, if we're keeping the same animation in the buffer, can we loop it seamlessly? I know you guys are working on documentation, but any insight would be super helpful.
We have tested about 500 frames on the S6, but stability becomes an issue as Android GPU memory usage isn't always reported back to us correctly. The Note 4 is limited to about 250 frames.
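To put those frame budgets in perspective, here's a quick sketch of the playback-time math. The frame counts (S6 ~500, Note 4 ~250) are the approximate figures quoted above, not hard spec limits:

```python
# Rough playback-time math for the buffered frame limits mentioned above.
# Frame budgets are the approximate figures quoted in the thread.

FPS = 60
frame_budget = {"Galaxy S6": 500, "Note 4": 250}

for device, frames in frame_budget.items():
    seconds = frames / FPS
    print(f"{device}: {frames} frames -> {seconds:.1f} s at {FPS} fps")

# A 15 s clip at 60 fps needs 900 frames, which is why the tweeted
# "15 s @ 60 fps" figure doesn't fit in a 250-frame Note 4 buffer.
assert 15 * FPS == 900
```

So ~8.3 s on the S6 and ~4.2 s on the Note 4 at 60 fps, which explains the conflicting numbers.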
We can also stream from storage for navigable cube map paths. We load just the frames that are needed when you move to another node, so the constraints on GPU memory are much less strict. This is what we do in the Batcave, and it allows for unlimited-size scenes (as long as you have the storage and don't mind the download).
We have a way to get 18K cube map video streaming from local storage at 60 fps with very little buffering, but that is on the to-do list for the 3.0 app, which we have just started on now.
We haven't gotten our hands on the S6+/Note 5 yet, but both should be able to buffer just about 800 frames of continuous 18K stereo cube maps.

bubimude wrote: Awesome, so many questions answered. You guys rock! I was hoping we could use the natural loop point in Beeple's scene, but that would put us at about 800 frames at 60 fps.
We could try 30 fps, but there's quite a bit of movement.
We have an idea that may get 2x compression and enable 1600 frames to be buffered on those devices (and make the ORBX files 2x smaller to download/store). By then we'll probably have streaming working off local storage as well.
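The buffer math behind that 2x-compression idea can be sketched as follows. The 800-frame figure is the S6+/Note 5 estimate quoted above; the rest is just arithmetic:

```python
# Sketch of the buffer math behind the hoped-for 2x compression above.
# 800 frames is the S6+/Note 5 estimate quoted in the thread.

frames_now = 800                 # current estimated buffer
frames_2x = frames_now * 2       # 2x compression -> 2x frames in the same memory
assert frames_2x == 1600

for fps in (60, 30):
    print(f"{frames_2x} frames -> {frames_2x / fps:.1f} s at {fps} fps")
```

That is ~26.7 s at 60 fps or ~53.3 s at 30 fps, comfortably past the 800-frame loop point mentioned above.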