OTOY unveils holographic video
Posted: Tue Aug 12, 2014 3:59 am
by Nuge
http://render.otoy.com/newsblog/?p=547
Wow, just need a light saber and my dreams have come true!!!
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 5:17 am
by Goldorak
Scenes exported in this format render instantly on any GPU (even mobile). The offline rendering time is proportional to the navigable volume, in increments of ~ 1 cubic foot. This is also where cloud rendering comes in handy.
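A minimal sketch of the scaling claim above: offline render time proportional to the navigable volume, quantized into ~1 cubic foot increments. The `seconds_per_cube` value is a made-up placeholder, not an OTOY figure.

```python
import math

# Hypothetical back-of-envelope model: offline LF render cost grows
# linearly with the navigable volume, in whole ~1 cubic foot cubes.
# seconds_per_cube is an invented placeholder value.
def lf_render_seconds(volume_cubic_feet, seconds_per_cube=3600):
    cubes = math.ceil(volume_cubic_feet)  # volume is quantized into whole cubes
    return cubes * seconds_per_cube

# A 2 x 2 x 2 foot navigable volume is 8 cubes:
print(lf_render_seconds(8))  # 28800 seconds, i.e. 8 cube-hours
```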
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 7:11 am
by Vue2Octane
Impressive idea.
This is basically a lightfield camera realised in 3D software; that is the camera being simulated. It also gives you 3D from one shot (in Octane, one render).
https://www.lytro.com
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 7:30 am
by Goldorak
Vue2Octane wrote:Impressive idea.
This is basically a lightfield camera realised in 3D software; that is the camera being simulated. It also gives you 3D from one shot (in Octane, one render).
https://www.lytro.com
Correct. Lytro has a very small grid size / capture volume, but an ORBX lightfield can be any size; you can scale up in units of roughly a 10 cm cube. GPU LF rendering with Octane scales really nicely, just like 2D rendering, so it is a good fit for the cloud service. Compared to traditional baking this is also better: you capture reflections, refractions, SSS, etc., not just 'splotchy' GI.
We can embed info channel kernel / render pass data into the LF pixels (depth is included by default). Lots of possibilities... especially when you combine this with some of the other announcements coming out of SIGGRAPH this week...
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 8:18 am
by Vue2Octane
You guys really know what you are doing!
I looked into LF cameras myself, and at one point designed one in the Zemax optical design software.
It is quite something to watch the Octane team add real game changers to the rendering world: not just the same technique slightly refined or modified, but genuine game changers.
Adapting the lightfield concept to a render engine is just smart!
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 10:37 am
by Rikk The Gaijin
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 11:57 am
by linvanchene
edited and removed by user
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 12:07 pm
by r-username
Very nice and glad to be using otoy software.
Is there a time frame for a beta version of standalone with Rift-type renders?
I have a Fire Phone with "dynamic perspective"; can we render content for this device?
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 4:18 pm
by MOSFET
@Goldorak: Do you plan on having an online demo of this tech? This announcement may have had more impact if you had one available to the public at the same time as the press release.
Re: OTOY unveils holographic video
Posted: Tue Aug 12, 2014 5:44 pm
by Goldorak
linvanchene wrote:This is amazing news!
But I am a bit confused about some points:
The blog says this was presented on a mobile phone.
Was that a prototype with a special display?
How do those displays know from which angle we are looking at them, or does the display always show all angles of the frame?
This seems like a lot of data to process.
What is the file size of one frame?
What minimum RAM should devices have?
You have two options: have the cloud service decode the lightfield in HD and stream it down with depth and an LF mipmap (for quick reprojection, which also plugs into time warping on the Oculus),
OR
send down the scene as an ORBX LF volume for local or offline viewing using OpenGL ES (or WebGL + ORBX.js). The size can be 16 MB to 16 GB depending on the volume.
The ORBX LF codec is still early in development, but the size keeps shrinking as we develop it further and use more info from the info channel kernel to compress the LF. At medium quality, a 1 foot LF view cube is about 8x larger than a hi-res 2D surface PNG @ 650 dpi.
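Rough size arithmetic, just to make the numbers concrete. Both inputs are assumptions: `png_mb` stands in for the hi-res 2D surface PNG baseline (the post gives no actual size), and the 8x factor is the medium-quality ratio quoted above.

```python
# Hypothetical size estimate: one medium-quality 1-foot LF cube is ~8x
# a hi-res 2D PNG baseline, and a scene is some number of cubes.
# png_mb=40 is an invented baseline, not a figure from the post.
def lf_scene_size_mb(num_cubes, png_mb=40, lf_factor=8):
    return num_cubes * png_mb * lf_factor

print(lf_scene_size_mb(1))   # one cube: 320 MB under these assumptions
print(lf_scene_size_mb(50))  # 50 cubes: 16000 MB, the ~16 GB end of the range
```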
If you are on the cloud, you can keep streaming in more LF cubes as you move through the scene. If you are on a mobile device with OpenGL ES 3, the idea is that you download an LF cube and view the volume locally rather than look at a 2D picture. It should fit into device memory in that case, but larger, full-res volumes should be streamed from the cloud (you could cache a mipmap of the current LF cube from the stream itself).
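The streaming idea described above can be sketched as a small cache of LF cubes keyed by grid cell, evicting the least recently used cube when memory is full. All names here are invented for illustration; this is not OTOY's actual client logic.

```python
from collections import OrderedDict

# Sketch of LF cube streaming: as the viewer moves, fetch the cube for the
# current grid cell, keeping only the most recent cubes in device memory.
class LFCubeCache:
    def __init__(self, max_cubes=4):
        self.max_cubes = max_cubes
        self.cubes = OrderedDict()  # cell -> cube data, oldest first

    def fetch(self, cell, download):
        """Return the cube for grid cell (x, y, z), downloading on a miss."""
        if cell in self.cubes:
            self.cubes.move_to_end(cell)        # mark as most recently used
        else:
            self.cubes[cell] = download(cell)   # stream the cube from the cloud
            if len(self.cubes) > self.max_cubes:
                self.cubes.popitem(last=False)  # evict least recently used
        return self.cubes[cell]

cache = LFCubeCache(max_cubes=2)
downloads = []
cache.fetch((0, 0, 0), lambda c: downloads.append(c) or c)
cache.fetch((0, 0, 0), lambda c: downloads.append(c) or c)  # cache hit
print(len(downloads))  # 1: the second fetch never re-downloads
```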