OTOY unveils holographic video
http://render.otoy.com/newsblog/?p=547
Wow, just need a light saber and my dreams have come true!!!
Scenes exported in this format render instantly on any GPU (even mobile). The offline rendering time is proportional to the navigable volume, in increments of ~ 1 cubic foot. This is also where cloud rendering comes in handy.
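To make that cost model concrete, here is a back-of-envelope sketch (the per-cell render time is a made-up number; only the "proportional to volume, in increments of ~1 cubic foot" relationship comes from the post above):

```typescript
// Hypothetical sketch: offline LF render cost grows linearly with the
// navigable volume, quantized to ~1 cubic foot cells (per the post above).
const CELL_VOLUME_FT3 = 1;    // assumed increment size
const MINUTES_PER_CELL = 30;  // invented per-cell offline render time

function estimateRenderMinutes(wFt: number, hFt: number, dFt: number): number {
  const cells = Math.ceil((wFt * hFt * dFt) / CELL_VOLUME_FT3);
  return cells * MINUTES_PER_CELL;
}

// A 2 x 2 x 2 ft navigable volume -> 8 cells -> 240 minutes offline,
// after which playback is instant on any GPU.
console.log(estimateRenderMinutes(2, 2, 2)); // 240
```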
- Vue2Octane
- Posts: 88
- Joined: Thu Jun 26, 2014 8:16 am
Impressive idea.
This is basically a lightfield camera realised in 3D software.
This is the camera that is simulated. It also allows 3D from one shot (in Octane, one render).
https://www.lytro.com
Vue2Octane wrote: Impressive idea.
This is basically a lightfield camera realised in 3D software.
This is the camera that is simulated. It also allows 3D from one shot (in Octane, one render).
https://www.lytro.com

Correct. Lytro has a very small grid size / capture volume, but an ORBX lightfield can be any size; you can scale it up in units of about a 10 cm cube. GPU LF rendering with Octane scales really nicely, just like 2D rendering, which makes it a good fit for the cloud service. Compared to traditional baking this is also better: you capture reflections/refractions, SSS, etc., not just 'splotchy' GI.
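The difference from traditional baking can be shown with a toy lookup (purely illustrative, not the ORBX format): a light map is keyed by surface position only, while a lightfield is also keyed by view direction, which is why view-dependent effects survive the bake.

```typescript
// Toy illustration: a lightmap discards the view direction (one color per
// point), a lightfield keeps it (reflections/refraction/SSS survive).
type RGB = [number, number, number];

const lightmap = new Map<string, RGB>();   // key: "x,y,z"
const lightfield = new Map<string, RGB>(); // key: "x,y,z|dx,dy,dz"

function bake(x: number, y: number, z: number, dir: [number, number, number], c: RGB) {
  lightmap.set(`${x},${y},${z}`, c);                    // direction discarded
  lightfield.set(`${x},${y},${z}|${dir.join(",")}`, c); // direction kept
}

bake(0, 0, 0, [0, 0, 1], [1, 0, 0]); // red seen from the front
bake(0, 0, 0, [1, 0, 0], [0, 0, 1]); // blue seen from the side

console.log(lightmap.get("0,0,0"));         // [0,0,1] (last write wins)
console.log(lightfield.get("0,0,0|0,0,1")); // [1,0,0]
console.log(lightfield.get("0,0,0|1,0,0")); // [0,0,1]
```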
We can embed info channel kernel/render pass data into the LF pixels (by default, depth is included). Lots of possibilities... especially when you combine this with some of the other announcements coming out of SIGGRAPH this week...
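As a rough illustration of what riding extra passes along with each sample could look like (the field layout below is invented; the post only says that render-pass data such as depth is embedded):

```typescript
// Hypothetical per-sample layout: beauty pass plus embedded info-channel data.
interface LFSample {
  rgba: [number, number, number, number]; // beauty pass
  depth: number;                          // included by default, per the post
  // further info-channel passes (normals, IDs, ...) could be appended here
}

function packSample(s: LFSample): Float32Array {
  return new Float32Array([...s.rgba, s.depth]);
}

function unpackSample(buf: Float32Array): LFSample {
  return { rgba: [buf[0], buf[1], buf[2], buf[3]], depth: buf[4] };
}

console.log(unpackSample(packSample({ rgba: [1, 0.5, 0.25, 1], depth: 3.5 })).depth); // 3.5
```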
- Vue2Octane
- Posts: 88
- Joined: Thu Jun 26, 2014 8:16 am
You guys really know what you are doing!
I looked into LF cameras myself, even designing one in the Zemax optical design software at some point.
It is quite nice to see how the Octane team adds lots of game changers to the render world.
Not just the same technique slightly refined or modified, but real game changers.
Adapting the lightfield concept to a render engine is just smart!
- linvanchene
- Posts: 783
- Joined: Mon Mar 25, 2013 10:58 pm
- Location: Switzerland
edited and removed by user
Last edited by linvanchene on Mon Oct 20, 2014 3:27 pm, edited 1 time in total.
- r-username
- Posts: 217
- Joined: Thu Nov 24, 2011 3:39 pm
Very nice, and glad to be using OTOY software.
Is there a time frame for a beta version of Standalone with Rift-type renders?
I have a Fire Phone with "dynamic perspective"; can we render content for this device?
linvanchene wrote: This is amazing news!
But I am a bit confused about some points:
In the blog it says this was presented on a mobile phone.
Was that a prototype with a special display?
How do those displays know from which angle we are looking at them?
Or does the display always show all angles of the frame?
This seems like a lot of data to process.
What is the file size of one frame?
What minimum RAM should devices have?

You have two options: have the cloud service decode the lightfield in HD and stream it down with depth and an LF mipmap (for quick reprojection; this also plugs into time warping on the Oculus).

OR

Send down the scene as an ORBX LF volume, for local or offline viewing using OpenGL ES (or WebGL + ORBX.js). The size can be 16 MB to 16 GB depending on the volume.
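The choice between the two paths can be sketched as a simple size check (the threshold below is my own assumption; only the two delivery options come from the post):

```typescript
// Decision sketch: local ORBX LF volume if it fits in device memory,
// otherwise fall back to the cloud-decoded HD stream with depth + mipmap.
type Delivery = "cloud-stream" | "local-volume";

function chooseDelivery(volumeBytes: number, deviceMemBytes: number): Delivery {
  // leave headroom so the cube fits next to the rest of the app
  return volumeBytes < deviceMemBytes * 0.5 ? "local-volume" : "cloud-stream";
}

console.log(chooseDelivery(16e6, 2e9)); // 16 MB volume -> "local-volume"
console.log(chooseDelivery(16e9, 2e9)); // 16 GB volume -> "cloud-stream"
```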
The ORBX LF codec is still early in development, but the size is getting smaller the further we develop it, as we use more info from the info channel kernel to compress the LF further. At medium quality, a 1 ft LF view cube is about 8x larger than a hi-res 2D surface PNG @ 650 dpi.
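In rough numbers (only the 8x ratio comes from the post; the reference PNG size is an assumption picked for illustration):

```typescript
// Back-of-envelope: medium-quality 1 ft LF cube vs. a hi-res 2D surface PNG.
const LF_TO_PNG_RATIO = 8; // from the post above
const refPngBytes = 40e6;  // hypothetical hi-res 650 dpi surface PNG

console.log(`~${(refPngBytes * LF_TO_PNG_RATIO) / 1e6} MB per 1 ft LF cube`); // ~320 MB
```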
If you are on the cloud, you can keep streaming in more LF cubes as you move through the scene. If you are on a mobile device with OpenGL ES 3, the idea is that you download an LF cube and view the volume locally rather than looking at a 2D picture. It should fit into device memory in that case, but larger, full-res volumes should be streamed from the cloud (you could cache a mipmap of the current LF cube from the stream itself).
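The cube-streaming idea can be sketched as a small LRU cache (class and method names here are invented for illustration, not actual ORBX APIs):

```typescript
// Sketch: keep the nearest LF cubes resident, evict least-recently-used
// ones, and serve a cached cube (or its mipmap) while full-res data streams.
class LFCubeCache {
  private cubes = new Map<string, ArrayBuffer>(); // insertion order = LRU order
  constructor(private maxCubes: number) {}

  get(key: string): ArrayBuffer | undefined {
    const cube = this.cubes.get(key);
    if (cube) {                 // refresh LRU position on hit
      this.cubes.delete(key);
      this.cubes.set(key, cube);
    }
    return cube;
  }

  put(key: string, cube: ArrayBuffer): void {
    if (this.cubes.size >= this.maxCubes) {
      const oldest = this.cubes.keys().next().value; // least recently used
      if (oldest !== undefined) this.cubes.delete(oldest);
    }
    this.cubes.set(key, cube);
  }
}

const cache = new LFCubeCache(4);            // hold 4 cubes around the viewer
cache.put("cube:0,0,0", new ArrayBuffer(8)); // placeholder payload
console.log(cache.get("cube:0,0,0") !== undefined); // true
```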