This is amazing news!
But I am a bit confused about some points:
In the blog it says this was presented on a mobile phone.
Was that a prototype with a special display?
How do those displays know from which angle we are looking at them?
Does the display always show all angles of the frame?
This seems like a lot of data to process.
What is the file size of one frame?
What minimum RAM should devices have?
You have two options:

1. Have the cloud service decode the lightfield in HD and stream it down with depth and an LF mipmap (for quick reprojection - this also plugs into time warping on the Oculus; see the reprojection sketch after this list).
2. Send down the scene as an ORBX LF volume, for local or offline viewing using OpenGL ES (or WebGL + ORBX.js). The size can be 16 MB to 16 GB depending on the volume.
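To make the first option concrete, here is a minimal sketch of depth-based reprojection: warping a decoded color+depth frame to a newer head pose, which is the role the depth channel plays and the same step time warping performs on the Oculus. All the types and names below are illustrative, not the ORBX or Oculus API, and it assumes a translation-only pose delta.

```typescript
// A hypothetical sketch, not OTOY's implementation: forward-warp a streamed
// color+depth frame from the pose it was rendered at to the viewer's
// current pose. Disocclusion holes are the gaps the LF mipmap would fill.

type Vec3 = [number, number, number];

interface Pose {
  position: Vec3; // camera position; this sketch ignores the rotation delta
}

interface DepthFrame {
  width: number;
  height: number;
  rgba: Uint8Array;    // width * height * 4
  depth: Float32Array; // width * height, distance along the optical axis
  focal: number;       // focal length in pixels (simple pinhole model)
  pose: Pose;          // pose the frame was rendered from
}

function reproject(src: DepthFrame, target: Pose): Uint8Array {
  const out = new Uint8Array(src.width * src.height * 4);
  const zbuf = new Float32Array(src.width * src.height).fill(Infinity);
  const cx = src.width / 2, cy = src.height / 2;
  const dx = target.position[0] - src.pose.position[0];
  const dy = target.position[1] - src.pose.position[1];
  const dz = target.position[2] - src.pose.position[2];

  for (let y = 0; y < src.height; y++) {
    for (let x = 0; x < src.width; x++) {
      const i = y * src.width + x;
      const z = src.depth[i];
      if (!isFinite(z) || z <= 0) continue;

      // Unproject the pixel to a 3D point in the source camera's frame.
      const px = ((x - cx) / src.focal) * z;
      const py = ((y - cy) / src.focal) * z;

      // Shift into the target camera's frame (translation-only delta).
      const tx = px - dx, ty = py - dy, tz = z - dz;
      if (tz <= 0) continue;

      // Re-project into the target image, keeping the nearest surface.
      const u = Math.round((tx / tz) * src.focal + cx);
      const v = Math.round((ty / tz) * src.focal + cy);
      if (u < 0 || u >= src.width || v < 0 || v >= src.height) continue;
      const t = v * src.width + u;
      if (tz >= zbuf[t]) continue;
      zbuf[t] = tz;

      const o = t * 4, s = i * 4;
      out[o] = src.rgba[s];
      out[o + 1] = src.rgba[s + 1];
      out[o + 2] = src.rgba[s + 2];
      out[o + 3] = 255;
    }
  }
  return out;
}
```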
The ORBX LF codec is still early in development, but sizes keep coming down as we develop it further and use more info from the info channel kernel to compress the LF. At medium quality, a 1-foot LF view cube is about 8x larger than a hi-res 2D surface PNG @ 650 dpi.
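For a feel of what that 8x works out to, a quick back-of-the-envelope; the PNG compression ratio is my assumption, the rest follows from the numbers above:

```typescript
// Rough estimate only - the pngRatio is an assumed compression ratio.
const dpi = 650;
const sideInches = 12;                 // 1-foot face
const sidePx = dpi * sideInches;       // 7800 px per side
const rawBytes = sidePx * sidePx * 3;  // 24-bit RGB, ~174 MiB uncompressed
const pngRatio = 0.35;                 // assumed PNG compression ratio
const pngBytes = rawBytes * pngRatio;  // ~61 MiB for the 2D surface
const lfCubeBytes = pngBytes * 8;      // ~487 MiB for a medium-quality cube,
                                       // inside the 16 MB..16 GB range above
console.log((lfCubeBytes / 2 ** 20).toFixed(0) + " MiB");
```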
If you are on the cloud, you can keep streaming in more LF cubes as you move through the scene. If you are on a mobile device with OpenGL ES 3, the idea is that you download an LF cube and view the volume locally rather than looking at a 2D picture. It should fit into device memory in that case, but larger, full-res volumes should be streamed from the cloud (you could cache a mipmap of the current LF cube from the stream itself); a sketch of that cube-caching pattern follows.
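Here is a rough sketch of that client-side pattern: keep the cube around the viewer resident, prefetch neighbours as the viewer moves, and evict least-recently-used cubes to stay inside a mobile memory budget. fetchCube and the grid keying are hypothetical, not the ORBX API.

```typescript
// Hypothetical client-side LF cube cache - illustrative names throughout.

interface LFCube { key: string; bytes: ArrayBuffer; }

class LFCubeCache {
  // Map preserves insertion order, which gives us LRU ordering for free.
  private cubes = new Map<string, LFCube>();

  constructor(private budgetBytes: number,
              private fetchCube: (key: string) => Promise<ArrayBuffer>) {}

  // Grid cell containing the viewer, e.g. one cell per 1-foot cube.
  static keyFor(x: number, y: number, z: number): string {
    return `${Math.floor(x)}:${Math.floor(y)}:${Math.floor(z)}`;
  }

  async get(key: string): Promise<LFCube> {
    const hit = this.cubes.get(key);
    if (hit) {
      this.cubes.delete(key); // refresh this cube's LRU position
      this.cubes.set(key, hit);
      return hit;
    }
    const bytes = await this.fetchCube(key);
    const cube = { key, bytes };
    this.cubes.set(key, cube);
    this.evict();
    return cube;
  }

  // Drop least-recently-used cubes until we fit the memory budget.
  private evict(): void {
    let used = 0;
    for (const c of this.cubes.values()) used += c.bytes.byteLength;
    for (const [k, c] of this.cubes) {
      if (used <= this.budgetBytes) break;
      this.cubes.delete(k);
      used -= c.bytes.byteLength;
    }
  }
}
```

On each significant move you would get() the cell the viewer is in and kick off get() calls for its neighbouring cells, so the next cube is already resident by the time the viewer crosses a boundary.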