light field camera

Forum rules
Please add your OS and Hardware Configuration in your signature, it makes it easier for us to help you analyze problems. Example: Win 7 64 | Geforce GTX680 | i7 3770 | 16GB

I'm sure most of you have already noticed this: http://www.lytro.com/ - not a totally new concept, but the first commercial product.
Now I'm curious whether this principle could somehow be adapted for a raytracer. I don't know enough about the mathematical basics, but to me it looks like a light-field imager could be possible without adding overhead to the rendering process. Even though DOF control is a lot easier in a renderer than when taking real photos, it would of course be nice to control the DOF in the final image...
- gabrielefx
- Posts: 1701
- Joined: Wed Sep 28, 2011 2:00 pm
t_3 wrote:
I'm sure most of you have already noticed this: http://www.lytro.com/ - not a totally new concept, but the first commercial product. [...]

The Lytro camera creates a z-buffer map by calculating the distances with light rays, then applies a de-focus filter. This is a post effect, done in the camera's chip or in PC software.

Once Octane is able to save the z-buffer channel, we will be able to apply this effect too, in Photoshop or After Effects.
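For illustration, here is roughly what such a z-buffer-driven de-focus pass could look like. This is just a quick numpy sketch of the general idea; the function name and the simple linear depth-to-blur mapping are my own simplifications, not Octane's or Lytro's actual pipeline:

```python
import numpy as np

def zbuffer_defocus(image, zbuf, focus_z, max_radius=8):
    """Naive post-process DOF: blur each pixel with a box filter whose
    radius grows with the pixel's distance from the focal plane.
    image: (H, W) float array; zbuf: (H, W) depths; focus_z: in-focus depth."""
    h, w = image.shape
    out = np.zeros_like(image)
    # per-pixel "circle of confusion" radius, clamped to max_radius
    coc = np.clip(np.abs(zbuf - focus_z), 0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = coc[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Pixels exactly at the focal depth get a radius of 0 and pass through unchanged; everything else is averaged over a window that widens with depth distance.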
quad Titan Kepler 6GB + quad Titan X Pascal 12GB + quad GTX1080 8GB + dual GTX1080Ti 11GB
gabrielefx wrote:
The Lytro camera creates a z-buffer map by calculating the distances with light rays, then applies a de-focus filter. [...]

Mh, yes and no.

The Lytro (and other light field cameras that have already been built, e.g. by Adobe) doesn't just apply post-blurring to the image; it is a bit more complex: http://www.lytro.com/renng-thesis.pdf (warning: a tough read)

Also, applying blur only by z-depth can't reproduce real-world DOF (which Octane already delivers, and perfectly), where details of background objects become "visible" even when fully covered by foreground objects. Of course post-DOF by z-depth is sufficient for many situations, but if you try to recreate real-world photography it (imo) falls short and tends to make images look unnatural...
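To illustrate the difference: real light-field refocusing shifts and averages the sub-aperture views (the "shift-and-add" method described in Ng's thesis linked above), so background detail seen by off-axis views can reappear behind foreground edges - something a single image plus z-buffer can never recover. A rough numpy sketch of the idea, with integer shifts and wrap-around as my own simplifications:

```python
import numpy as np

def refocus(subaperture, alpha):
    """Synthetic-aperture refocusing (shift-and-add).
    subaperture: dict mapping (u, v) lens-aperture offsets to (H, W) views.
    alpha: refocus parameter; each view is translated by alpha * (u, v)
    before averaging, which selects the plane that comes into focus."""
    acc = None
    for (u, v), img in subaperture.items():
        du, dv = int(round(alpha * u)), int(round(alpha * v))
        # np.roll wraps around at the borders - fine for a sketch,
        # a real implementation would pad and use sub-pixel shifts
        shifted = np.roll(np.roll(img, du, axis=0), dv, axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subaperture)
```

Objects at the depth whose parallax matches alpha line up across all views and stay sharp; everything else gets averaged over many slightly offset copies, i.e. naturally blurred - including contributions from views that can see "around" foreground occluders.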
„The obvious is that which is never seen until someone expresses it simply“
1x i7 2600K @5.0 (Asrock Z77), 16GB, 2x Asus GTX Titan 6GB @1200/3100/6200
2x i7 2600K @4.5 (P8Z68 -V P), 12GB, 1x EVGA GTX 580 3GB @0900/2200/4400