Is DLSS tech possible with octane?
Posted: Sun Sep 06, 2020 6:22 pm
I know I'm delving deep into gaming territory here, but according to Jensen Huang, DLSS combines the previous high-resolution frame with information from a trained neural network to output the current high-resolution frame, within the context of animation, of course.
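To make the idea concrete, here's a toy sketch of that kind of temporal upscaling, with a naive nearest-neighbour upsample standing in for the trained network (the function name and the blend factor are mine, not anything from DLSS or Octane):

```python
import numpy as np

def temporal_upscale(low_res, prev_high_res, alpha=0.1):
    """Toy stand-in for DLSS-style temporal upscaling (no neural net):
    naively upsample the current low-res frame, then blend it with the
    previous high-res output so detail accumulates over time."""
    h, w = prev_high_res.shape[:2]
    sy, sx = h // low_res.shape[0], w // low_res.shape[1]
    # Nearest-neighbour upsample of the current frame; the real thing
    # would use a trained network plus motion vectors here.
    upsampled = np.repeat(np.repeat(low_res, sy, axis=0), sx, axis=1)
    # Exponential blend: mostly history, a little new information per frame.
    return (1 - alpha) * prev_high_res + alpha * upsampled
```

The point is only that the expensive part (the network, or in my suggestion a few full-resolution frames) runs occasionally, while each ordinary frame is cheap.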
Now I understand that training neural networks isn't feasible for the sheer variety of scenes artists can generate, and there is no 'general'-looking scene. But couldn't Octane (or Brigade) eventually leverage the technology, even if it meant rendering a couple of frames at very high resolution so the rest could be rendered in a breeze at a lower resolution and simply up-scaled?
I'm only asking because the current AI de-noising feature is really only decent for stills: its artifacts aren't viable for animation unless you aim for a lot of samples/px, at which point you may question its usefulness, especially compared to other solutions out there (like neatvideo) that deliver what I'd consider higher-quality output for the same input.
And perhaps, going off on a bit of a tangent, wouldn't training a temporal de-noiser be a viable solution to the problem? Let's be honest, the majority of frames in a sequence are largely identical, with few changing elements, so wouldn't it make sense to use information from previous frames to speed up the current one? And I'm not talking about irradiance caching either, but rather an AI solution that is 'content' aware.
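By 'content aware' I mean something along these lines (a hand-rolled sketch of temporal accumulation with history rejection, not anything Octane actually ships; the threshold and names are made up):

```python
import numpy as np

def temporal_denoise(noisy, history, alpha=0.2, reject=0.5):
    """Sketch of a temporal accumulator: reuse the previous denoised
    frame where it still agrees with the current one, and fall back to
    the fresh (noisy) sample where the content has actually changed."""
    blended = (1 - alpha) * history + alpha * noisy
    # Crude "content awareness": where history differs too much from the
    # current frame (disocclusion, fast motion), trust the new sample.
    changed = np.abs(history - noisy) > reject
    return np.where(changed, noisy, blended)
```

A learned version would presumably replace the fixed threshold with something that understands the scene, which is exactly where I'd hope an AI solution comes in.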