I am noobish on ORBX, but what jumped out at me (let me know if I understand correctly) is that you will be able to use a video as a texture? So, for example, I could have a mesh of a seagull with a skin/texture that is a video of seagulls flying around on a beach, with the accompanying sounds?
If this is the case, I had some ideas that I wonder might be possible...
(1) Will elements be morphable (e.g. via a bump map), such that they could react in real time to the tonal values of the video — white areas morphing the bump map outwards, while darker colors morph it slightly inwards? I imagine you could have a music video driving a mesh so that it would 'dance' to the music, almost MIDI-like.
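The brightness-to-displacement mapping described above could be sketched roughly as follows. This is purely illustrative — the midpoint and scale values are assumptions, not anything defined by ORBX:

```python
# Hypothetical sketch: deriving per-pixel displacement from a video frame's
# brightness, as in the bump-map idea above. The 0.5 midpoint and the scale
# factor are illustrative assumptions, not part of ORBX itself.

def luminance(r, g, b):
    """Rec. 709 luma from normalized RGB components (0.0-1.0)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def displacement(r, g, b, scale=1.0, midpoint=0.5):
    """Map brightness to displacement: white pushes outwards (positive),
    dark pulls inwards (negative), mid-grey stays put."""
    return (luminance(r, g, b) - midpoint) * scale
```

Evaluated per pixel per frame, pure white would displace the surface fully outwards (+0.5 at the default scale) and pure black fully inwards (-0.5), which is essentially how displacement from a video texture would behave.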
(2) Which leads me to a second idea: could MIDI (as in Musical Instrument Digital Interface) be implemented in ORBX, such that a mesh render's properties (e.g. bump map, lighting, etc.) could be manipulated in real time by pressing keys on a MIDI keyboard/instrument?
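Mapping MIDI events to render parameters could look something like this sketch. The note binding and parameter name are made-up assumptions for illustration, not part of ORBX or any real MIDI integration:

```python
# Hypothetical sketch of the MIDI idea: mapping note-on velocity (0-127)
# to a render parameter such as bump-map strength. The parameter name and
# the note binding are illustrative assumptions only.

def velocity_to_param(velocity, lo=0.0, hi=1.0):
    """Linearly map a MIDI velocity (0-127) onto a parameter range."""
    return lo + (hi - lo) * (velocity / 127.0)

def handle_midi_message(msg_type, note, velocity, params):
    """Update a dict of render parameters from a note event.
    Here middle C (note 60) is arbitrarily bound to bump strength."""
    if note == 60:
        if msg_type == "note_on" and velocity > 0:
            params["bump_strength"] = velocity_to_param(velocity)
        else:  # note_off (or note_on with velocity 0) resets the parameter
            params["bump_strength"] = 0.0
    return params
```

So pressing middle C hard (velocity 127) would push bump strength to 1.0, and releasing the key would reset it — the MIDI stream effectively becoming a real-time control surface for the render.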
(3) Which leads me to a third idea: could ORBX be a conduit for computation, such that another program could actively modify the mesh render by 'speaking to it' in real time? For example, a speech recognition program (like Google's voice search) could hear me say the word 'dog' and pass that result to ORBX, which would then search the internet for a 'dog' image or 'dog' mesh and swap the mesh accordingly. Basically, you could say a word and the mesh would change to what you said.
(4) Which leads me to a fourth idea: could ORBX run its own AI computation, so that it responds to the user in real time? Thus, an interactive hologram render!