Attached to this post is a scripted node graph that acts as a dynamic fisheye camera. You can use it to position and orient a camera in the scene and get a fisheye image out of it. No scripting is required.
It uses a baking camera and thus requires one of the Octane 3.0 alpha releases (3.0 alpha 4 at the time of this post).
Note that this is a side project that I worked on to become more familiar with the baking camera and scripted node graphs.
You can configure the field of view on the fly, including going higher than 180°, and select between a few mapping functions.
Here is a 270° fisheye of the well-known Sponza scene, using the stereographic mapping function.
How to use it
Drag and drop the orbx file onto your scene graph to import the node. The node should sit between the main geometry and the render target. It has two outputs, a geometry and a camera, and takes as inputs the scene geometry, a transform to manipulate the camera, and a few parameters such as the field of view and the fisheye function.
The "mesh" input is to connect your scene geometry.
The "camera transform" input is used to place and orient the camera in space. It's not possible to directly use the mouse on the viewport to manipulate the camera, you have to use the sliders/text boxes. If you are rendering small geometry you may want to scale down the camera to avoid it intersecting the scene.
The "field of view" input is used to control the field of view projected on the final image. It defaults to 180° and some artifacts may appear depending on the mapping function if you go in the very high ranges towards 360°.
The "fisheye function" input lets you select between 3 mappings ("Stereographic", "Equidistant" and "Equisolid") that alters the way the ray directions are converted to image coordinates. They really start to diverge above 180° and some are better suited to display very high field of views. The default is "Equidistant", which should actually give a similar result as using a thin lens camera at 1.0 distortion for a square image of the same field of view.
The "camera subdiv" input controls the tessellation level of the spherical camera surface. The default is 3. Higher values can be necessary if you render very large targets at high field of view. The trade-off is that changing the field of view on the fly becomes slower. You may stick with 3 until you find the perfect framing and switch to more for the final render.
How it works
If you are interested in the internals or want to modify the script, here is a short description of how it works.
Internally the scene geometry is combined with a pseudo-spherical mesh that acts as the camera surface. The mesh is created procedurally by subdividing an icosahedron. The UVs for the vertices are computed on the fly based on the provided field of view and mapping function.
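For the curious, a common way to build such a mesh is 4-to-1 triangle splitting of an icosahedron, pushing each new vertex back onto the unit sphere. Here is a sketch of that general technique in Python (not the node graph's actual Lua code):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return (v[0] / n, v[1] / n, v[2] / n)

def icosphere(levels):
    """Unit sphere approximated by repeatedly subdividing an icosahedron.
    Returns (vertices, triangles): (x, y, z) tuples and index triples."""
    t = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio
    verts = [normalize(v) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    tris = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
            (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
            (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
            (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(levels):
        cache, new_tris = {}, []
        def midpoint(a, b):
            # Share the midpoint vertex between the two adjacent triangles.
            key = (min(a, b), max(a, b))
            if key not in cache:
                pa, pb = verts[a], verts[b]
                verts.append(normalize(((pa[0] + pb[0]) / 2,
                                        (pa[1] + pb[1]) / 2,
                                        (pa[2] + pb[2]) / 2)))
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in tris:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        tris = new_tris
    return verts, tris
```

With this scheme, subdivision level n yields 20 * 4^n triangles, so the default level 3 would correspond to 20 * 4^3 = 1280. Each extra level quadruples both the triangle count and the number of per-vertex UVs to recompute, which fits the slowdown mentioned for the "camera subdiv" input above.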
The baking camera is configured with a "baking position": all rays are shot from this single position out towards the camera mesh surface. This position and the camera mesh are kept in sync through the user-exposed 3D transform.
Since it uses a complete sphere, we can go over 180°. The corners of the image are also naturally filled. As the UVs are generated in the script, the field of view and mapping function can be changed dynamically.
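Putting the two sketches above together, the per-vertex UVs could be generated along these lines (again a hypothetical Python illustration with my own axis convention, not the script's actual code): each vertex direction is turned into an angle from the camera's forward axis, the angle into a radius via the selected mapping, and the radius and azimuth into a point in the unit UV square:

```python
def direction_to_uv(d, fov, mapping="equidistant"):
    """Map a unit direction `d` to UV coordinates, assuming the camera
    looks down -Z with +X right and +Y up. Uses fisheye_radius() from
    the earlier sketch; directions outside the field of view get a
    radius above 1 and land outside the baked image."""
    x, y, z = d
    theta = math.acos(max(-1.0, min(1.0, -z)))  # angle from the forward axis
    r = fisheye_radius(theta, fov, mapping)     # normalized image radius
    phi = math.atan2(y, x)                      # azimuth around the axis
    return (0.5 + 0.5 * r * math.cos(phi),
            0.5 + 0.5 * r * math.sin(phi))
```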
In some circumstances the edges of the polygons making up the sphere start to produce discontinuities in the output. This is where the subdivision level comes in. Here are wireframe views of the camera surface at the various subdivision levels.
That's it. Please do not hesitate to report issues or to comment on how to improve it!

Thanks,
Joan.