So, I've been holding off on upgrading my local render farm pending a decision to go CPU- or GPU-based with the new gear.
I recently saw an Octane video (the blue semi-truck motion-blur example), and that really put the hooks in me to go GPU and use Octane as the render engine.
My primary 3D app is C4D. The present working setup is Mac Pro based (some current render nodes are upgraded 1,1 and 2,1 machines, some are 3,1, and the workstations are 3,1 or 5,1).
Questions:
1) Is the C4D plugin only needed on the workstation, with render-only slaves needing just the Standalone - or do the slaves also need the 3D app's plugin?
2) Is the amount of system RAM on the slave machines relevant to rendering?
3) I've been reading a lot about texture quantity limitations - is this just the number of maps? So a material that has a color map, a bump map, a normal map, an alpha map, and a displacement map would take up five texture positions? If the material was re-using some textures (for instance, the alpha was in the color PNG, and the bump and displacement maps were in one file, with bump in red and displacement in green), would that reduce the number of texture slots taken up? (I.e., is it counted per texture map file?)
4) Is the number-of-textures limitation something that will be addressed in the near future, as the newer cards from nVidia have ways around this?
5) Are you considering a different license plan for render-only nodes?
6) Are there any issues using older Mac Pros with 32-bit EFI as render nodes? (They'd be running 10.6.8 to 10.7.) I have a few of these machines that I believe I can put a flashed 780 in, or possibly a Titan, so the only new hardware I'd need is the GPUs, plus the Octane licenses.
7) Regarding GPU card RAM - let's say a 3 GB card like a 780: does that simply mean that all textures and the project must total 3 GB or less? Or is there significant overhead to consider in addition to the texture map sizes? (I.e., with a JPEG or PNG texture, are we only concerned about that texture's compressed file size, or the size of the texture map as an uncompressed bitmap?)
8) Are there any issues mixing OSes between slaves and workstations? For instance, if I set up a 4x GPU slave running Windows, but the workstations are all Mac Pros?
9) Do you have a published feature roadmap? I wonder about some things, such as the ability to use bump and displacement at the same time.
A comment on bump vs displacement:
I saw in another post that you can't have both a bump map and a displacement map. Someone from Otoy stated that "you don't need both since they do the same thing", and I disagree.
Bump maps and displacement maps do similar but not identical things; both are needed, and often at the same time. Bump maps are more useful for very small details like surface imperfections, grime, skin texture, etc. These fine details are things you'll typically NOT want in your displacement map, which is more likely to be used for more substantial geometry. And substantial geometry needs separate control from fine surface texture and fine detail.
As an example, if you had a dented car, the dent would be in the displacement map, and the paint chips and scrapes would be in the bump map. Or in a human face, the skin texture would be in bump, and moles, scars, skin folds and wrinkles would be in displacement.
10) To sum up, a few of my main concerns relate to texture limitations, GPU RAM needs, and predicting or managing assets to fit on a card - this then drives the "what card should I get?" question. I'm either getting a 780 or a Titan, or waiting for a Maxwell GPU like an 880 to become available.
Thank you for your time.
Andy
Pre-purchase questions.
Myndex wrote: 1) Is the C4D plugin only needed on the workstation, with render-only slaves needing just the Standalone - or do the slaves also need the 3D app's plugin?
If you're going to use the 2.0 network rendering feature, then the slave machines only need a Standalone license. If you're doing something like Team Render, where the actual C4D plugin is needed on each machine, then you will need a full combo (Standalone + plugin) for each machine.
Myndex wrote: 2) Is the amount of system RAM on the slave machines relevant to rendering?
AFAIK not really, not to rendering speed. The slave will most likely still need RAM for the scene, but scene compilation is done on the master machine, so you can probably get away with a lot less system RAM on the slave machines.
Myndex wrote: 3) I've been reading a lot about texture quantity limitations - is this just the number of maps? So a material that has a color map, a bump map, a normal map, an alpha map, and a displacement map would take up five texture positions? If the material was re-using some textures, would that reduce the number of texture slots taken up?
We have removed the texture count limitation in the latest versions of Octane, so you are only limited by the amount of VRAM you have.
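As a sketch of the channel-packing idea from the question (bump in the red channel, displacement in the green), here is a minimal example in plain Python. File names and the RGB layout are hypothetical; a real pipeline would use an image library, and whether packing saves VRAM depends on how the renderer expands channels internally:

```python
def pack_rgb(bump, displace, width, height):
    """Interleave two greyscale maps (flat lists of 0-255 values)
    into a single RGB buffer: bump -> R, displacement -> G, B left zero."""
    assert len(bump) == len(displace) == width * height
    rgb = bytearray(width * height * 3)
    for i in range(width * height):
        rgb[3 * i] = bump[i]          # red channel carries bump
        rgb[3 * i + 1] = displace[i]  # green channel carries displacement
    return bytes(rgb)

# Two 2x2 greyscale maps packed into one 2x2 RGB image:
packed = pack_rgb([10, 20, 30, 40], [50, 60, 70, 80], 2, 2)
print(list(packed[:6]))  # → [10, 50, 0, 20, 60, 0]
```

The material's bump and displacement slots would then both reference the same file, reading different channels.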
Myndex wrote: 4) Is the number-of-textures limitation something that will be addressed in the near future, as the newer cards from nVidia have ways around this?
See above.
Myndex wrote: 5) Are you considering a different license plan for render-only nodes?
AFAIK yes, it is being considered, but I don't have details or specifics I can share, sorry.
Myndex wrote: 6) Are there any issues using older Mac Pros with 32-bit EFI as render nodes? (They'd be running 10.6.8 to 10.7.)
No, sorry, this will not work, because 32-bit Mac support was dropped for 2.0.
Myndex wrote: 7) Regarding GPU card RAM - let's say a 3 GB card like a 780: does that simply mean that all textures and the project must total 3 GB or less? Or is there significant overhead to consider in addition to the texture map sizes?
Textures are stored uncompressed on the GPU for Octane, so it's the uncompressed bitmap size that matters, not the compressed file size. You will also need space for geometry and the render target (higher-resolution renders require more VRAM).
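Since textures sit uncompressed on the card, a file's size on disk tells you little; what matters is width x height x channels x bytes per channel. A back-of-the-envelope helper (my own sketch, not Octane's actual allocator, which may add padding or other overhead):

```python
def texture_vram_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed footprint of one texture in bytes."""
    return width * height * channels * bytes_per_channel

# A 4096x4096 8-bit RGBA map is 64 MiB uncompressed,
# even if the PNG on disk is only a few MB.
mib = texture_vram_bytes(4096, 4096) / 2**20
print(f"{mib:.0f} MiB")  # → 64 MiB
```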
Myndex wrote: 8) Are there any issues mixing OSes between slaves and workstations? For instance, if I set up a 4x GPU slave running Windows, but the workstations are all Mac Pros?
That's fine; however, the master machines still need at least one GPU that is Octane-capable.
Myndex wrote: 9) Do you have a published feature roadmap? I wonder about some things, such as the ability to use bump and displacement at the same time.
Thanks for the feedback. No, we generally don't publish a conclusive roadmap. Sometimes we announce WIP features in the News and Announcements forum, but generally our roadmaps are not made public.
Myndex wrote: 10) To sum up, a few of my main concerns relate to texture limitations, GPU RAM needs, and predicting or managing assets to fit on a card - this then drives the "what card should I get?" question. I'm either getting a 780 or a Titan, or waiting for a Maxwell GPU like an 880 to become available.
Textures, geometry, and the render target are the main users of VRAM. You can get 6 GB 780s now (it used to be 3 GB only), which are good value for money and seem to be a popular alternative to the 6 GB Titans, but if you want more than 6 GB of VRAM you will need a high-end Quadro or Tesla.
Hope that helps!
Thank you for the detailed reply; it was quite helpful.
A couple follow up questions:
Regarding texture memory:
1) Are textures loaded into memory at 32 bits per channel, or at the native bits-per-channel of the image?
2) Are there any issues mixing bit depths in a material? For instance, a 16-bit single-channel (greyscale) displacement map, a 32-bit RGB color map, and an 8-bit single-channel alpha map?
3) Can textures be 16-bit float (as in some HDRI images), or always integer?
4) Do you support 10-bit images?
5) Are there any issues using image maps of different resolutions within a material? (E.g. a 6000x3000 color map and a 2000x1000 alpha map.)
6) Do procedural maps (like noise) get rasterized out into texture maps that take up memory the way regular images do?
The reason I ask is that one project I'm currently working on has a number of image maps that are 21,000 x 10,500, at 16 or 32 bpc. At 16 bits that's about 1.7 GB per map. Even a 12 GB Tesla card ($4,100!) could soon be filled with such textures.
So really my question comes down to materials management, and tricks to get the most efficient use of memory. I'd think one trick would be using different bit depths on an as-needed basis.
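To put numbers on the as-needed bit-depth idea (my own back-of-the-envelope, assuming the map is expanded to 4 channels on the card; Octane's actual internal layout may differ), halving the bit depth halves the footprint:

```python
def map_gib(width, height, channels, bits_per_channel):
    """Uncompressed size in GiB of one texture map."""
    return width * height * channels * (bits_per_channel // 8) / 2**30

# Footprint of a single 21,000 x 10,500 map at several bit depths:
for bits in (8, 16, 32):
    print(f"{bits:>2} bpc: {map_gib(21000, 10500, 4, bits):.2f} GiB")
```

At 16 bpc this gives roughly 1.64 GiB (about 1.76 GB decimal, matching the figure above), so dropping maps that don't need the precision to 8 bpc would cut each one to about 0.82 GiB.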
Some other associated materials questions/thoughts:
When building a scene, there may be a lot of areas that are not always "seen" directly by the camera. It seems it might be useful to be able to change the resolution of image maps in materials for the parts that only contribute to the image indirectly for that frame.
Given the memory limitation, being able to switch (by dissolve?) to a lower-resolution version of a material may be warranted to keep the total texture memory down for any particular frame. Is there a means to handle this, such as a visibility tag (so that materials set to 0% visibility are not loaded into texture memory)?
Enough about textures - have you benchmarked the new GTX 980, and if so, how does it compare to the 780 and the Titan? Do you have benchmarks posted anywhere?
Cheers
Andy