I’m not a Houdini expert by any means, but I’ve been working with it for about 5 or 6 months now, focused on creating volumes and pyro. Once you’ve managed to create your fx, rendering can be really difficult. I had frames rendering for an hour and a half, which I have been able to bring down to 8 minutes with some settings optimizations. There are a lot of options and it can be tough to figure out how to go about it. In this lesson, I’ll share how I’m doing it and if you have any tips, I would love to learn. You can email me through the contact link above or add a comment to this page.
I create all my pyro and volumes in different hip files. Even if I start things in one file, I will eventually copy and paste nodes into separate files from which I can cache out the results. Working across these files can be unwieldy, but with a little planning it gets a lot easier.
Switch the viewport update mode to manual, so the viewport does not update on every frame, then go to the start frame of your simulation or scene.
In the Pyro Sim node, you can set your division size as needed. It is good to test with a higher division size (lower resolution) until you get the kind of shape and movement that you want. At that stage, you can keep auto update on and play back in the timeline for quick tests. When you are ready to cache out a higher resolution sim with more detail, set update to manual, then carefully lower the division size (small changes can mean much larger files and sim times).
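To see why small division size changes blow up so fast, here is a toy calculation in plain Python (not Houdini code), assuming a made-up 10 x 10 x 10 unit bounding box: the voxel count grows with the cube of the resolution, so halving the division size means roughly eight times the voxels.

```python
# Toy illustration (not Houdini code): how division size drives voxel count.
# The 10-unit cubic bounding box is a made-up example.

def voxel_count(bbox_size, division_size):
    """Voxels needed to fill a cubic bounding box at a given division size."""
    per_axis = round(bbox_size / division_size)
    return per_axis ** 3

for div in (0.2, 0.1, 0.05):
    print(div, voxel_count(10, div))
# 0.2  ->   125000 voxels
# 0.1  ->  1000000 voxels
# 0.05 ->  8000000 voxels
```

Each halving of the division size multiplies the voxel count by eight, which is why sim times and file sizes climb so steeply.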
Now double click on your pyro import node and select the pyro import fields node. Uncheck load from disk, then go to the save file tab and set an output path and filename. The $F3 in the file name inserts the frame number with padding, i.e. 001, 002, where the 3 sets the number of digits. The bgeo.sc at the end saves bgeo files, and the sc, I believe, means the file is compressed. Set your frame range. Depending on what you are doing, you can also set that third number, the increment. For creating space nebulas, I only needed a single frame, so I could save disk space by not saving every frame. I could save every 10th frame, though to get each of those frames, it still has to simulate every single frame in between. In the end, I could pick the frame I liked best and use that.
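Here is a small plain-Python sketch of what that $F3 expansion and a 10-frame increment produce (the nebula filename is made up; $HIP is Houdini's variable for the hip file's directory and is left untouched here):

```python
# Toy illustration (not Houdini itself) of how $F3 expands in an output
# path, and which frames a start/end/increment of 1/100/10 writes out.

def expand_path(template, frame, padding=3):
    # $F3 -> frame number zero-padded to 3 digits, e.g. 7 -> "007"
    return template.replace("$F3", str(frame).zfill(padding))

template = "$HIP/cache/nebula.$F3.bgeo.sc"  # $HIP expands inside Houdini
frames = range(1, 101, 10)                  # start 1, end 100, increment 10
paths = [expand_path(template, f) for f in frames]
print(paths[0])   # $HIP/cache/nebula.001.bgeo.sc
print(paths[-1])  # $HIP/cache/nebula.091.bgeo.sc
```

Only ten files get written, but as noted above, the solver still has to simulate every in-between frame to reach each saved one.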
Once you like your settings, click save to disk. The simulation will run and your files will be written out to disk. This can take time, depending on how detailed and long your sim is; I run these overnight or when I will be away for several hours. Once it is done, you can check load from disk to load these files in and render them for checking.
I actually have a separate hip file for loading all the different caches, adding lights and rendering. This way, I don’t have a scene file with any pyro solvers or volume generation going on. So in your fresh hip file, create a file node, double click on it and set the file path to the cache you wrote out. Give the file node a descriptive name at the obj level, and you can add a material to it.
For cloud volumes, just set the division size for resolution and create the file cache node to save out a cache. You only need one frame, unless it is animated.
Finally, we get to the rendering optimization. In the out tab, you can create as many render nodes as you like. You can have fast render nodes with low quality settings and final render nodes with higher quality.
Set the viewport update to manual, so that things don’t update every time you change a parameter. In the render view, set the render node and camera, keep preview checked, and uncheck Auto-update.
Set your render output path in the images tab.
It is also useful to break your scene up into different layers here. I do this by assigning specific objects to each render node in the objects tab.
If you don’t need shadows, uncheck shadows in your lights, or use the light’s mask so it only casts shadows for specific objects (like the objects field in the render node). You can also have lights not affect the viewport. I’ve been told that depth map shadows can be faster than raytraced shadows. These are all options to experiment with. The fewer lights that affect the volume or pyro, and the fewer shadow-casting ones on top of that, the more efficient, and I believe the less noisy, your renders will be.
Now let’s get into render settings. You may find yourself going back and forth: re-caching your simulation at lower resolution (higher division size), then adjusting lights, then changing settings back and forth to get things optimized. You will do this for pretty much every scene you ever render. With some experience, you’ll get there quicker, but just be prepared.
We’ll go to the sampling tab in our render node. If you don’t need PBR, render with raytracing. You can even try micropolygon rendering if that suits your purpose and renders faster for a particular pass. I’ve highlighted the main parameters I tweaked. At this stage, I advise rendering at your full resolution. In the render window, you can hold shift and drag with the left mouse button to create a render region, so only that portion renders. This is great for quick testing. We want to balance render speed and quality: how low can we take the settings and still get the results we want?
There are other sites with specific details on what each parameter does and how they affect one another. For my purposes, just understand that these parameters control sampling, or processing each pixel multiple times, until a final result is reached. Some of these parameters actually multiply others, so a slight increase or decrease can lead to a dramatic difference in render times and quality. We are trying to get a noise-free result with as little processing on the part of the renderer as possible.
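To make the "parameters multiply" point concrete, here is a back-of-envelope sketch in plain Python (my own simplification, not Mantra's actual sampling math): the two pixel sample values multiply each other, and per-pixel ray samples sit on top of that, so a "small" bump compounds quickly.

```python
# Rough sketch, NOT Mantra's real sampling math: the two pixel sample
# values multiply, and ray samples multiply again on top of them.

def rays_per_pixel(pixel_samples, max_ray_samples):
    sx, sy = pixel_samples
    return sx * sy * max_ray_samples

print(rays_per_pixel((4, 4), 8))  # 128 rays per pixel
print(rays_per_pixel((6, 6), 8))  # 288 -- a "small" bump is 2.25x the work
```

Going from 4x4 to 6x6 pixel samples looks like a modest change, but it more than doubles the work per pixel, which is exactly the kind of compounding to watch for.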
Pixel Samples
Start with low numbers, like 4 and 4. As far as I know, you want to keep both of these values identical to each other.
Min Ray Samples
You can set this to 1.
Max Ray Samples
Start with around 5 to 8.
Noise Level
This one can make a huge difference in render speed. Keep this as high as possible; I start with a value of 1.
I just set this to 4.
Volume Quality
This is another parameter that has a huge effect on render speed. Set it as low as possible; I set it to 0.01.
Volume Shadow Quality
Again, I would set this low to start, like 0.01.
Stochastic Transparency
Check this if your volumes have transparency or opacity (I’m guessing they would).
Stochastic Samples
From what I understood from reading up on it, this determines how many samples are taken within the volume during the render. You can start off really low, like 4. This is a parameter that you can crank up really high without impacting render speeds much, so it is a good way to reduce noise.
Now the balancing act.
In the Render view, with preview checked, just hit the Render button each time you adjust settings. Preview will gradually refine the image as it renders, so watch the percentage. In the beginning, you may come away with a noisy render, so first we’ll increase parameters to get rid of the noise. Maybe increase your pixel samples gradually, up to 12. As you render, check whether the image keeps refining all the way through. You may find that with pixel samples at 12, the quality stays about the same after 70% of the render. To me, this means I should leave that setting, or even lower it until the quality keeps improving toward 90% or the end of the render, because the extra processing is not yielding results.
If there is still noise, I will crank up stochastic samples to see if I can get rid of it, or I will crank it up and lower pixel samples by 1 or 2. If things are rendering so slowly that it is hard to test at all, quickly lower volume quality and increase noise level so that you can get quick iterations.
Work with pixel samples and stochastic samples to get rid of as much noise as possible. Reduce or increase max ray samples sparingly. Gradually bump the noise level up to as high a number as you can go, then adjust volume quality as low as possible. These two parameters can also help get rid of noise, but it is best to remove noise with pixel samples and stochastic samples before relying on them, as they seem to increase render times the most.
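As a toy illustration of that trade (my own rough model, not Mantra internals): pixel samples control expensive primary rays, so lowering them by one cuts real work, while stochastic samples are cheap and can absorb the extra noise.

```python
# Toy comparison (my own rough model, not Mantra internals): pixel samples
# control expensive primary rays; stochastic samples are cheap in-volume
# samples that can soak up the noise from firing fewer primary rays.

def primary_rays(pixel_samples):
    sx, sy = pixel_samples
    return sx * sy

before = primary_rays((8, 8))  # 64 primary rays per pixel
after = primary_rays((7, 7))   # 49 primary rays per pixel
print(before, after)           # 64 49
# Raise stochastic samples (say 4 -> 32) to fight the added noise; per the
# notes above, they barely affect render time.
```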
You can also connect your render nodes’ output to an hq_render node to do queue rendering. This way it renders frame one of all passes, then frame two of all passes, and so on. At any point, you can load these passes into a compositing program and see how it all looks put together. I recommend Digital Fusion, which you can get for free thanks to Blackmagic Design.
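The queue ordering described above can be sketched in a few lines of plain Python (the pass names are made up):

```python
# Sketch of the frame-by-frame ordering the queue produces: frame 1 of
# every pass, then frame 2 of every pass, and so on. Pass names are made up.

passes = ["smoke", "fire", "dust"]
frames = [1, 2, 3]

order = [(frame, p) for frame in frames for p in passes]
print(order[:4])  # [(1, 'smoke'), (1, 'fire'), (1, 'dust'), (2, 'smoke')]
```

Because complete early frames of every pass exist first, you can start compositing and checking the combined result long before the whole range is done.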
I hope this helps. It helped me get render times for my images from 1 hour and 30 minutes down to 8 minutes per frame. When rendering animations, that is a huge difference. Once you have settings that look good, you’ll want to render 3 to 5 consecutive frames at full resolution to play back and test for noise that seemed fine in a still but causes trouble in motion.