This paragraph details the tools available in REDsdk to customize the feedback of the software ray-tracer. We'll review the pure ray-tracer feedback modes, the way multiple cameras are composited in software, and how the bucket rendering order can be customized.

Progressive refinement feedback modes

As detailed here: Hardware vs. software rendering, ray-tracing delivers a full image by progressive refinement. So this is the first information we get out of REDsdk while rendering in software: a progressively refined image. REDsdk has several built-in mechanisms to deliver feedback to users. The feedback mode of the software ray-tracer is chosen at the time RED::IWindow::FrameTracing is called. Mainly, we have block-based feedback (RED::FTF_BY_BLOCKS), surface-based feedback (RED::FTF_BY_SURFACE_LEVEL_0 up to RED::FTF_BY_SURFACE_LEVEL_5) and path-tracing feedback.

Each feedback mode has been designed for a specific use case. For photo-realistic images, the RED::FTF_BY_BLOCKS feedback style is often better: each pixel is very costly to compute, and it's important to get an overall feedback on the image as soon as possible.

For real-time scenes rendered in software (in cloud-based environments with no GPU acceleration, on virtual machines, etc.), a block-based feedback is not really usable due to the lack of visual understanding of the manipulated model: interaction and comprehension of the visualized model are key in real-time. Each pixel is easy to compute as there are usually no fancy ray-tracing options set here, so a RED::FTF_BY_SURFACE_LEVEL_0 to 5 feedback mode is generally appreciated. Below is an example of a typical CAD assembly, visualized with block feedback and surface feedback:

Comparison of block and surface feedback for an industrial model

What we can see here is that blocks are still present in surface mode, but they are delimited by the contours of the visible surfaces.
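The choice between the two styles can be summarized as a trivial helper. The sketch below is purely illustrative: the FeedbackStyle enum and ChooseFeedbackStyle function are hypothetical names, not REDsdk API; in a real application, the corresponding RED enumerant is what gets passed to RED::IWindow::FrameTracing.

```cpp
// Illustrative only: this enum mirrors the feedback style names discussed
// in this paragraph; the actual REDsdk values are RED:: enumerants passed
// to RED::IWindow::FrameTracing.
enum class FeedbackStyle
{
    ByBlocks,        // stands for RED::FTF_BY_BLOCKS
    BySurfaceLevel0  // stands for RED::FTF_BY_SURFACE_LEVEL_0
};

// Choose a feedback style from the use case, following the guidelines
// above: photo-realistic stills favor block feedback, while interactive
// software-rendered scenes favor surface feedback.
FeedbackStyle ChooseFeedbackStyle(bool photoRealistic)
{
    return photoRealistic ? FeedbackStyle::ByBlocks
                          : FeedbackStyle::BySurfaceLevel0;
}
```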

However, when applied to a complete, complex scene rendered with a lot of ray-tracing options, the surface level mode may take longer than the block mode to display the first image:

Same comparison with a rendering model

Here, after a few seconds we have a complete image in block feedback mode, while in surface mode we're still figuring out the contours of the visible geometries. Because the surface mode analyzes every image pixel and shades all visible surfaces, it can take more time to fill the entire image for the first time. After this first pass, it catches up with the block mode and both images complete in the same final time.

For instant pre-visualization feedback, path-tracing is a good choice as the rendering starts immediately without waiting for any additional structure to be computed (GI cache for example). However, don't forget that path-tracing does not support all the REDsdk features as described here: Path-tracing.

Same scene rendered with path-tracing feedback.

Here the result is noisier than before, but we immediately get the right feeling for the overall image and can still move interactively in the model.

Multiple camera compositing

One important feature in REDsdk is the capability to assemble a rendering pipeline made of several cameras. Setup details can be found here: Defining a rendering pipeline using VRLs. In hardware rendering, this full pipeline of VRLs and cameras is processed and rendered last to first. In software rendering, the same mechanism applies: the rendering also occurs from the last VRL and camera to the first.

Due to the different feedback of software ray-traced rendering compared to hardware rendering, a scene with multiple cameras rendered in pure software mode will appear one camera after the other: the engine does not provide a block (or surface) feedback globally for all cameras in the window. Instead, it renders one camera and displays it with progressive refinement, then renders the next camera and displays it the same way.

Let's assume we have two cameras, one with a background model and the other one with a scene, and that we produce the same image with both rendering solutions. The hardware image will be returned complete, as a whole, after a call to RED::IWindow::FrameDrawing. In software, the background camera will be processed first and its image will be finished first; then the scene camera will be processed and blended with the background result.

Hardware vs. software camera compositing

Consequently, the final user feedback can be very different here, even if the image is identical in the end. Please note that the example above can be easily solved in software, simply by setting up a background image rather than using a background camera that visualizes a background image.
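The software ordering described above can be sketched with a tiny source-over compositing loop. Everything below is illustrative, not REDsdk API: a single RGBA value stands for a whole camera image, and the Pixel, BlendOver and CompositeCameras names are hypothetical. Cameras listed first to last are processed in reverse, so the last (background) camera is resolved first and each earlier camera is blended on top of the accumulated result.

```cpp
#include <vector>

// A single RGBA value stands in for a whole camera image: one pixel is
// enough to illustrate the compositing order (illustrative type, not
// REDsdk's).
struct Pixel { double r, g, b, a; };

// Source-over blending: 'src' is composited on top of 'dst'.
Pixel BlendOver(const Pixel& src, const Pixel& dst)
{
    Pixel out;
    out.a = src.a + dst.a * (1.0 - src.a);
    auto mix = [&](double s, double d) {
        return out.a > 0.0
            ? (s * src.a + d * dst.a * (1.0 - src.a)) / out.a : 0.0;
    };
    out.r = mix(src.r, dst.r);
    out.g = mix(src.g, dst.g);
    out.b = mix(src.b, dst.b);
    return out;
}

// Cameras are listed first to last, as attached to the window's VRLs.
// The software path processes them last to first: the last (background)
// camera finishes first, then each earlier camera is blended on top.
Pixel CompositeCameras(const std::vector<Pixel>& cameras)
{
    Pixel result = { 0.0, 0.0, 0.0, 0.0 };
    for (auto it = cameras.rbegin(); it != cameras.rend(); ++it)
        result = BlendOver(*it, result);
    return result;
}
```

With a half-transparent red scene camera listed before an opaque blue background camera, the background resolves first and the scene is blended over it, matching the behavior described above.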

Customizing the bucket rendering order

By default, REDsdk is really boring: it draws the image from top to bottom. Period. If you'd prefer to see a fancy Hilbert curve instead, or nice concentric circles on screen, then the RED::ISoftBucket interface is yours!

This interface allows you to 'feed' the ray-tracer with regions of the image to process. It can be used to customize the rendering order of buckets in REDsdk, for each level of refinement. Therefore, the software image can be refreshed in an application-friendly manner.

Note that the application must take care to feed enough buckets to the ray-tracer; otherwise, some computing threads may idle.
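As a sketch of what a 'concentric circles' ordering could look like, the helper below sorts square buckets by the distance of their center to the image center. The Bucket struct and ConcentricBucketOrder function are hypothetical illustrations, not the RED::ISoftBucket types; the resulting list is simply the order in which an application would feed regions to the ray-tracer.

```cpp
#include <algorithm>
#include <vector>

// A bucket is a small square region of the image, identified here by the
// (x, y) pixel coordinates of its top-left corner (illustrative struct,
// not the actual REDsdk type).
struct Bucket { int x, y; };

// Build a concentric, center-outward ordering of all buckets covering an
// image of 'width' x 'height' pixels, with square buckets of 'size' pixels.
std::vector<Bucket> ConcentricBucketOrder(int width, int height, int size)
{
    std::vector<Bucket> buckets;
    for (int y = 0; y < height; y += size)
        for (int x = 0; x < width; x += size)
            buckets.push_back({ x, y });

    const double cx = width * 0.5, cy = height * 0.5;
    // Squared distance from the bucket center to the image center.
    auto d2 = [&](const Bucket& k) {
        double dx = (k.x + size * 0.5) - cx;
        double dy = (k.y + size * 0.5) - cy;
        return dx * dx + dy * dy;
    };
    std::sort(buckets.begin(), buckets.end(),
              [&](const Bucket& a, const Bucket& b) { return d2(a) < d2(b); });
    return buckets;
}
```

For a 256x256 image with 64-pixel buckets, the four central buckets come first and the four corner buckets last, so the refinement visually grows outward from the image center.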