Mixing hardware and software rendering

In REDsdk, the rendering method is chosen at the viewpoint level, together with the appropriate rendering method for the window. To render in software, we need to call RED::IWindow::FrameTracing (or any of the FrameTracingXXX methods); these methods ensure that scenes can be rendered in software. To also render in hardware at the same time, we additionally need to make sure that REDsdk can process GPU images. Consequently, REDsdk needs to be started in hybrid mode (see Hardware or software startup).
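As a rough illustration, a hybrid startup might look like the sketch below. This is not a verbatim REDsdk sample: the option name RED::OPTIONS_RAY_ENABLE_SOFT_TRACER, its values, and the exact call signatures are assumptions to be checked against the "Hardware or software startup" page of your SDK version, and error handling and window creation are omitted.

```cpp
// Sketch of a hybrid startup (assumed API usage; verify against your REDsdk version).
RED::Object* resmgr = RED::Factory::CreateInstance( CID_REDResourceManager );
RED::IResourceManager* iresmgr = resmgr->As< RED::IResourceManager >();

// Turn on the software ray-tracer alongside the GPU pipeline.
// Assumed values: 0 = hardware only, 1 = hybrid, 2 = software only.
RED::IOptions* ioptions = resmgr->As< RED::IOptions >();
RC_TEST( ioptions->SetOptionValue( RED::OPTIONS_RAY_ENABLE_SOFT_TRACER, 1, iresmgr->GetState() ) );
```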

Then, whether a viewpoint is processed by software ray-tracing or by the GPU is determined by the options enabled for that viewpoint.
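For instance, a viewpoint can be flagged for software ray-tracing by enabling the RED::OPTIONS_RAY_PRIMARY option on it, as sketched below. The exact call signature is an assumption to verify against your SDK version; viewpoints without this option remain rendered by the GPU.

```cpp
// Sketch: selecting software ray-tracing for one viewpoint (assumed API usage).
RED::IOptions* iopt = viewpoint->As< RED::IOptions >();
RC_TEST( iopt->SetOptionValue( RED::OPTIONS_RAY_PRIMARY, true, iresmgr->GetState() ) );
```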

During each call to RED::IWindow::FrameTracing, REDsdk will:

  1. Further refine the CPU image (the rendering of this camera continues across calls until it is finished).
  2. Render the GPU camera image.
  3. Blend the two resulting images according to the camera setup in the VRL (which image is in front or behind, etc.).