Rendering large images

Rendering wide posters at 300 dots per inch requires the ability to generate *very* large images. These images may not fit in the system's memory, hence the need to generate them tile by tile, or by sub-regions.

Large images require asymmetric cameras

The first step in rendering a large image is to split it into smaller parts and then to render these parts separately. This is achieved by setting up asymmetric cameras. A full tutorial describes this process here: Rendering large images.

Asymmetric cameras are set up through the camera projection methods of the RED::IViewpoint interface.

Large images and global illumination

Another problem that arises with large images is the need to generate a single, unique global illumination solution for the entire image. Each image tile can't use its own global illumination solution: global illumination artifacts would show up at the borders between tiles, because two adjacent tiles would not have correlated global illumination information (the global illumination signal IS interpolated).

Consequently, large images that are to be rendered using global illumination must use a pre-generated global illumination solution. This is achieved by using RED::IWindow::FrameTracingIIC to generate the GI solution, and then by calling RED::IViewpointRenderList::SetViewpointGICache for each tile rendered in the final image.

Consolidation of tile luminance

Similarly, a problem arises with the post-processing of tiles in a large image. The REDsdk tone mapping system uses luminance information extracted from the image that was just calculated (see Tone mapping). Consequently, there's no reason for two tiles of the same image to be tone mapped in a coherent way. That's why, in the case of a large image, each tile needs to be rendered first; then the luminance information of all tiles needs to be merged together to figure out the global image luminance parameters. Finally, each tile can be tone mapped on its own using the consolidated luminance values.