From the course: V-Ray Next: Unreal Engine Rendering

Image sampling explained

- [Instructor] When working with a ray-traced render engine such as V-Ray, having a basic understanding of the technology in play whenever we take a render can come in very handy, as that understanding can certainly benefit the image-creation work that we do. To collect all of the information needed to create a final render of a 3D environment, such as the shape of the geometric objects in the scene, the surface properties of the materials that have been applied, lighting and shadow information, et cetera, the render engine needs to be able to probe our scene so as to know exactly what ought to be drawn in each of the pixels that will ultimately make up the final image. The million-dollar question, of course, is how this gathering of information, or scene probing, is accomplished. Well, in general terms, the ray-traced version of the process goes like this: from the rendering camera in the scene, the render engine shoots a number of rays through an internal frame buffer or grid that represents each of the pixels in the final image. These primary rays, sometimes referred to as eye or camera rays, are shot into the 3D environment in order to trace or bounce their way through the scene, usually with a predetermined limit on the number of times they can bounce. As they go, the rays sample, or gather information from, the objects they encounter, and so as they come into contact with geometry, they test for and collect a wide range of information such as diffuse color values, specular reflectivity levels, and so on. At every point of contact they also send out shadow rays, which trace a straight line from that point toward any direct light sources found in the environment. This helps determine whether or not the surface point should be rendered as being in direct light or in shadow.
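The primary-ray and shadow-ray idea described above can be sketched in a few lines of code. This is a deliberately minimal illustration, not how V-Ray itself is implemented: the scene (two spheres and one point light), the tiny 4x4 pixel grid, and all the names here are hypothetical, and shading is reduced to labeling each pixel "background", "lit", or "shadowed".

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sphere_hit(origin, direction, center, radius):
    """Nearest positive hit distance of a ray against a sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a = 1
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0):
        if t > 1e-6:
            return t
    return None

# Hypothetical scene: a large sphere, a small blocker, and one point light.
SPHERES = [((0.0, 0.0, -3.0), 1.5), ((1.2, 1.2, -1.2), 0.4)]
LIGHT = (2.0, 2.0, 0.0)
WIDTH, HEIGHT = 4, 4  # a tiny pixel grid, for illustration only

def nearest_hit(origin, direction):
    """Test the ray against every object and keep the closest intersection."""
    best = None
    for center, radius in SPHERES:
        t = sphere_hit(origin, direction, center, radius)
        if t is not None and (best is None or t < best):
            best = t
    return best

def render():
    """One primary ray per pixel; on a hit, one shadow ray toward the light."""
    image = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            # Map the pixel to a point on an image plane at z = -1.
            u = (x + 0.5) / WIDTH * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / HEIGHT * 2.0
            direction = normalize((u, v, -1.0))
            t = nearest_hit((0.0, 0.0, 0.0), direction)
            if t is None:
                row.append("background")
                continue
            hit = tuple(t * d for d in direction)
            # Shadow ray: a straight line from the hit point toward the light,
            # offset slightly so it does not re-intersect its own surface.
            to_light = normalize(tuple(l - h for l, h in zip(LIGHT, hit)))
            start = tuple(h + 1e-4 * d for h, d in zip(hit, to_light))
            dist = math.dist(LIGHT, hit)  # only occluders before the light count
            t_block = nearest_hit(start, to_light)
            row.append("shadowed" if t_block is not None and t_block < dist else "lit")
        image.append(row)
    return image
```

In this toy scene the small sphere sits between the light and part of the large sphere, so some hit pixels come back "lit" and others "shadowed", which is exactly the question the shadow ray answers.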
On contact with a surface, the primary rays will also evaluate whether or not secondary rays are required at that point. These come into play when the material has properties such as refraction, blurred reflections, or subsurface scattering enabled. Once a user-specified amount of sampling, or information gathering, has been accomplished, all of the data collected so far gets returned along each ray's travel path, fed into the render engine for evaluation and averaging, and then drawn in the renderer's frame buffer window as a final pixel color value. Now of course, this is an extremely simplified overview of what is in reality an incredibly complex process, one that can potentially involve millions, even billions, of rays being cast and traced throughout a scene. This simplified understanding is enough for us to work with here, though, and will definitely be helpful when sampling or image-quality decisions need to be made within the project's pipeline.
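The "evaluation and averaging" step can also be sketched. The snippet below is a hedged stand-in, not V-Ray's actual sampler: `trace_sample` is a hypothetical placeholder that returns a noisy radiance value where a real renderer would trace and shade a full ray, and the pixel's final color is simply the average of several jittered subpixel samples. This is why taking more samples per pixel reduces noise, at the cost of render time.

```python
import random

def trace_sample(u, v):
    """Hypothetical stand-in for a full ray trace: returns a noisy radiance.
    (A real renderer would trace a ray through (u, v) and shade the hit.)"""
    true_radiance = 0.5
    return true_radiance + random.uniform(-0.2, 0.2)

def pixel_color(x, y, samples, width=4, height=4):
    """Average several jittered subpixel samples into one final pixel value."""
    total = 0.0
    for _ in range(samples):
        # Jitter the sample point within the pixel's footprint.
        u = (x + random.random()) / width * 2.0 - 1.0
        v = 1.0 - (y + random.random()) / height * 2.0
        total += trace_sample(u, v)
    return total / samples
```

With only a handful of samples the averaged value still wobbles noticeably around the true radiance; with hundreds of samples it settles very close to it, which mirrors the noise-versus-quality trade-off behind the sampling settings discussed later in the course.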
