Become familiar with the basic terminology, such as vertices, pixels, textures, shaders, and a render target. Go over a simplified 3D pipeline that touches on vertex processing, rasterization, and pixel processing.
- The graphics pipeline can be broken down into these main stages. We have our inputs, which feed into vertex processing, followed by rasterization, then pixel processing, and finally the output to the render target. In the following slides, we'll dive deeper into what each of these stages is. Here we have some definitions for graphics. A primitive is a building block, like a line or a triangle, which is made up of vertices. A vertex is a point in 3D space. It has a position on the x, y, and z axes.
A pixel is a point in 2D space; it is the building block for images and sits on the x and y axes. Rendering is the overall process of turning those primitives into a finished image, one that looks good enough that the human eye believes it's there. Rasterization is the step where primitives, like triangles, are converted into pixels. Texturing is taking an image, like a bitmap, and painting it onto an object in 3D space. A shader is a program that runs right on the graphics card; it takes inputs and produces outputs. A render target is the back buffer that eventually gets displayed to the screen for your viewing.
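To make these definitions concrete, here is a minimal C++ sketch of what a vertex, a pixel, and a triangle primitive might look like in code. The field names are illustrative only and don't come from any particular engine or API.

```cpp
// A vertex: a point in 3D space, usually carrying extra attributes.
struct Vertex {
    float x, y, z;    // position on the x, y, and z axes
    float r, g, b;    // color attribute
    float u, v;       // texture coordinate (where on the "wallpaper" image)
};

// A pixel: a point in 2D space with a color.
struct Pixel {
    int x, y;         // position on the x and y axes of the image
    float r, g, b;
};

// A triangle primitive is just three vertices.
struct Triangle {
    Vertex v[3];
};
```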
And lastly, DirectX is the name of Microsoft's 3D acceleration API. There are others, like OpenGL, which is another popular API for graphics programming. This is the 3D pipeline from 10,000 feet. We have data flowing through these stages, and at the end you get an image on the display screen. For vertex processing, we start with a triangle, and all of its vertices are positioned along the x, y, and z axes. Rasterization takes that triangle and turns it into a bunch of pixels, which appear as little boxes.
Then there is pixel processing, where those boxes get colored. After the boxes are painted, the result is put onto the render target, which is the back buffer that eventually gets displayed to the screen for your viewing. The application does all of the setup: it handles the buffers and the shaders, and defines how the hardware should render. After the application performs all of this setup, it signals for rendering to happen, which is referred to as a draw call. Once the draw call signals for rendering to happen, the whole process is repeated for the next frame.
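Here is a minimal C++ sketch of that application-side flow: setup once, then a per-frame loop that issues a draw call and presents the back buffer. The function names are stub placeholders standing in for real graphics-API calls (Direct3D, OpenGL, and so on), not an actual API.

```cpp
#include <cstdio>

// Placeholder stubs standing in for real graphics-API calls.
void uploadBuffersAndShaders() { /* one-time setup: buffers, shaders */ }
void setPipelineState()        { /* bind shaders, buffers, render target */ }
void draw(int vertexCount)     { std::printf("draw call: %d vertices\n", vertexCount); }
void present()                 { /* flip the back buffer to the screen */ }

int main() {
    uploadBuffersAndShaders();           // the application does all of the setup
    for (int frame = 0; frame < 3; ++frame) {
        setPipelineState();              // per-frame state
        draw(3);                         // the "draw call": render one triangle
        present();                       // render target (back buffer) -> display
    }                                    // then the whole process repeats
}
```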
Every object you see in graphics can be broken down into a bunch of triangles. Graphics is all about triangles, because a triangle is the most basic shape. Even a circle can be drawn with a bunch of triangles. Hardware loves triangles, because the math involved with triangles is simpler than dealing with curves. Each vertex that makes up these many triangles has attributes like the position (x, y, z) and the color (red, green, blue). Vertex processing is where the vertex shader comes in. It's a shader program that tells the GPU what to do with the vertices and how to transform them into world space, the space the game resides in.
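As an illustration, here is the vertex shader's core job written as plain C++ rather than actual shader code: multiplying each vertex position by a transform matrix to move it into world space. The row-major matrix layout and the sample values are assumptions made for this sketch.

```cpp
#include <array>
#include <cstdio>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;  // row-major 4x4 matrix

// Conceptually what a vertex shader does: multiply the vertex position
// by a transform matrix, here from model space into world space.
Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    return out;
}

int main() {
    // A world matrix that moves the model 5 units along x
    // (translation in the last column, row-major layout).
    Mat4 world = {Vec4{1, 0, 0, 5},
                  Vec4{0, 1, 0, 0},
                  Vec4{0, 0, 1, 0},
                  Vec4{0, 0, 0, 1}};
    Vec4 vertex = {1, 2, 3, 1};  // w = 1 for positions
    Vec4 worldPos = transform(world, vertex);
    std::printf("world-space position: (%g, %g, %g)\n",
                worldPos[0], worldPos[1], worldPos[2]);  // prints (6, 2, 3)
}
```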
Shaders are compiled on the CPU, so the time to first frame render might be slow. The next phase in the 3D pipeline is rasterization, where the transition from vertices to pixels happens. If we move to the next slide, we can get a picture of this. Rasterization is where we find all of the pixels inside of a triangle, and where multisample anti-aliasing comes in. Multisample anti-aliasing is a technique used to reduce the staircase-like look of pixels along edges. It smooths the edges of a triangle.
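To picture what "finding all of the pixels inside of a triangle" means, here is a small self-contained C++ sketch using edge functions, one common way to do that coverage test. Real GPUs do this in fixed-function hardware, so this is purely illustrative, and the triangle coordinates are made up.

```cpp
#include <cstdio>

struct Point { float x, y; };

// Edge function: positive when p lies on the left side of the edge a->b.
// A pixel is inside the triangle when it is on the same side of all
// three edges; this is the core test rasterization performs.
float edge(Point a, Point b, Point p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    Point v0{1, 1}, v1{8, 2}, v2{4, 7};  // a triangle in screen space
    for (int y = 0; y < 9; ++y) {        // walk a small grid of pixels
        for (int x = 0; x < 10; ++x) {
            Point p{x + 0.5f, y + 0.5f}; // sample each pixel's center
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            std::putchar(inside ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```

Those covered pixel centers are the "boxes" from the earlier slide; multisample anti-aliasing refines this by testing several sample points per pixel instead of just the center.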
Multisample anti-aliasing is very costly, however, and we might need to decide whether it's worth keeping when we look at the system resource utilization. This technique is applied in the last stage of the 3D pipeline, which is where the pixel processing happens. A pixel shader program is run once per pixel, or multiple times per pixel when anti-aliasing is involved. This is where texture coordinates are used and lighting calculations are performed. So in our game scene, when we see light shining through the windows, the shadows we see are colored in by these shader programs.
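As a toy illustration of one lighting calculation a pixel shader might perform, here is simple Lambertian (diffuse) shading in plain C++. The normal and light direction values are made up for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Conceptual "pixel shader" body: how directly does the surface face
// the light? 1 = head-on, 0 = edge-on or facing away (clamped).
float lambert(Vec3 normal, Vec3 lightDir) {
    return std::max(0.0f, dot(normalize(normal), normalize(lightDir)));
}

int main() {
    Vec3 normal   = {0, 1, 0};   // surface facing straight up
    Vec3 lightDir = {0, 1, 1};   // light from above and behind
    std::printf("diffuse intensity: %.3f\n", lambert(normal, lightDir)); // ~0.707
}
```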
Speaking of texture coordinates, when it comes to texturing, think of an image that is like wallpaper being wrapped around a 3D object, instead of that detail being drawn in with many more triangles. Texturing takes up a lot of space and is usually the culprit when it comes to high memory consumption. We might have to adjust some of the texturing options inside Unity if the memory usage is too high. Remember, texturing is like a wallpaper image wrapped around a 3D object. It adds detail, but the trade-off is that it consumes much more memory.
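To see why textures dominate memory, here is a quick back-of-the-envelope calculation in C++. It assumes one uncompressed RGBA8 texture (4 bytes per texel) and a full mipmap chain, which adds roughly one third on top.

```cpp
#include <cstdio>

int main() {
    // Rough memory cost of a single uncompressed RGBA8 texture.
    int width = 2048, height = 2048;
    double base     = static_cast<double>(width) * height * 4;  // 4 bytes/texel
    double withMips = base * 4.0 / 3.0;                         // mip chain ~= +1/3
    std::printf("base: %.1f MB, with mipmaps: ~%.1f MB\n",
                base / (1024 * 1024), withMips / (1024 * 1024));
    // 2048x2048 RGBA8: 16 MB base, ~21.3 MB with mips. A scene using
    // dozens of textures like this quickly dominates memory, which is
    // why engines like Unity expose options to compress or downsize them.
}
```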
There are different methods tied to how a texture coordinate is mapped to a pixel coordinate. These are the different techniques used for texture filtering, and each one has its own pros and cons; there's a small sketch of two of them after this paragraph. The important concept we want to keep in mind as we analyze the game scene is that sampling is very costly: it generates a lot of memory traffic, and in games that max out the hardware it can cause anomalies, like things disappearing from the scene.
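To contrast two common filtering methods, here is a small C++ sketch of nearest-neighbor versus bilinear sampling on a tiny made-up grayscale texture. Note that bilinear reads four texels per sample where nearest reads one, which is where the extra memory traffic comes from.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// A tiny grayscale "texture": one float per texel.
struct Texture {
    int w, h;
    std::vector<float> texels;
    float at(int x, int y) const { return texels[y * w + x]; }
};

// Nearest-neighbor: snap the UV coordinate to the closest texel.
// Cheap (one read), but blocky up close.
float sampleNearest(const Texture& t, float u, float v) {
    int x = std::min(t.w - 1, static_cast<int>(u * t.w));
    int y = std::min(t.h - 1, static_cast<int>(v * t.h));
    return t.at(x, y);
}

// Bilinear: blend the four surrounding texels. Smoother, but four
// reads per sample.
float sampleBilinear(const Texture& t, float u, float v) {
    float fx = u * t.w - 0.5f, fy = v * t.h - 0.5f;
    int xi = static_cast<int>(std::floor(fx));
    int yi = static_cast<int>(std::floor(fy));
    float ax = fx - xi, ay = fy - yi;                          // blend weights
    int x0 = std::max(0, xi), x1 = std::min(t.w - 1, xi + 1);  // clamp to edges
    int y0 = std::max(0, yi), y1 = std::min(t.h - 1, yi + 1);
    float top = t.at(x0, y0) * (1 - ax) + t.at(x1, y0) * ax;
    float bot = t.at(x0, y1) * (1 - ax) + t.at(x1, y1) * ax;
    return top * (1 - ay) + bot * ay;
}

int main() {
    Texture t{2, 2, {0.0f, 1.0f, 1.0f, 0.0f}};  // 2x2 checkerboard
    std::printf("nearest(0.5, 0.5)  = %.2f\n", sampleNearest(t, 0.5f, 0.5f));
    std::printf("bilinear(0.5, 0.5) = %.2f\n", sampleBilinear(t, 0.5f, 0.5f));
}
```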
The last stage of the pipeline is the pixel output, where everything is painted to the render target, which is the back buffer soon to be presented to the display screen for the user. This is where the depth buffer comes in; it keeps everything in perspective, so objects sit at a certain distance behind other objects. The Z buffer gives the scene depth and the illusion that our first-person camera is walking through a room. If there is a large render target, this can cause saturation of the write memory bandwidth: the larger the render target, the more writes are required to the Z buffer in a 3D scene. So I know we just did a crash course on the 3D pipeline, but let's do a quick recap of the stages. We had vertex processing, which handles all of the inputs given to it by the application, like the primitives to use.
Then we moved to the rasterization stage, which translates those vertices into pixels. And finally pixel processing, where things like texturing occur before the pixel is written to the render target.
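As one last illustration before moving on, here is a minimal C++ sketch of the depth test from the pixel-output stage: a fragment only lands in the render target if it is closer to the camera than whatever was drawn at that pixel before. The fragment values are made up for the example.

```cpp
#include <cstdio>
#include <limits>
#include <vector>

int main() {
    const int w = 4, h = 1;  // a tiny 4x1 render target
    std::vector<float> depthBuffer(w * h, std::numeric_limits<float>::max());
    std::vector<char>  renderTarget(w * h, '.');

    struct Fragment { int x, y; float z; char color; };
    // Two fragments overlap at pixel (1,0): the nearer one ('B') must
    // win, regardless of draw order.
    Fragment frags[] = {{1, 0, 5.0f, 'A'}, {1, 0, 2.0f, 'B'}, {3, 0, 1.0f, 'C'}};

    for (const Fragment& f : frags) {
        int idx = f.y * w + f.x;
        if (f.z < depthBuffer[idx]) {      // depth test: is this fragment closer?
            depthBuffer[idx] = f.z;        // one write to the Z buffer...
            renderTarget[idx] = f.color;   // ...and one to the render target
        }
    }
    std::printf("render target: %.4s\n", renderTarget.data());  // ".B.C"
}
```

Every fragment that passes costs a Z-buffer write plus a render-target write, which is why a larger render target means more write bandwidth.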