
Realtime Shading with HLSL



Introduction


For those who don’t know what HLSL (High Level Shader Language) is: it’s a proprietary shading language made by Microsoft for use with the Direct3D API.

The Autodesk FBX (FilmBox) format is currently a standard interchange format for game assets, including for most games now developed for the Xbox 360 with XNA.

Note: Since I originally come from 3D artist tools, I will be referencing the common simplicities that are taken for granted there, rather than starting out with a blank pad in the game programming world 🙂

 


Matrices


  • Before we get into shading we need something to shade, namely a model, and it’s not as simple as saying File > Open. In the game programming world, new objects need to be drawn into the viewable space, which includes iterating through every separate piece of geometry that a single character is made of. Generally speaking, when you need to draw something in 2D or 3D, you obviously need something to draw it on; you can’t draw in thin air, so you need to build the space on which you can draw. To achieve this we need three matrices, namely the world, view, and projection matrices.

World Matrix – defines the origin of world space, a point from which every other object can form a relation, or be relative to.

View Matrix – is the position of the camera in 3D space, i.e. its XYZ values, plus the camera target and a camera up vector which defines the up direction for the camera.

Projection Matrix – defines how the world, as seen through the camera, maps onto the screen. Simply speaking, it is the calculation between the world and the view, which could be orthographic or perspective, using properties like the field of view, aspect ratio (viewport width / viewport height), and the near and far clip plane distances.

Getting these three things in place will let you successfully load up any kind of model for basic shading purposes; a minimal sketch of how the three matrices combine in a vertex shader follows below.
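
As a rough, hypothetical sketch (the matrix names here are illustrative placeholders, not code from any particular engine), the three matrices are typically chained like this in an HLSL vertex shader:

```hlsl
float4x4 World;        // places the model in world space
float4x4 View;         // world space as seen from the camera
float4x4 Projection;   // orthographic or perspective mapping to the screen

float4 TransformPosition(float3 localPosition)
{
    float4 worldPos = mul(float4(localPosition, 1.0f), World);
    float4 viewPos  = mul(worldPos, View);
    return mul(viewPos, Projection);   // final clip-space position
}
```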

  • Some background on shading in the computer world: all graphical data is vector-based data, meaning that if this data were to be rendered as-is, you would get a huge block of continuously flowing numbers. Yes, exactly like The Matrix, except that it would be green. So to see this data in the form that we humans find more appealing, that is through pictures, this vector data needs to be converted to pixel-based information, a process nowadays commonly known as rasterization. In doing so, the processor reads every piece of vector data and calculates the equivalent pixel-based value, to show the final shaded model.

 


HLSL Shading Model


Now we can get down to the actual HLSL shading model. A standard file structure consists of five major parts: the Input (incoming vertex data), Vertex Shader Functions, Output (outgoing vertex data), Pixel Shader Functions, and Techniques.

Input (Incoming Vertex Data) – This is where all the incoming vertex data is stored for further calculation; this includes data from textures, models, cameras, matrices, etc. It is basically a storage block (buffer) for migrating data.

Vertex Shader Functions – This is where all the various methods for processing the input data reside. With a combination of these functions, a variety of results can be achieved, like calculating object normal space, or normals in relation to light; basically all kinds of vector data.

Output (Outgoing Vertex Data) – This is again a buffer for vertex data; the only difference is that this data has already been worked on by the vertex shader functions, and is now ready to go to the next level, which is processing by the pixel shader.

Pixel Shader Functions – This is where all the calculated vector data is re-calculated with pixel-based logic, so that the real-time renderer can finally get the values needed to display the required object in world space.

Techniques – A technique is where a specific vertex and pixel shader function can be put together as one combination out of different varieties of rendering. For example, if you had an object that required normal vector rendering and another that did not require that logic, then you could write two techniques, where one would have a function for calculating normal data and the other would be for objects that didn’t need normal calculations.
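
To make the structure concrete, here is a minimal, hypothetical .fx file laid out in those five parts; the struct and function names are placeholders, and it assumes a single combined world-view-projection matrix supplied by the application:

```hlsl
float4x4 WorldViewProjection;   // combined matrix set by the application

// 1. Input – incoming vertex data
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Color    : COLOR0;
};

// 3. Output – vertex data after the vertex shader has worked on it
struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color    : COLOR0;
};

// 2. Vertex shader function – transforms each vertex into screen space
VertexShaderOutput BasicVS(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, WorldViewProjection);
    output.Color    = input.Color;
    return output;
}

// 4. Pixel shader function – turns the interpolated vertex data into a color
float4 BasicPS(VertexShaderOutput input) : COLOR0
{
    return input.Color;
}

// 5. Technique – pairs one vertex shader with one pixel shader
technique BasicColor
{
    pass P0
    {
        VertexShader = compile vs_2_0 BasicVS();
        PixelShader  = compile ps_2_0 BasicPS();
    }
}
```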

 


DirectX Shading Model Pipeline


The Microsoft Direct3D 10 API defines a process to convert a group of vertices, textures, buffers, and state into an image on the screen. This process is described as a rendering pipeline with several distinct stages.

The different stages of the Direct3D 10 pipeline are:

  1. Input Assembler: Reads in vertices from an application-supplied vertex buffer and feeds them down the pipeline.
  2. Vertex Shader: Performs operations on a single vertex at a time, such as transformations, skinning, or lighting.
  3. Geometry Shader: Processes entire primitives such as triangles, points, or lines. Given a primitive, this stage discards it, or generates one or more new primitives (a minimal sketch follows after this list).
  4. Stream Output: Can write out the previous stage’s results to memory. This is useful to recirculate data back into the pipeline.
  5. Rasterizer: Converts primitives into pixels, feeding these pixels into the pixel shader. The Rasterizer may also perform other tasks such as clipping what is not visible, or interpolating vertex data into per-pixel data.
  6. Pixel Shader: Determines the final pixel color to be written to the render target and can also calculate a depth value to be written to the depth buffer.
  7. Output Merger: Merges various types of output data (pixel shader values, alpha blending, depth/stencil…) to build the final result.
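
As an illustration of stage 3, here is a minimal Direct3D 10 style geometry shader (shader model 4.0) that simply passes each triangle through; the struct and function names are placeholders. Skipping the Append calls would discard the primitive, and appending extra vertices would generate new primitives:

```hlsl
struct GSVertex
{
    float4 Position : SV_Position;
};

[maxvertexcount(3)]
void PassThroughGS(triangle GSVertex input[3],
                   inout TriangleStream<GSVertex> output)
{
    // emit the incoming triangle unchanged; omitting Append() would
    // discard it, and extra Append() calls would generate new geometry
    for (int i = 0; i < 3; i++)
        output.Append(input[i]);
    output.RestartStrip();
}
```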

 


Shader Logic


Pixel shader (abbreviation PS) is a shader program, normally executed on the graphics processing unit; in OpenGL it is referred to as the fragment shader. A pixel shader computes the color (and optionally more attributes) of each rendered pixel, so the pixel shader defines how the pixels ultimately look. Pixel shaders range from very simple (e.g. always output the same color), to simple (e.g. read a color from a texture and apply a lighting value), to complex ones that simulate bump mapping, shadows, specular highlights, translucency and other complex phenomena. The pixel shader is executed for each pixel rendered, independently from the other pixels. Taken in isolation, a pixel shader alone can’t produce very complex effects, because it operates only on a single pixel, without any knowledge of the scene’s geometry or neighboring pixels. In stream processing terms, a pixel shader is a computation kernel function. In addition to the color of the pixel, the pixel shader can also alter the depth of the pixel (for Z-buffering), or output more than one color if multiple render targets are active.

Vertex shader (abbreviation VS) is a shader program, normally executed on the graphics processing unit. A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment; it operates on one vertex at a time, transforming its position and attributes before they are passed on.

Theory of Shaders


Standard Shader – The major prerequisite for textures is UVs, which are available to us in the FBX model, but that information needs to be imported into the shader logic; that is, iterate through the texture coordinates, wrap them around the vector data, and continuously transform that with the skin transform data, so as to always keep the data live and updated with current snapshots of the model. Then that vector data needs to be sent to the pixel shader, so that it can be compared with the texture file, to produce the right colors in the right space.
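
As an illustrative sketch (the sampler and struct names are placeholders), passing the UVs through to the pixel shader and sampling the texture might look like this:

```hlsl
texture DiffuseTexture;     // the model’s texture file (placeholder name)
sampler DiffuseSampler = sampler_state
{
    Texture   = <DiffuseTexture>;
    MinFilter = Linear; MagFilter = Linear; MipFilter = Linear;
};

struct TexturedVertex
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;   // the UVs imported from the model
};

// the vertex shader copies the UVs straight through (transform omitted);
// the pixel shader then looks up the right color in the right space
float4 TexturedPS(TexturedVertex input) : COLOR0
{
    return tex2D(DiffuseSampler, input.TexCoord);
}
```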

Diffuse Shading / Lighting – Diffuse shading is based on the principle that a surface is brightest when it is facing the light, and it tapers into darkness as it faces away from the light. If we normalize a vector, we keep its direction but the magnitude gets set to a value of 1, which makes it a unit vector. The dot product of two unit vectors equals the cosine of the angle between them — dot(unit vector, unit vector) = cos(angle) — so with this we can determine whether the surface is facing the light based on the result of the dot product: a value of 1 means it is directly facing the light and a value of -1 means it is in total darkness. So 1 is the brightest and -1 is the darkest area; cos(angle) is a floating-point value.
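
A minimal sketch of this dot-product lighting in HLSL, assuming a directional light; LightDirection, LightColor, and the function name are placeholders:

```hlsl
float3 LightDirection;                  // direction the light travels
float3 LightColor = float3(1, 1, 1);

float4 DiffusePS(float3 worldNormal : TEXCOORD0) : COLOR0
{
    float3 n = normalize(worldNormal);
    float3 l = normalize(-LightDirection);   // from the surface toward the light
    float  nDotL = dot(n, l);                // = cos(angle), in [-1, 1]
    // clamp the dark side to 0 so back-facing areas stay black
    return float4(LightColor * saturate(nDotL), 1.0f);
}
```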

Normal Map Shading – Normal mapping works on the principle that we do not need to use only the normals from the vertices of a mesh, as in a generic vertex shader; instead we can use the normal information stored in an image. Normals contain XYZ information, and a normal map stores these XYZ coordinates in the form of RGB color data. So in a normal-map shader, the low number of normals derived from the mesh is basically overwritten with a much higher number derived from the normal map. The normal map can be generated from a high-resolution object and used to shade a low-detail mesh; normal maps are calculated in tangent space.
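
A hedged sketch of the decode step: the RGB data is stored in the [0, 1] range, so it has to be expanded back to [-1, 1] before it can be used as a tangent-space normal (the sampler and parameter names are placeholders):

```hlsl
sampler NormalMapSampler;   // assumed bound to the normal map texture

float3 FetchWorldNormal(float2 uv, float3x3 tangentToWorld)
{
    // the map stores XYZ as RGB in [0, 1]; expand back to [-1, 1]
    float3 n = tex2D(NormalMapSampler, uv).rgb * 2.0f - 1.0f;
    // normals in the map live in tangent space; rotate them out for lighting
    return normalize(mul(n, tangentToWorld));
}
```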

Specular Shading – For specularity, light is reflected from a surface to the camera or eye; it always depends on these two factors, the reflected light and the placement of the camera, which in shader terms are the light reflection direction and the view (eye) vector. Calculating reflected light can be very intensive, since it depends on various bounce levels, so James F. Blinn (1977) devised something called the half vector. The half vector is the vector halfway between the light and the eye, and the angle between it and the normal corresponds to (in the coplanar case, exactly half of) the angle between the reflected light and the eye. This is much quicker to calculate. For more information on the half vector, see “Models of light reflection for computer synthesized pictures” by James F. Blinn. But that’s not enough: specular depends on how smooth or rough an object really is, and that’s where the specular exponent calculation comes in.
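
A minimal Blinn-style sketch, assuming both input directions point away from the surface; SpecularPower and the function name are placeholders:

```hlsl
float SpecularPower = 32.0f;   // the specular exponent: higher = smoother surface

float BlinnSpecular(float3 normal, float3 toLight, float3 toEye)
{
    // Blinn’s half vector sits halfway between the light and the eye
    float3 h = normalize(normalize(toLight) + normalize(toEye));
    // the exponent tightens the highlight on smooth surfaces
    return pow(saturate(dot(normalize(normal), h)), SpecularPower);
}
```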

Reflection Shading – For reflection we need a reflection vector, which can be obtained by reflecting what the camera is seeing off the normals. This vector is then used to look up specified texture data in the form of a cube map, so as to fake the surroundings of the world in real-time. On a more complex level, these maps are also generated in real-time, so as to continuously stay updated with the environment that the object resides in.
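
A hedged sketch of the cube-map lookup; the texture, sampler, and input names are placeholders, and the view direction is assumed to point from the surface toward the eye:

```hlsl
texture EnvironmentMap;   // cube map faking the surroundings
samplerCUBE EnvSampler = sampler_state { Texture = <EnvironmentMap>; };

float4 ReflectionPS(float3 worldNormal : TEXCOORD0,
                    float3 toEye       : TEXCOORD1) : COLOR0
{
    // reflect the eye-to-surface direction off the normal...
    float3 r = reflect(-normalize(toEye), normalize(worldNormal));
    // ...and use it to look up the environment in the cube map
    return texCUBE(EnvSampler, r);
}
```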

Simple Shadows – For basic shadows, we need to make a shadow matrix that will draw a flattened version of the object onto a ground plane; the object can then be shaded with a constant color, so that it looks like a shadow on the ground plane. This can be quite an overkill at times, since it basically means drawing another instance of the character, and it still needs to calculate another set of vertices for the shadow.
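
Rather than building the full shadow matrix, here is a hedged sketch of the underlying idea for a directional light and a ground plane at y = 0; LightDir and the function name are placeholders:

```hlsl
float3 LightDir;   // direction the light travels; assumed not parallel to the ground

float4 FlattenToGround(float4 worldPos)
{
    // slide the vertex along the light direction until it reaches y = 0,
    // then shade the result with a constant shadow color
    float t = worldPos.y / LightDir.y;
    return float4(worldPos.xyz - LightDir * t, 1.0f);
}
```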

Stencil Buffer Shadows – The flexibility of the stencil method is that it does not draw the shadow as a flattened mesh but draws its shape as a silhouette. The shadow matrix still calculates a flattened version of the object, but when the shadow needs to be drawn it goes through a stencil pass, in which the stencil test checks whether shadow pixels overlap, so as to always keep only one value per shadow pixel at any given time. This is also a better option for alpha blending: since there is no overlapping of information, it does not tend to jitter in real-time.






  • vizman says:
    (October 22, 2008 at 4:50 am)

    Dude,

    Thanks for the effort, and tackling the entire process of 3D from soup to nuts in one blog is quite a mountain to climb! I don’t mean to be rude, but it (the article) would be much better if it had proper spelling & grammar. Otherwise an interesting read.

    viz


  • asdasd says:
    (December 23, 2010 at 4:14 am)

    Nobody uses FBX for game assets, that would be stupid. It’s far too unwieldy. They use it as an intermediary, this is entirely different.

    That comment is total XNA-creep, isn’t it?

    • Crunk says:
      (December 23, 2010 at 9:26 am)

      well yeah, this is XNA related.. back in the day when this post was written I was using it quite widely.. and it seems like now it needs to be updated.. 🙂