WebGPU Cloth Simulation 05 - Computer Graphics Fundamentals
Computer Graphics Fundamentals
Computer Graphics Approaches
1. Physically Based Rendering
Physically Based Rendering (PBR) is a computer graphics approach that aims to render images in a way that accurately simulates the physical interaction of light with surfaces. This method strives to achieve photo-realism by adhering to the physical laws of light behavior. PBR is popular in real-time applications such as video games because it provides consistent, predictable, and realistic results under various lighting conditions.
a. Reflection and Material Properties
- Albedo: The diffuse color of a surface, representing the color that the surface reflects under bright, white light.
- Metalness: A binary-like value where a surface is either metallic or non-metallic, affecting how it reflects light.
- Roughness: This describes how rough or smooth a surface is, affecting the sharpness of reflections and how light diffuses across the surface.
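These three properties map naturally onto a small data structure. A minimal sketch in TypeScript (the interface and field names follow common engine conventions and are illustrative, not a specific API):

```typescript
// Illustrative PBR material parameters; names mirror common engine
// conventions rather than any particular renderer's API.
interface PBRMaterial {
  albedo: [number, number, number]; // diffuse surface color (linear RGB, 0..1)
  metalness: number;                // 0 = dielectric, 1 = metal (usually near-binary)
  roughness: number;                // 0 = mirror-smooth, 1 = fully diffuse
}

// Example: a slightly worn metal.
const brushedSteel: PBRMaterial = {
  albedo: [0.56, 0.57, 0.58],
  metalness: 1,
  roughness: 0.4,
};
```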
b. Standardization Across Different Lighting Conditions
Because PBR parameters describe physical properties of the surface rather than baked-in lighting, the same material definition produces consistent, plausible results under any lighting environment.
c. Fresnel
The Fresnel effect describes how the amount of light reflected by a surface varies depending on the angle of incidence — that is, the angle at which the light hits the surface. Named after the French physicist Augustin-Jean Fresnel, this phenomenon is crucial for rendering realistic images in computer graphics, especially when simulating materials like water, glass, and metals.
d. Schlick's Approximation
A common approach to implementing the Fresnel effect in rendering engines is Schlick's approximation, a formula that simplifies the calculation of reflectance as a function of angle. The formula is expressed as:
R(θ) = R₀ + (1 − R₀)(1 − cos θ)⁵
where R₀ is the reflectance at normal incidence (light hitting the surface head-on) and θ is the angle between the surface normal and the incoming light.
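As a sketch, this translates directly into code (the function name is illustrative, and R₀ is a scalar here for clarity; in practice it is usually an RGB triple):

```typescript
// Schlick's approximation: reflectance rises toward 1 at grazing angles.
// cosTheta: cosine of the angle between the surface normal and the view/light
// direction. f0: reflectance at normal incidence (~0.04 for most dielectrics).
function fresnelSchlick(cosTheta: number, f0: number): number {
  const m = 1 - Math.min(Math.max(cosTheta, 0), 1); // clamp to [0, 1]
  return f0 + (1 - f0) * m ** 5;
}
```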
e. Energy Conservation
PBR models adhere to the principle of energy conservation, meaning that the amount of light that comes out of a surface cannot exceed the amount of light that goes into it. For instance, a very rough surface will not produce sharp, clear reflections like a smooth surface because it scatters light in many directions.
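One common way this principle shows up in code is a diffuse term that is scaled down by whatever fraction of light the specular term already reflected. A minimal sketch (zeroing the diffuse term for metals is a widespread heuristic, not the only formulation):

```typescript
// Energy conservation heuristic: diffuse + specular must not exceed 1.
// kS is the specular fraction (e.g. the Fresnel term from above);
// metals reflect essentially all light specularly, so their diffuse term is 0.
function diffuseWeight(kS: number, metalness: number): number {
  return (1 - kS) * (1 - metalness);
}
```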
Texture Maps (Specular/Glossiness Workflow)
This approach is one of the primary methods used in material definition and shading in computer graphics, especially within the context of Physically-Based Rendering (PBR).
- Base Color Map (gives the color)
A base color map (sometimes just called a color map or diffuse map) is a texture that stores the primary colors of a material as seen under neutral lighting. The base color map is applied directly to the surface of the 3D model. In modern physically based rendering (PBR) workflows, the base color map strictly defines the color of the material and does not incorporate lighting effects, which are handled by other components of the rendering engine.

- Normal Map (enhances the detail)
A normal map is used to simulate fine details of surface textures without using additional polygons. It stores vectors (usually encoded as RGB values) that alter the surface normals of the object, which are vectors perpendicular to the surface used to calculate light reflections and shading.

- Specular Map (controls the reflectivity)
A specular map is used to define the shininess and reflectivity of different parts of a material's surface. It adjusts how much light is reflected specularly (i.e., with a mirror-like quality) rather than diffusely (i.e., scattered in many directions).
Reference:
https://marmoset.co/posts/basic-theory-of-physically-based-rendering/
2. Ray Tracing
Ray tracing is a technique for generating images by simulating the way rays of light travel in the real world. It calculates the color of each pixel by tracing a ray through that pixel in the image plane and simulating what happens when the ray encounters virtual objects. Ray tracing can produce highly realistic images with accurate shadows, reflections, refractions, and global illumination. However, it is computationally intensive.
This is how ray tracing works:
a. Ray Casting
The basic operation in ray tracing involves casting rays from the eye (camera) through each pixel on the image plane into the scene. The renderer then determines what each ray intersects first in the scene.
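A minimal sketch of generating such a primary ray (the Vec3/Ray types, the pinhole-camera setup, and the camera looking down −z are illustrative assumptions):

```typescript
type Vec3 = [number, number, number];
interface Ray { origin: Vec3; dir: Vec3; }

// One ray per pixel, from the camera origin through the pixel's center
// on the image plane.
function primaryRay(
  x: number, y: number, width: number, height: number,
  eye: Vec3, fovY: number,
): Ray {
  const aspect = width / height;
  // Map the pixel center to normalized device coordinates in [-1, 1].
  const ndcX = ((x + 0.5) / width) * 2 - 1;
  const ndcY = 1 - ((y + 0.5) / height) * 2;
  const h = Math.tan(fovY / 2);
  const dir: Vec3 = [ndcX * h * aspect, ndcY * h, -1]; // camera looks down -z
  const len = Math.hypot(dir[0], dir[1], dir[2]);
  return { origin: eye, dir: [dir[0] / len, dir[1] / len, dir[2] / len] };
}
```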
b. Shading and Lighting
Once an intersection is found, the surface properties at the point of intersection (such as color, texture, and material) are used to compute the local color.
c. Recursion for Reflection and Refraction
Ray tracing is inherently recursive. For reflective and transparent materials, rays are spawned when they hit surfaces, tracing the paths that reflected or refracted rays would take. This recursion continues until a termination condition is met, which might be a maximum recursion depth or when the contribution of additional rays becomes negligibly small.
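The recursive structure can be sketched as follows, reusing the Vec3 and Ray types from the ray-casting sketch above; intersect, shadeLocal, and reflect are hypothetical placeholders for the scene query, local lighting, and reflected-ray construction that a real tracer would supply:

```typescript
interface Hit { point: Vec3; normal: Vec3; reflectivity: number; }

// Placeholders: any real ray tracer provides its own versions of these.
declare function intersect(ray: Ray): Hit | null;
declare function shadeLocal(hit: Hit): Vec3;
declare function reflect(ray: Ray, hit: Hit): Ray;

const BACKGROUND: Vec3 = [0, 0, 0];

function trace(ray: Ray, depth: number): Vec3 {
  if (depth <= 0) return BACKGROUND;   // termination: maximum recursion depth
  const hit = intersect(ray);
  if (!hit) return BACKGROUND;
  const local = shadeLocal(hit);       // color from direct lighting
  const k = hit.reflectivity;
  if (k === 0) return local;
  // Spawn a reflected ray and blend its contribution in; a transparent
  // material would spawn a refracted ray here in the same way.
  const bounced = trace(reflect(ray, hit), depth - 1);
  return [
    local[0] * (1 - k) + bounced[0] * k,
    local[1] * (1 - k) + bounced[1] * k,
    local[2] * (1 - k) + bounced[2] * k,
  ];
}
```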
3. Rasterization
Rasterization is the most common technique used in real-time graphics, especially in video games. It converts 3D models into a 2D image quickly by processing vertices and edges of polygons. This approach is highly efficient and is well-suited to the real-time demands of interactive applications.
Key features of Rasterization:
a. Vertex Processing
The first step in the rasterization pipeline is vertex processing, where each vertex of a 3D object is transformed according to the camera's perspective and the object's position and orientation in the world. This stage typically involves transformations like translation, rotation, and scaling, all handled by the GPU.
b. Primitive Assembly
Once vertices are processed, they are assembled into geometric primitives (usually triangles). These triangles are what the GPU uses to represent 3D objects.
c. Triangle Rasterization
Each triangle is then rasterized, which involves determining which pixels (or samples) on the screen are covered by the triangle. This step converts the vector data (triangles) into a raster image, a grid of pixels.
d. Pixel Shading (Fragment Shading)
After rasterization, each pixel covered by a triangle undergoes pixel shading. This stage determines the color of each pixel based on various factors like lighting, the material properties of the object, and textures. Shaders, small programs that run on the GPU, are used to perform complex calculations for lighting and effects.
e. Z-buffering (Depth Testing)
To handle the visibility of objects (which object is in front of which), rasterization uses a Z-buffer. Each pixel has a depth value, and only the closest pixel to the camera is kept in the final image. This prevents objects further away from being drawn over closer objects.
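In WebGPU this behavior is configured on the render pipeline. A sketch of the relevant fragment (assuming the WebGPU type definitions are available and the pipeline/depth-texture setup comes from earlier posts in this series):

```typescript
// Depth testing: keep a fragment only if it is nearer than what the
// Z-buffer already holds, and record its depth when it wins.
const depthStencil: GPUDepthStencilState = {
  format: 'depth24plus',
  depthWriteEnabled: true, // store the winning fragment's depth
  depthCompare: 'less',    // pass only fragments closer than the stored depth
};
```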
Normals
1. Normal
A normal is the vector perpendicular to a surface at a given point. In collision handling, it is the vector pointing directly away from the surface of contact at the point where two objects collide, and it plays a vital role in calculating how objects should react after colliding.
2. Tangent Normal
Tangent Normal comes into play primarily when using normal maps. A normal map stores normals as RGB values, where each color component corresponds to a direction vector. These normals are not in world or object space but in tangent space.
Tangent space is a coordinate system that adjusts with the surface of the mesh. It is defined at each vertex and oriented according to the vertex's normal (pointing away from the surface) and tangent (aligned with the direction of texture coordinates, typically along the "horizontal" texture axis).
The normals stored in the normal map (tangent normals) represent deviations from the vertex's base normal, allowing for the simulation of complex surface details that affect how light interacts with the surface. These deviations are typically used to create the appearance of bumps, dents, and other textures without altering the actual geometry of the mesh.
3. Tangent-Bitangent-Normal (TBN) Matrix
Components of the TBN Matrix:
- Normal (N):
The normal vector represents the perpendicular direction to the surface at a given point. In the context of a normal map, this vector points "out" of the surface and is used as the reference direction for the other two vectors.
- Tangent (T):
The tangent vector lies along the surface of the model and is typically aligned with the direction of the horizontal texture coordinate (u-axis). This alignment allows the tangent to indicate the direction of texture wrapping around the model.
- Bitangent (B) or Binormal:
The bitangent vector is perpendicular to both the normal and the tangent vectors. It usually aligns with the vertical texture coordinate (v-axis), completing a right-handed coordinate system.
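A sketch of putting the three vectors to work: decode a tangent-space normal from its RGB encoding, then express it in the model's space by multiplying with the TBN matrix (T, B, and N are assumed unit-length; the vector helpers are inlined for self-containment):

```typescript
type Vec3 = [number, number, number];

const scale = (v: Vec3, s: number): Vec3 => [v[0] * s, v[1] * s, v[2] * s];
const add3 = (a: Vec3, b: Vec3, c: Vec3): Vec3 =>
  [a[0] + b[0] + c[0], a[1] + b[1] + c[1], a[2] + b[2] + c[2]];

function tangentToModel(rgb: Vec3, t: Vec3, b: Vec3, n: Vec3): Vec3 {
  // Undo the [0, 1] -> [-1, 1] encoding used by normal maps.
  const local: Vec3 = [rgb[0] * 2 - 1, rgb[1] * 2 - 1, rgb[2] * 2 - 1];
  // Multiplying by the TBN matrix = a weighted sum of its column vectors.
  return add3(scale(t, local[0]), scale(b, local[1]), scale(n, local[2]));
}
```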
Model-View-Projection Matrix
The Model-View-Projection (MVP) matrix is used to transform the coordinates of objects from their local coordinate spaces into a coordinate space that can be displayed on a screen.
1. Model Matrix
The Model Matrix transforms vertices from an object's local coordinate space (also known as model space) to world space. In model space, the origin is typically considered to be the center of the object, and measurements are made relative to this point. The model matrix moves the object to its position in the 3D scene, scales it according to its intended size, and rotates it to the desired orientation.
2. View Matrix
The view matrix transforms vertices from world space into view space or camera space. It positions and orients everything in the scene relative to the position and orientation of the camera. Essentially, it defines the position and angle from which the viewer (or the virtual camera) is observing the scene. The view matrix is often described as a camera moving in the opposite direction of how you would move in the scene, which simulates the camera's point of view.
3. Projection Matrix
After vertices are transformed into view space, the projection matrix is applied to project the 3D coordinates into the 2D coordinate system of the screen. There are typically two types of projection used:
a. Perspective projection
This simulates the depth perception of the human eye, making objects appear smaller as they are farther from the camera. It gives a realistic depth feeling to the rendered scene.
b. Orthographic projection
This does not account for perspective. Objects are the same size regardless of their distance from the camera, which is useful for technical or architectural visualizations where measurements need to be uniform across the view.
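As a sketch, here is how the three matrices compose using the gl-matrix library (an assumption; any matrix library works the same way, and the camera and object placement values are arbitrary):

```typescript
import { mat4, vec3 } from 'gl-matrix';

// Model: place the object in the world.
const model = mat4.create();
mat4.translate(model, model, vec3.fromValues(0, 1, 0));

// View: a camera at (0, 2, 5) looking at the origin, +y up.
const view = mat4.create();
mat4.lookAt(view, [0, 2, 5], [0, 0, 0], [0, 1, 0]);

// Projection: perspectiveZO (gl-matrix 3.3+) maps depth to [0, 1],
// which matches WebGPU's clip space.
const proj = mat4.create();
mat4.perspectiveZO(proj, Math.PI / 4, 16 / 9, 0.1, 100);

// Note the order: the model transform is applied first, projection last.
const mvp = mat4.create();
mat4.multiply(mvp, proj, view); // P * V
mat4.multiply(mvp, mvp, model); // (P * V) * M
```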
Camera
1. Arcball Camera (Trackball Camera)
The Arcball Camera is typically used to orbit around a central object or a scene, providing a spherical rotation around it. This type of camera is popular in applications where a user needs to examine an object from multiple angles, such as in 3D modeling software or inspection tools.
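The heart of an arcball is mapping a 2D mouse position to a point on a virtual sphere wrapped around the scene; a drag then rotates from one mapped point to the other (the rotation axis is their cross product, the angle comes from their dot product). A minimal sketch of that mapping:

```typescript
// Map a mouse position (pixels, y down) to a point on a virtual unit sphere.
function mapToSphere(
  x: number, y: number, width: number, height: number,
): [number, number, number] {
  // Normalize to [-1, 1] with y pointing up.
  const px = (2 * x - width) / width;
  const py = (height - 2 * y) / height;
  const d2 = px * px + py * py;
  // Inside the ball's silhouette: lift onto the sphere.
  // Outside: clamp to the sphere's equator.
  return d2 <= 1
    ? [px, py, Math.sqrt(1 - d2)]
    : [px / Math.sqrt(d2), py / Math.sqrt(d2), 0];
}
```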
2. WASD Camera (First-Person Shooter Camera)
The WASD Camera, also known as a free camera or a first-person shooter (FPS) camera, is common in video games and simulation applications. It allows the user to navigate through the environment more freely, similar to walking or flying through the space.
Barycentric Coordinates
A barycentric coordinate system is a coordinate system in which the location of a point is specified by reference to a simplex. The barycentric coordinates of a point can be interpreted as masses placed at the vertices of the simplex, such that the point is the center of mass (or barycenter) of these masses. These masses can be zero or negative; they are all positive if and only if the point is inside the simplex.
P = uA + vB + wC
Properties
- Sum to One: The weights u, v, and w always sum to one: u + v + w = 1
- Non-negative Weights: For P to be inside the triangle, all weights should be non-negative:
u ≥ 0, v ≥ 0, w ≥ 0
- Vertex Representation: If P coincides with one of the vertices, the corresponding weight is 1, and the others are 0:
If P=A, then (u,v,w)=(1,0,0)
If P=B, then (u,v,w)=(0,1,0)
If P=C, then (u,v,w)=(0,0,1)
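A sketch of computing (u, v, w) for a 2D point via signed areas (equivalently, the edge functions used during triangle rasterization); the names are illustrative:

```typescript
type Vec2 = [number, number];

// Twice the signed area of triangle (a, b, c); positive when counter-clockwise.
const signedArea2 = (a: Vec2, b: Vec2, c: Vec2): number =>
  (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);

function barycentric(p: Vec2, a: Vec2, b: Vec2, c: Vec2): [number, number, number] {
  const area = signedArea2(a, b, c);
  const u = signedArea2(b, c, p) / area; // weight of A: area(PBC) / area(ABC)
  const v = signedArea2(c, a, p) / area; // weight of B: area(PCA) / area(ABC)
  const w = 1 - u - v;                   // weights sum to one
  return [u, v, w];
}

// Example: the centroid of a triangle has weights (1/3, 1/3, 1/3).
console.log(barycentric([1, 1], [0, 0], [3, 0], [0, 3]));
```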
Simplex
A simplex is a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions.
- a 0-dimensional simplex is a point,
- a 1-dimensional simplex is a line segment,
- a 2-dimensional simplex is a triangle,
- a 3-dimensional simplex is a tetrahedron, and
- a 4-dimensional simplex is a 5-cell.