Mapping Techniques
Christoph Garth
Scientific Visualization Lab
Motivation
Up until now, all objects we considered and modeled were more or less smooth and had
few surface details, in contrast to real-world surfaces.
In the beginning, only texture mapping was used to display surface details. With
shader-based rendering, nearly arbitrary techniques can be employed.
In general, texels are sampled over the interval [0, 1] in (u, v) texture coordinates.
Thus, the query is independent of the texture resolution. Multiplication with the
texture width/height (e.g. Tv = 64) yields the coordinates of the texel in the
texture grid.
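As a minimal sketch (function and parameter names are illustrative, not from any API), the conversion from resolution-independent (u, v) coordinates to integer texel indices could look like this, assuming nearest-texel addressing with clamping:

```python
# Illustrative sketch: map normalized (u, v) in [0, 1] to texel indices
# for a texture of size width x height. Names are hypothetical.
def uv_to_texel(u, v, width, height):
    # Scale to the texture grid and truncate; clamp so u = 1.0 or v = 1.0
    # still yields a valid index.
    i = min(int(u * width), width - 1)
    j = min(int(v * height), height - 1)
    return i, j
```

Because (u, v) is normalized, the same coordinates work unchanged for a 64 × 64 or a 4096 × 4096 texture.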
General Idea: During shading of a pixel, surface properties (color, but also other
properties) are read from the texture and used in calculating fragment color
The texture itself is a map from the unit square into colors, T : [0, 1]² → colors.
The mapping of surface points into the texture domain (texture space) is specified by
assigning each point on the surface (u, v) coordinates in [0, 1]².
A (u, v)-tuple is assigned to each surface point of the table, and from this, the color can
be read from the texture at the (u, v) position.
Example: spherical texture coordinates for a point (x, y, z) at radius r:

θ = cos⁻¹(z/r)

φ = atan2(y, x) =  cos⁻¹(x/√(x² + y²))    if y ≥ 0
                  −cos⁻¹(x/√(x² + y²))    if y < 0

⇒ u = θ/π, v = φ/(2π)
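The spherical parameterization above can be sketched directly in code (an illustrative example, not part of any graphics API):

```python
import math

# Sketch of the spherical (u, v) parameterization: theta is the polar
# angle measured from the z-axis, phi the azimuth in the x-y plane.
def sphere_uv(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)      # in [0, pi]
    phi = math.atan2(y, x)        # in (-pi, pi], equals the case split above
    return theta / math.pi, phi / (2.0 * math.pi)
```

For example, the north pole (0, 0, 1) maps to u = 0, and a point on the equator at (1, 0, 0) maps to u = 0.5, v = 0. Note that for y < 0 the resulting v is negative and would be wrapped into [0, 1] in practice.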
attribute position:  V0,x V0,y V0,z   V1,x V1,y V1,z   V2,x V2,y V2,z
attribute normal:    N0,x N0,y N0,z   N1,x N1,y N1,z   N2,x N2,y N2,z
index buffer:        0 1 2
Texture coordinates (u0, v0), (u1, v1), (u2, v2) are assigned to the triangle
vertices p0, p1, p2 and interpolated across the triangle.
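A sketch of this per-vertex interpolation, using barycentric weights (the function name is illustrative; real rasterizers additionally perform perspective-correct interpolation):

```python
# Sketch: per-vertex texture coordinates (u, v) are blended across a
# triangle with barycentric weights w0 + w1 + w2 = 1.
def interpolate_uv(uv0, uv1, uv2, w0, w1, w2):
    u = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0]
    v = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
    return u, v
```

At a vertex, one weight is 1 and the others are 0, so the interpolated value reproduces that vertex's assigned (u, v) exactly.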
Texture filtering: the sample point (u, v) does not in general coincide with a texel
center. Common reconstruction filters:
• nearest neighbor
• bilinear
f(x, y) = (1 − β) fj + β fj+1
        = (1 − β) ((1 − α) fi,j + α fi+1,j) + β ((1 − α) fi,j+1 + α fi+1,j+1)

with

α = (x − xi) / (xi+1 − xi) ∈ [0, 1]
β = (y − yj) / (yj+1 − yj) ∈ [0, 1]

where fj and fj+1 are the values interpolated along x in texel rows j and j+1.
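The bilinear formula above can be written as a short sketch (illustrative names; the four corner values are the texels surrounding the sample point):

```python
# Sketch of bilinear interpolation between four neighboring texels:
# f00 = f(i,j), f10 = f(i+1,j), f01 = f(i,j+1), f11 = f(i+1,j+1).
def bilinear(f00, f10, f01, f11, alpha, beta):
    fj  = (1 - alpha) * f00 + alpha * f10   # interpolate along x in row j
    fj1 = (1 - alpha) * f01 + alpha * f11   # interpolate along x in row j+1
    return (1 - beta) * fj + beta * fj1     # interpolate along y
```

For alpha = beta = 0 the result is exactly f00, and the result varies continuously as the sample point moves between texels, unlike nearest-neighbor filtering.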
// texture "variable"
uniform sampler2D imageTex;
// texture coordinate varying, interpolated automatically
in vec2 uv;
// output color
out vec4 fragColor;
void main()
{
fragColor = texture(imageTex, uv);
}
The per-vertex texture coordinate uv is emitted by the vertex shader and automatically
interpolated across the primitive. The texture function samples the texture using a
filtering mode set by the application.
Bump mapping:
• Goal: represent small surface height variations, make the surface look “more” 3D.
• Idea: do not change the geometry of the surface, but manipulate the normal vectors
during evaluation of the illumination model.
• Surface irregularities are simulated on an actually even/“simple” surface solely by
changing the normal vectors used for shading.
• Reminder: the Phong lighting model does not use any information about the
geometry aside from the normals.
The surface deformation is generally assumed to be very small relative to the object size.
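As a minimal sketch of the normal-perturbation idea (assuming, for simplicity, a flat surface with geometric normal (0, 0, 1) and a height field h(u, v); names are illustrative):

```python
import math

# Bump-mapping sketch: the geometry stays flat, but the shading normal is
# tilted by the height-field gradient (dh/du, dh/dv) at the sample point.
def perturbed_normal(dh_du, dh_dv):
    n = (-dh_du, -dh_dv, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)
```

Where the height field is constant (zero gradient) the normal stays (0, 0, 1); where it slopes, the shading normal tilts against the slope, so the lighting suggests bumps that are not present in the geometry.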
Displacement mapping:
• A height field is mapped onto a surface which displaces points into the direction of
the surface normal at that point (e. g. in the vertex shader)
• The silhouette looks correct, but the number of polygons has to be high in order to
appropriately approximate the details.
Only the base mesh and displacement texture are stored and sent to the GPU; mesh
subdivision (tessellation) and displacement are computed on the fly, during rendering.
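The displacement step itself is simple; a sketch (illustrative names, with h the height read from the displacement texture at the vertex's (u, v)):

```python
# Displacement-mapping sketch: move a vertex p along its unit normal n
# by the height h sampled from the displacement texture.
def displace(p, n, h, scale=1.0):
    return tuple(p[i] + scale * h * n[i] for i in range(3))
```

Unlike bump mapping, this changes the actual geometry, which is why the silhouette comes out correctly.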
Procedural textures:
Texture content is computed by a function at run time instead of being read from a
stored image.
Environment mapping:
Rendering of reflections with the help of textures. With ray tracing, reflections are
easy to achieve; how can they be realized in rasterization pipelines?
Idea: render the environment of a reflecting object onto one or more textures, from
which the reflecting surface appearance can be rendered.
Improvements:
• Cube mapping
• Dual-paraboloid mapping (complete environment stored in two textures)
The reflection directions are those of two paraboloids, which induces less distortion
than (hemi-)spheres.
Computer Graphics – Mapping Techniques – Environment Mapping 1–36
Cube maps
• From the center of a (virtual) cube, six images are computed, one for the view
through each face
• Cube map textures are directly supported by OpenGL
• Higher cost: six textures are needed
Note: environment mapping can only reproduce light paths of the form LDSE
(light – diffuse – specular – eye); multiple reflections are not possible.
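The lookup can be sketched in two steps (illustrative names; real implementations use the GLSL samplerCube lookup): reflect the view direction about the surface normal, then select the cube-map face from the dominant axis of the reflection direction.

```python
# Environment-mapping sketch: compute the reflection direction and pick
# the cube-map face it points into.
def reflect(v, n):
    # r = v - 2 (v . n) n, for a unit normal n
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * d * ni for vi, ni in zip(v, n))

def cube_face(r):
    # The component with the largest magnitude selects the face pair,
    # its sign the face within the pair.
    ax = [abs(c) for c in r]
    axis = ax.index(max(ax))
    sign = '+' if r[axis] >= 0 else '-'
    return sign + 'XYZ'[axis]
```

For example, a view direction looking straight at a surface with normal (0, 0, 1) reflects back along +z and would be looked up in the +Z face.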
Recap
Mapping techniques provide flexibility in modeling detailed surface appearance and form
the basis for practically all commercially used computer graphics techniques (games, FX).
Different types of mapping techniques can be combined and used on the same object:
texture mapping, bump mapping, displacement mapping, environment mapping, . . .
Care must be taken when determining the mapping from object coordinates to texture
space (assigning texture coordinates).