
CPSC 3710 Computer Graphics University of Lethbridge

Texture

• Instead of having each surface as a solid colour, we can attach “texture” to the
surface.

• Texture mapping refers to putting an image onto a surface.


• e.g. putting brick or wood grain patterns onto a surface
• There are other applications as well.

Texture 1 – 21 Howard Cheng

Digital Images

• Textures are often specified by digital images.


• A digital image is a rectangular array (N × M ) of pixel values.
• For grayscale (also called monochromatic or luminance) images, each pixel is a
value in [0, 255] (8-bit images). Black is 0, white is 255.

• For colour images, each pixel is typically specified by a vector of (R, G, B) values
in [0, 255] (24-bit images).

• Images may be stored in different formats (e.g. GIF, TIFF, PNG, PDF, JPEG).
• Most formats perform some data compression, some are lossy (e.g. JPEG).
• OpenGL: need external libraries to load image data.
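The layout described above is easy to sketch in pure Python (a hypothetical 4 × 4 grayscale example; real code would load pixel data with an external library):

```python
# A 4x4 grayscale checkerboard: 0 = black, 255 = white (8-bit values).
N, M = 4, 4
image = [[255 if (row + col) % 2 == 0 else 0 for col in range(M)]
         for row in range(N)]

# A colour image stores an (R, G, B) triple per pixel instead,
# each component also in [0, 255] (24 bits per pixel).
rgb_pixel = (255, 0, 0)  # pure red
```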


Texture Mapping

• For realistic rendering of objects, texture needs to be applied to surfaces.


• Texture mapping: uses an image to influence the colour of a fragment.
• Done inside the fragment shader.


Texture Mapping

• Textures are patterns that may repeat periodically.


• Textures can be specified in one, two, three, or four dimensions.
• We will only look at 2D texture mapping.


2D Texture Mapping

• Texture is a 2D image (can be loaded or generated by code).


• It is an array of texture elements (texels).
• The texture can be thought of as an array T (s, t) where s and t are texture
coordinates. We assume all texture coordinates are real numbers in [0, 1].

• A texture map is a set of functions that map coordinates on the surface to texture
coordinates:
(s, t) = (s(x, y, z, w), t(x, y, z, w))

• Conceptually: for each fragment on the object, the texture map tells us which texel
should be used.


2D Texture Mapping

• One particular issue: it may be that (s, t) is “in between” texels. Which texel should
we use?

• Simplest: point sampling (GL_NEAREST): use the nearest texel. Can lead to
visible aliasing effects.

• More complicated: linear filtering (GL_LINEAR): interpolates between close
texels.
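The two sampling strategies can be sketched in pure Python, with the texture as a nested list of grayscale values (a sketch that ignores some of OpenGL's exact edge conventions):

```python
def sample_nearest(T, s, t):
    """Point sampling (GL_NEAREST): pick the closest texel."""
    h, w = len(T), len(T[0])
    i = min(int(t * h), h - 1)   # row index from t
    j = min(int(s * w), w - 1)   # column index from s
    return T[i][j]

def sample_bilinear(T, s, t):
    """Linear filtering (GL_LINEAR): weighted average of the
    four surrounding texels."""
    h, w = len(T), len(T[0])
    # Map to continuous texel space, centring samples on texels.
    x, y = s * w - 0.5, t * h - 0.5
    j0, i0 = max(int(x), 0), max(int(y), 0)
    j1, i1 = min(j0 + 1, w - 1), min(i0 + 1, h - 1)
    fx = min(max(x - j0, 0.0), 1.0)
    fy = min(max(y - i0, 0.0), 1.0)
    top = (1 - fx) * T[i0][j0] + fx * T[i0][j1]
    bot = (1 - fx) * T[i1][j0] + fx * T[i1][j1]
    return (1 - fy) * top + fy * bot
```

At the exact centre of a texture, bilinear filtering averages all four texels, whereas point sampling jumps to whichever one is nearest.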


2D Texture Mapping

• If the texture coordinates are outside of [0, 1], we can


– GL_REPEAT: repeat the pattern periodically
– GL_MIRRORED_REPEAT: repeat the pattern periodically but reflect each time
– GL_CLAMP_TO_EDGE: coordinates clamped to 0 and 1.
– GL_CLAMP_TO_BORDER: coordinates outside are given a “border” colour.

• These can be specified independently for s and t.


• Specified with glTexParameteri.
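How these wrap modes remap an out-of-range coordinate can be sketched in pure Python (GL_CLAMP_TO_BORDER is omitted, since it substitutes a border colour rather than remapping the coordinate):

```python
def wrap(coord, mode):
    """Map a texture coordinate outside [0, 1] back into range,
    mimicking the OpenGL wrap modes (conceptual sketch)."""
    if mode == "GL_REPEAT":
        # Keep only the fractional part (Python % is always >= 0).
        return coord % 1.0
    if mode == "GL_MIRRORED_REPEAT":
        # Period 2: second half of each period runs backwards.
        c = coord % 2.0
        return 2.0 - c if c > 1.0 else c
    if mode == "GL_CLAMP_TO_EDGE":
        return min(max(coord, 0.0), 1.0)
    raise ValueError(mode)
```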


Mipmapping

• Depending on the distance of the object to the viewer, the size of a fragment may be
much larger or smaller than a texel.

• If a texel is smaller than one pixel, minification is needed (many texels map to one pixel).

• If a texel is larger than one pixel, magnification is needed (one texel covers many pixels).

• This can be controlled using GL_TEXTURE_MIN_FILTER and
GL_TEXTURE_MAG_FILTER with point sampling or linear filtering, but there
is a different way.

• Mipmapping: generates a set of texture arrays from the original, at different
resolutions (glGenerateMipmap).

• Use GL_LINEAR_MIPMAP_LINEAR for minification to automatically use the
“right size”. The second LINEAR means the result is also interpolated amongst the
different resolutions.
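What mipmap generation does can be approximated in pure Python by repeatedly averaging 2 × 2 blocks of texels (a sketch for a square, power-of-two grayscale texture; real drivers may use better filters):

```python
def next_mip_level(T):
    """Halve a power-of-two grayscale texture by averaging
    each 2x2 block of texels."""
    h, w = len(T), len(T[0])
    return [[(T[2*i][2*j] + T[2*i][2*j+1] +
              T[2*i+1][2*j] + T[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)]
            for i in range(h // 2)]

def build_mipmaps(T):
    """Full mipmap pyramid, from the original down to one texel."""
    levels = [T]
    while len(levels[-1]) > 1:
        levels.append(next_mip_level(levels[-1]))
    return levels
```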

OpenGL Texture Mapping Setup

• Call glGenTextures and glBindTexture.


• Set up wrapping, minification and magnification parameters.
• Load image and set texture with glTexImage2D.
• Call glGenerateMipmap to generate mipmaps.
• Provide a 2-dimensional input attribute in the vertex shader for the texture
coordinates.

• Load the texture coordinates of each vertex using a vertex attribute array.


OpenGL Texture Mapping Setup

• The vertex shader accepts the texture coordinates, performs any needed
calculations (usually none), and passes them to the fragment shader (interpolated for
each fragment).

• In the fragment shader, there is automatically a uniform sampler2D parameter.

• In the fragment shader, the function texture(uTextureMap, vTexCoord)
will return the texel value using the appropriate sampling selected.

• The colour returned can be used to determine the colour of the fragment (possibly
together with lighting information).


3D Texture Mapping

• Sometimes attaching 2D textures to each surface can look unrealistic (e.g. at the
edges between surfaces).

• Instead, we can define a 3D texture for the entire object (e.g. wood grain, stone, etc.).
• There will be three texture coordinates (s, t, r).
• The effect will be similar to “carving” an object out of a textured material.


Environment/Reflection Map

• How do we render a scene when there is a highly reflective surface (e.g. mirror)?
• We cannot render the mirror without knowing the rest of the scene.
• Texture mapping can be used to make a good approximation.


Environment/Reflection Map

• Looking at a mirror: the angle of reflection is known given the viewer position.
• We can first pretend to render the scene without the mirror.
• Place the camera at the centre of the mirror, looking along the normal vector of
the mirror.

• Render the scene. This is what the mirror “sees”.

• Use the rendered scene as a texture to map onto the surface of the mirror.
• Problems:
– rendering first pass without the mirror may be unrealistic

– what should be the projection plane in the first rendering?

• More advanced techniques are needed to solve these problems.



Cube Mapping

• A cube map contains six 2D textures arranged as the faces of a cube.

• Imagine a unit cube with its centre at the origin.

• Instead of sampling a texel with a face ID and texture coordinates, we can sample with
a direction vector from the origin (to see where the vector intersects the cube).

• Create by using glBindTexture with GL_TEXTURE_CUBE_MAP.

• Load the faces with glTexImage2D with parameters such as
GL_TEXTURE_CUBE_MAP_POSITIVE_X and
GL_TEXTURE_CUBE_MAP_NEGATIVE_X.

• Wrapping, minification, and magnification parameters can be set up as before. An R
wrapping parameter (for the third coordinate) is also needed.

• Should clamp to edge for wrapping.
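The major-axis face selection can be sketched in pure Python (the per-face (s, t) sign conventions below are illustrative and differ from OpenGL's exact table):

```python
def cube_map_face(d):
    """Given a direction vector d = (x, y, z), return the cube face
    it hits and illustrative (s, t) coordinates on that face in [0, 1].
    The face is chosen by the component of largest magnitude."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # hits the +X or -X face
        face = "+X" if x > 0 else "-X"
        u, v, m = (-z if x > 0 else z), -y, ax
    elif ay >= az:                     # hits the +Y or -Y face
        face = "+Y" if y > 0 else "-Y"
        u, v, m = x, (z if y > 0 else -z), ay
    else:                              # hits the +Z or -Z face
        face = "+Z" if z > 0 else "-Z"
        u, v, m = (x if z > 0 else -x), -y, az
    # Divide by the major axis, then remap from [-1, 1] to [0, 1].
    return face, (u / m + 1.0) / 2.0, (v / m + 1.0) / 2.0
```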



Cube Mapping

• The sampler in the fragment shader is samplerCube, which allows the texture to be
sampled with a direction vector.


Cube Mapping: Applications

• Skybox: put the texture of the background environment on the faces.

• Make an artificial box object VAO and use the texture to display the background.

• Care needs to be taken so that depth testing allows foreground objects to be drawn on
top of the environment.

• Care needs to be taken with the size of the box vs. the size of objects, and with how the
environment changes when the camera moves.


Cube Mapping: Applications

• Reflection: if a surface is reflective, we can compute the eye-to-fragment vector and
reflect it about the normal of the surface.

• The reflected vector can then be used to sample a cube map to see the reflection.
• Refraction: light bends (e.g. looking into a pool of water).
• We can apply the same principle once we have computed the refracted direction (using
physics).

• Reflection of other objects: pretend the surface is a camera and render the scene
into a texture buffer, then use texture mapping.
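The reflected and refracted directions follow the standard formulas (the same ones GLSL's reflect and refract implement); a pure-Python sketch:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(v, n):
    """Reflect incident direction v about unit normal n:
    r = v - 2 (v . n) n."""
    d = dot(v, n)
    return tuple(vi - 2.0 * d * ni for vi, ni in zip(v, n))

def refract(v, n, eta):
    """Refract unit incident direction v through unit normal n,
    with eta = n1/n2 the ratio of refractive indices (Snell's law).
    Returns None on total internal reflection."""
    c = -dot(v, n)
    k = 1.0 - eta * eta * (1.0 - c * c)
    if k < 0.0:
        return None          # total internal reflection
    return tuple(eta * vi + (eta * c - math.sqrt(k)) * ni
                 for vi, ni in zip(v, n))
```

With eta = 1 (same medium on both sides), refraction leaves the direction unchanged, which is a quick sanity check.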


Bump Mapping

• Sometimes we want a surface to appear non-smooth with little “bumps”. e.g. the
peel of an orange is not flat.

• The small variations are hard to model geometrically.


• However, lighting variation can be used to “fool” the viewer: if the normal vectors are
perturbed, the surface will appear “bumpy”.

• This is applied in the fragment shader.


Bump Mapping

• The perturbations to the normal vectors can be stored as a texture called a normal
map.

• The texture normally stores RGB values in [0, 1]. These need to be rescaled to
normal vector components (x, y, z) in [−1, 1].
• The normal map is used as the normal vectors before lighting calculations in
fragment shader.
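The rescaling from stored [0, 1] RGB values to a [−1, 1] normal vector is one affine step per component, followed by renormalization:

```python
def decode_normal(rgb):
    """Rescale a normal-map texel from [0, 1] RGB to a unit
    normal vector with components in [-1, 1]."""
    x, y, z = (2.0 * c - 1.0 for c in rgb)
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)
```

The common "flat" texel (0.5, 0.5, 1.0) decodes to the unperturbed normal (0, 0, 1), which is why normal maps look mostly light blue.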


Bump Mapping

• Normal maps assume the surface is in the xy-plane with the normal vector pointing in
the positive z direction.

• Define a new coordinate system called tangent space (a local coordinate system in
which the surface is in the xy-plane and the normal vector points in the positive z
direction).

• Compute the transformation matrix from tangent space to world space, and transform
the normal vector to world space (fragment shader), or

• Apply the inverse transformation to the lighting parameters, from world to tangent
space (vertex shader).
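The tangent-to-world transformation can be sketched in pure Python: the TBN matrix has the world-space tangent T, bitangent B, and normal N as its columns (assumed orthonormal here):

```python
def tangent_to_world(n_tangent, T, B, N):
    """Transform a tangent-space normal into world space by
    multiplying with the TBN matrix (columns T, B, N)."""
    return tuple(T[i] * n_tangent[0] +
                 B[i] * n_tangent[1] +
                 N[i] * n_tangent[2]
                 for i in range(3))
```

An unperturbed tangent-space normal (0, 0, 1) maps exactly onto the surface's world-space normal N, as expected.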


Parallax Occlusion Mapping

• For more realistic rendering of depth on a flat surface based on a texture, we have to
adjust the sampled texture coordinates as well.

• This is because the texture coordinate we see may be slightly shifted based on the
height of the object.

• A depth map is passed in as a texture.

• If the depth is 0, then we sample the texture as before.
• But if the depth is positive, we have to move further along the view ray (based on the
depth) and shift the texture coordinates.

• This can be computed using trigonometry.

• This provides a more realistic rendering.
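A minimal sketch of the texture-coordinate shift, using simple parallax mapping (a simplification of parallax occlusion mapping; the scale factor is an assumed tuning parameter):

```python
def parallax_offset(tex_coord, view_dir_tangent, depth, scale=0.1):
    """Shift texture coordinates based on the sampled depth and the
    tangent-space view direction (pointing from fragment to eye).
    The deeper the point and the more grazing the view, the larger
    the shift -- this is the trigonometry mentioned above."""
    vx, vy, vz = view_dir_tangent
    s, t = tex_coord
    return (s - vx / vz * depth * scale,
            t - vy / vz * depth * scale)
```

At depth 0 the coordinates are unchanged, matching the slide: only raised or recessed parts of the surface shift what we sample.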
