Lecture 1: Introduction to Computer Graphics

The document provides an introduction to computer graphics, defining key concepts such as rendering, modeling, and the components of graphics systems. It discusses the historical milestones in the field, various applications driving computer graphics, and the traditional graphics pipeline process. Additionally, it covers topics like image processing, animation, and display technology, highlighting the evolution and significance of computer graphics in various industries.


LESSON 1:
Introduction to Computer Graphics
What is Computer Graphics?
• A graphic is an image or visual representation of an object.
• Computer graphics is the art of drawing pictures on computer screens.
• It covers the creation, manipulation, and storage of geometric objects (modeling) and of their images (rendering).
• A typical graphics system comprises a host computer with a fast processor, large memory, and a frame buffer, together with:
• Display devices (colour monitors)
• Input devices (mouse, keyboard, joystick, touch screen, trackball)
• Output devices (LCD panels, laser printers, colour printers, plotters, etc.)
• Interfacing devices such as video I/O, TV interfaces, etc.
Graphics Definitions
• Point: a location in 2D or 3D space; sometimes denotes one pixel
• Line: a straight path connecting two points
• Vertex: a point in 3D
• Edge: a line in 3D connecting two vertices
• Polygon/Face/Facet: an arbitrary shape formed by connected vertices; the fundamental unit of 3D computer graphics
• Mesh: a set of connected polygons forming a surface (or object)
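The definitions above can be sketched as a tiny indexed-mesh data structure. This is an illustrative example only (not from the slides); the names `vertices`, `faces`, and `edges_of` are invented for the sketch:

```python
# A minimal indexed mesh: a shared vertex list plus faces that index into it.
# Real systems also store normals, texture coordinates, etc.

# Four vertices of a unit square in the z = 0 plane
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# Two triangles (faces) forming the square; each entry is a vertex index
faces = [
    (0, 1, 2),
    (0, 2, 3),
]

def edges_of(face):
    """Return the three edges of a triangular face as vertex-index pairs."""
    a, b, c = face
    return [(a, b), (b, c), (c, a)]

print(edges_of(faces[0]))  # [(0, 1), (1, 2), (2, 0)]
```

Sharing vertices between faces is what makes this a mesh rather than a triangle soup: moving one entry in `vertices` moves every face that references it.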

19/05/2025 Lecture 1 3
Graphics Definitions
• Rendering: the process of generating an image from a 2D or 3D model (or from several models that collectively form a scene file) by means of computer programs. The result of such a process can also be called a rendering.

• Framebuffer: a large block of memory that stores the color values for all the pixels on the screen.

What drives computer graphics?
• Movie Industry
• Leaders in quality and artistry
• Big budgets and tight schedules
• Defines our expectations

• Game Industry
• The newest driving force in CG, due to volume and profit
• Focus on interactivity
• Cost effective solutions
What drives computer graphics?
• Medical Imaging and Scientific Visualization
• Tools for teaching and diagnosis
• No cheating or tricks allowed
• New data representations and modalities
• Drive issues of precision and correctness
• Focus on presentation and interpretation of data
• Construction of models from acquired data

[Image credits: Nanomanipulator, UNC; Joe Kniss, Utah; Gordon Kindelman, Utah]


What drives computer graphics?
• Computer Aided Design
• Drives the high end of the hardware market
• Integration of computing and display resources
• Reduced design cycles result in faster systems in a shorter timeframe

• Education and Training
• Models of physical, financial, and social systems
• Comprehension of complex systems

• Computer Art
• Fine and commercial art
• Performance Art
• Aesthetic Computing
Historical Milestones
• 1960’s:
• Early theoretical development, mainly limited to research and military
• 1962: Sketchpad (Ivan Sutherland)
• 1970’s:
• ‘Traditional’ graphics pipeline developed
• Driven by money from military simulation and automotive design industries
• 1980’s:
• Many important core algorithms developed
• Visual quality improved driven by demands from entertainment (movie) industry
• 1985: Rendering Equation (James Kajiya)
• 1990’s:
• Advanced algorithms developed as graphics theory matured
• Broader focus on animation, data acquisition, NPR, physics…
• 1995: Photon Mapping (Henrik Wann Jensen)
• 2000’s:
• Photoreal rendering evolves to the point of being able to render convincing images of
arbitrarily complex scenes on consumer hardware
• Merging of computer graphics and computer vision
• Cheap graphics hardware with vast capabilities, driven largely by video game industry
Image Processing
• Some computer graphics operations involve
manipulating 2D images (bitmaps)
• Image processing applies directly to the pixel grid and
includes operations such as color correction, scaling,
blurring, sharpening, etc.
• Common examples include digital photo processing and digital ‘painting’ programs (Adobe Photoshop…)
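As a small illustration of a pixel-grid operation, here is a minimal 3×3 box blur (a simple form of the blurring mentioned above) over a grayscale image stored as a list of lists. The function name and the edge-clamping behaviour are choices of this sketch, not part of the lecture:

```python
def box_blur(img):
    """3x3 box blur on a 2D grayscale image (list of lists).

    Each output pixel is the average of its in-bounds 3x3 neighbourhood,
    so edge pixels simply average over fewer neighbours.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(box_blur(img)[1][1])  # 1.0 (the 9 is spread over a 9-pixel neighbourhood)
```

Color correction, sharpening, and scaling follow the same pattern: a pass over the pixel grid computing each output pixel from one or more input pixels.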
Image Synthesis

• Image synthesis (or image generation) refers to the construction of images from scratch, rather than the processing of existing images

• Synthesis of a 2D image from a 3D scene description is more commonly called rendering
Photoreal Rendering
• Photoreal rendering refers to rendering a 3D scene in a realistic
way

• Modern photoreal rendering algorithms are essentially a physically based simulation of light propagation and scattering throughout a 3D environment

• This means there is a ‘correct’ image that should be generated for a given input data set. This gives photoreal rendering a strong theoretical basis (namely, the science of optics)

• Most modern photoreal rendering algorithms are based on the classic ray tracing algorithm, which traces the paths of individual light rays starting from the eye and working backwards to the light sources
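The core geometric step of classic ray tracing, finding where an eye ray first hits an object, can be sketched for a sphere. This is the standard ray–sphere intersection, shown as an assumed minimal example (the lecture does not specify an implementation):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a ray to the nearest sphere hit, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2 for t.
    Assumes 'direction' is a unit vector.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # a = 1 because direction is normalized
    if disc < 0:
        return None               # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# Eye ray looking down -z toward a unit sphere centered at z = -5
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

A full ray tracer repeats this test against every object in the scene, keeps the closest hit, and then shades that point, possibly spawning further rays toward the lights.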
Non-Photoreal Rendering

• Non-photoreal rendering (NPR) refers to rendering images in other ways

• Sometimes this is done to achieve aesthetic goals such as artificial watercolors, pencil sketches, or paint brushstrokes

• Other times the goal is to maximize the communication of visual information, as in scientific and medical visualization
Computer Vision
• Computer vision is sometimes considered a separate discipline from computer graphics, although the two have much in common

• A central goal in computer vision is to take a set of 2D images (usually from a video or a set of photos) and infer from them a 3D description of what is being viewed

• This is a very different process from rendering, and is more a form of artificial intelligence
Animation
• An animation is just a sequence of individual images

• Computer animation focuses on how things change over time. Usually this refers to motion, but it can also refer to other properties changing over time

• Physical simulation is a very powerful tool in computer animation and can be used to generate believable animations of rigid objects, deformable objects, gases, liquids, fracture, particle effects, and even explosions and fire

• Computer animation also includes a large number of techniques developed specifically to manipulate virtual characters
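A minimal sketch of physically based animation, assuming a single point particle and semi-implicit Euler integration (one common choice of integrator; the slides do not prescribe one):

```python
def euler_step(pos, vel, accel, dt):
    """One semi-implicit (symplectic) Euler step for a point particle.

    The velocity is updated first, then the position uses the new velocity.
    """
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel

# A particle falling under gravity, stepped at 60 frames per second
pos, vel = (0.0, 10.0), (2.0, 0.0)
gravity = (0.0, -9.8)
for _ in range(60):               # one second of simulated time
    pos, vel = euler_step(pos, vel, gravity, 1.0 / 60.0)
print(round(vel[1], 2))  # -9.8
```

Rendering `pos` once per step yields the frames of the animation; more elaborate simulations (cloth, fluids, rigid bodies) follow the same advance-state-then-draw loop.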
Modeling
• Modeling refers to the techniques involved with creating, scanning,
editing, and manipulating 3D geometric data
• Much of it is done by a human user with an interactive editing program
• More complex objects, such as trees, can be constructed with automatic
procedural modeling algorithms
• 3D models are also often acquired from real world objects using laser
scanning or computer vision techniques
• Modeling also includes the use of curved surfaces and other higher order
primitives, which are often converted into triangles using various
tessellation algorithms
• It also includes mesh reconstruction for surface simplification
• Modeling makes heavy use of computational geometry
Display Technology
• Vector display (cathode ray tube)

• Raster displays
• CRT (cathode ray tube)
• LCD (liquid crystal display)
• TFT (thin film transistor)
• OLED (organic light emitting diode)
• Light valve
• Plasma
• HDR (high dynamic range: TFT / white LED hybrid)

• Film
• Print
Raster Graphics
• Modern graphics displays are raster based
• They display a grid of pixels, where each pixel color can be set
independently
• Individual pixels are usually formed from smaller red, green,
and blue subpixels. If you look very closely at a TV screen or
computer monitor, you will notice the pattern of subpixels
• Older style vector displays didn’t display a grid of pixels, but
instead drew lines directly with an electron beam
• Raster graphics are also sometimes called bitmapped graphics
Interlacing
• Older video formats (NTSC, PAL) and some HD formats (1080i)
use a technique called interlacing
• With this technique, the image is actually displayed twice, once
showing the odd scanlines, and once showing the even
scanlines (slightly offset)
• This is a trick for achieving higher vertical resolution at the
expense of frame rate (cuts effective frame rate in half)
• The two different displayed images are called fields
• NTSC video, for example, is 720 x 480 at 30 frames per second,
but is really 720 x 240 at 60 fields per second
• Interlacing is an important issue to consider when working with
video, especially in animation as in TV effects and video games
• Computer monitors are generally not interlaced
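The split of a frame into its two fields can be illustrated directly. This sketch assumes scanlines stored as a Python list, with the function name invented for the example:

```python
def split_fields(frame):
    """Split a full frame (a list of scanlines) into its two interlaced fields."""
    odd_field = frame[0::2]    # scanlines 0, 2, 4, ... (the 'odd' field, 1-based)
    even_field = frame[1::2]   # scanlines 1, 3, 5, ...
    return odd_field, even_field

# A 480-scanline NTSC-style frame becomes two 240-scanline fields
frame = [f"scanline {i}" for i in range(480)]
f1, f2 = split_fields(frame)
print(len(f1), len(f2))  # 240 240
```

This is why 720 x 480 at 30 frames per second is really 720 x 240 at 60 fields per second: each displayed field carries only every other scanline.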
Framebuffer
• The framebuffer refers to the memory dedicated to storing the
image
• It would generally be a 2D array of pixels, where each pixel
stores a color (Note: pixel = picture element)
• Color is typically stored as a 24 bit RGB value. This offers 8 bits
(256 levels) for red, green, and blue, for a total of 16,777,216
different colors
• Very often, additional data is stored per pixel such as depth (z),
or other info
• A framebuffer can just be a block of main memory, but many
graphics systems have dedicated framebuffer memory with a
direct connection to video scan-out hardware and other special
features
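A software framebuffer with 24-bit RGB pixels might be sketched as follows; the `pack_rgb`/`unpack_rgb` helpers and the 640 x 480 size are assumptions of this example:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit RGB value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(color):
    """Recover the (r, g, b) channels from a packed 24-bit value."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

WIDTH, HEIGHT = 640, 480
framebuffer = [[pack_rgb(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

framebuffer[100][200] = pack_rgb(255, 128, 0)   # set one pixel to orange
print(unpack_rgb(framebuffer[100][200]))  # (255, 128, 0)
print(2 ** 24)  # 16777216 distinct 24-bit colors
```

Real framebuffers are contiguous memory scanned out by video hardware, but the addressing idea is the same: one color value per (x, y) pixel.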
Primitives
• Complex scenes are usually built up from simpler objects
• Objects are built from individual primitives
• The most common and general purpose 3D primitive is the
triangle
• Points and lines are also useful primitives

• The basic output primitives (or elements) for drawing pictures:
• Polyline
• Filled polygon (region)
• Ellipse (arc)
• Text
• Raster image
Traditional Graphics Pipeline

• In the traditional graphics pipeline, each primitive is processed through the following steps:
1. Transformation
2. Lighting
3. Clipping
4. Scan conversion
5. Pixel processing
1. Transformation

• The transformation process maps geometry from 3D space to a 2D viewing space

• Each vertex position must be transformed from its defining object space to device coordinates (pixel space)

• This often involves a combination of rotations, translations, scales, and perspective transformations
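As an illustrative sketch of the object-space-to-pixel-space idea, here is a minimal perspective projection of a single camera-space vertex. The camera model (looking down -z, with a single focal-length scale) is an assumption of the example, not the lecture's definition:

```python
def project(vertex, focal, width, height):
    """Perspective-project a camera-space point onto pixel coordinates.

    Assumes the camera looks down -z and 'focal' acts as a
    focal-length / field-of-view scale. Illustrative only.
    """
    x, y, z = vertex
    sx = focal * x / -z           # perspective divide: farther points shrink
    sy = focal * y / -z
    # map from normalized [-1, 1] coordinates to device (pixel) coordinates
    px = (sx + 1) * 0.5 * width
    py = (1 - sy) * 0.5 * height  # pixel y grows downward
    return px, py

# A point straight ahead of the camera lands at the screen center
print(project((0.0, 0.0, -5.0), 1.0, 640, 480))  # (320.0, 240.0)
```

In a full pipeline, the object-to-camera step (rotations, translations, scales) would be applied as matrix multiplies before this projection.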
2. Lighting
• Lighting operations are applied to each vertex to compute its color

• In more advanced rendering, lighting operations are computed per pixel, rather than per vertex

• A variety of light types can be defined, such as point lights, directional lights, spot lights, etc.

• More advanced lighting operations can account for shadows, reflections, translucency, and a wide variety of optical effects
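Per-vertex diffuse (Lambertian) shading, the simplest of the operations described above, might look like this. The function and parameter names are invented for the sketch, and the vectors are assumed to be unit length:

```python
def lambert(normal, light_dir, light_color, albedo):
    """Lambertian (diffuse) shading: color = albedo * light * max(N.L, 0)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    n_dot_l = max(n_dot_l, 0.0)   # surfaces facing away receive no light
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# Surface facing straight up, lit from directly above by white light
normal = (0.0, 1.0, 0.0)
light_dir = (0.0, 1.0, 0.0)       # unit vector pointing toward the light
print(lambert(normal, light_dir, (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
# (0.8, 0.2, 0.2)
```

Running this once per vertex gives classic Gouraud-style shading; running it per pixel (with interpolated normals) is the more advanced per-pixel lighting mentioned above.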
3. Clipping
• Some triangles will be completely visible on the screen,
while others may be completely out of view
• Some may intersect the side of the screen and require
special handling
• The camera’s viewable space forms a volume called the
view volume. Triangles that intersect the boundary of the
view volume must be clipped.
• The related process of culling refers to the determination
of which primitives are completely invisible
• The output of the clipping/culling process is a set of visible
triangles that lie within the dimensions of the display
device
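The inside / outside / needs-clipping classification can be sketched against a canonical view volume. This example assumes vertices already transformed into a [-1, 1]^3 clip cube; the trivial-reject logic shown is a common convention, not necessarily the lecture's:

```python
def cull_test(triangle):
    """Classify a triangle against the canonical view volume [-1, 1]^3.

    Returns 'inside', 'outside' (safe to cull), or 'clip' (straddles a
    boundary and needs clipping). Illustrative sketch only.
    """
    def inside(v):
        return all(-1.0 <= c <= 1.0 for c in v)

    if all(inside(v) for v in triangle):
        return 'inside'
    # trivially outside if every vertex lies beyond the same plane
    for axis in range(3):
        if all(v[axis] < -1.0 for v in triangle):
            return 'outside'
        if all(v[axis] > 1.0 for v in triangle):
            return 'outside'
    return 'clip'

print(cull_test([(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0)]))  # inside
print(cull_test([(2, 0, 0), (3, 0, 0), (2, 1, 0)]))      # outside
print(cull_test([(0, 0, 0), (2, 0, 0), (0, 1, 0)]))      # clip
```

Triangles classified 'clip' would then be cut against the view-volume planes, producing new vertices on the boundary.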
4. Scan Conversion

• The scan conversion (or rasterization) process takes 2D triangles as input and outputs the exact pixels covered by each triangle

• Per-vertex data, such as color, is interpolated across the triangle, so each pixel may have a unique color
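A minimal rasterizer using barycentric coordinates shows both steps at once: deciding which pixels a triangle covers, and interpolating per-vertex color. Sampling at pixel centers and counterclockwise winding are assumptions of this sketch:

```python
def rasterize(tri, colors):
    """Rasterize a 2D triangle with per-vertex colors via barycentric coords.

    Returns {(x, y): interpolated_color} for every covered pixel center.
    Assumes counterclockwise winding and nonzero area.
    """
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    min_x, max_x = int(min(x0, x1, x2)), int(max(x0, x1, x2))
    min_y, max_y = int(min(y0, y1, y2)), int(max(y0, y1, y2))
    pixels = {}
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            px, py = x + 0.5, y + 0.5          # sample at the pixel center
            w0 = ((x1 - px) * (y2 - py) - (x2 - px) * (y1 - py)) / area
            w1 = ((x2 - px) * (y0 - py) - (x0 - px) * (y2 - py)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:    # inside the triangle
                pixels[(x, y)] = tuple(
                    w0 * c0 + w1 * c1 + w2 * c2
                    for c0, c1, c2 in zip(*colors))
    return pixels

tri = [(0, 0), (8, 0), (0, 8)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # red, green, blue corners
covered = rasterize(tri, colors)
print((0, 0) in covered)  # True
```

The same weights `w0, w1, w2` interpolate any per-vertex quantity, which is how depth and texture coordinates reach the pixel processing stage.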
5. Pixel Processing
• The output of the scan conversion process is a bunch of individual
xy pixels, plus additional data per pixel such as interpolated depth
(z), color, or other information
• The pixel processing stage includes the operations that take place
per pixel to compute the final color that gets rendered into the
framebuffer
• Usually, the z-buffer technique is used to make sure that a pixel is
rendered only if it is not blocked by an existing surface
• Other processing, such as texturing and transparency operations
happen per pixel
• In some systems, the entire lighting process is computed per pixel,
instead of per vertex
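The z-buffer test described above can be sketched in a few lines; the smaller-z-is-closer convention is an assumption of this example (real APIs let you choose the comparison):

```python
import math

def write_pixel(framebuffer, zbuffer, x, y, z, color):
    """Write a pixel only if it is closer than what is already stored."""
    if z < zbuffer[y][x]:        # smaller z = closer to the camera here
        zbuffer[y][x] = z
        framebuffer[y][x] = color
        return True
    return False                 # pixel was blocked by an existing surface

W = H = 4
framebuffer = [[(0, 0, 0)] * W for _ in range(H)]
zbuffer = [[math.inf] * W for _ in range(H)]   # start with 'infinitely far'

write_pixel(framebuffer, zbuffer, 1, 1, 5.0, (255, 0, 0))        # near red surface
hidden = write_pixel(framebuffer, zbuffer, 1, 1, 9.0, (0, 255, 0))  # farther green
print(framebuffer[1][1], hidden)  # (255, 0, 0) False
```

Because each pixel carries its own depth, this works regardless of the order in which triangles arrive, which is exactly what the next slide relies on.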
Scene Rendering
• With the traditional z-buffered graphics pipeline, triangles can be rendered in any order without affecting the final image
• However, complex effects such as transparency actually do depend on the rendering order, and so may require additional care
• Still, it makes a nice basic approach, and it is the approach taken by OpenGL and built into many modern hardware graphics boards
• There are more advanced rendering algorithms (scan line, ray tracing, etc.) that don't render triangles one at a time and require the entire scene to be processed
