
Developing Graphics
Frameworks with Python
and OpenGL

Lee Stemkoski
Michael Pascale
First edition published 2022
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

and by CRC Press


2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2022 Lee Stemkoski and Michael Pascale

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of
their use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write
and let us know so we may rectify in any future reprint.

“The Open Access version of this book, available at www.taylorfrancis.com, has been made
available under a Creative Commons Attribution-Non Commercial-No Derivatives 4.0 license”

Trademark notice: Product or corporate names may be trademarks or registered trademarks and
are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging‑in‑Publication Data


Names: Stemkoski, Lee, author. | Pascale, Michael, author.
Title: Developing graphics frameworks with Python and OpenGL /
Lee Stemkoski, Michael Pascale.
Description: First edition. | Boca Raton : CRC Press, 2021. |
Includes bibliographical references and index.
Identifiers: LCCN 2021002036 | ISBN 9780367721800 (hardback) |
ISBN 9781003181378 (ebook)
Subjects: LCSH: OpenGL. | Computer graphics—Computer programs. |
Python (Computer program language) | Computer graphics—Mathematics.
Classification: LCC T385 .S7549 2021 | DDC 006.6—dc23
LC record available at https://lccn.loc.gov/2021002036

ISBN: 978-0-367-72180-0 (hbk)
ISBN: 978-1-032-02146-1 (pbk)
ISBN: 978-1-003-18137-8 (ebk)

DOI: 10.1201/9781003181378

Typeset in Minion Pro by codeMantra
Contents

Authors, ix

CHAPTER 1 ◾ INTRODUCTION TO COMPUTER GRAPHICS 1


1.1 CORE CONCEPTS AND VOCABULARY 2
1.2 THE GRAPHICS PIPELINE 8
1.2.1 Application Stage 9
1.2.2 Geometry Processing 10
1.2.3 Rasterization 12
1.2.4 Pixel Processing 14
1.3 SETTING UP A DEVELOPMENT ENVIRONMENT 17
1.3.1 Installing Python 17
1.3.2 Python Packages 19
1.3.3 Sublime Text 21
1.4 SUMMARY AND NEXT STEPS 23

CHAPTER 2 ◾ INTRODUCTION TO PYGAME AND OPENGL 25


2.1 CREATING WINDOWS WITH PYGAME 25
2.2 DRAWING A POINT 32
2.2.1 OpenGL Shading Language 32
2.2.2 Compiling GPU Programs 36
2.2.3 Rendering in the Application 42
2.3 DRAWING SHAPES 46
2.3.1 Using Vertex Buffers 46
2.3.2 An Attribute Class 49


2.3.3 Hexagons, Triangles, and Squares 51


2.3.4 Passing Data between Shaders 59
2.4 WORKING WITH UNIFORM DATA 64
2.4.1 Introduction to Uniforms 64
2.4.2 A Uniform Class 65
2.4.3 Applications and Animations 67
2.5 ADDING INTERACTIVITY 77
2.5.1 Keyboard Input with Pygame 77
2.5.2 Incorporating with Graphics Programs 80
2.6 SUMMARY AND NEXT STEPS 81

CHAPTER 3 ◾ MATRIX ALGEBRA AND TRANSFORMATIONS 83


3.1 INTRODUCTION TO VECTORS AND MATRICES 83
3.1.1 Vector Definitions and Operations 84
3.1.2 Linear Transformations and Matrices 88
3.1.3 Vectors and Matrices in Higher Dimensions 98
3.2 GEOMETRIC TRANSFORMATIONS 102
3.2.1 Scaling 102
3.2.2 Rotation 103
3.2.3 Translation 109
3.2.4 Projections 112
3.2.5 Local Transformations 119
3.3 A MATRIX CLASS 123
3.4 INCORPORATING WITH GRAPHICS PROGRAMS 125
3.5 SUMMARY AND NEXT STEPS 132

CHAPTER 4 ◾ A SCENE GRAPH FRAMEWORK 133


4.1 OVERVIEW OF CLASS STRUCTURE 136
4.2 3D OBJECTS 138
4.2.1 Scene and Group 141
4.2.2 Camera 142
4.2.3 Mesh 143
4.3 GEOMETRY OBJECTS 144

4.3.1 Rectangles 145


4.3.2 Boxes 147
4.3.3 Polygons 150
4.3.4 Parametric Surfaces and Planes 153
4.3.5 Spheres and Related Surfaces 156
4.3.6 Cylinders and Related Surfaces 158
4.4 MATERIAL OBJECTS 164
4.4.1 Base Class 165
4.4.2 Basic Materials 166
4.5 RENDERING SCENES WITH THE FRAMEWORK 172
4.6 CUSTOM GEOMETRY AND MATERIAL OBJECTS 177
4.7 EXTRA COMPONENTS 184
4.7.1 Axes and Grids 185
4.7.2 Movement Rig 188
4.8 SUMMARY AND NEXT STEPS 192

CHAPTER 5 ◾ TEXTURES 193


5.1 A TEXTURE CLASS 194
5.2 TEXTURE COORDINATES 201
5.2.1 Rectangles 202
5.2.2 Boxes 202
5.2.3 Polygons 203
5.2.4 Parametric Surfaces 204
5.3 USING TEXTURES IN SHADERS 206
5.4 RENDERING SCENES WITH TEXTURES 212
5.5 ANIMATED EFFECTS WITH CUSTOM SHADERS 215
5.6 PROCEDURALLY GENERATED TEXTURES 221
5.7 USING TEXT IN SCENES 228
5.7.1 Rendering Text Images 228
5.7.2 Billboarding 232
5.7.2.1 Look-At Matrix 232
5.7.2.2 Sprite Material 236
5.7.3 Heads-Up Displays and Orthogonal Cameras 241

5.8 RENDERING SCENES TO TEXTURES 247


5.9 POSTPROCESSING 254
5.10 SUMMARY AND NEXT STEPS 265

CHAPTER 6 ◾ LIGHT AND SHADOW 267


6.1 INTRODUCTION TO LIGHTING 268
6.2 LIGHT CLASSES 271
6.3 NORMAL VECTORS 274
6.3.1 Rectangles 274
6.3.2 Boxes 275
6.3.3 Polygons 276
6.3.4 Parametric Surfaces 276
6.4 USING LIGHTS IN SHADERS 280
6.4.1 Structs and Uniforms 280
6.4.2 Light-Based Materials 282
6.5 RENDERING SCENES WITH LIGHTS 291
6.6 EXTRA COMPONENTS 295
6.7 BUMP MAPPING 298
6.8 BLOOM AND GLOW EFFECTS 302
6.9 SHADOWS 312
6.9.1 Theoretical Background 312
6.9.2 Adding Shadows to the Framework 317
6.10 SUMMARY AND NEXT STEPS 328

INDEX, 331
Authors

Lee Stemkoski is a professor of mathematics and computer science. He earned his Ph.D. in mathematics from Dartmouth College in 2006 and has been teaching at the college level since. His specialties are computer graphics, video game development, and virtual and augmented reality programming.

Michael Pascale is a software engineer interested in the foundations of computer science, programming languages, and emerging technologies. He earned his B.S. in Computer Science from Adelphi University in 2019. He strongly supports open source software and open access educational resources.

CHAPTER 1

Introduction to Computer Graphics

The importance of computer graphics in modern society is illustrated by the great quantity and variety of applications and their impact on our daily lives. Computer graphics can be two-dimensional (2D) or three-dimensional (3D), animated, and interactive. They are used in data visualization to identify patterns and relationships, and also in scientific visualization, enabling researchers to model, explore, and understand natural phenomena. Computer graphics are used for medical applications, such as magnetic resonance imaging (MRI) and computed tomography (CT) scans, and architectural applications, such as creating blueprints or virtual models. They enable the creation of tools such as training simulators and software for computer-aided engineering and design. Many aspects of the entertainment industry make use of computer graphics to some extent: movies may use them for creating special effects, generating photorealistic characters, or rendering entire films, while video games are primarily interactive graphics-based experiences. Recent advances in computer graphics hardware and software have even helped virtual reality and augmented reality technology enter the consumer market.

The field of computer graphics is continuously advancing, finding new applications, and increasing in importance. For all these reasons, combined with the inherent appeal of working in a highly visual medium, the field of computer graphics is an exciting area to learn about, experiment with, and work in. In this book, you’ll learn how to create a robust framework capable of rendering and animating interactive three-dimensional scenes using modern graphics programming techniques.

Before diving into programming and code, you’ll first need to learn about the core concepts and vocabulary in computer graphics. These ideas will be revisited repeatedly throughout this book, and so it may help to periodically review parts of this chapter to keep the overall process in mind. In the second half of this chapter, you’ll learn how to install the necessary software and set up your development environment.

1.1 CORE CONCEPTS AND VOCABULARY


Our primary goal is to generate two-dimensional images of three-dimensional scenes; this process is called rendering the scene. Scenes may contain two- and three-dimensional objects, from simple geometric shapes such as boxes and spheres, to complex models representing real-world or imaginary objects such as teapots or alien lifeforms. These objects may simply appear to be a single color, or their appearance may be affected by textures (images applied to surfaces), light sources that result in shading (the darkness of an object not in direct light) and shadows (the silhouette of one object's shape on the surface of another object), or environmental properties such as fog. Scenes are rendered from the point of view of a virtual camera, whose relative position and orientation in the scene, together with its intrinsic properties such as angle of view and depth of field, determine which objects will be visible or partially obscured by other objects when the scene is rendered. A 3D scene containing multiple shaded objects and a virtual camera is illustrated in Figure 1.1. The region contained within the truncated pyramid shape outlined in white (called a frustum) indicates the space visible to the camera. In Figure 1.1, this region completely contains the red and green cubes, but only contains part of the blue sphere, and the yellow cylinder lies completely outside of this region. The results of rendering the scene in Figure 1.1 are shown in Figure 1.2.

From a more technical, lower-level perspective, rendering a scene produces a raster—an array of pixels (picture elements) which will be displayed on a screen, arranged in a two-dimensional grid. Pixels are typically extremely small; zooming in on an image can illustrate the presence of individual pixels, as shown in Figure 1.3.

FIGURE 1.1 Three-dimensional scene with geometric objects, viewing region (white outline) and virtual camera (lower right).

FIGURE 1.2 Results of rendering the scene from Figure 1.1.

FIGURE 1.3 Zooming in on an image to illustrate individual pixels.



FIGURE 1.4 Various colors and their corresponding (R, G, B) values.

On modern computer systems, pixels specify colors using triples of floating-point numbers between 0 and 1 to represent the amount of red, green, and blue light present in a color; a value of 0 means none of that color is present, while a value of 1 means that color is displayed at full (100%) intensity. These three colors are typically used since photoreceptors in the human eye take in those particular colors. The triple (1, 0, 0) represents red, (0, 1, 0) represents green, and (0, 0, 1) represents blue. Black and white are represented by (0, 0, 0) and (1, 1, 1), respectively. Additional colors and their corresponding triples of values specifying the amounts of red, green, and blue (often called RGB values) are illustrated in Figure 1.4.

The quality of an image depends in part on its resolution (the number of pixels in the raster) and precision (the number of bits used for each pixel). As each bit has two possible values (0 or 1), the number of colors that can be expressed with N-bit precision is 2^N. For example, early video game consoles with 8-bit graphics were able to display 2^8 = 256 different colors. Monochrome displays could be said to have 1-bit graphics, while modern displays often feature “high color” (16-bit, 65,536 color) or “true color” (24-bit, more than 16 million colors) graphics. Figure 1.5 illustrates the same image rendered with high precision but different resolutions, while Figure 1.6 illustrates the same image rendered with high resolution but different precision levels.
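As a quick check of the arithmetic above, the following short snippet (a simple illustration, not code from the book's framework) computes the number of representable colors for several bit precisions:

for bits in [1, 8, 16, 24]:
    print(bits, "bits per pixel allows", 2 ** bits, "colors")

Running it prints 2, 256, 65536, and 16777216, matching the monochrome, 8-bit, high color, and true color figures mentioned above.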
In computer science, a buffer (or data buffer, or buffer memory) is a part of a computer's memory that serves as temporary storage for data while it is being moved from one location to another. Pixel data is stored in a region of memory called the framebuffer. A framebuffer may contain multiple buffers that store different types of data for each pixel. At a minimum, the framebuffer must contain a color buffer, which stores RGB values. When rendering a 3D scene, the framebuffer must also contain a depth buffer, which stores distances from points on scene objects to the virtual camera. Depth values are used to determine whether the various points on each object are in front of or behind other objects (from the camera's perspective), and thus whether they will be visible when the scene is rendered.

FIGURE 1.5 A single image rendered with different resolutions.

FIGURE 1.6 A single image rendered with different precisions.



If one scene object obscures another and a transparency effect is desired, the renderer makes use of alpha values: floating-point numbers between 0 and 1 that specify how overlapping colors should be blended together; the value 0 indicates a fully transparent color, while the value 1 indicates a fully opaque color. Alpha values are also stored in the color buffer along with RGB color values; the combined data is often referred to as RGBA color values. Finally, framebuffers may contain a buffer called a stencil buffer, which may be used to store values used in generating advanced effects, such as shadows, reflections, or portal rendering.
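To make the framebuffer idea concrete, the following sketch (an illustration only, using Numpy arrays in ordinary memory rather than actual GPU buffers) models a color buffer of RGBA values and a depth buffer for a small raster:

import numpy

width, height = 4, 3
# color buffer: one RGBA value per pixel, initialized to opaque black
colorBuffer = numpy.zeros((height, width, 4), dtype=numpy.float32)
colorBuffer[:, :, 3] = 1.0
# depth buffer: one distance value per pixel, initialized to "far away"
depthBuffer = numpy.full((height, width), numpy.inf, dtype=numpy.float32)
print(colorBuffer.shape, depthBuffer.shape)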
In addition to rendering three-dimensional scenes, another goal in computer graphics is to create animated scenes. Animations consist of a sequence of images displayed in quick enough succession that the viewer interprets the objects in the images to be continuously moving or changing in appearance. Each image that is displayed is called a frame. The speed at which these images appear is called the frame rate and is measured in frames per second (FPS). The standard frame rate for movies and television is 24 FPS. Computer monitors typically display graphics at 60 FPS. For virtual reality simulations, developers aim to attain 90 FPS, as lower frame rates may cause disorientation and other negative side effects in users. Since computer graphics must render these images in real time, often in response to user interaction, it is vital that computers be able to do so quickly.
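For a sense of the time budget these frame rates imply, a quick calculation (not part of the framework) shows the number of milliseconds available to produce each frame:

for fps in [24, 60, 90]:
    print(fps, "FPS allows", round(1000 / fps, 2), "milliseconds per frame")

At 60 FPS, for example, all application logic and rendering for a frame must finish in roughly 16.67 milliseconds.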
In the early 1990s, computers relied on the central processing unit (CPU) circuitry to perform the calculations needed for graphics. As real-time 3D graphics became increasingly common in video game platforms (including arcades, gaming consoles, and personal computers), there was increased demand for specialized hardware for rendering these graphics. This led to the development of the graphics processing unit (GPU), a term coined by the Sony Corporation that referred to the circuitry in their PlayStation video game console, released in 1994. The Sony GPU performed graphics-related computational tasks including managing a framebuffer, drawing polygons with textures, and shading and transparency effects. The term GPU was popularized by the NVidia Corporation in 1999 with their release of the GeForce 256, a single-chip processor that performed geometric transformations and lighting calculations in addition to the rendering computations performed by earlier hardware implementations. NVidia was the first company to produce a GPU capable of being programmed by developers: each geometric vertex could be processed by a short program, as could every rendered pixel, before the resulting image was displayed on screen.

This processor, the GeForce 3, was introduced in 2001 and was also used in the Xbox video game console. In general, GPUs feature a highly parallel structure that enables them to be more efficient than CPUs for rendering computer graphics. As computer technology advances, so does the quality of the graphics that can be rendered; modern systems are able to produce real-time photorealistic graphics at high resolutions.
Programs that are run by GPUs are called shaders, initially so named because they were used for shading effects, but now used to perform many different computations required in the rendering process. Just as there are many high-level programming languages (such as Java, JavaScript, and Python) used to develop CPU-based applications, there are many shader programming languages. Each shader language implements an application programming interface (API), which defines a set of commands, functions, and protocols that can be used to interact with an external system—in this case, the GPU. Some APIs and their corresponding shader languages include

• The DirectX API and High-Level Shading Language (HLSL), used on Microsoft platforms, including the Xbox game console
• The Metal API and Metal Shading Language, which runs on modern Mac computers, iPhones, and iPads
• The OpenGL (Open Graphics Library) API and OpenGL Shading Language (GLSL), a cross-platform library.

This book will focus on OpenGL, as it is the most widely adopted graphics API. As a cross-platform library, visual results will be consistent on any supported operating system. Furthermore, OpenGL can be used in concert with a variety of high-level languages using bindings: software libraries that bridge two programming languages, enabling functions from one language to be used in another. For example, some bindings to OpenGL include

• JOGL (https://jogamp.org/jogl/www/) for Java
• WebGL (https://www.khronos.org/webgl/) for JavaScript
• PyOpenGL (http://pyopengl.sourceforge.net/) for Python

The initial version of OpenGL was released by Silicon Graphics, Inc. (SGI) in 1992 and has been managed by the Khronos Group since 2006. The Khronos Group is a non-profit technology consortium, whose members include graphics card manufacturers and general technology companies. New versions of the OpenGL specification are released regularly to support new features and functions. In this book, you will learn about many of the OpenGL functions that allow you to take advantage of the graphics capabilities of the GPU and render some truly impressive three-dimensional scenes. The steps involved in this rendering process are described in detail in the sections that follow.

1.2 THE GRAPHICS PIPELINE


A graphics pipeline is an abstract model that describes a sequence of steps needed to render a three-dimensional scene. Pipelining allows a computational task to be split into subtasks, each of which can be worked on in parallel, similar to an assembly line for manufacturing products in a factory, which increases overall efficiency. Graphics pipelines increase the efficiency of the rendering process, enabling images to be displayed at faster rates. Multiple pipeline models are possible; the one described in this section is commonly used for rendering real-time graphics using OpenGL, which consists of four stages (illustrated by Figure 1.7):

• Application Stage: initializing the window where rendered graphics will be displayed; sending data to the GPU
• Geometry Processing: determining the position of each vertex of the geometric shapes to be rendered, implemented by a program called a vertex shader
• Rasterization: determining which pixels correspond to the geometric shapes to be rendered
• Pixel Processing: determining the color of each pixel in the rendered image, involving a program called a fragment shader

Each of these stages is described in more detail in the sections that follow; the next chapter contains code that will begin to implement many of the processes described here.

FIGURE 1.7 The graphics pipeline.



1.2.1 Application Stage

The application stage primarily involves processes that run on the CPU. One of the first tasks is to create a window where the rendered graphics will be displayed. When working with OpenGL, this can be accomplished using a variety of programming languages. The window (or a canvas-like object within the window) must be initialized so that the graphics are read from the GPU framebuffer. In the case of animated or interactive applications, the main application contains a loop that re-renders the scene repeatedly, typically aiming for a rate of 60 FPS. Other processes that may be handled by the CPU include monitoring hardware for user input events, or running algorithms for tasks such as physics simulation and collision detection.

Another class of tasks performed by the application includes reading data required for the rendering process and sending it to the GPU. This data may include vertex attributes (which describe the appearance of the geometric shapes being rendered), images that will be applied to surfaces, and source code for the vertex shader and fragment shader programs (which will be used later on during the graphics pipeline). OpenGL describes the functions that can be used to transmit this data to the GPU; these functions are accessed through the bindings of the programming language used to write the application. Vertex attribute data is stored in GPU memory buffers called vertex buffer objects (VBOs), while images that will be used as textures are stored in texture buffers. It is important to note that this stored data is not initially assigned to any particular program variables; these associations are specified later. Finally, source code for the vertex shader and fragment shader programs needs to be sent to the GPU, compiled, and loaded. If needed, buffer data can be updated during the application's main loop, and additional data can be sent to shader programs as well.

Once the necessary data has been sent to the GPU, before rendering can take place, the application needs to specify the associations between attribute data stored in VBOs and attribute variables in the vertex shader program. A single geometric shape may have multiple attributes for each vertex (such as position and color), and the corresponding data is streamed from buffers to variables in the shader during the rendering process. It is also frequently necessary to work with many sets of such associations: there may be multiple geometric shapes (with data stored in different buffers) that are rendered by the same shader program, or each shape may be rendered by a different shader program.

FIGURE 1.8 Wireframe meshes representing a sphere and a teapot.

These sets of associations can be conveniently managed by using vertex array objects (VAOs), which store this information and can be activated and deactivated as needed during the rendering process.
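As a preview of how an application can transmit vertex data through the PyOpenGL binding, the sketch below creates a VAO and a VBO and uploads an array of positions. It assumes that an OpenGL window and context have already been created (for example, with Pygame, as in Chapter 2), and the variable names are illustrative rather than taken from this book's framework.

import numpy
from OpenGL.GL import *

# positions of three vertices (assumes an OpenGL context is already active)
vertexPositions = numpy.array(
    [[0.0, 0.5, 0.0], [-0.5, -0.5, 0.0], [0.5, -0.5, 0.0]],
    dtype=numpy.float32)

# a vertex array object stores a set of attribute associations
vaoRef = glGenVertexArrays(1)
glBindVertexArray(vaoRef)

# a vertex buffer object stores the raw attribute data
vboRef = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vboRef)
glBufferData(GL_ARRAY_BUFFER, vertexPositions.ravel(), GL_STATIC_DRAW)
# associations between this buffer and shader variables are specified later,
# once a shader program has been compiled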

1.2.2 Geometry Processing

In computer graphics, the shape of a geometric object is defined by a mesh: a collection of points that are grouped into lines or triangles, as illustrated in Figure 1.8.

In addition to the overall shape of an object, additional information may be required to describe how the object should be rendered. The properties or attributes that are specific to rendering each individual point are grouped together into a data structure called a vertex. At a minimum, a vertex must contain the three-dimensional position of the corresponding point. Additional data contained by a vertex often includes

• a color to be used when rendering the point
• texture coordinates (or UV coordinates), which indicate a point in an image that is mapped to the vertex
• a normal vector, which indicates the direction perpendicular to a surface and is typically used in lighting calculations

Figure 1.9 illustrates different renderings of a sphere that make use of these attributes. Additional vertex attributes may be defined as needed.
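Purely as an illustration of the idea (this is not a class from the book's framework), a vertex could be modeled in Python as a small bundle of attributes:

from dataclasses import dataclass

@dataclass
class Vertex:
    position: tuple                    # (x, y, z) location of the point
    color: tuple = (1.0, 1.0, 1.0)     # RGB color used when rendering
    uv: tuple = (0.0, 0.0)             # texture coordinates
    normal: tuple = (0.0, 0.0, 1.0)    # direction perpendicular to the surface

v = Vertex(position=(0.0, 0.5, 0.0), color=(1.0, 0.0, 0.0))
print(v)

In practice, as described above, each attribute is stored in its own GPU buffer rather than in per-vertex Python objects.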
During the geometry processing stage, the vertex shader is applied to each of the vertices; each attribute variable in the shader receives data from a buffer according to previously specified associations.

FIGURE 1.9 Different renderings of a sphere: wireframe, vertex colors, texture, and with lighting effects.

FIGURE 1.10 One scene rendered from multiple camera locations and angles.

The primary purpose of the vertex shader is to determine the final position of each point being rendered, which is typically calculated from a series of transformations:

• the collection of points defining the intrinsic shape of an object may be translated, rotated, and scaled so that the object appears to have a particular location, orientation, and size with respect to a virtual three-dimensional world. This process is called the model transformation; coordinates expressed from this frame of reference are said to be in world space
• there may be a virtual camera with its own position and orientation in the virtual world. In order to render the world from the camera's point of view, the coordinates of each object in the world must be converted to a frame of reference relative to the camera itself. This process is called the view transformation, and coordinates in this context are said to be in view space (or camera space, or eye space). The effect of the placement of the virtual camera on the rendered image is illustrated in Figure 1.10
• the set of points in the world considered to be visible, occupying either a box-shaped or frustum-shaped region, must be scaled to and aligned with the space rendered by OpenGL: a cube-shaped region consisting of all points whose coordinates are between −1 and 1. The position of each point returned by the vertex shader is assumed to be expressed in this frame of reference. Any points outside this region are automatically discarded or clipped from the scene; coordinates expressed at this stage are said to be in clip space. This task is accomplished with a projection transformation. More specifically, it is called an orthographic projection or a perspective projection, depending on whether the shape of the visible world region is a box or a frustum. A perspective projection is generally considered to produce more realistic images, as objects that are farther away from the virtual camera will require greater compression by the transformation and thus appear smaller when the scene is rendered. The differences between the two types of projections are illustrated in Figure 1.11.

FIGURE 1.11 A series of cubes rendered with orthogonal projection (a) and perspective projection (b).

In addition to these transformation calculations, the vertex shader may perform additional calculations and send additional information to the fragment shader as needed.
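Chapter 3 develops these transformations in detail; as a small preview (a Numpy illustration, not code from the book), a translation can be applied to a point by multiplying a 4×4 matrix by the point's homogeneous coordinates:

import numpy

# translation by (2, 3, 4), expressed as a 4x4 matrix
translation = numpy.array([
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1]], dtype=float)

# the point (1, 1, 1), with a fourth homogeneous coordinate equal to 1
point = numpy.array([1, 1, 1, 1], dtype=float)

print(translation @ point)   # prints [3. 4. 5. 1.]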

1.2.3 Rasterization

Once the final positions of each vertex have been specified by the vertex shader, the rasterization stage begins. The points themselves must first be grouped into the desired type of geometric primitive: points, lines, or triangles, which consist of sets of 1, 2, or 3 points. In the case of lines or triangles, additional information must be specified. For example, consider an array of points [A, B, C, D, E, F] to be grouped into lines. They could be grouped in disjoint pairs, as in (A, B), (C, D), (E, F), resulting in a set of disconnected line segments. Alternatively, they could be grouped in overlapping pairs, as in (A, B), (B, C), (C, D), (D, E), (E, F), resulting in a set of connected line segments (called a line strip). The type of geometric primitive and method for grouping points is specified using an OpenGL function parameter when the rendering process begins. The process of grouping points into geometric primitives is called primitive assembly.
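The two grouping methods just described can be expressed directly in Python (a small illustration, not framework code):

points = ["A", "B", "C", "D", "E", "F"]

# disjoint pairs: separate line segments
segments = [(points[i], points[i + 1]) for i in range(0, len(points) - 1, 2)]
print(segments)    # [('A', 'B'), ('C', 'D'), ('E', 'F')]

# overlapping pairs: a connected line strip
lineStrip = [(points[i], points[i + 1]) for i in range(len(points) - 1)]
print(lineStrip)   # [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'E'), ('E', 'F')]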
Once the geometric primitives have been assembled, the next step is to determine which pixels correspond to the interior of each geometric primitive. Since pixels are discrete units, they will typically only approximate the continuous nature of a geometric shape, and a criterion must be given to clarify which pixels are in the interior. Three simple criteria could be

1. the entire pixel area is contained within the shape
2. the center point of the pixel is contained within the shape
3. any part of the pixel is contained within the shape

The effects of applying each of these criteria to a triangle are illustrated in Figure 1.12, where the original triangle appears outlined in blue, and pixels meeting the criteria are shaded gray.
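As an illustration of criterion 2 (and only a sketch; this is not the algorithm used by the GPU or by this book's framework), the following function tests whether a pixel's center lies inside a triangle by checking that the center is on the same side of each edge as the opposite vertex:

def sameSide(p, a, b, c):
    # cross products indicate whether p and c lie on the same side of line ab
    cross1 = (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0])
    cross2 = (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
    return cross1 * cross2 >= 0

def pixelCenterInTriangle(px, py, a, b, c):
    center = (px + 0.5, py + 0.5)
    return (sameSide(center, a, b, c) and
            sameSide(center, b, c, a) and
            sameSide(center, c, a, b))

triangle = ((0.0, 0.0), (8.0, 0.0), (0.0, 6.0))
print(pixelCenterInTriangle(1, 1, *triangle))   # True
print(pixelCenterInTriangle(7, 5, *triangle))   # False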
For each pixel corresponding to the interior of a shape, a fragment is created: a collection of data used to determine the color of a single pixel in a rendered image. The data stored in a fragment always includes the raster position, also called pixel coordinates. When rendering a three-dimensional scene, fragments will also store a depth value, which is needed when points on different geometric objects would overlap from the perspective of the viewer. When this happens, the associated fragments would correspond to the same pixel, and the depth value determines which fragment's data should be used when rendering this pixel.

Additional data may be assigned to each vertex, such as a color, and passed along from the vertex shader to the fragment shader. In this case, a new data field is added to each fragment.
FIGURE 1.12 Different criteria for rasterizing a triangle.



FIGURE 1.13 Interpolating color attributes.

The value assigned to this field at each interior point is interpolated from the values at the vertices: calculated using a weighted average, depending on the distance from the interior point to each vertex. The closer an interior point is to a vertex, the greater the weight of that vertex's value when calculating the interpolated value. For example, if the vertices of a triangle are assigned the colors red, green, and blue, then each pixel corresponding to the interior of the triangle will be assigned a combination of these colors, as illustrated in Figure 1.13.
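The weighted average described above can be written out directly; in the following sketch (an illustration of the idea only, not framework code), the weights would in practice be determined by the interior point's position relative to the three vertices and must sum to 1:

vertexColors = [(1.0, 0.0, 0.0),    # red
                (0.0, 1.0, 0.0),    # green
                (0.0, 0.0, 1.0)]    # blue

def interpolateColor(weights, colors):
    # weighted average of each color component
    return tuple(sum(w * c[i] for w, c in zip(weights, colors))
                 for i in range(3))

# a point equally distant from all three vertices receives an equal blend
print(interpolateColor([1/3, 1/3, 1/3], vertexColors))
# a point close to the first (red) vertex is mostly red
print(interpolateColor([0.8, 0.1, 0.1], vertexColors))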

1.2.4 Pixel Processing

The primary purpose of this stage is to determine the final color of each pixel, storing this data in the color buffer within the framebuffer. During the first part of the pixel processing stage, a program called the fragment shader is applied to each of the fragments to calculate their final color. This calculation may involve a variety of data stored in each fragment, in combination with data globally available during rendering, such as

• a base color applied to the entire shape
• colors stored in each fragment (interpolated from vertex colors)
• textures (images applied to the surface of the shape, illustrated by Figure 1.14), where colors are sampled from locations specified by texture coordinates
• light sources, whose relative position and/or orientation may lighten or darken the color, depending on the direction the surface is facing at a point, specified by normal vectors

FIGURE 1.14 An image file (a) used as a texture for a 3D object (b).

Some aspects of the pixel processing stage are automatically handled by the GPU. For example, the depth values stored in each fragment are used in this stage to resolve visibility issues in a three-dimensional scene, determining which parts of objects are blocked from view by other objects. After the color of a fragment has been calculated, the fragment's depth value will be compared to the value currently stored in the depth buffer at the corresponding pixel coordinates. If the fragment's depth value is smaller than the depth buffer value, then the corresponding point is closer to the viewer than any that were previously processed, and the fragment's color will be used to overwrite the data currently stored in the color buffer at the corresponding pixel coordinates.
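The depth comparison just described can be summarized in a few lines of Python (a conceptual sketch only; in reality the GPU performs this test in hardware):

def processFragment(fragColor, fragDepth, colorBuffer, depthBuffer, pixel):
    # keep the fragment only if it is closer than what is already stored
    if fragDepth < depthBuffer[pixel]:
        depthBuffer[pixel] = fragDepth
        colorBuffer[pixel] = fragColor

colorBuffer = {(0, 0): (0.0, 0.0, 0.0)}
depthBuffer = {(0, 0): float("inf")}
processFragment((1.0, 0.0, 0.0), 5.0, colorBuffer, depthBuffer, (0, 0))  # stored
processFragment((0.0, 1.0, 0.0), 9.0, colorBuffer, depthBuffer, (0, 0))  # discarded
print(colorBuffer[(0, 0)])   # (1.0, 0.0, 0.0)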
Transparency is also handled by the GPU, using the alpha values stored in the color of each fragment. The alpha value of a color is used to indicate how much of this color should be blended with another color. For example, when combining a color C1 with an alpha value of 0.6 with another color C2, the resulting color will be created by adding 60% of the value from each component of C1 to 40% of the value from each component of C2. Figure 1.15 illustrates a simple scene involving transparency.
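The blending calculation in this example can be checked directly (a small illustration, not framework code):

def blend(c1, alpha, c2):
    # combine alpha * C1 with (1 - alpha) * C2, component by component
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(c1, c2))

red = (1.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0)
print(blend(red, 0.6, blue))   # (0.6, 0.0, 0.4)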

FIGURE 1.15 Rendered scene with transparency.

However, rendering transparent objects has some complex subtleties. These calculations occur at the same time that depth values are being resolved, and so scenes involving transparency must render objects in a particular order: all opaque objects must be rendered first (in any order), followed by transparent objects ordered from farthest to closest with respect to the virtual camera. Not following this order may cause transparency effects to fail. For example, consider a scene, such as that in Figure 1.15, containing a single transparent object close to the camera and multiple opaque objects farther from the camera that appear behind the transparent object. Assume that, contrary to the previously described rendering order, the transparent object is rendered first, followed by the opaque objects in some unknown order. When the fragments of the opaque objects are processed, their depth value will be greater than the value stored in the depth buffer (corresponding to the closer transparent object), and so the opaque fragments' color data will automatically be discarded, rather than blended with the currently stored color. Even attempting to use the alpha value of the transparent object stored in the color buffer in this example does not resolve the underlying issue, because when the fragments of each opaque object are being rendered, it is not possible at this point to determine if they may have been occluded from view by another opaque fragment (only the closest depth value, corresponding to the transparent object, is stored), and thus, it is unknown which opaque fragment's color values should be blended into the color buffer.
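The required ordering can be produced with an ordinary sort. In the sketch below (an illustration only; the object and field names are invented for the example), opaque objects are rendered first, followed by transparent objects from farthest to closest:

scene = [
    {"name": "ground", "opaque": True,  "distance": 10.0},
    {"name": "window", "opaque": False, "distance": 2.0},
    {"name": "statue", "opaque": True,  "distance": 6.0},
    {"name": "smoke",  "opaque": False, "distance": 7.0},
]

opaqueObjects = [obj for obj in scene if obj["opaque"]]
transparentObjects = sorted(
    (obj for obj in scene if not obj["opaque"]),
    key=lambda obj: obj["distance"], reverse=True)   # farthest first

for obj in opaqueObjects + transparentObjects:
    print("render", obj["name"])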

1.3 SETTING UP A DEVELOPMENT ENVIRONMENT


Most parts of the graphics pipeline discussed in the previous section—geometry processing, rasterization, and pixel processing—are handled by the GPU, and as mentioned previously, this book will use OpenGL for these tasks. For developing the application, there are many programming languages one could select from. In this book, you will be using Python to develop these applications, as well as a complete graphics framework to simplify the design and creation of interactive, animated, three-dimensional scenes.

1.3.1 Installing Python


To prepare your development environment, the first step is to download and install a recent version of Python (version 3.8 as of this writing) from http://www.python.org (Figure 1.16); installers are available for Windows, Mac OS X, and a variety of other platforms.

• When installing for Windows, check the box next to add to path. Also, select the options custom installation and install for all users; this simplifies setting up alternative development environments later.

The Python installer will also install IDLE, Python's Integrated Development and Learning Environment, which can be used for developing the graphics framework presented throughout this book.

FIGURE 1.16 Python homepage: http://www.python.org.



A more sophisticated alternative is recommended, such as Sublime Text, which will be introduced later on in this chapter, and some of its advantages discussed. (If you are already familiar with an alternative Python development environment, you are of course also welcome to use that instead.)

IDLE has two main window types. The first window type, which automatically opens when you run IDLE, is the shell window, an interactive window that allows you to write Python code which is then immediately executed after pressing the Enter key. Figure 1.17 illustrates this window after entering the statements 123 + 214 and print("Hello, world!"). The second window type is the editor window, which functions as a text editor, allowing you to open, write, and save files containing Python code, which are called modules and typically use the .py file extension. An editor window can be opened from the shell window by selecting either File > New File or File > Open... from the menu bar. Programs may be run from the editor window by choosing Run > Run Module from the menu bar; this will display the output in a shell window (opening a new shell window if none are open). Figure 1.18 illustrates creating a file in the editor window containing the following code:

print("Hello, world!")
print("Have a nice day!")

FIGURE 1.17 IDLE shell window.

FIGURE 1.18 IDLE editor window.



FIGURE 1.19 Results of running the Python program from Figure 1.18.

Figure 1.19 illustrates the results of running this code, which appear in a
shell window.

1.3.2 Python Packages


Once Python has been successfully installed, your next step will be to install some packages, which are collections of related modules that provide additional functionality to Python. The easiest way to do this is by using pip, a software tool for package installation in Python. In particular, you will install

• Pygame (http://www.pygame.org), a package that can be used to easily create windows and handle user input
• Numpy (https://numpy.org/), a package for mathematics and scientific computing
• PyOpenGL and PyOpenGL_accelerate (http://pyopengl.sourceforge.net/), which provide a set of bindings from Python to OpenGL.

If you are using Windows, open Command Prompt or PowerShell (run with administrator privileges so that the packages are automatically available to all users) and enter the following command, which will install all of the packages described above:

py -m pip install pygame numpy PyOpenGL PyOpenGL_accelerate

If you are using MacOS, the command is slightly different. Enter

python3 -m pip install pygame numpy PyOpenGL PyOpenGL_accelerate

To verify that these packages have been installed correctly, open a new IDLE shell window (restart IDLE if it was open before installation). To check Pygame, enter the following code, and press the Enter key:

import pygame

You should see a message that contains the number of the Pygame version that has been installed, such as "pygame 1.9.6", and a greeting message such as "Hello from the pygame community". If instead you see a message that contains the text No module named 'pygame', then Pygame has not been correctly installed. Furthermore, it will be important to install a recent version of Pygame—at least a development version of Pygame 2.0.0. If an earlier version has been installed, return to the command prompt and in the install command above, change pygame to pygame==2.0.0.dev10 to install a more recent version.
Similarly, to check the Numpy installation, instead use the code:

import numpy

In this case, if you see no message at all (just another input prompt), then the installation was successful. If you see a message that contains the text No module named 'numpy', then Numpy has not been correctly installed. Finally, to check PyOpenGL, instead use the code:

import OpenGL

As was the case with testing the Numpy package, if there is no message displayed, then the installation was successful, but a message mentioning that the module is not installed will require you to try re-installing the package.
If you encounter difficulties installing any of these packages, there is additional help available online:

• Pygame: https://www.pygame.org/wiki/GettingStarted
• Numpy: https://numpy.org/install/
• PyOpenGL: http://pyopengl.sourceforge.net/documentation/installation.html

1.3.3 Sublime Text

When working on a large project involving multiple files, you may want to install an alternative development environment, rather than restrict yourself to working with IDLE. The authors particularly recommend Sublime Text, which has the following advantages:

• lines are numbered for easy reference
• tabbed interface for working with multiple files in a single window
• editor supports multi-column layout to view and edit different files simultaneously
• directory view to easily navigate between files in a project
• able to run scripts and display output in console area
• free, full-featured trial version available

To install the application, visit the Sublime Text website (https://www.sublimetext.com/), shown in Figure 1.20, and click on the “download” button (whose text may differ from the figure to reference the operating system you are using). Alternatively, you may click the download link in the navigation bar to view all downloadable versions. After downloading, you will need to run the installation program, which will require administrator-level privileges on your system. If unavailable, you may alternatively download a “portable version” of the software, which can be found via the download link previously mentioned.

FIGURE 1.20 Sublime Text homepage.



FIGURE 1.21 Sublime Text editor window.

FIGURE 1.22 Output from Figure 1.21.

While a free trial version is available, if you choose to use this software extensively, you are encouraged to purchase a license.

After installation, start the Sublime Text software. A new editor window will appear, containing an empty file. As previously mentioned, Sublime Text can be used to run Python scripts automatically, provided that Python has been installed for all users of your computer and it is included on the system path. To try out this feature, in the editor window, as shown in Figure 1.21, enter the text:

print("Hello, world!")

Next, save your file with the name test.py; the .py extension causes Sublime Text to recognize it as a Python script file, and syntax highlighting will be applied. Finally, from the menu bar, select Tools > Build or press the keyboard key combination Ctrl + B to build and run the application. The output will appear in the console area, as illustrated in Figure 1.22.

1.4 SUMMARY AND NEXT STEPS


In this chapter, you learned about the core concepts and vocabulary used in computer graphics, including rendering, buffers, GPUs, and shaders. Then, you learned about the four major stages in the graphics pipeline: the application stage, geometry processing, rasterization, and pixel processing; this section introduced additional terminology, including vertices, VBOs, VAOs, transformations, projections, fragments, and interpolation. Finally, you learned how to set up a Python development environment. In the next chapter, you will use Python to start implementing the graphics framework that will realize these theoretical principles.
CHAPTER 2

Introduction to Pygame and OpenGL

In this chapter, you will learn how to create windows with Pygame and how to draw graphics in these windows with OpenGL. You will start by rendering a point, followed by lines and triangles with a single color. Then, you will draw multiple shapes with multiple colors, create a series of animations involving movement and color transitions, and implement interactive applications with keyboard controlled movement.

2.1 CREATING WINDOWS WITH PYGAME


As indicated in the discussion of the graphics pipeline, the first step in rendering graphics is to develop an application where graphics will be displayed. This can be accomplished with a variety of programming languages; throughout this book, you will write windowed applications using Python and Pygame, a popular Python game development library.

As you write code, it is important to keep a number of software engineering principles in mind, including organization, reusability, and extensibility. To support these principles, the software developed in this book uses an object-oriented design approach. To begin, create a main folder where you will store your source code. Within this folder, you will store the main applications as well as your own packages: folders containing collections of related modules, which in this case will be Python files containing class definitions.


First, you will create a class called Base that initializes Pygame and displays a window. Anticipating that the applications created will eventually feature user interaction and animation, this class will be designed to handle the standard phases or “life cycle” of such an application:

• Startup: During this stage, objects are created, values are initialized, and any required external files are loaded.
• The Main Loop: This stage repeats continuously (typically 60 times per second) while the application is running, and consists of the following three substages:
  • Process Input: Check if the user has performed any action that sends data to the computer, such as pressing keys on a keyboard or clicking buttons on a mouse.
  • Update: Changing values of variables and objects.
  • Render: Create graphics that are displayed on the screen.
• Shutdown: This stage typically begins when the user performs an action indicating that the program should stop running (for example, by clicking a button to quit the application). This stage may involve tasks such as signaling the application to stop checking for user input and closing any windows that were created by the application.

These phases are illustrated by the flowchart in Figure 2.1.


The Base class will be designed to be extended by the various applications throughout this book. In accordance with the principle of modularization, processing user input will be handled by a separate class named Input that you will create later.

FIGURE 2.1 The phases of an interactive graphics-based application.



To begin, in your main folder, create a new folder called core. For Python to recognize this (or any) folder as a package, within the folder, you need to create a new file named __init__.py (note the double underscore characters that occur before and after init). Any code in the __init__.py file will be run when modules from this package are imported into another program; leave this as an empty file. Next, also in the core folder, create a new file named base.py, and enter the following code (which contains some basic comments that will be explained more fully after):

import pygame
import sys

class Base(object):

    def __init__(self, screenSize=[512, 512]):

        # initialize all pygame modules
        pygame.init()
        # indicate rendering details
        displayFlags = pygame.DOUBLEBUF | pygame.OPENGL
        # initialize buffers to perform antialiasing
        pygame.display.gl_set_attribute(
            pygame.GL_MULTISAMPLEBUFFERS, 1)
        pygame.display.gl_set_attribute(
            pygame.GL_MULTISAMPLESAMPLES, 4)
        # use a core OpenGL profile for cross-platform compatibility
        pygame.display.gl_set_attribute(
            pygame.GL_CONTEXT_PROFILE_MASK,
            pygame.GL_CONTEXT_PROFILE_CORE)
        # create and display the window
        self.screen = pygame.display.set_mode( screenSize, displayFlags )
        # set the text that appears in the title bar of the window
        pygame.display.set_caption("Graphics Window")

        # determine if main loop is active
        self.running = True
        # manage time-related data and operations
        self.clock = pygame.time.Clock()

    # implement by extending class
    def initialize(self):
        pass

    # implement by extending class
    def update(self):
        pass

    def run(self):

        ## startup ##
        self.initialize()

        ## main loop ##
        while self.running:

            ## process input ##

            ## update ##
            self.update()

            ## render ##
            # display image on screen
            pygame.display.flip()

            # pause if necessary to achieve 60 FPS
            self.clock.tick(60)

        ## shutdown ##
        pygame.quit()
        sys.exit()

In addition to the comments throughout the code above, the following


observations are noteworthy:

• Te screenSize parameter can be changed as desired. At present,


if the screen size is set to non-square dimensions, this will cause the
rendered image to appear stretched along one direction. Tis issue
will be addressed in Chapter 4 when discussing aspect ratios.
• Te title of the window is set with the function pygame.display.
set _ caption and can be changed as desired.
Introduction to Pygame and OpenGL ◾ 29

• Te displayFlags variable is used to combine constants rep-


resenting diferent display settings with the bitwise or operator
'|'. Additional settings (such as allowing a resizable window) are
described at https://www.pygame.org/docs/ref/display.html.
• Te pygame.DOUBLEBUF constant indicates that a rendering tech-
nique called double bufering will be used, which employs two image
bufers. Te pixel data from one bufer is displayed on screen while
new data is being written into a second bufer. When the new image
is ready, the application switches which bufer is displayed on screen
and which bufer will be written to; this is accomplished in Pygame
with the statement pygame.display.flip(). Te double bufer-
ing technique eliminates an unwanted visual artifact called screen
tearing, in which the pixels from a partially updated bufer are dis-
played on screen, which happens when a single bufer is used and the
rendering cycles are not synchronized with the display refresh rate.
• Antialiasing is a rendering technique used to remove the appear-
ance of jagged, pixelated lines along edges of polygons in a rasterized
image. Te two lines of code beneath the antialiasing comment indi-
cate that each pixel at the edge of a polygon will be sampled multiple
times, and in each sample, a slight ofset (smaller than the size of a
pixel) is applied to all screen coordinates. Te color samples are aver-
aged, resulting in a smoother transition between pixel colors along
polygon edges.
• Starting in OpenGL version 3.2 (introduced in 2009), deprecation was introduced: older functions were gradually replaced by more efficient versions, and future versions may no longer contain or support the older functions. This led to core and compatibility profiles: core profiles are only guaranteed to implement functions present in the current version of the API, while compatibility profiles will additionally support many functions that may have been deprecated. Each hardware vendor decides which versions of OpenGL will be supported by each profile. In recent versions of Mac OS X (10.7 and later) at the time of writing, the core profile supported is 3.2, while the compatibility profile supported is 2.1. Since some of the OpenGL features (such as vertex array objects or VAOs) that will be needed in constructing the graphics framework in this book were introduced in OpenGL version 3.0, a core profile is specified for maximum cross-platform compatibility (see the sketch after this list). The corresponding line of code also requires at least Pygame version 2.0 to run.
• The function pygame.display.set_mode sets the properties of the window and also makes the window appear on screen.
• The Clock object (initialized with pygame.time.Clock()) has many uses, such as keeping track of how much time has passed since a previous function call, or how many times a loop has run during the past second. Each iteration of the main loop results in an image being displayed, which can be considered a frame of an animation. Since the speed at which these images appear is the speed at which the main loop runs, both are measured in terms of frames per second (FPS). By default, the main loop will run as fast as possible, sometimes faster than 60 FPS, in which case the program may attempt to use nearly 100% of the CPU. Since most computer displays only update 60 times per second, there is no need to run faster than this, and the tick function called at the end of the main loop results in a short pause that brings the execution speed down to 60 FPS.
• The initialize and update functions are meant to be implemented by the applications that extend this class. Since every function must contain at least one statement, the pass statement is used here, which is a null statement; it does nothing.
• The run function contains all the phases of an interactive graphics-based application, as described previously; the corresponding code is indicated by comments beginning with ##.
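
For reference, the multisample and context-profile settings mentioned in the antialiasing and core-profile notes above are made with pygame.display.gl_set_attribute calls placed before pygame.display.set_mode; this is a sketch rather than a verbatim excerpt of the listing, and the specific sample count and version numbers shown are illustrative choices:

    # antialiasing: sample each boundary pixel multiple times
    pygame.display.gl_set_attribute(pygame.GL_MULTISAMPLEBUFFERS, 1)
    pygame.display.gl_set_attribute(pygame.GL_MULTISAMPLESAMPLES, 4)
    # request an OpenGL core profile context (requires Pygame 2.0 or later)
    pygame.display.gl_set_attribute(pygame.GL_CONTEXT_MAJOR_VERSION, 3)
    pygame.display.gl_set_attribute(pygame.GL_CONTEXT_MINOR_VERSION, 3)
    pygame.display.gl_set_attribute(pygame.GL_CONTEXT_PROFILE_MASK,
        pygame.GL_CONTEXT_PROFILE_CORE)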

The next task we need to address is basic input processing; at a minimum, the user needs to be able to terminate the program, which will set the variable self.running to False in the code above. To this end, in the core folder, create a new file named input.py containing the following code:

import pygame

class Input(object):

    def __init__(self):
        # has the user quit the application?
        self.quit = False

    def update(self):
        # iterate over all user input events (such as keyboard or
        #   mouse) that occurred since the last time events were checked
        for event in pygame.event.get():
            # quit event occurs by clicking button to close window
            if event.type == pygame.QUIT:
                self.quit = True

At present, the Input class only monitors for quit-type events; in later sections, keyboard functionality will be added as well. For now, return to the Base class. After the import statements, add the code:

from core.input import Input

This will enable you to use the Input class from the input module in the core package. It should be noted that the import statements are written assuming that your application files (which will extend the Base class) will be stored in the main directory (which contains all the packages).
Next, at the end of the init function, add the code:

        # manage user input
        self.input = Input()

This will create and store an instance of the Input class when the Base class is created.
Finally, in the run function, after the comment ## process input ##, add the code:

            self.input.update()
            if self.input.quit:
                self.running = False

This will enable the user to stop the application, as described prior to the code listing for the Input class.

You will next write an application that uses these classes to create a window. The general approach in this and similar applications will be to extend the Base class, implement the initialize and update functions, and then create an instance of the new class and call the run function. To proceed, in your main folder, create a new file named test-2-1.py with the code:

from core.base import Base

class Test(Base):

    def initialize(self):
        print("Initializing program...")

    def update(self):
        pass

# instantiate this class and run the program
Test().run()

In this program, a message is printed during initialization for illustrative purposes. However, no print statements are present in the update function, as attempting to print text 60 times per second would cause extreme slowdown in any program. Run this program, and you should see a blank window appear on screen (as illustrated in Figure 2.2) and the text "Initializing program..." will appear in the shell. When you click on the button to close the window, the window should close, as expected.

FIGURE 2.2 The Pygame window.

2.2 DRAWING A POINT


Now that you are able to create a windowed application, the next goal is to render a single point on the screen. You will accomplish this by writing the simplest possible vertex shader and fragment shader, using OpenGL Shading Language. You will then learn how to compile and link the shaders to create a graphics processing unit (GPU) program. Finally, you will extend the framework begun in the previous section to use GPU programs to display graphics in the Pygame application window.

2.2.1 OpenGL Shading Language


OpenGL Shading Language (GLSL) is a C-style language, and is both similar to and different from Python in a number of ways. Similar to Python, there are "if statements" to process conditional statements, "for loops" to iterate a group of statements over a range of values, "while loops" to iterate a group of statements as long as a given condition is true, and functions that take a set of inputs and perform some computations (and optionally return an output value). Unlike Python, variables in GLSL must be declared with an assigned type, the end of each statement must be indicated with a semicolon, statements are grouped using braces (as opposed to indentation), comments are preceded by "//" (rather than "#"), and functions must specify the types of their input parameters and return value. The details of the differences in Python and GLSL syntax will be illustrated and indicated as the features are introduced in the examples throughout this book.
The basic data types in GLSL are boolean, integer, and floating point values, indicated by bool, int, and float, respectively. GLSL has vector data types, which are often used to store values indicating positions, colors, and texture coordinates. Vectors may have two, three, or four components, indicated by vec2, vec3, and vec4 (for vectors consisting of floats). As a C-like language, GLSL provides arrays and structs: user-defined data types. To facilitate graphics-related computations, GLSL also features matrix data types, which often store transformations (translation, rotation, scaling, and projections), and sampler data types, which represent textures; these will be introduced in later chapters.
The components of a vector data type can be accessed in multiple ways. For example, once a vec4 named v has been initialized, its components can be accessed using array notation ( v[0], v[1], v[2], v[3] ), or using dot notation with any of the following three systems: ( v.x, v.y, v.z, v.w ) or ( v.r, v.g, v.b, v.a ) or ( v.s, v.t, v.p, v.q ). While all these systems are interchangeable, programmers typically choose a system related to the context for increased readability: (x, y, z, w) are used for positions, (r, g, b, a) are used for colors (red, green, blue, alpha), and (s, t, p, q) are used for texture coordinates.
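
As a brief illustration (this snippet is not one of the book's listings; the variable names are arbitrary), all of the following expressions read the same first component of a vec4:

    vec4 v = vec4(0.5, 0.25, 0.0, 1.0);
    float a = v[0];   // array notation
    float b = v.x;    // same component, position-style name
    float c = v.r;    // same component, color-style name
    float d = v.s;    // same component, texture-coordinate-style name
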
Every shader must contain a function named main, similar to the C programming language. No values are returned (which is indicated by the keyword void), and there are no parameters required by the main function; thus, every shader has the following general structure:

void main()
{
    // code statements here
}

In the description of the graphics pipeline from the previous chapter, it was mentioned that a vertex shader will receive data about the geometric shapes being rendered via buffers. At this point, you may be wondering how vertex attribute data is sent from buffers to a vertex shader if the main function takes no parameters. In general, data is passed in and out of shaders via variables that are declared with certain type qualifiers: additional keywords that modify the properties of a variable. For example, many programming languages have a qualifier to indicate that the value of a variable will remain constant; in GLSL, this is indicated by the keyword const. Additionally, when working with shaders, the keyword in indicates that the value of a variable will be supplied by the previous stage of the graphics pipeline, while the keyword out indicates that a value will be passed along to the next stage of the graphics pipeline. More specifically, in the context of a vertex shader, in indicates that values will be supplied from a buffer, while out indicates that values will be passed to the fragment shader. In the context of a fragment shader, in indicates that values will be supplied from the vertex shader (interpolated during the rasterization stage), while out indicates values will be stored in one of the various buffers (color, depth, or stencil).
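
To illustrate these qualifiers before the full examples below (a generic sketch with invented variable names, not one of the shaders developed in this chapter), a vertex shader and a fragment shader might contain declarations such as:

    // vertex shader
    in vec3 vertexPosition;   // supplied from a buffer
    out vec3 color;           // passed along to the fragment shader
    void main()
    {
        color = vec3(1.0, 0.0, 0.0);
        gl_Position = vec4(vertexPosition, 1.0);   // built-in output, discussed below
    }

    // fragment shader
    in vec3 color;            // interpolated value from the vertex shader
    out vec4 fragColor;       // written to the color buffer
    void main()
    {
        fragColor = vec4(color, 1.0);
    }
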
There are two particular out variables that are required when writing shader code for a GPU program. First, recall that the ultimate goal of the vertex shader is to calculate the position of a point. OpenGL uses the built-in variable gl_Position to store this value; a value must be assigned to this variable by the vertex shader. Second, recall that the ultimate goal of the fragment shader is to calculate the color of a pixel. Early versions of OpenGL used a built-in variable called gl_FragColor to store this value, and each fragment shader was required to assign a value to this variable. Later versions require fragment shader code to explicitly declare an out variable for this purpose. Finally, it should be mentioned that both of these variables are vec4 type variables. For storing color data, this makes sense, as red, green, blue, and alpha (transparency) values are required. For storing position data, this is less intuitive, as a position in three-dimensional space can be specified using only x, y, and z coordinates. By including a fourth coordinate (commonly called w and set to the value 1), it becomes possible for geometric transformations (such as translation, rotation, scaling, and projection) to be represented by and calculated using a single matrix, which will be discussed in detail in Chapter 3.
As indicated at the beginning of this section, the current goal is to write a vertex shader and a fragment shader that will render a single point on the screen. The code presented will avoid the use of buffers and exclusively use built-in variables. (You do not need to create any new files or enter any code at this time.) The vertex shader will consist of the following code:

void main()
{
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}

In early versions of OpenGL, the simplest possible fragment shader could have consisted of the following code:

void main()
{
    gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);
}

For more modern versions of OpenGL, where you need to declare a variable for the output color, you can use the following code for the fragment shader:

out vec4 fragColor;

void main()
{
    fragColor = vec4(1.0, 1.0, 0.0, 1.0);
}

Taken together, the vertex shader and the fragment shader produce a program that renders a point in the center of the screen, colored yellow. If desired, these values can be altered within certain bounds. The x, y, and z components of the position vector may be changed to any value between −1.0 and 1.0, and the point will remain visible; any values outside this range place the point outside of the region rendered by OpenGL and will result in an empty image being rendered. Changing the z coordinate (within this range) will have no visible effect at this time, since no perspective transformations are being applied. Similarly, the r, g, and b components of the color vector may be changed as desired, although dark colors may be difficult to distinguish on the default black background color. It should also be noted that the number types int and float are not interchangeable; entering just 1 rather than 1.0 may cause shader compilation errors.
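
For example, the following variation (an illustration of the ranges described above, not a listing from the book) moves the point to the upper-right quadrant and colors it cyan:

    // vertex shader: move the point up and to the right
    void main()
    {
        gl_Position = vec4(0.5, 0.5, 0.0, 1.0);
    }

    // fragment shader: full green and blue, no red
    out vec4 fragColor;
    void main()
    {
        fragColor = vec4(0.0, 1.0, 1.0, 1.0);
    }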

2.2.2 Compiling GPU Programs


Now that you’ve learned the basics about writing shader code, the next
step is to learn how to compile and link the vertex and fragment shaders
to create a GPU program. To continue with the goal of creating a reusable
framework, you will create a utility class that will perform these tasks. In
this section and those that follow, many of the functions from PyOpenGL
will be introduced and described in the following style:

functionName( parameter1 , parameter2 , … )
Description of function and parameters.

Many of these functions will have syntax identical to that presented in the official OpenGL reference pages maintained by the Khronos Group at https://www.khronos.org/registry/OpenGL-Refpages/. However, there will be a few differences, because the OpenGL Shading Language (GLSL) is a C-style language, and PyOpenGL is a Python binding. In particular, arrays are handled differently in these two programming languages, which is often reflected in the Python functions requiring fewer arguments.
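
For instance (a comparison sketch; the C prototype is taken from the OpenGL specification), supplying shader source code requires four arguments in C but only two in the PyOpenGL call used later in this chapter:

    # C prototype: void glShaderSource(GLuint shader, GLsizei count,
    #                                  const GLchar **string, const GLint *length);
    # PyOpenGL accepts a Python string directly, so the call reduces to:
    glShaderSource(shaderRef, shaderCode)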
The first step towards compiling GPU programs centers on the individual shaders. Shader objects must be created to store shader source code, the source code must be sent to these objects, and then the shaders must be compiled. This is accomplished using the following three functions:

glCreateShader( shaderType )
Creates an empty shader object, which is used to store the source code of a shader, and returns a value by which it can be referenced. The type of shader (such as a vertex shader or a fragment shader) is specified with the shaderType parameter, whose value will be an OpenGL constant such as GL_VERTEX_SHADER or GL_FRAGMENT_SHADER.

glShaderSource( shaderRef, shaderCode )
Stores the source code in the string parameter shaderCode in the shader object referenced by the parameter shaderRef.

glCompileShader( shaderRef )
Compiles the source code stored in the shader object referenced by the parameter shaderRef.

Since mistakes may be made when writing shader code, compiling a shader may or may not succeed. Unlike application compilation errors, which are typically automatically displayed to the programmer, shader compilation errors need to be checked for specifically. This process is typically handled in multiple steps: checking if compilation was successful and, if not, retrieving the error message and deleting the shader object to free up memory. This is handled with the following functions:

glGetShaderiv( shaderRef, shaderInfo )
Returns information from the shader referenced by the parameter shaderRef. The type of information retrieved is specified with the shaderInfo parameter, whose value will be an OpenGL constant such as GL_SHADER_TYPE (to determine the type of shader) or GL_COMPILE_STATUS (to determine if compilation was successful).

glGetShaderInfoLog( shaderRef )
Returns information about the compilation process (such as errors and warnings) from the shader referenced by the parameter shaderRef.

glDeleteShader( shaderRef )
Frees the memory used by the shader referenced by the parameter shaderRef, and makes the reference available for future shaders that are created.

With an understanding of these functions, coding in Python can begin. The Python binding PyOpenGL provides access to the needed functions and constants through the OpenGL package and its GL namespace. To begin, in the core folder, create a new file named openGLUtils.py with the following code:

from OpenGL.GL import *

# static methods to load and compile OpenGL shaders
#   and link to create programs
class OpenGLUtils(object):

    @staticmethod
    def initializeShader(shaderCode, shaderType):

        # specify required OpenGL/GLSL version
        shaderCode = '#version 330\n' + shaderCode

        # create empty shader object and return reference value
        shaderRef = glCreateShader(shaderType)
        # stores the source code in the shader
        glShaderSource(shaderRef, shaderCode)
        # compiles source code previously stored in the shader object
        glCompileShader(shaderRef)

        # queries whether shader compile was successful
        compileSuccess = glGetShaderiv(shaderRef, GL_COMPILE_STATUS)

        if not compileSuccess:
            # retrieve error message
            errorMessage = glGetShaderInfoLog(shaderRef)
            # free memory used to store shader program
            glDeleteShader(shaderRef)
            # convert byte string to character string
            errorMessage = '\n' + errorMessage.decode('utf-8')
            # raise exception: halt program and print error message
            raise Exception( errorMessage )

        # compilation was successful; return shader reference value
        return shaderRef

Note that in the code above, initializeShader is declared to be static so that it may be called directly from the OpenGLUtils class rather than requiring an instance of the class to be created.
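
For example (an illustrative usage, not a listing from the book; vsCode and fsCode stand for strings containing shader source code), shaders could then be compiled with calls such as:

    from core.openGLUtils import OpenGLUtils
    from OpenGL.GL import *

    vertexShaderRef   = OpenGLUtils.initializeShader(vsCode, GL_VERTEX_SHADER)
    fragmentShaderRef = OpenGLUtils.initializeShader(fsCode, GL_FRAGMENT_SHADER)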
Next, a program object must be created and the compiled shaders must be attached and linked together. These tasks require the use of the following functions:

glCreateProgram( )
Creates an empty program object, to which shader objects can be attached, and returns a value by which it can be referenced.

glAttachShader( programRef, shaderRef )
Attaches a shader object specified by the parameter shaderRef to the program object specified by the parameter programRef.

glLinkProgram( programRef )
Links the vertex and fragment shaders previously attached to the program object specified by the parameter programRef. Among other things, this process verifies that any variables used to send data from the vertex shader to the fragment shader are declared in both shaders consistently.
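
As a sketch of how these functions fit together (the method name initializeProgram and the link-status check, which mirrors the shader compilation check above, are illustrative assumptions rather than a verbatim listing), a companion static method in the OpenGLUtils class could look like:

    @staticmethod
    def initializeProgram(vertexShaderCode, fragmentShaderCode):

        # compile the two shaders using the method defined above
        vertexShaderRef = OpenGLUtils.initializeShader(
            vertexShaderCode, GL_VERTEX_SHADER)
        fragmentShaderRef = OpenGLUtils.initializeShader(
            fragmentShaderCode, GL_FRAGMENT_SHADER)

        # create empty program object and store reference to it
        programRef = glCreateProgram()

        # attach previously compiled shader objects
        glAttachShader(programRef, vertexShaderRef)
        glAttachShader(programRef, fragmentShaderRef)

        # link vertex shader to fragment shader
        glLinkProgram(programRef)

        # query whether the link was successful
        linkSuccess = glGetProgramiv(programRef, GL_LINK_STATUS)
        if not linkSuccess:
            # retrieve error message, free memory, and halt program
            errorMessage = glGetProgramInfoLog(programRef)
            glDeleteProgram(programRef)
            raise Exception( '\n' + errorMessage.decode('utf-8') )

        # linking was successful; return program reference value
        return programRef

The functions glGetProgramiv, glGetProgramInfoLog, and glDeleteProgram used here are the program-object counterparts of the shader functions described earlier.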