
Ray tracing (graphics)



Not to be confused with Ray casting.


This recursive ray tracing of a sphere demonstrates the effects of shallow depth of field, area light sources, diffuse interreflection, ambient occlusion and Fresnel reflection.

In computer graphics, ray tracing is a technique for generating an image by tracing the path
of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The
technique is capable of producing a very high degree of visual realism, usually higher than that of
typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for
applications where the image can be rendered slowly ahead of time, such as in still images and film and
television special effects, and more poorly suited for real-time applications like computer games where speed is
critical. Ray tracing is capable of simulating a wide variety of optical effects, such
as reflection and refraction, scattering, and chromatic aberration.

Contents

1 Algorithm overview
2 Detailed description of ray tracing computer algorithm and its genesis
  2.1 What happens in nature
  2.2 Ray casting algorithm
  2.3 Ray tracing algorithm
  2.4 Advantages over other rendering methods
  2.5 Disadvantages
  2.6 Reversed direction of traversal of scene by the rays
3 In real time
4 Example
5 See also
6 References
7 External links
  7.1 Videos

Algorithm overview

The ray tracing algorithm builds an image by extending rays into a scene

Optical ray tracing describes a method for producing visual images constructed in 3D computer
graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It
works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color
of the object visible through it.

Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using
intermediary tools). Scenes may also incorporate data from images and models captured by means such as
digital photography.

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the
nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection,
examine the material properties of the object, and combine this information to calculate the final color of the
pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast
into the scene.
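
To make this step concrete, here is a minimal Python sketch of the nearest-hit test. The Sphere class and nearest_hit helper are invented for illustration (the article does not prescribe an implementation); the intersection formula is the standard ray-sphere quadratic derived in the Example section below.

```python
import math

class Sphere:
    """Minimal scene object: a center, a radius, and a flat RGB color."""
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

    def intersect(self, origin, direction):
        """Smallest positive ray parameter t at which the ray hits, or None."""
        v = [o - c for o, c in zip(origin, self.center)]      # v = origin - center
        b = sum(vi * di for vi, di in zip(v, direction))      # v . d (d is unit length)
        disc = b * b - (sum(vi * vi for vi in v) - self.radius ** 2)
        if disc < 0:
            return None                                       # ray misses this sphere
        t = -b - math.sqrt(disc)                              # nearer of the two roots
        return t if t > 0 else None

def nearest_hit(origin, direction, objects):
    """Test the ray against every object and keep the closest intersection."""
    best_t, best_obj = None, None
    for obj in objects:
        t = obj.intersect(origin, direction)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_obj = t, obj
    return best_t, best_obj

# One ray from an eye at the origin, looking down -z toward two spheres.
scene = [Sphere((0, 0, -5), 1.0, (1, 0, 0)), Sphere((0, 0, -9), 2.0, (0, 0, 1))]
t, obj = nearest_hit((0, 0, 0), (0, 0, -1), scene)
print(t, obj.color if obj else None)    # hits the nearer (red) sphere at t = 4.0
```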

It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as
actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming
majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward"
simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
A computer simulation that starts by casting rays from the light source is called photon mapping, and it takes
much longer than a comparable ray trace.

Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After
either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray
ceases to travel and the pixel's value is updated. The light intensity of this pixel is computed using a number of
algorithms, which may include the classic rendering algorithm and may also incorporate techniques such
as radiosity.

Detailed description of ray tracing computer algorithm and its genesis


What happens in nature

Ray tracing can achieve a very high degree of visual realism

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress.
One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray
will be a straight line (ignoring relativistic effects). In reality, any combination of four things might happen with
this light ray: absorption, reflection, refraction and fluorescence. A surface may reflect all or part of the light ray,
in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected
and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light
beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the
color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a
longer wavelength colour in a random direction, though this is rare enough that it can be discounted from most
rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light
must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and
refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike
other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress
of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene
and so contribute to the final rendered image.

Ray casting algorithm


Main article: Ray casting

In addition to the high degree of realism, ray tracing can simulate the effects of a camera due to depth of
field and aperture shape (in this case a hexagon).

The first ray casting (versus ray tracing) algorithm used for rendering was presented by Arthur Appel[1] in 1968.
The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the
path of that ray – think of an image as a screen-door, with each square in the screen being a pixel. This is then
the object the eye normally sees through that pixel. Using the material properties and the effect of the lights in
the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a
surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the
surface is computed using traditional 3D computer graphics shading models. One important advantage ray
casting offered over older scanline algorithms is its ability to easily deal with non-planar surfaces and solids,
such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray
casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
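
A minimal sketch of this idea in Python, under the same simplifying assumption (the light is never blocked, so no shadow rays are cast): one ray per pixel of a tiny ASCII "screen door", nearest hit, and Lambertian shading. All names and scene values here are invented for illustration.

```python
import math

def sphere_hit(center, radius, origin, direction):
    """Smallest positive t where origin + t*direction meets the sphere, else None."""
    v = [o - c for o, c in zip(origin, center)]
    b = sum(vi * di for vi, di in zip(v, direction))
    disc = b * b - (sum(vi * vi for vi in v) - radius * radius)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

WIDTH, HEIGHT = 16, 8                        # a tiny "screen door" of pixels
eye = (0.0, 0.0, 0.0)
light_dir = (0.0, 0.70710678, 0.70710678)    # unit direction *toward* the light
center, radius = (0.0, 0.0, -3.0), 1.0

for row in range(HEIGHT):
    line = ""
    for col in range(WIDTH):
        # Map the pixel to a point on a virtual screen one unit in front of the eye.
        x = (col + 0.5) / WIDTH * 2 - 1
        y = 1 - (row + 0.5) / HEIGHT * 2
        norm = math.sqrt(x * x + y * y + 1)
        d = (x / norm, y / norm, -1 / norm)  # unit ray direction through this pixel
        t = sphere_hit(center, radius, eye, d)
        if t is None:
            line += "."                      # background: ray hits nothing
        else:
            p = [e + t * di for e, di in zip(eye, d)]             # hit point
            n = [(pi - ci) / radius for pi, ci in zip(p, center)]  # unit normal
            lam = max(0.0, sum(ni * li for ni, li in zip(n, light_dir)))
            line += "#" if lam > 0.5 else "+"  # crude two-level Lambertian shade
    print(line)
```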

Ray tracing algorithm


The number of reflections a “ray” can take and how it is affected each time it encounters a surface are all controlled
via software settings during ray tracing. Here, each ray was allowed to reflect up to 16 times. Multiple “reflections of
reflections” can thus be seen. Created with Cobalt.

The number of refractions a “ray” can take and how it is affected each time it encounters a surface are all controlled
via software settings during ray tracing. Here, each ray was allowed to refract and reflect up to 9 times. Fresnel
reflections were used. Also note the caustics. Created with V-Ray.

The next important research breakthrough came from Turner Whitted in 1979[2]. Previous algorithms cast rays
from the eye into the scene, but the rays were traced no further. Whitted continued the process. When a ray
hits a surface, it could generate up to three new types of rays: reflection, refraction, and shadow.[3] A reflected
ray continues on in the mirror-reflection direction from a shiny surface. It is then intersected with objects in the
scene; the closest object it intersects is what will be seen in the reflection. Refraction rays traveling through
transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material.
To further avoid tracing all rays in a scene, a shadow ray is used to test if a surface is visible to a light. A ray
hits a surface at some point. If the surface at this point faces a light, a ray (to the computer, a line segment) is
traced between this intersection point and the light. If any opaque object is found in between the surface and
the light, the surface is in shadow and so the light does not contribute to its shade. This new layer of ray
calculation added more realism to ray traced images.
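
A sketch of the Whitted-style recursion described above, in Python: local shading plus a shadow ray and a recursively traced reflection ray, capped at a maximum depth. Refraction rays are omitted for brevity, and every name and scene value is invented for illustration.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add_scaled(a, s, b): return [x + s * y for x, y in zip(a, b)]

def hit_sphere(center, radius, origin, direction):
    """Nearest positive t for the ray against a sphere, or None on a miss."""
    v = sub(origin, center)
    b = dot(v, direction)
    disc = b * b - (dot(v, v) - radius * radius)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-6 else None            # epsilon avoids self-intersection

def trace(origin, direction, spheres, light_pos, depth):
    """Whitted-style recursion: local shading, a shadow ray, a reflection ray."""
    hits = [(hit_sphere(c, r, origin, direction), (c, r, col, refl))
            for (c, r, col, refl) in spheres]
    hits = [(t, s) for t, s in hits if t is not None]
    if not hits:
        return [0.0, 0.0, 0.0]                # background: black
    t, (c, r, color, refl) = min(hits)
    p = add_scaled(origin, t, direction)      # intersection point
    n = [(pi - ci) / r for pi, ci in zip(p, c)]  # outward unit normal
    to_light = sub(light_pos, p)
    dist = math.sqrt(dot(to_light, to_light))
    l = [x / dist for x in to_light]
    # Shadow ray: is any opaque object between the point and the light?
    shadowed = any((ts := hit_sphere(c2, r2, p, l)) is not None and ts < dist
                   for (c2, r2, _, _) in spheres)
    lam = 0.0 if shadowed else max(0.0, dot(n, l))
    local = [lam * ci for ci in color]
    if depth == 0 or refl == 0.0:
        return local
    # Reflection ray: mirror direction d - 2(n.d)n, traced recursively.
    rd = add_scaled(direction, -2.0 * dot(n, direction), n)
    bounced = trace(p, rd, spheres, light_pos, depth - 1)
    return [lo + refl * bo for lo, bo in zip(local, bounced)]

# Two spheres, one partly mirrored; up to 16 bounces as in the caption above.
scene = [((0, 0, -4), 1.0, (1.0, 0.2, 0.2), 0.5),
         ((2, 0, -5), 1.0, (0.2, 0.2, 1.0), 0.0)]
print(trace((0, 0, 0), (0, 0, -1), scene, (5, 5, 0), depth=16))
```
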
Advantages over other rendering methods
Ray tracing's popularity stems from its basis in a more realistic simulation of lighting than other rendering methods
(such as scanline rendering or ray casting) can offer. Effects such as reflections and shadows, which are difficult to
simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement
yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. The
computational independence of each ray makes ray tracing amenable to parallelization, as sketched below.[4]
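
As a minimal illustration of that independence, the Python sketch below farms out scanlines to worker processes; render_row is a stand-in for a real per-row trace loop, and all names here are hypothetical.

```python
from multiprocessing import Pool

WIDTH, HEIGHT = 64, 48

def render_row(row):
    """Render one scanline; rays share no state, so rows can run in any order."""
    # Stand-in computation: a real tracer would cast WIDTH rays here.
    return [(row * 31 + col * 17) % 256 for col in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:                  # one worker per CPU core by default
        image = pool.map(render_row, range(HEIGHT))
    print(len(image), "rows rendered in parallel")
```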

Disadvantages

A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data
coherence to share computations between pixels, while ray tracing normally starts the process anew, treating
each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more
rays as needed to perform anti-aliasing and improve image quality where needed. Although it does handle
interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily
photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully
implemented. Implementing the rendering equation gives true photorealism, as the equation describes every
physical effect of light flow. However, this is usually infeasible given the computing resources required. The
realism of all rendering methods, then, must be evaluated as an approximation to the equation, and in the case
of ray tracing, it is not necessarily the most realistic. Other methods, including photon mapping, are based upon
ray tracing for certain parts of the algorithm, yet give far better results.

Reversed direction of traversal of scene by the rays


The process of shooting rays from the eye to the light source to render an image is sometimes
called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is
confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such
as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the
results. Therefore it is clearer to distinguish eye-based versus light-based ray tracing.

While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can
benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a
wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from
lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration
of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced
from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.[5][6]

Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass,
energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a
function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced
from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the
illumination at the visible surface points.[7][8] The advantage of photon mapping versus bidirectional path tracing
is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene
(consider a darkened room, with a door slightly ajar leading to a brightly-lit room), or a scene in which most
points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or
torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a
method which begins with a random search of the path space, and when energetic paths are found, reuses this
information by exploring the nearby space of rays.[9]

To the right is an image showing a simple example of a path of rays recursively generated from the camera (or
eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.

First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse
surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the
scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the
scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the
first and second diffuse surface and the color of the light emitted from the light source. For example, if the light
source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
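
The color combination in this example is just a component-wise product of the light's color with each surface color met along the path. A tiny illustration (RGB components in [0, 1]; the helper name is invented):

```python
def attenuate(light_rgb, *surface_rgbs):
    """Multiply the light's color by each surface color met along the path."""
    r, g, b = light_rgb
    for sr, sg, sb in surface_rgbs:
        r, g, b = r * sr, g * sg, b * sb
    return (r, g, b)

white = (1.0, 1.0, 1.0)
blue = (0.0, 0.0, 1.0)
print(attenuate(white, blue, blue))   # (0.0, 0.0, 1.0): the pixel stays blue
```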

In real time

The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics
conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system.
Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel
network distributed ray-tracing system that achieved several frames per second in rendering performance.[10]
This performance was attained by means of the highly optimized yet platform-independent LIBRT ray-tracing
engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines
over a commodity network. BRL-CAD's ray-tracer, including the REMRT/RT tools, continues to be available and
developed today as open source software.[11]

Since then, there has been considerable effort and research towards implementing ray tracing at real-time
speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D
graphics applications such as demoscene productions, computer and video games, and image rendering.
Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo
programmers since the late 1990s.[12]

The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in
order to offer an alternative to the current rasterisation-based approach for interactive 3D graphics. Ray tracing
hardware, such as the experimental Ray Processing Unit developed at Saarland University, has been
designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the
University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed
computer games to be rendered via ray tracing without intensive resource usage.[13]

On June 12, 2008, Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars:
Ray Traced, using ray tracing for rendering, running at basic HD (720p) resolution. ETQW operated at 14-29
frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Tigerton system running at
2.93 GHz.[14]

At SIGGRAPH 2009, Nvidia announced OptiX, an API for real-time ray tracing on Nvidia GPUs. The API
exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras,
ray-primitive intersections, shaders, shadowing, etc.[15]

Example

As a demonstration of the principles involved in ray tracing, let us consider how one would find the intersection
between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$$\lVert \mathbf{x} - \mathbf{c} \rVert^2 = r^2.$$

Any point on a ray starting from point $\mathbf{s}$ with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$$\mathbf{x} = \mathbf{s} + t\mathbf{d},$$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ (e.g. the position of a
light source) and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$$\lVert \mathbf{s} + t\mathbf{d} - \mathbf{c} \rVert^2 = r^2.$$

Let $\mathbf{v} = \mathbf{s} - \mathbf{c}$ for simplicity; then

$$t^2 (\mathbf{d} \cdot \mathbf{d}) + 2t (\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v}) - r^2 = 0.$$

Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$$t^2 + 2t (\mathbf{v} \cdot \mathbf{d}) + \lVert \mathbf{v} \rVert^2 - r^2 = 0.$$

This quadratic equation has solutions

$$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - (\lVert \mathbf{v} \rVert^2 - r^2)}.$$

The two values of $t$ found by solving this equation are the two such that $\mathbf{s} + t\mathbf{d}$ are the points where the
ray intersects the sphere.

If one (or both) of them is negative, then the intersections do not lie on the ray but on the opposite half-line
(i.e. the one starting from $\mathbf{s}$ with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least one positive solution, and let $t$ be the minimal one. In addition, let us
suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective
material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle
of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the
sphere.

The normal to the sphere is simply

$$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\lVert \mathbf{y} - \mathbf{c} \rVert},$$

where $\mathbf{y} = \mathbf{s} + t\mathbf{d}$ is the intersection point found before. The reflection direction can be found by
a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$$\mathbf{r} = \mathbf{d} - 2 (\mathbf{n} \cdot \mathbf{d})\, \mathbf{n}.$$

Thus the reflected ray has equation

$$\mathbf{x} = \mathbf{y} + u\, \mathbf{r}.$$

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our
reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the
original light source and the one of the sphere are combined by the reflection.

This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of
the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this
demonstrates an example of the algorithms used.
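
The derivation above translates almost line for line into code. Below is a sketch in Python; the helper names are invented, and the formulas are exactly the quadratic, normal, and reflection expressions just derived.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(s, d, c, r):
    """Solve t^2 + 2t(v.d) + |v|^2 - r^2 = 0 for the ray x = s + t*d, |d| = 1.

    Returns the two roots (t1 <= t2), or None if the discriminant is negative."""
    v = [si - ci for si, ci in zip(s, c)]      # v = s - c
    b = dot(v, d)                              # v . d
    disc = b * b - (dot(v, v) - r * r)         # quantity under the square root
    if disc < 0:
        return None                            # ray misses the sphere
    root = math.sqrt(disc)
    return -b - root, -b + root

def reflect(d, n):
    """Mirror direction r = d - 2(n.d)n for unit incident d and unit normal n."""
    k = 2.0 * dot(n, d)
    return [di - k * ni for di, ni in zip(d, n)]

s, d = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)       # ray start and unit direction
c, r = (0.0, 0.5, -5.0), 1.0                   # sphere center and radius
roots = ray_sphere(s, d, c, r)
if roots and roots[0] > 0:
    t = roots[0]                               # minimal positive solution
    y = [si + t * di for si, di in zip(s, d)]  # intersection point y = s + t*d
    n = [(yi - ci) / r for yi, ci in zip(y, c)]  # unit normal (y - c) / r
    print("hit at", y, "reflected direction", reflect(d, n))
else:
    print("no intersection on the ray")
```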
