Article · July 2008


ADOBE TECHNICAL REPORT

Light Field Camera Design for Integral View Photography

Todor Georgiev and Chintan Intwala
Adobe Systems Incorporated
345 Park Ave, San Jose, CA 95110
tgeorgie@[Link]

Figure 1: Integral view of a seagull

Abstract

This paper introduces the matrix formalism of optics as a useful approach to the area of "light fields". It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed.
Table of Contents

Abstract
1. Introduction
   1.1 Radiance and Phase Space
   1.2 Structure of this paper
2. Linear and Affine Optics
   2.1. Ray transfer matrices
   2.2. Affine optics: Shifts and Prisms
3. Light field conservation
4. Building blocks of our optical system
   4.1. "Camera"
   4.2. "Eyepiece"
   4.3. Combining eyepieces
5. The art of light field camera design
   5.1. Integral view photography
   5.2. Camera designs
6. Results from our light field cameras
Conclusion
References

1. Introduction

Linear (Gaussian) optics can be defined as the use of matrix methods from linear algebra in geometrical optics. Fundamentally, this area was developed (without the matrix notation) back in the 19th century by great minds like Gauss and Hamilton. Matrix methods became popular in optics during the 1950s, and are widely used today [1], [2]. In those old methods we recognize our new friend, the light field.

We show that a slight extension of the above ideas to what we call affine optics, and then a transfer into the area of computer graphics, produces new and very useful practical results. Applications of the theory to designing "integral" or "light field" cameras are demonstrated.

1.1 Radiance and Phase Space

The radiance density function (or "light field", as it is often called) describes all light rays in space, each ray defined by 4 coordinates [3]. We use a slightly modified version of the popular "two-plane parameterization", which describes each ray by its intersection point with a predefined plane and the two angles / directions of intersection. Thus, a ray is represented by space coordinates $q_1, q_2$ and direction coordinates $p_1, p_2$, which together span the phase space of optics (see Figure 2). In other words, at a given transversal plane in our optical system, a ray is defined by a 4D vector $(q_1, q_2, p_1, p_2)$, which we will call the light field vector.

Figure 2: A ray intersecting a plane perpendicular to the optical axis. Directions (angles) of intersection are defined as derivatives of $q_1$ and $q_2$ with respect to $t$.

These coordinates are very similar to the traditional $(s, t, u, v)$ coordinates used in the light field literature. Only, in our formalism a certain analogy with Hamiltonian mechanics is made explicit. Our variables $q$ and $p$ play the same roles as the coordinate and momentum in Hamiltonian mechanics. In more detail, it can be shown that all admissible transformations of the light field preserve the so-called symplectic form

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$$

the same as in the case of canonical transforms in mechanics [4].

In other words, the phase space of mechanics and "light field space" have the same symplectic structure. For the light field one can derive the volume conservation law (Liouville's theorem) and other invariants of mechanics [4]. This observation is new to the area of light fields. Transformations of the light field in an optical system play a role analogous to canonical transforms in mechanics.

1.2 Structure of this paper

Section 2 shows that: (1) a thin lens transforms the light field linearly, by the appropriate ray transfer matrix; (2) light traveling a certain distance in space is also described by a linear transformation (a shear), as first pointed out in a paper [5] by Durand et al.; (3) shifting a lens off the optical axis, or inserting a prism, is described by the same affine transform. This extends linear optics into what we call affine optics. These transformations will be central to future "light field" image processing, which is coming to replace traditional image processing.

Section 3 shows that the transformation of the light field in any optical device, like a telescope or microscope, has to preserve the integral of the light field density. Any such transformation can be constructed as a product of only the above two types of matrices, and this is the most general linear transform for the light field.

Section 4 defines a set of optical devices based on the above three transforms. Those optical devices do everything possible in affine optics, and they will be used as building blocks for our integral view cameras. The idea is that since those building blocks are the most general, everything that is possible in optics could be done using only those simple blocks.

Section 5 describes the main goal of Integral View Photography, and introduces several camera designs from the perspective of our theory. Three of those designs are new. Section 6 shows some of our results.


2. Linear and Affine Optics

This section introduces the simplest basic transforms of the light field. They may be viewed as the geometric primitives of image processing in light space, similar to rotate and resize in traditional imaging in the plane.

2.1. Ray transfer matrices

(1) Light field transformation by a lens. Just before the lens the light field vector is $(q, p)$; just after the lens it is $(q', p')$. The lens does not shift the ray, so $q' = q$. Also, the transform is linear. The most general matrix representing this type of transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix}. \qquad (1)$$

Figure 3: A lens transform of the light field.

Note: as a matter of notation, $a = -1/f$, where $f$ is called the focal length of the lens. A positive focal length produces a negative increment to the angle; see Figure 3.

(2) Light field before and after traveling a distance $T$ ($T$ stands for "travel", as in [5], which first introduced this "shear" transform of the light field traveling in space). The linear transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix}, \qquad (2)$$

where the bottom-left matrix element 0 specifies that there is no change in the angle $p$ when a light ray travels through space. Also, a positive angle $p$ produces a positive change in $q$, proportional to the distance traveled $T$.

Figure 4: Space transfer of light.

2.2. Affine optics: Shifts and Prisms

In this paper we need to slightly extend traditional linear optics into what we call affine optics. This is done by using (in the optical system) additive elements together with the above matrices.

Our motivation is that all known light field cameras and related systems have some sort of lens array, where individual lenses are shifted from the main optical axis. This includes Integral Photography [6], the Hartmann-Shack sensor [7], Adelson's plenoptic camera [8], 3D TV systems [9], [10], the light field related work [3], and the camera of Ng [11]. We were not able to find our current theoretical approach anywhere in the literature.

One such "additive" element is the prism. By definition it tilts each ray by adding a fixed angle of deviation $\alpha$. Expressed in terms of the light field vector, the prism transform is

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} q \\ p \end{pmatrix} + \begin{pmatrix} 0 \\ \alpha \end{pmatrix}. \qquad (3)$$

One interesting observation is that the same transform, in combination with lens refraction, can be achieved by simply shifting the lens from the optical axis. If the shift is $s$, formula (1) for lens refraction is modified as follows: convert to lens-centered coordinates by subtracting $s$, apply the linear lens transform, and convert back to the original coordinates by adding $s$:

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \begin{pmatrix} q - s \\ p \end{pmatrix} + \begin{pmatrix} s \\ 0 \end{pmatrix}, \qquad (4)$$

which is simply

$$\begin{pmatrix} q' \\ p' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \begin{pmatrix} q \\ p \end{pmatrix} + \begin{pmatrix} 0 \\ \frac{s}{f} \end{pmatrix}. \qquad (5)$$

Final result: "shifted lens = lens + prism". This idea will be used later in section 5.2.

Figure 5 illustrates the above result by showing how one can build a prism with variable angle of deviation $\alpha = s/f$ from two lenses of focal lengths $f$ and $-f$, shifted by a variable distance $s$ from one another.

Figure 5: A variable angle prism.

3. Light field conservation

The light field (radiance) density is constant along each ray. The integral of this density over any volume in 4D phase space (light field space) is preserved during the transformations in any optical device. This is a general fact that follows from the physics of refraction, and it has a nice formal representation in symplectic geometry (see [12]).

In our 2D representation of the light field this fact is equivalent to area conservation in $(q, p)$-space, which will be shown next.

Consider two rays, $(q_1, p_1)$ and $(q_2, p_2)$. After the transform in an optical system, the rays will be different. The signed area between those rays in light space (the space of rays) is defined by their cross product. In our matrix formalism the cross product expression for the area is

$$q_1 p_2 - q_2 p_1 = (q_1 \;\; p_1) \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} q_2 \\ p_2 \end{pmatrix}. \qquad (6)$$

After transformation in the optical device represented by matrix $M$, the area between the new rays will be

$$(q_1 \;\; p_1) \, M^T \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} M \begin{pmatrix} q_2 \\ p_2 \end{pmatrix}, \qquad (7)$$

where $M^T$ is the transpose of $M$. The condition for expressions (6) and (7) to be equal for any pair of rays is

$$M^T \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} M = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \qquad (8)$$

This is the condition for area conservation. In the general case, a similar expression describes 4D volume conservation for the light field. The reader can check that (1) the matrix of a lens and (2) the matrix of a light ray traveling a distance $T$, discussed above, both satisfy this condition.

Further, any optical system has to satisfy it, as a product of such transforms. It can be shown [12] that any linear transform satisfying (8) can be written as a product of matrices of types (1) and (2).

The last step of this section is to make use of the fact that, since the light field density for each ray is the same before and after the transform, the sum of all those densities times the infinitesimal area for each pair of rays must be constant. In other words, the integral of the light field over a given area (volume) in light space is conserved during transforms in any optical device.
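The primitives above lend themselves to direct numeric verification. The following is a minimal sketch (numpy assumed; the helper names `lens`, `travel`, `prism` and `shifted_lens` are ours, not the paper's) checking the "shifted lens = lens + prism" identity (5) and the area conservation condition (8):

```python
import numpy as np

# Primitive light field transforms in the 2D (q, p) formalism of section 2.
# lens(f): equation (1) with a = -1/f; travel(t): equation (2).
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def travel(t):
    return np.array([[1.0, t], [0.0, 1.0]])

def prism(alpha, ray):
    """Equation (3): add a fixed angle of deviation to the ray."""
    return ray + np.array([0.0, alpha])

def shifted_lens(f, s, ray):
    """Equation (4): refraction by a lens shifted by s off the optical axis."""
    off = np.array([s, 0.0])
    return lens(f) @ (ray - off) + off

f, s = 50.0, 3.0
ray = np.array([1.2, 0.02])          # (q, p): position and angle

# Equation (5): "shifted lens = lens + prism" with deviation alpha = s/f.
assert np.allclose(shifted_lens(f, s, ray), prism(s / f, lens(f) @ ray))

# Equation (8): lens, travel, and any product of them preserve the
# symplectic form J, i.e. M.T @ J @ M == J (area conservation).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
for M in (lens(f), travel(10.0), travel(5.0) @ lens(f) @ travel(20.0)):
    assert np.allclose(M.T @ J @ M, J)

print("equations (5) and (8) verified")
```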


4. Building blocks of our optical system

We are looking for simple building blocks for optical systems (light field cameras) that are as general as possible. In other words, they should be easy to understand in terms of the mathematical transformations that they perform, and at the same time they should be general enough that they do not exclude useful optical transforms.

According to the previous section, in the space of affine optical transforms everything can be achieved as products of the matrices of equations (1) and (2), together with prisms. However, those are not simple enough, which is why we define other building blocks as follows.

4.1. "Camera"

This is not the conventional camera, but is closely related to it, by the addition of a field lens. With this lens the camera transform becomes simple:

$$M = \begin{pmatrix} m & 0 \\ 0 & \frac{1}{m} \end{pmatrix}. \qquad (9)$$

First, light travels a distance $a$ from the object to the objective lens. This is described by a transfer matrix $M_a$. Then it is refracted by the objective lens of focal length $f$, represented by transfer matrix $M_f$. In the end it travels to the image plane a distance $b$, represented by $M_b$. The full transform, found by multiplication of those three matrices, is:

$$M_b M_f M_a = \begin{pmatrix} 1 - \frac{b}{f} & a - \frac{ab}{f} + b \\ -\frac{1}{f} & 1 - \frac{a}{f} \end{pmatrix}. \qquad (10)$$

The condition for focusing on the image plane is that the top-right element of this matrix is 0, which is equivalent to the familiar lens equation:

$$\frac{1}{a} + \frac{1}{b} = \frac{1}{f}. \qquad (11)$$

Using (11), our camera transfer matrix can be converted into a simpler form:

$$\begin{pmatrix} -\frac{b}{a} & 0 \\ -\frac{1}{f} & -\frac{a}{b} \end{pmatrix}. \qquad (12)$$

We also make the bottom-left element 0 by inserting a so-called "field lens" (of focal length $F = \frac{bf}{a}$) just before the image plane:

$$\begin{pmatrix} 1 & 0 \\ -\frac{1}{F} & 1 \end{pmatrix} \begin{pmatrix} -\frac{b}{a} & 0 \\ -\frac{1}{f} & -\frac{a}{b} \end{pmatrix} = \begin{pmatrix} -\frac{b}{a} & 0 \\ 0 & -\frac{a}{b} \end{pmatrix}. \qquad (13)$$

This matrix is diagonal, which is the simple final form we wanted to achieve. It obviously satisfies our area conservation condition, which the reader can easily verify. The parameter $m = -\frac{b}{a}$ is called "magnification" and is a negative number. (Cameras produce inverted images.)

4.2. "Eyepiece"

This element has been used as an eyepiece (ocular) in optics, which is why we give it that name. It is made up of two space translations and a lens. First, light rays travel a distance $f$, then they are refracted by a lens of focal length $f$, and in the end they travel a distance $f$. The result is:

$$\begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix} \begin{pmatrix} 1 & f \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix}. \qquad (14)$$

This is an "inverse diagonal" (anti-diagonal) matrix, which satisfies area conservation. It will be used in section 5.2 for switching between $q$ and $p$ in a light field camera.

4.3. Combining eyepieces

Inversion: two eyepieces together produce a "camera" with magnification $-1$:

$$\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (15)$$

An eyepiece before and after a variable space $T$, and then inversion, produces a lens of variable focal length $F = \frac{f^2}{T}$:

$$-\begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix} \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & f \\ -\frac{1}{f} & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\frac{T}{f^2} & 1 \end{pmatrix}. \qquad (16)$$

By symmetry, the same combination of eyepieces with a lens produces a space translation $T = \frac{f^2}{F}$ without using up real space! Devices corresponding to the above matrices (9), (14), (15), (16), together with shifts and prisms, are the elements that can be used as "building blocks" for our light field cameras.

Those operators are also useful as primitives for future optical image processing in software. They are the building blocks of the main transforms, corresponding to geometric transforms like Resize and Rotate in current image processing.
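As a quick numeric check of these building blocks, the following minimal sketch (numpy assumed; the helper names `lens` and `travel` are ours) verifies equations (10) through (16):

```python
import numpy as np

# Compose the "camera" of section 4.1 and the "eyepiece" of section 4.2
# from the travel and lens primitives of section 2.
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def travel(t):
    return np.array([[1.0, t], [0.0, 1.0]])

f, a = 35.0, 100.0
b = 1.0 / (1.0 / f - 1.0 / a)         # lens equation (11)

# Equation (10): object travel, refraction, travel to the image plane.
M = travel(b) @ lens(f) @ travel(a)
assert np.isclose(M[0, 1], 0.0)       # in focus: top-right element is 0

# Equation (13): a field lens of focal length F = b*f/a at the image
# plane makes the camera matrix diagonal, diag(m, 1/m) with m = -b/a.
m = -b / a
assert np.allclose(lens(b * f / a) @ M, np.diag([m, 1.0 / m]))

# Equation (14): the eyepiece (travel, lens, travel) is anti-diagonal,
# swapping (up to scale) position q and direction p.
E = travel(f) @ lens(f) @ travel(f)
assert np.allclose(E, np.array([[0.0, f], [-1.0 / f, 0.0]]))

# Equation (15): two eyepieces give a camera with magnification -1.
assert np.allclose(E @ E, -np.eye(2))

# Equation (16): eyepiece, travel T, eyepiece (negated) acts as a lens
# of focal length f**2 / T.
T = 12.0
assert np.allclose(-E @ travel(T) @ E, lens(f * f / T))

print("equations (10)-(16) verified")
```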

5. The art of light field camera design

5.1. Integral view photography

We define Integral View Photography as a generalization of several related areas of research. These include Integral Photography [6], [9] and related work, Adelson's "plenoptic" camera [8], a number of 3D TV systems ([10] and others), and the "light field" camera of Ng et al. [11].


In our approach we see conventional cameras as "integration devices" which integrate the optical field over all points of the aperture into the final image. This is already an integral view camera. It achieves effects like refocusing onto different planes and changing depth of field, commonly used by photographers. The idea of Integral View Photography is to capture some representation of that same optical field and be able to integrate it afterwards, in software. In this way the captured "light field", "plenoptic" or "holographic" image potentially contains the full optical information, and much greater flexibility can be achieved:

(1) Integration is done in software, and not mechanically.

(2) Instead of fixing all parameters in real time, the photographer can relax while taking the picture and defer focusing, and integration in general, to post-processing in the darkroom. Currently only color and lightness are handled in post-processing (in Aperture and Lightroom).

(3) Different methods of integrating the views can be applied or mixed together to achieve much more than what is possible with a conventional camera. Examples include focusing on a surface, "all in focus", and others.

(4) Also, more power is gained in image processing because now we have access to the full 3D information about the scene. Difficult tasks like refocusing become amazingly easy. We expect tasks like deblurring, object extraction, painting on 3D surfaces, relighting and many others to become much easier, too.

5.2. Camera designs

We are given the 4D light field (radiance density function), and we want to sample it into a discrete representation with a 2D image sensor. The approach taken is to represent this 4D density as a 2D array of images. Different perspectives on the problem are possible, but for the current paper we choose to discuss it in the following framework. Traditional integral photography uses an array of cameras focused on the same plane, so that each point on that plane is imaged as one pixel in each camera. These pixels represent different rays passing at different angles through that same point. In this way the angular dimensions are sampled. Of course, the image itself samples the spatial dimensions, so we have a 2D array of 2D arrays.

Figure 6: An array of cameras used in integral photography for capturing the light field.

The idea of compact light field camera design is to put all the optics and electronics into one single device. We want to make different parts of the main camera lens active separately, in the sense that their input is registered independently (but on the same sensor!), as if coming from different cameras. This makes the design compact and cheap to manufacture.

First design:

Consider formula (5). With this in mind, Figure 6, in which each lens is shifted from the optical axis, would be equivalent to adding prisms to a single main lens; see Figure 7. This optical device would be cheaper to manufacture because it is made up of one lens and multiple prisms, instead of multiple lenses. Also, it is more convenient for the photographer to use the common controls of one single lens, while effectively working with a big array of lenses.

Figure 7: Array of prisms design.

We believe this design is new. Based on what we call affine optics (formula (5)), it can be considered a reformulation of the traditional "multiple cameras" design of integral photography.
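The first design can be simulated with the same matrices. The sketch below (numpy assumed; the helper names are ours) traces rays from one object point through the main lens and prisms of different deviations, and checks that each prism forms a sharp image of the point, displaced by $b\alpha$, i.e. one view per prism on the same sensor:

```python
import numpy as np

# Trace (q, p) rays through: travel a, main lens f, prism alpha, travel b.
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def travel(t):
    return np.array([[1.0, t], [0.0, 1.0]])

f, a = 50.0, 200.0
b = 1.0 / (1.0 / f - 1.0 / a)         # focusing distance, equation (11)
q0 = 2.0                              # object point off the axis

for alpha in (-0.02, 0.0, 0.02):      # three prisms of the array
    hits = []
    for p in (-0.01, 0.0, 0.01):      # several rays leaving the object point
        ray = travel(a) @ np.array([q0, p])     # reach the main lens
        ray = lens(f) @ ray                     # refract
        ray = ray + np.array([0.0, alpha])      # prism, equation (3)
        ray = travel(b) @ ray                   # reach the sensor
        hits.append(ray[0])
    # All rays focus at the same sensor point, displaced by b*alpha from
    # the ordinary image position -b/a * q0.
    assert np.allclose(hits, -b / a * q0 + b * alpha)

print("each prism produces a sharp, shifted view")
```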


In traditional cameras all rays from a faraway point are focused into the same single point on the sensor. This is represented in Figure 7, where all rays coming from the lens are focused into one point. We want to split rays coming from different areas of the main lens. This is equivalent to a simple change of angle, so it can be done with prisms of different angles of deviation placed next to the main lens, at the aperture.

Second design:

This approach was invented by Adelson and Wang [8], and recently used by Ng et al. [11]. We would like to propose an interesting interpretation of their design. It is a traditional camera where each pixel is replaced by an eyepiece (E) with a matrix of type $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ and a sensor (CCD matrix) behind it. The role of the eyepiece is to switch between coordinate and momentum (position and direction) in optical phase space (the light field). As a result, different directions of rays at a given eyepiece are recorded as different pixels on the sensor of that eyepiece. Rays coming from each area of the main lens go into different pixels at a given eyepiece. See Figure 8, where we have dropped the field lens for clarity (but it should be there, at the focal plane of the main camera lens, for the theory to be exact). In other words, this is the optical device "camera" of section 4.1, followed by an array of eyepieces (section 4.2).

Figure 8: Array of eyepieces generating multiple views.

Figure 9 shows two sets of rays and their paths in a simplified version of the system, without the field lens.

Figure 9: More detail about Figure 8.

Our next step will be to generalize designs (1) and (2) by building cameras equivalent to them in the optical sense. Using formula (5), we can replace the array of prisms with an array of lenses; see Figure 10. We get the same shift up in angle as with prisms. The total inverse focal length will be the sum of the inverse focal lengths of the main lens and the individual lenses.

Figure 10: Lenses instead of prisms in Figure 7.

A very interesting approach would be to make the array of lenses or prisms external to the camera: with positive lenses we get an array of real images, which are captured by a camera focused on them. See Figure 11.

Figure 11: Multiple lenses creating real images.
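The role of the eyepiece in the second design, switching position and direction, can be illustrated numerically. A minimal sketch (numpy assumed; the naming is ours): rays arriving at one point of the image plane from different directions land on different sensor pixels.

```python
import numpy as np

# Eyepiece matrix of equation (14): anti-diagonal, swaps q and p up to
# a scale factor of f_e.
f_e = 0.5                                         # eyepiece focal length
E = np.array([[0.0, f_e], [-1.0 / f_e, 0.0]])

q = 0.0                                           # one point in the image plane
for p in (-0.1, 0.0, 0.1):                        # rays from different lens areas
    sensor = E @ np.array([q, p])
    # Sensor position is f_e * p: ray direction is recorded as position.
    assert np.isclose(sensor[0], f_e * p)

print("eyepiece maps ray direction to sensor position")
```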
With negative lenses we get virtual images on the other side of the main lens. Rays are shifted down in Figure 12, but virtual images are shifted up. This design is not possible as internal to the camera, because the images are virtual. We believe it is new.

If all negative lenses had the same focal length as the main lens (but of opposite sign), we would get a device equivalent to an array of prisms; see Figure 5. This works perfectly well, but with a large number of lenses the field of view is too small. In order to increase it, we need the focal length of the lenses in the array to be small.

Another problem is the big main lens, which is heavy and expensive. The whole device can be replaced with an array of lens-prism pairs, shown in Figure 13. This is another new design. A picture of this array of 19 negative lenses and 18 prisms is shown in Figure 14.

Figure 14: Picture of our hexagonal array of lenses and prisms.
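The negative-lens equivalence can be checked with the affine formalism of section 2.2. A minimal sketch (numpy assumed; the helper names are ours) shows that a main lens of focal length $f$ followed by a lens of focal length $-f$ shifted by $s$ reduces to a pure prism (the sign of the deviation depends on which element carries the shift):

```python
import numpy as np

# Lens matrix of equation (1), and the off-axis lens of equation (4).
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def shifted_lens(f, s, ray):
    off = np.array([s, 0.0])
    return lens(f) @ (ray - off) + off

f, s = 40.0, 2.0
ray = np.array([0.7, 0.015])

# Main lens f on axis, then lens -f shifted by s.
out = shifted_lens(-f, s, lens(f) @ ray)

# The lens powers cancel; only a fixed angular deviation -s/f remains,
# i.e. the pair acts as a prism (equation (3)) of variable angle.
assert np.allclose(out, ray + np.array([0.0, -s / f]))

print("lens pair acts as a prism of deviation", -s / f)
```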

Most of our results are produced with a camera with the design of Figure 12, with an array of 20 lenses cut into squares so they can be packed together with the least loss of pixels. A picture of our camera is shown in Figure 15. One of the datasets obtained with it is shown reduced in Figure 16.

Figure 12: Multiple lenses creating virtual images.

Figure 15: Working model of the camera in Figure 12, with 2 positive lenses and an array of 20 negative lenses.

Figure 13: Lenses and prisms.

6. Results from our light field cameras

In terms of results, in this paper we chose to make our only goal showing that our cameras work.
Making use of the advantages of the light field in image processing, and achieving all the improvements and amazing effects, is too broad a task and is deferred to future work.

The only effect we are going to demonstrate is refocusing. It creates the sense of 3D and clearly shows the power of our new camera. Also, in this way we have a chance to compare our new results against the current state of the art in the field, the camera of Ng et al. [11].

With the square lens array we are able to obtain 20 images from 20 different viewpoints; see Figure 16. First, one image is chosen as a base image with which all the rest of the images are registered. Only a certain region of interest (ROI) in the base image is used for matching with the rest of the images. An example ROI is shown in Figure 17. This registration process yields a 2D shift $S_i = (dx_i, dy_i)$ for an image $I_i$ that will shift the region of interest from $I_i$ to align it with the base image. Since for registration we consider only the region of interest, we compute a normalized cross-correlation coefficient $c_i(x, y)$ between the appropriate channel of the ROI and $I_i$ at every point of $I_i$. The location that yields the maximum value of the coefficient $c_i(x, y)$ is used to compute the shift. This yields an array of shift vectors $(dx_1, dy_1), (dx_2, dy_2), \ldots, (dx_{19}, dy_{19})$ corresponding to each image $I_i$, for a given ROI.

Figure 16: A set of 20 images obtained with our camera.

In order to obtain an image like the one in Figure 18, we simply blend the 20 shifted images equally. Because objects at equal depth have equal shifts, this produces an image with objects in focus that lie in the same depth plane as the objects in the ROI. Figures 18 and 20 show the mixture of appropriately shifted images. Note that the edges of those images are left unprocessed in order to show the shifts more clearly. Now, in order to obtain various intermediate depth planes, like in Figure 19, we need two sets of shifts: one for the foreground, $S_f$, and one for the background, $S_b$. As a result, an intermediate depth plane is obtained by linear interpolation

$$S_D = S_f + D \cdot (S_b - S_f), \qquad (17)$$

where the depth $D$ is between 0 and 1.

Another example of refocusing after taking the picture is demonstrated in our seagull images. A photographer wouldn't have time to refocus on birds while they are flying, but with integral view photography this can be done later in the darkroom. Figure 21 is obtained for $D = 0$, Figure 22 for $D = 0.65$, and Figure 23 for $D = 1$.

As seen from the results, the parallax among the 20 images and the overall range of depths in the scene are responsible for the believability of the resulting images. The larger the parallax, the more artifacts are produced in the final image. For example, one can compare the two images focused on the foreground: Figure 18 and Figure 21. The problem in the latter case can be solved by generating images from virtual intermediate viewpoints using view morphing or other vision algorithms.

Using a different camera design (not described in this paper), which is able to capture 100 images corresponding to 100 slightly shifted viewpoints, we were able to refocus much more precisely and with minimal artifacts. Refer to Figure 24 and Figure 25 for the obtained results. Notice that the scene has a large variation in depth (comparable to Figure 21), and yet the results are much smoother.

Figure 26 shows one example of refocusing on the foreground using 3-view morphing, where 144 images have been generated from the original 20. Artifacts are greatly reduced, and there is no limit to the number of views (and level of improvement) that can be achieved with this method.
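The shift-and-add refocusing described above can be sketched in a few lines (numpy assumed; the function name `refocus` and the toy data are ours, not the paper's pipeline). The shifts $S_f$ and $S_b$ would come from the cross-correlation registration; here they are simply given:

```python
import numpy as np

# Blend views displaced by the interpolated shift of equation (17).
def refocus(views, S_f, S_b, D):
    out = np.zeros_like(views[0], dtype=float)
    for img, sf, sb in zip(views, S_f, S_b):
        sf, sb = np.asarray(sf, float), np.asarray(sb, float)
        dy, dx = np.round(sf + D * (sb - sf)).astype(int)  # S_D, eq. (17)
        out += np.roll(img, (dy, dx), axis=(0, 1))         # shift this view
    return out / len(views)                                # blend equally

# Two tiny synthetic views: the foreground point moves 2 px between the
# views, while the background point does not move at all.
v0 = np.zeros((8, 8)); v0[2, 2] = 1.0; v0[6, 6] = 1.0
v1 = np.zeros((8, 8)); v1[2, 4] = 1.0; v1[6, 6] = 1.0
views = [v0, v1]
S_f = [(0, 0), (0, -2)]   # shifts aligning the foreground point
S_b = [(0, 0), (0, 0)]    # background is already aligned

assert refocus(views, S_f, S_b, D=0.0)[2, 2] == 1.0  # foreground in focus
assert refocus(views, S_f, S_b, D=1.0)[6, 6] == 1.0  # background in focus
```

Objects away from the chosen depth plane do not stack and stay blurred, which is exactly the refocusing effect shown in Figures 18-23.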


Figure 17: Rodin's Burghers (region of interest).

Figure 18: Rodin's Burghers at depth 0, closest to camera.

Figure 19: Rodin's Burghers at depth 0.125.

Figure 20: Rodin's Burghers at depth 1.


Figure 21: Seagulls at depth 0.

Figure 22: Seagulls at depth 0.65.

Figure 23: Seagulls at depth 1.

Figure 24: Our 100-view light field focused at depth 0.

Figure 25: Our 100-image light field focused at depth 1.

Figure 26: Our 20-image light field focused at the head of Scott. Result obtained from 144 synthetic images generated using view morphing.


Conclusion

In this paper we used a new approach to the light field, based on linear and affine optics. We constructed simple transforms which can be used as building blocks of integral view cameras. Using this framework, we were able to construct the main types of existing light field / integral cameras and to propose three new designs.

Our results show examples of refocusing the light field after the picture has been taken. With a 16-megapixel sensor, our 20-lens camera (Figure 12) produces a refocused output image of 700×700 pixels. A view morphing approach to improving the quality of the final images is discussed and shown to be practical.

We would like to thank Colin Zheng from the University of Washington for help with producing Figure 26. We would also like to thank Gavin Miller for discussions, and for providing a very relevant reference [13].

References

1. E. Hecht and A. Zajac. Optics. Addison-Wesley, 1979.

2. A. Gerrard and J. M. Burch. Introduction to Matrix Methods in Optics. Dover Publications, 1994.

3. M. Levoy and P. Hanrahan. Light Field Rendering. In SIGGRAPH 96, 31-42.

4. R. Abraham and J. Marsden. Foundations of Mechanics. Perseus Publishing, 1978.

5. F. Durand, N. Holzschuch, C. Soler, E. Chan, F. Sillion. A Frequency Analysis of Light Transport. In SIGGRAPH 2005, 1115-1126.

6. M. Hutley. Microlens Arrays. Proceedings, IOP Short Meeting Series No 30, Institute of Physics, May 1991.

7. R. Tyson. Principles of Adaptive Optics. Academic Press, 1991.

8. T. Adelson and J. Wang. Single Lens Stereo with a Plenoptic Camera. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 2, 99-106, 1992.

9. F. Okano, H. Hoshino, J. Arai, I. Yuyama. Real-time pickup method for a three-dimensional image based on integral photography. Applied Optics, 36, 7, March 1997.

10. T. Naemura, T. Yoshida and H. Harashima. 3-D computer graphics based on integral photography. Optics Express, 8, 2, 255-262, 2001.

11. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, P. Hanrahan. Light Field Photography with a Hand-held Plenoptic Camera. Stanford Tech Report CTSR 2005-02.

12. V. Guillemin and S. Sternberg. Symplectic Techniques in Physics. Cambridge University Press.

13. L. Ahrenberg and M. Magnor. Light Field Rendering using Matrix Optics. WSCG 2006, Jan 30-Feb 3, 2006, Plzen, Czech Republic.
