
Simple, Accurate, and Robust Projector-Camera Calibration

Daniel Moreno and Gabriel Taubin


School of Engineering
Brown University
Providence, RI, USA
Email: {daniel_moreno,gabriel_taubin}@brown.edu

Abstract—Structured-light systems are simple and effective tools to acquire 3D models. Built with off-the-shelf components, a data projector and a camera, they are easy to deploy and compare in precision with expensive laser scanners. But such high precision is only possible if camera and projector are both accurately calibrated. Robust calibration methods are well established for cameras but, while cameras and projectors can both be described with the same mathematical model, it is not clear how to adapt these methods to projectors. As a consequence, many of the proposed projector calibration techniques use a simplified model, neglecting lens distortion, resulting in a loss of precision. In this paper, we present a novel method to estimate the image coordinates of 3D points in the projector image plane. The method relies on an uncalibrated camera and makes use of local homographies to reach sub-pixel precision. As a result, any camera model can be used to describe the projector, including the extended pinhole model with radial and tangential distortion coefficients, or even models with more complex lens distortion.

Figure 1. Structured-light system calibration

Keywords—structured-light; camera; projector; calibration; local homography

I. INTRODUCTION

Structured-light systems are the preferred choice for do-it-yourself 3D scanning applications. They are easy to deploy, only an off-the-shelf data projector and camera are required, and they are very accurate when implemented carefully. A projector-camera pair works as a stereo system, with the advantage that a properly chosen projected pattern simplifies the task of finding point correspondences. In such systems, projectors are modeled as inverse cameras and all considerations known for passive stereo systems may be applied with almost no change. However, the calibration procedure must be adapted to the fact that projectors cannot directly measure the pixel coordinates of 3D points projected onto the projector image plane as cameras do.

Viewpoint, zoom, focus, and other parameters ought to be adjusted, in both projector and camera, to match each target object size and scanning distance, invalidating any previous calibration. Therefore, structured-light systems must be calibrated before each use in order to guarantee the best result, which makes the simplicity of the calibration procedure as valuable as its precision. In this paper, we present a new calibration procedure for structured-light systems that is both very easy to perform and highly accurate.

The key idea of our method is to estimate the coordinates of the calibration points in the projector image plane using local homographies. First, a dense set of correspondences between projector and camera pixels is found by projecting onto the calibration object the same pattern sequence later projected to scan the target, reusing most of the software components written for the scanning application. Second, the set of correspondences is used to compute a group of local homographies that make it possible to find the projection of any point of the calibration object onto the projector image plane with sub-pixel precision. In the end, the data projector is calibrated as a normal camera.

Our main contribution is a method for finding correspondences between projector pixels and 3D world points. Once those correspondences are known, any calibration technique available for passive stereo can be applied directly to the structured-light system. Our method does not rely on the camera calibration parameters to find the set of correspondences. As a result, the projector calibration is not affected in any way by the accuracy of the camera calibration.

We show, as a second contribution, that the proposed calibration method can be implemented in such a way that no user intervention is necessary after data acquisition, making the procedure effective even for inexperienced users.
To this purpose, we have made a calibration software package, which we plan to make publicly available for anyone interested in structured-light applications to try. Concisely, our software requires two actions:

1) Project a sequence of gray code patterns onto a static planar checkerboard placed within the working volume. Capture one image for each pattern and store them all in the same directory. Repeat this step for several checkerboard poses until the working volume is properly covered. Use a separate directory for each sequence.

2) Execute the calibration software and select the directory containing all the sequences. Enter the checkerboard dimensions. Click on the "Calibrate" button. The software will automatically decode all the sequences, find corner locations, and calibrate both projector and camera. The final calibration will be saved to a file for later use.

A. Related Work

Many projector calibration procedures exist; however, we have not found any satisfying the following two key properties: easy to perform for the common user, and precise enough to enable accurate 3D reconstructions. Several methods ([1], [2], [3], [4], [5], and [6]) use a pre-calibrated camera to find world coordinates on some calibration artifact, which in turn they use to assign projector correspondences. These methods might be simple to perform, but all of them lack accuracy in the projector parameters due to their dependence on the camera calibration. The inaccuracies are a direct consequence of their approach: even small camera calibration errors can result in large world coordinate errors. Their failure point is to estimate the projector parameters from those far-from-accurate world coordinates, decreasing the precision of the complete system.

A different approach is adopted in [7], [8], and [9], where neither a calibrated camera nor a printed pattern is required. Instead, they ask the user to move the projector to several locations so that the calibration pattern—projected onto a fixed plane—changes its shape. We argue that moving the projector might be inconvenient, or impossible in general (e.g. a system mounted on a rig). Moreover, these methods are not applicable if a metric reconstruction is mandatory because their result is only up-to-scale.

Other authors have proposed algorithms ([10], [11], [12], and [13]) where a projected pattern is iteratively adjusted until it overlaps a printed pattern. The overlap is measured with the help of an uncalibrated camera. Since both patterns must be clearly identified, the classic black and white pattern is replaced by color versions of it—a color camera is also mandatory. In practice, switching to color patterns makes color calibration unavoidable—printed and camera colors seldom match—imposing an extra requirement on the user. Besides, this calibration scheme demands continuous input from a camera to run, making it impossible to separate the capture stage from the calibration algorithm, which is a common and useful practice in the field.

A common practice among projector calibration methods ([3], [7], [8], [10], and [12]) is to find one homography transformation between a calibration plane and the projector image plane. Despite the elegance of the concept, homographies, being linear operators, cannot model non-linear distortions such as the ones introduced by projector lenses.

In [14], the authors claim to get very accurate results with their method, which involves projecting patterns on a "flat aluminum board mounted on a high precision moving mechanism". Our complaint is that such special equipment is not available to the common user, limiting the method's general applicability. We disregard this method as non-practical.

Finally, Zhang and Huang [15], and others ([7], [16]), employ structured-light patterns similarly to us; however, instead of computing projector point correspondences directly from the images as captured by the camera, they create new synthetic images from the projector's viewpoint and feed them to standard camera calibration tools. The intermediate step of creating synthetic images at the projector's resolution, usually low, might discard important information, which is undesirable. On the contrary, the method we propose finds projector point correspondences from structured-light patterns directly at the camera resolution. No synthetic projector image is created.

The rest of the paper is organized as follows: Section II explains the calibration method, Section III expands the previous section with implementation details, Section IV discusses the experiments done to verify the precision of the method and presents a comparison with other calibration software, and finally Section V concludes our work.

II. METHOD

Our setup comprises one projector and one camera behaving as a stereo pair. We describe them both using the pinhole model extended with radial and tangential distortion, an advantage over several methods ([3], [5], [6], [7], [8], [9], and [12]) which fail to compensate for distortions in the projected patterns. Moreover, we have seen in our experiments that most projectors have noticeable distortions outside their focus plane, distortions that affect the accuracy of the final 3D models.

We took Zhang's method [17] as inspiration, in favor of its simplicity and well-known accuracy. It uses a planar checkerboard as calibration artifact, which is easy to make for anyone with access to a printer. In Zhang's camera calibration, the user captures images of a checkerboard of known dimensions at several orientations, and the algorithm calculates the camera calibration parameters using the relation between the checkerboard corners in a camera coordinate system and a world coordinate system attached to the checkerboard plane.
A. Projector and camera models

The proposed calibration method allows choosing any parametric model to describe the projector and camera. Our implementation uses the pinhole model extended with radial and tangential distortion for both projector and camera. Let X ∈ R^3 be a point in a world coordinate system with origin at the camera center, and let u ∈ R^2 be the pixel coordinates of the image of X in the camera plane; then X and u are related by the following equations:

    X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad
    \tilde{u} = \begin{bmatrix} \tilde{u}_x \\ \tilde{u}_y \end{bmatrix}
              = \begin{bmatrix} x/z \\ y/z \end{bmatrix}                      (1)

    u = K_c \cdot L(\tilde{u})                                                (2)

    K_c = \begin{bmatrix} f_x & \gamma & o_x \\ 0 & f_y & o_y \\ 0 & 0 & 1 \end{bmatrix}   (3)

    L(\tilde{u}) = \begin{bmatrix} \tilde{u}\,(1 + k_1 r^2 + k_2 r^4) + \Delta_t(\tilde{u}) \\ 1 \end{bmatrix}   (4)

    \Delta_t(\tilde{u}) = \begin{bmatrix}
        2 k_3 \tilde{u}_x \tilde{u}_y + k_4 (r^2 + 2 \tilde{u}_x^2) \\
        k_3 (r^2 + 2 \tilde{u}_y^2) + 2 k_4 \tilde{u}_x \tilde{u}_y
    \end{bmatrix}                                                             (5)

    r^2 = \tilde{u}_x^2 + \tilde{u}_y^2                                       (6)

where K_c is known as the camera intrinsic calibration, k_1 and k_2 as radial distortion coefficients, and k_3 and k_4 as tangential distortion coefficients. Similarly, if R and T are a rotation matrix and a translation vector encoding the pose of the projector's center of projection in the world coordinate system defined above, and v ∈ R^2 are the pixel coordinates of the image of X in the projector plane, then

    X' = \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = R \cdot X + T, \qquad
    \tilde{v} = \begin{bmatrix} x'/z' \\ y'/z' \end{bmatrix}                  (7)

    v = K_p \cdot L(\tilde{v})                                                (8)

where the projector is described by its intrinsic calibration K_p, and the pair (R, T) is known as the stereo system extrinsic calibration.
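For illustration, the following minimal C++ sketch evaluates equations (1)-(6) for a single 3D point in camera coordinates; the types and function name are ours, not part of the paper's released software. The same function applied after X' = R·X + T with K_p gives the projector projection of equation (8).

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    // Intrinsics and distortion as in equations (3)-(6): fx, fy focal lengths,
    // (ox, oy) principal point, gamma skew, k1/k2 radial, k3/k4 tangential.
    struct Intrinsics {
        double fx, fy, gamma, ox, oy;
        double k1, k2, k3, k4;
    };

    // Project a 3D point X (given in camera coordinates) to pixel coordinates,
    // applying the lens distortion model L of equations (4)-(6).
    Vec2 project(const Intrinsics& K, const Vec3& X) {
        double ux = X.x / X.z, uy = X.y / X.z;              // eq. (1)
        double r2 = ux * ux + uy * uy;                      // eq. (6)
        double radial = 1.0 + K.k1 * r2 + K.k2 * r2 * r2;   // radial factor, eq. (4)
        double dx = 2.0 * K.k3 * ux * uy + K.k4 * (r2 + 2.0 * ux * ux);  // eq. (5)
        double dy = K.k3 * (r2 + 2.0 * uy * uy) + 2.0 * K.k4 * ux * uy;
        double lx = ux * radial + dx, ly = uy * radial + dy;             // L(u~)
        return { K.fx * lx + K.gamma * ly + K.ox,           // u = Kc * L(u~), eq. (2)
                 K.fy * ly + K.oy };
    }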
B. Data acquisition

Camera calibration involves collecting images of a planar checkerboard. We have modified this acquisition step to make it possible to calibrate both camera and projector. The new data acquisition is as follows: for each plane orientation, instead of capturing only one image, the user must project and capture a complete structured-light pattern sequence. Although any structured-light pattern sequence would work, we have used and recommend gray code sequences (Fig. 2) because they are robust to decoding errors—in a calibration routine, avoiding all possible errors usually outweighs execution speed. One might argue that capturing many images for each checkerboard pose makes our method complex, but the whole data acquisition task is identical to the standard structured-light scanning task executed later. Furthermore, the only actual requirement for the user is to keep the checkerboard static for a few seconds, the time necessary to project and capture a complete sequence.

Figure 2. Example of the calibration images: completely illuminated image (left), projected gray code onto the checkerboard (right)

C. Camera calibration

Intrinsic camera calibration refers to estimating the parameters of the chosen camera model. Following Zhang's method, we need to find the coordinates in the camera image plane of all the checkerboard corners, for each of the captured checkerboard orientations. Corner locations are sought in a completely illuminated image of each checkerboard orientation using a standard procedure. A completely illuminated image is an image captured while all data projector pixels are turned on—if no such image is available, it can be created as the per-pixel maximum of all images in the sequence. The procedure then continues as the usual camera calibration; please review [17] for more details.

Our software expects the first image in every gray code sequence to be a completely illuminated image that can be used directly for camera calibration. It uses OpenCV's findChessboardCorners() function [18] to automatically find checkerboard corner locations and then refines them to reach sub-pixel precision. Finally, a call to the function calibrateCamera() returns the calibrated camera parameters.
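The OpenCV calls might look like the following sketch (our wrapper names; it assumes color input images, as captured by a typical camera):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Detect checkerboard corners in one completely illuminated image and
    // refine them to sub-pixel precision.
    bool detectCorners(const cv::Mat& image, cv::Size boardSize,
                       std::vector<cv::Point2f>& corners) {
        if (!cv::findChessboardCorners(image, boardSize, corners))
            return false;
        cv::Mat gray;
        cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        return true;
    }

    // Calibrate the camera from corners collected over all checkerboard poses.
    // objectPoints holds the known 3D corner positions on the board plane (z = 0);
    // the returned value is the RMS reprojection error.
    double calibrate(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                     const std::vector<std::vector<cv::Point2f>>& imagePoints,
                     cv::Size imageSize, cv::Mat& K, cv::Mat& dist,
                     std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& tvecs) {
        return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                   K, dist, rvecs, tvecs);
    }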
D. Projector calibration

Our projector and camera are described with the same mathematical model; thus, we would like to follow an identical procedure to calibrate them both. But the projector is not a camera. If the projector were a camera, it would be possible to capture images from its viewpoint, to search for checkerboard corners in them, and to continue just as before. In reality no such images exist, but we know a relation between projector and camera pixels—extracted from structured-light sequences—and we will show how to use this relation to estimate checkerboard corner locations in projector pixel coordinates. Moreover, since all the computations are carried out at the camera's original resolution, corner coordinates are localized with greater precision than if synthetic images at the projector's resolution were used.

The procedure to compute checkerboard corner coordinates in the projector coordinate system can be decomposed in three steps: first, the structured-light sequence is decoded and every camera pixel is associated with a projector row and column, or set to "uncertain" (Fig. 3); second, a local homography is estimated for each checkerboard corner in the camera image; third and final, each of the corners is converted (Fig. 4) from camera coordinates to projector coordinates by applying the local homography just found.

Figure 3. Decoded gray pattern example: pixels with the same color correspond either to the same projector column (left) or same projector row (right). Gray color means "uncertain". Note that there are no uncertain pixels in the checkerboard region.

Figure 4. Projector corner locations are estimated with sub-pixel precision using a local homography at each corner in the camera image

The structured-light decoding step depends on the projected pattern, in our case complementary gray codes for rows and columns. Here, our method differs from [15], where fringe patterns were proposed—our choice prioritizes decoding precision over acquisition speed. As pointed out in [19], a subset of the gray code images—the ones where the stripes look "thin"—may be regarded as exhibiting a high frequency pattern. These high frequency patterns make it possible to split the intensity measured at each pixel into a direct and a global component. Ideally, the amount of light perceived at each camera pixel is the product of exactly one projector pixel being turned on or off, but in practice this is rarely true. The intensity value reported by the camera at one pixel is the sum of the amount of light emitted by a projector pixel, called the direct component, plus some amount of light, known as the global component, originating at other sources (including reflections from other projector pixels). Decoding errors in gray sequences are mostly caused by failure to identify these components, or by completely ignoring their existence. On the contrary, if each component is correctly identified, a simple set of rules permits drastically reducing decoding errors (Fig. 3). The rules and additional information on the topic are given in [20] under the name of robust pixel classification.

The relation learned from structured-light patterns is not bijective—it cannot be used right away to translate from camera to projector coordinates. To overcome this issue we propose the concept of a local homography: a homography that is valid only in a region of the plane. Instead of applying a single global homography to translate all the checkerboard corners into projector coordinates, we find one local homography for each of the checkerboard corners. Each local homography is estimated within a small neighborhood of the target corner and is valid only to translate that corner into projector coordinates, and no other corner. Local homographies make it possible to model non-linear distortions because each corner is translated independently of the others. Additionally, they are robust to small decoding errors because they are overdetermined: they are estimated from a neighborhood with more points than the minimum required.

A local homography is found for each checkerboard corner considering all the correctly decoded points in a patch of the camera image centered at the corner location. Let p be the image pixel coordinates of a point in the patch under consideration, and let q be the decoded projector pixel for that point; then we find a homography Ĥ that minimizes

    \hat{H} = \underset{H}{\operatorname{argmin}} \sum_{\forall p} \| q - H p \|^2     (9)

    H \in \mathbb{R}^{3 \times 3}, \qquad p = [x, y, 1]^T, \qquad q = [col, row, 1]^T  (10)

The target corner p̄, located at the center of the patch, is translated to q̄, given in projector coordinates, by applying the local homography Ĥ:

    \bar{q} = \hat{H} \cdot \bar{p}                                                    (11)

The same strategy is repeated until all checkerboard corners have been translated. Now, knowing the location of all corners in the projector coordinate system, the projector intrinsic calibration is found with the identical procedure as for the camera.
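A minimal sketch of this step under assumed names (the decoded row/column maps and validity mask are our representation of the decoded correspondences, not the released software's data structures): cv::findHomography with method 0 computes exactly the least-squares fit of equations (9)-(10) over all supplied points.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Translate one checkerboard corner from camera to projector coordinates
    // using a local homography estimated in a patch centered at the corner.
    cv::Point2f cameraToProjector(const cv::Point2f& corner,
                                  const cv::Mat& decodedRow,  // CV_32F, projector row per camera pixel
                                  const cv::Mat& decodedCol,  // CV_32F, projector column per camera pixel
                                  const cv::Mat& valid,       // CV_8U, non-zero where decoding succeeded
                                  int patchSize = 47) {
        std::vector<cv::Point2f> p, q;  // camera points and decoded projector points
        int r = patchSize / 2;
        for (int dy = -r; dy <= r; ++dy)
            for (int dx = -r; dx <= r; ++dx) {
                int x = cvRound(corner.x) + dx, y = cvRound(corner.y) + dy;
                if (x < 0 || y < 0 || x >= valid.cols || y >= valid.rows) continue;
                if (!valid.at<uchar>(y, x)) continue;   // skip "uncertain" pixels
                p.emplace_back((float)x, (float)y);
                q.emplace_back(decodedCol.at<float>(y, x), decodedRow.at<float>(y, x));
            }
        if (p.size() < 4)                               // not enough decoded pixels
            return cv::Point2f(-1.f, -1.f);
        // Least-squares homography of equations (9)-(10); method 0 uses all points.
        cv::Mat H = cv::findHomography(p, q, 0);
        // Apply equation (11) to the corner alone.
        std::vector<cv::Point2f> src{corner}, dst;
        cv::perspectiveTransform(src, dst, H);
        return dst[0];
    }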
E. Stereo system calibration

Stereo calibration means finding the relative rotation and translation between projector and camera. At this point, the intrinsic parameters found before are kept fixed, the world coordinates are identified with camera coordinates, and we seek the pose of the projector in world coordinates. The physical dimensions of the calibration checkerboard are known. The checkerboard corner projections onto both camera and projector image planes are also known—they were found in the previous steps. The calibration of the projector-camera stereo system, therefore, is identical to the calibration of any other camera-camera system.

Our software calls OpenCV's stereoCalibrate() function with the previously found checkerboard corner coordinates and their projections; the output is a rotation matrix R and a translation vector T relating the projector-camera pair.
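A sketch of that call, assuming a recent OpenCV and our own wrapper name: the CALIB_FIX_INTRINSIC flag keeps both sets of intrinsics fixed, so only R and T are estimated.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Stereo (extrinsic) calibration of the projector-camera pair. Kc/distC and
    // Kp/distP were estimated before and are kept fixed; camPoints and projPoints
    // are the camera and projector projections of the board corners objectPoints.
    double calibrateStereo(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                           const std::vector<std::vector<cv::Point2f>>& camPoints,
                           const std::vector<std::vector<cv::Point2f>>& projPoints,
                           cv::Mat Kc, cv::Mat distC, cv::Mat Kp, cv::Mat distP,
                           cv::Size imageSize, cv::Mat& R, cv::Mat& T) {
        cv::Mat E, F;  // essential and fundamental matrices, also returned by OpenCV
        return cv::stereoCalibrate(objectPoints, camPoints, projPoints,
                                   Kc, distC, Kp, distP, imageSize,
                                   R, T, E, F, cv::CALIB_FIX_INTRINSIC);
    }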
F. Algorithm

The complete calibration procedure can be summarized in simple steps and implemented as a calibration algorithm:

1) Detect checkerboard corner locations for each plane orientation in the completely illuminated images.
2) Estimate the global and direct light components for each set using the gray code high frequency patterns.
3) Decode the structured-light patterns into projector row and column correspondences by means of robust pixel classification, considering the pixel global and direct components from step 2.
4) Take small image patches centered at the checkerboard corner coordinates from step 1 (e.g. a 47x47 pixel square) and use all the correctly decoded pixels in each patch to compute a local homography that converts from camera to projector coordinates. Correspondences were obtained in step 3.
5) Translate the corner locations (step 1) from camera to projector coordinates using the patch local homographies from step 4.
6) Fix a world coordinate system to the checkerboard plane and use Zhang's method to find the camera intrinsics using the camera corner locations from step 1.
7) Fix a world coordinate system to the checkerboard plane and use Zhang's method to find the projector intrinsics using the projector corner locations from step 5.
8) Fix the camera and projector intrinsics (steps 6 and 7) and use the world, camera, and projector corner locations (steps 1 and 5) to estimate the stereo extrinsic parameters.
9) Optionally, all the parameters, intrinsic and extrinsic, can be bundle-adjusted together to minimize the total reprojection error.
III. CALIBRATION SOFTWARE

We have implemented the algorithm in Section II-F in a complete structured-light system calibration software. The purpose is two-fold: first, to prove that our method can be executed fully automatically provided the calibration images are available; second, to facilitate access to high quality 3D scans for a broad range of users—we think that structured-light systems are the key. Our experience says that calibrating structured-light systems accurately is a cumbersome and time consuming task. In hopes of easing the task, we have written a software tool (Fig. 5) with a Graphical User Interface (GUI) capable of calibrating such systems following a simple procedure. The software is completely written in C++, uses Qt [21] as the graphical interface library, and the OpenCV [18] library for the vision related tasks. This library selection makes it possible to build and run the software on common platforms such as Microsoft Windows and GNU/Linux.

Figure 5. Calibration software main screen

Checkerboard corner detection is done with OpenCV's findChessboardCorners() function; however, as reported in [22], this function is very slow in combination with high-resolution images. We worked with 12 Mpx images and we have observed this issue. Our solution is to downsample the input images to accelerate the corner search, and to consider the downsampled corner locations as an approximate solution to the high-resolution search. This simple technique has proven to be fast yet effective: search speed is independent of the camera resolution and the results are as accurate as if no downsampling were performed—because the refinement is executed at the original resolution.
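A sketch of this two-stage search (names and the 4x downsampling factor are illustrative choices of ours, not the software's actual values): detect on a reduced image, scale the estimates back up, and refine with cornerSubPix at the original resolution.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Find checkerboard corners quickly in a high-resolution color image:
    // search on a downsampled copy, then refine at full resolution.
    bool findCornersFast(const cv::Mat& image, cv::Size boardSize,
                         std::vector<cv::Point2f>& corners, double scale = 0.25) {
        cv::Mat small;
        cv::resize(image, small, cv::Size(), scale, scale, cv::INTER_AREA);
        if (!cv::findChessboardCorners(small, boardSize, corners))
            return false;
        for (auto& c : corners)
            c *= (float)(1.0 / scale);  // approximate full-resolution locations
        cv::Mat gray;
        cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
        // Refinement at the original resolution keeps the final accuracy
        // independent of the downsampling factor.
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        return true;
    }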
Theoretically, the direct and global light components should be estimated from the highest frequency pattern projected. In practice, doing so results in decoded images with visible artifacts. Thus, we skip the highest frequency and compute the direct and global components from the two second-highest frequency patterns. Combining more than one pattern gives better precision, and skipping the last pattern removes the artifacts due to the limited projector resolution.

Let S = {I_1, ..., I_k} be the selected set of pattern images, and let p be a valid pixel location; then the direct and global components at p, L_d(p) and L_g(p), are found as follows:

    L^+_p = \max_{0 < i \leq k} I_i(p), \qquad L^-_p = \min_{0 < i \leq k} I_i(p)    (12)

    L_d(p) = \frac{L^+_p - L^-_p}{1 - b}, \qquad
    L_g(p) = 2\,\frac{L^-_p - b\,L^+_p}{1 - b^2}                                     (13)

where b ∈ [0, 1) is a user-set value modeling the amount of light emitted by a turned-off projector pixel—we recommend the reader study [19] for more details. We have set b = 0.3 in our setup.
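In code, equations (12) and (13) reduce to a per-pixel max/min over the chosen pattern images. This sketch (our naming) operates on single-channel CV_32F images:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Per-pixel direct and global illumination components, equations (12)-(13).
    // 'patterns' are the selected high frequency gray code images (CV_32F) and
    // b models the light emitted by a turned-off projector pixel (e.g. 0.3).
    void separateLight(const std::vector<cv::Mat>& patterns, double b,
                       cv::Mat& direct, cv::Mat& global) {
        cv::Mat Lmax = patterns[0].clone(), Lmin = patterns[0].clone();
        for (size_t i = 1; i < patterns.size(); ++i) {
            cv::max(Lmax, patterns[i], Lmax);  // L+ of eq. (12)
            cv::min(Lmin, patterns[i], Lmin);  // L- of eq. (12)
        }
        direct = (Lmax - Lmin) / (1.0 - b);                // Ld, eq. (13)
        global = 2.0 * (Lmin - b * Lmax) / (1.0 - b * b);  // Lg, eq. (13)
    }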
Finally, local homographies are estimated from fixed-size image patches; we have, therefore, to select a proper patch size. If the chosen size is too small, the algorithm becomes very sensitive to decoding errors. On the contrary, if the patch is too large, the algorithm is robust to errors but unable to cope with strong lens distortions. Experimentally, we have found a patch size of 47x47 pixels to perform well in our system; we have used this value in all our tests. Nevertheless, a more rigorous analysis is required to decide the optimum size given the system parameters.

IV. RESULTS

We have developed this calibration method to enable high precision 3D scanning. In consequence, we think the best calibration quality evaluation is to scan objects of known geometry and to compare their 3D models with ground truth data. Additionally, we think that an evaluation would not be complete without a comparison with other available calibration methods. We have searched and found that Samuel Audet's ProCamCalib [10] and the Projector-Camera Calibration Toolbox (also procamcalib) [1] are publicly available tools. We have tried both, but the current version of Audet's tool cannot be used with our camera; for that reason, we will compare our method with the Projector-Camera Calibration Toolbox [1] only, from now on just "procamcalib".

Figure 6. System setup

A. Test setup

Our test setup comprises a Mitsubishi XD300U DLP data projector and a Canon EOS Rebel XSi camera. The projector's resolution is 1024x768 and the camera's resolution is 4272x2848. They were placed one next to the other (Fig. 6). Their focus, zoom, and direction were adjusted prior to calibration according to the scan target.

B. Reprojection error

Usually, the quality of camera calibration is evaluated considering only reprojection errors. But a minimum reprojection error measured in the calibration images does not ensure the best reconstruction accuracy of arbitrary objects. In fact, in our experiments adding an additional minimization step of the intrinsic and extrinsic parameters altogether (Section II-F, step 9) overfitted the calibration data, producing slightly less accurate 3D models. All in all, reprojection errors are indicators of calibration accuracy and we report ours as a reference for comparison with other methods. Table I shows the reprojection error of our method and procamcalib; for further comparison, we have also included a modified version of our method which uses one global homography instead of local ones. As a result, using identical camera calibration, the procamcalib reprojection error is much higher than ours as a consequence of its dependency on the camera calibration to find world plane correspondences. The modified method is an improvement over procamcalib; however, given the linearity of its global homography, it fails to model projector lens distortion, making it suboptimal.

Table I. REPROJECTION ERROR

    Method                           | Camera | Projector
    Proposed                         | 0.3288 | 0.1447
    Proposed with global homography  | 0.3288 | 0.2176
    procamcalib                      | 0.3288 | 0.8671

C. Projector lens distortion

One of the main advantages of our method is that it makes it possible to model radial and tangential distortion in projector lenses the same as in cameras. Opposite to what is said in other papers (e.g. [9]), projector lenses have noticeable distortion, especially near the edges. Table II shows an example of the distortion coefficients estimated by our method; note that k2 has a non-negligible value. The complete distortion model (Fig. 7) shows that points close to the top-left corner are displaced about 12 pixels from their ideal non-distorted coordinates; at the bottom-center of the projected image, where its principal point is located, there is no distortion, as expected. In conclusion, data projectors have non-trivial lens distortions that cannot be ignored.

Table II. PROJECTOR DISTORTION COEFFICIENTS: k1 AND k2 RADIAL DISTORTION, k3 AND k4 TANGENTIAL DISTORTION

    k1      | k2     | k3      | k4
    -0.0888 | 0.3365 | -0.0126 | -0.0023

Figure 7. Projector distortion model: points are displaced about 12 pixels near the top-left corner

D. Ground truth data

To evaluate the quality of the calibration beyond its reprojection errors, we scanned real objects and created 3D models of them which could be compared with ground truth data. Our first model corresponds to a plane of 200x250 mm, for which we created one 3D model with each of the two calibrations in the previous section. The ground truth data for these models are points sampled from an ideal plane. The error distribution of the model reconstructed with our calibration (Fig. 8, top) resembles a Gaussian distribution where 95% of its samples are errors equal to or less than 0.33 mm. On the other hand, the reconstruction made with procamcalib's calibration (Fig. 8, bottom) has an irregular error distribution denoting calibration inaccuracy. The results are summarized in Table III.

Figure 8. Plane error histogram: error between an ideal plane and a scanned plane reconstructed using the proposed calibration (top) and procamcalib calibration (bottom)

Table III. IDEAL AND RECONSTRUCTED PLANE COMPARISON

    Method      | Max. Error | Mean Error | Std. Dev.
    Proposed    | 0.8546     | 0.000042   | 0.1821
    procamcalib | 1.6352     | 0.000105   | 0.2909

Our next model is a statue head scanned both with our structured-light system and with a commercial laser scanner. The laser scanner is a NextEngine Desktop Scanner 2020i. The two 3D models are compared with the Hausdorff distance and the result is shown as an image (Fig. 9). The color scale denotes the distance between the meshes, ranging from 0 to 1 mm. The error reaches its maximum only in regions that were in shadow during scanning. In the face region the error is 0.5 mm at most.

Figure 9. Structured-light versus laser scanner: Hausdorff distance between meshes from 0 mm to 1 mm

Finally, we scanned a statue from six different viewpoints and, after manual alignment and merging, we created a complete 3D model using Smooth Signed Distance (SSD) surface reconstruction ([23], [24]). The final mesh preserves (Fig. 10) even small details.

Figure 10. SSD 3D model from 6 structured-light scans
V. CONCLUSION

We have introduced a new method to calibrate projector-camera systems that is simple to implement and more accurate than previous methods because it uses a full pinhole model—including radial and tangential lens distortions—to describe both projector and camera behaviors, and computes sub-pixel resolution 3D point projections from uncalibrated camera images. We have developed a simple-to-use calibration software tool that we will make freely available for people to experiment with.

ACKNOWLEDGMENT

The authors want to thank Fatih Calakli for proof-reading this paper and for the useful suggestions he has made to improve this work. The material presented in this paper describes work supported by the National Science Foundation under Grants No. IIS-0808718 and CCF-0915661.

REFERENCES

[1] G. Falcao, N. Hurtos, J. Massich, and D. Fofi, "Projector-Camera Calibration Toolbox," Tech. Rep., 2009. Available at http://code.google.com/p/procamcalib
[2] F. Sadlo, T. Weyrich, R. Peikert, and M. Gross, "A practical structured light acquisition system for point-based geometry and texture," in Point-Based Graphics, Eurographics/IEEE VGTC Symposium Proceedings, June 2005, pp. 89-145.
[3] M. Kimura, M. Mochimaru, and T. Kanade, "Projector calibration using arbitrary planes and calibrated camera," in Computer Vision and Pattern Recognition, 2007, pp. 1-2.
[4] J. Liao and L. Cai, "A calibration method for uncoupling projector and camera of a structured light system," in Advanced Intelligent Mechatronics, July 2008, pp. 770-774.
[5] K. Yamauchi, H. Saito, and Y. Sato, "Calibration of a structured light system by observing planar object from unknown viewpoints," in 19th International Conference on Pattern Recognition, Dec. 2008, pp. 1-4.
[6] W. Gao, L. Wang, and Z.-Y. Hu, "Flexible calibration of a portable structured light system through surface plane," Acta Automatica Sinica, vol. 34, no. 11, pp. 1358-1362, 2008.
[7] H. Anwar, I. Din, and K. Park, "Projector calibration for 3D scanning using virtual target images," International Journal of Precision Engineering and Manufacturing, vol. 13, pp. 125-131, 2012.
[8] J. Drareni, S. Roy, and P. Sturm, "Geometric video projector auto-calibration," in Computer Vision and Pattern Recognition Workshops, 2009, pp. 39-46.
[9] J. Drareni, S. Roy, and P. Sturm, "Methods for geometrical video projector calibration," Machine Vision and Applications, vol. 23, pp. 79-89, 2012.
[10] S. Audet and M. Okutomi, "A user-friendly method to geometrically calibrate projector-camera systems," in Computer Vision and Pattern Recognition Workshops, 2009, pp. 47-54.
[11] I. Martynov, J.-K. Kamarainen, and L. Lensu, "Projector calibration by inverse camera calibration," in Proceedings of the 17th Scandinavian Conference on Image Analysis, Berlin, Heidelberg, 2011, pp. 536-544.
[12] J. Mosnier, F. Berry, and O. Ait-Aider, "A new method for projector calibration based on visual servoing," in IAPR Conference on Machine Vision Applications, 2009, pp. 25-29.
[13] S.-Y. Park and G. G. Park, "Active calibration of camera-projector systems based on planar homography," in International Conference on Pattern Recognition, 2010, pp. 320-323.
[14] X. Chen, J. Xi, Y. Jin, and J. Sun, "Accurate calibration for a camera-projector measurement system based on structured light projection," Optics and Lasers in Engineering, vol. 47, no. 3-4, pp. 310-319, 2009.
[15] S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Optical Engineering, vol. 45, no. 8, p. 083601, 2006.
[16] Z. Li, Y. Shi, C. Wang, and Y. Wang, "Accurate calibration method for a structured light system," Optical Engineering, vol. 47, no. 5, p. 053604, 2008.
[17] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[18] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000. [Online]. Available: http://opencv.org/
[19] S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, "Fast separation of direct and global components of a scene using high frequency illumination," in SIGGRAPH, New York, NY, USA: ACM, 2006, pp. 935-944.
[20] Y. Xu and D. G. Aliaga, "Robust pixel classification for 3D modeling with structured light," in Proceedings of Graphics Interface, New York, NY, USA: ACM, 2007, pp. 233-240.
[21] Qt Project, "Qt cross-platform application and UI framework," 2012. [Online]. Available: http://qt.nokia.com/
[22] J. Chen, K. Benzeroual, and R. Allison, "Calibration for high-definition camera rigs with marker chessboard," in Computer Vision and Pattern Recognition Workshops, 2012, pp. 29-36.
[23] F. Calakli and G. Taubin, "SSD: Smooth signed distance surface reconstruction," Computer Graphics Forum, vol. 30, no. 7, pp. 1993-2002, 2011.
[24] F. Calakli and G. Taubin, "SSD-C: Smooth signed distance colored surface reconstruction," in Expanding the Frontiers of Visual Analytics and Visualization, 2012, pp. 323-338.
