3d Vision With 2d/3d Sensor (By Arun Prasad)
Telc, Czech Republic, February 6-8
Czech Pattern Recognition Society
Introduction
Anthropomorphic thinking often leads us to regard the camera as a technical eye. Conventional image sensors (CCD/CMOS) measure the intensity of the scene, but they lack its depth information. As nature imposes severe physical limits on acquiring remote 3-D information, contactless extraction of depth is still one of the key tasks in image processing. The missing depth information is generally recovered in two ways. On the mathematical side, the computer vision community has strived over the past decades to develop suitable methods for 3-D vision, usually from 2-D images, using techniques drawn mainly from linear algebra and matrix theory. The other way relies on the laws of physics, using methods such as interferometry, triangulation, or time-of-flight (TOF) [5]. The advent of laser scanners and range sensors has been a great help on this issue.
The basic component of the 3-D vision system presented in this paper, a time-of-flight sensor, is a well-known and evolving 3-D vision technique. To the knowledge of the authors, no system exists with this optical in-situ measurement (with a common lens) for 2D/3D vision. There is, however, a combined 2D-3D sensor system [9], which consists of movable mechanical components. The main goal of this paper is to show a procedure for increasing the image resolution of the 3-D sensor, which currently offers about 19,000 pixels. In the following section, we discuss the TOF camera and its measuring (sensing) techniques. In Section 3, we introduce the 2D/3D imaging technique. Sections 4 and 5 deal with characterisation and the experimental setup. In Section 6, we show some initial results from the 2D/3D setup as well as its future perspectives.
1.1 Related Work
The 2-D sensors (CCD/CMOS) offer very high image resolution (millions of pixels), and one can use intelligent image processing algorithms to calculate the depth information of the scene, recover the shape, or reveal the structure. This, however, comes at high computational cost. Moreover, it is obvious that for many real-world problems it cannot achieve the
Image Multiplier
The PMD (Photonic Mixer Device) pixels mix the modulated light directly in the optically active area during the sensing process, which leads to smart operation. The distance information is weighted on each pixel independently, and the 3-D information is obtained by evaluating at least two frames. The distance can be calculated inside the camera system, so that only the 3-D data is transferred to the PC. All these operations are performed in the PMD system, as it has a built-in 32-bit microprocessor (AMD Elan SC520) running the ELinOS operating system. The timing requirements are met by FPGA (Field Programmable Gate Array) circuitry inside the system. The PMD camera system reaches a frame rate of up to 50 fps (frames per second).
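The per-pixel evaluation of at least two (in practice often four) frames can be sketched with the standard four-phase TOF demodulation algorithm below. This is a simplified numerical model, not the camera's actual in-pixel implementation:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(a0, a1, a2, a3, f_mod=20e6):
    """Estimate a pixel's distance from four correlation samples
    taken at 0, 90, 180 and 270 degrees of the modulation signal."""
    # Phase of the returned signal; constant offsets cancel in the
    # differences, the signal amplitude cancels in the ratio.
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    # Scale the phase by the unambiguous range c / (2 * f_mod).
    return (C / (2 * f_mod)) * phase / (2 * math.pi)
```

With 20 MHz modulation, a full 2*pi phase shift corresponds to a round trip over 7.5 m of range.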
3.2.1 Lighting Module
The PMD camera system has its own lighting module: an array of infrared LEDs used for scene illumination. The LEDs are modulated at 20 MHz, so the system emits an RF-modulated optical radiation field (typically 20 MHz or higher) in the infrared spectrum. The diffusely backscattered signal from the scene is detected by the camera. Each pixel has the capability to demodulate this signal and detect its phase, which is proportional to the distance of the reflecting object. The modulation frequency of 20 MHz defines an unambiguous distance range of 7.5 m.
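The 7.5 m figure follows directly from the modulation frequency, since the light travels the distance twice (a quick check):

```python
def unambiguous_range(f_mod):
    """Maximum distance before the measured phase wraps around:
    the signal travels to the target and back, hence c / (2 * f)."""
    c = 299_792_458.0  # speed of light in m/s
    return c / (2.0 * f_mod)

print(round(unambiguous_range(20e6), 2))  # -> 7.49 (about 7.5 m)
```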
The PMD sensor with 1024 pixels is used in this experiment, since it is a matured technology with the key advantage of rejecting background illumination. This is done by means of an active Suppression of Background Illumination (SBI) circuitry in each pixel of the chip, which makes the camera efficient to operate indoors (even in the dark) and outdoors (with ambient light) with its own active illumination, unaffected by ambient light including sunlight.
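The effect of background rejection can be illustrated numerically: a constant ambient-light offset adds equally to all correlation samples and therefore cancels in the differences used for phase estimation. The sketch below only models this arithmetic; the real SBI circuitry suppresses the background in the analog domain inside each pixel.

```python
import math

def phase_from_samples(a0, a1, a2, a3):
    # A constant background B added to every sample cancels in the
    # differences (a3 - a1) and (a0 - a2).
    return math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)

phi = 1.2  # true phase of the returned signal
dark  = [10 + 5 * math.cos(phi + i * math.pi / 2) for i in range(4)]
sunny = [s + 300 for s in dark]  # strong constant ambient light
# phase_from_samples(*dark) and phase_from_samples(*sunny) agree
```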
3.3 CCD Camera
This device needs to be calibrated for better results. The calibration of such a system can be divided into three steps: calibration using optics (image multiplier), calibration using intensity information, and calibration using range information. More about the special calibration device and technique used for this 2D/3D system is discussed in [1].
4.1
3.2
As 3-D TOF cameras measure depth using the time-of-flight principle, an RF-modulated light signal with variable phase shift is sent to the 3-D object and reflected back to the camera system.
T.D.Arun Prasad, Klaus Hartmann, Wolfgang Weihs, Seyed Eghbal Ghobadi, and Arnd Sluiter
As the resolution of the 3-D data set is smaller than that of the 2-D data, interpolation becomes essential. Figure 3 shows the steps involved in the enhanced 3-D vision.
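One simple possibility for this interpolation step, sketched in Python (the actual scheme used in the system may differ), is bilinear upsampling of the low-resolution depth map onto the 2-D grid:

```python
def upsample_bilinear(depth, out_h, out_w):
    """Bilinearly interpolate a low-resolution depth map (list of
    lists) to out_h x out_w, e.g. a PMD 64x16 map to a CCD-sized grid."""
    in_h, in_w = len(depth), len(depth[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # Fractional source row and its two bounding rows.
        fy = y * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(fy); y1 = min(y0 + 1, in_h - 1); wy = fy - y0
        for x in range(out_w):
            fx = x * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(fx); x1 = min(x0 + 1, in_w - 1); wx = fx - x0
            # Blend horizontally in the two rows, then vertically.
            top = depth[y0][x0] * (1 - wx) + depth[y0][x1] * wx
            bot = depth[y1][x0] * (1 - wx) + depth[y1][x1] * wx
            out[y][x] = top * (1 - wy) + bot * wy
    return out
```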
Experimental Setup
Sensor         Number of Pixels    Pixel Size (µm)
PMD (H)x(V)    64 x 16             155.3 x 210.8
CCD (H)x(V)    780 x 582           8.3 x 8.3
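From the pixel counts in the table, the per-axis magnification needed to bring the PMD depth map onto the CCD grid can be computed (an illustrative calculation):

```python
# Pixel counts taken from the table above.
pmd_w, pmd_h = 64, 16
ccd_w, ccd_h = 780, 582

scale_w = ccd_w / pmd_w  # horizontal upsampling factor, about 12.2
scale_h = ccd_h / pmd_h  # vertical upsampling factor, about 36.4
```

The strongly anisotropic factors reflect the non-square PMD pixels listed in the table.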
since it is about 1 cm. The range resolution can be made more accurate by improving the optics (image multiplier) and also by applying shape-from-shading, surface-approximation, and motion techniques [11].
Conclusion
Acknowledgement
We are highly indebted to all our co-workers at ZESS, especially Wolf Twelsiek and Rolf Wurmbach, for their dedicated cooperation. We would also like to gratefully acknowledge the support of DAAD's IPP program and the funding of the Ministry of Science of North Rhine-Westphalia. Our sincere thanks to the Deutsche Forschungsgemeinschaft (German Research Foundation) for funding this 2D/3D camera project (LO 455/10-1).
References
[1] Hartmann, K., Loffeld, O., Ghobadi, S.E., Peters, V., Prasad, T.D.A., Sluiter, A., Weihs, W., Lerch, T., Lottner, O.: Klassifizierungsaspekte bei der 3D-Szenenexploration mit einer neuen 2D/3D-Multichip-Kamera, pp. 74-80, Springer-Verlag, ASM 2005.
[2] Schwarte, R., Heinol, H., Buxbaum, B., Ringbeck, T., Xu, Z., Hartmann, K.: Principles of Three-Dimensional Imaging Techniques, in: Jähne, B., Haußecker, H., Geißler, P. (eds.), Handbook of Computer Vision and Applications, Volume 1: Sensors and Imaging, Academic Press, 1999.
[3] Schwarte, R.: Eine neuartige 3D-Kamera auf Basis eines 2D-Gegentaktkorrelator-Arrays, Symposium