Sensors On 3D Digitization
Seminar Report By Mridul Sachan Under the guidance of Asst. Prof. Jaydeep Kishore & Asst. Prof. Satendra Kumar
Acknowledgement
I am thankful to my seminar guide Mr. SATENDRA KUMAR for his proper guidance and valuable suggestions. I am also greatly thankful to Mr. JAYDEEP KISHORE, the head of the Department of Computer Science and Engineering, and the other faculty members for giving me an opportunity to learn and to carry out this seminar. Without the above-mentioned people, my seminar would never have been completed so successfully. I once again extend my sincere thanks to all of them.
Mridul Sachan
Table of Contents

1. Abstract
2. Introduction
3. Colour 3D Imaging Technology
4. Autosynchronized Scanner
5. Position Sensitive Detectors
6. Proposed Sensors
7. Pixel Prototype Chip
8. Implementation and Experimental Results
9. Applications
10. Conclusion
11. References
ABSTRACT
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available. The first strategy, known as passive vision, attempts to analyze the structure of the scene under ambient light. In contrast, the second, known as active vision, attempts to control the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with 2D imaging systems. Moreover, with laser-based approaches, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Thus the task of processing 3D data is greatly simplified.
INTRODUCTION
Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields like visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.
AUTOSYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically in Figure 1, can provide registered range and colour data of visible surfaces. A 3D surface map is captured by scanning a laser spot onto a scene, collecting the reflected laser light, and finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z co-ordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of a scene (reflectance map).
Advantage: Triangulation is the most precise method of 3D measurement.

Limitation: Increasing the accuracy increases the triangulation distance. The larger the triangulation distance, the more shadows appear on the scanned object, and the larger the scanning head must be made.
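The accuracy/baseline trade-off follows directly from the triangulation geometry. The sketch below is a generic single-spot triangulation range equation, not taken from any particular scanner's documentation; the function name and the angle conventions are assumptions made for illustration.

```python
import math

def triangulation_range(baseline_m, proj_angle_rad, spot_pos_m, focal_len_m):
    """Range of a laser spot by triangulation (hedged sketch).

    baseline_m     : distance between laser deflection point and lens centre
    proj_angle_rad : laser projection angle, measured from the baseline normal
    spot_pos_m     : imaged spot position on the linear sensor
    focal_len_m    : focal length of the collecting lens
    """
    # Detection angle subtended at the lens, also from the baseline normal.
    detect_angle = math.atan2(spot_pos_m, focal_len_m)
    # Classic triangulation relation: z = b / (tan(theta_p) + tan(theta_d)).
    return baseline_m / (math.tan(proj_angle_rad) + math.tan(detect_angle))
```

For a fixed sensor resolution, the same spot-position error translates into a smaller range error as the baseline grows, which is why accuracy demands a large triangulation distance and, in turn, produces larger shadows.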
1. SYNCHRONIZATION CIRCUIT BASED UPON DUAL PHOTOCELLS

This sensor ensures the stability and the repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems.
2. SMART 3D POSITION SENSORS

Integration of most low-level processing steps on a chip using advances in VLSI will allow digital 3D imaging technology to become widely accepted and accessible to research labs, universities, industries, hobbyists and even the home. Currently, commercial photodiode arrays used in 3D vision sensors are intended for 2D imaging applications, spectroscopic instruments or wavelength-division multiplexing in telecommunication systems. Their specifications change according to the evolution of their respective fields and not according to the needs of digital 3D imaging. For instance, speckle noise dictates a large pixel size [8] that is not compatible with current 2D imaging developments (where pixels are getting smaller).
Figure 1 Optical geometry and photo-sensors used in a typical laser spot scanner 3D digitizer: a) dual-cell (bi-cell) for synchronizing a scanned laser spot, b) discrete response position sensor, and c) continuous response position sensor.
DUAL AXIS PSD

This particular PSD is a five-terminal device bounded by four collection surfaces; one terminal is connected to each collection surface and one provides a common return. Photocurrent generated by light falling on the active area of the PSD is collected by these four perimeter electrodes.

The amount of current flowing between each perimeter terminal and the common return is related to the proximity of the centre of mass of the incident light spot to each collection surface. The difference between the up current and the down current is proportional to the Y-axis position of the light spot. Similarly, the right current minus the left current gives the X-axis position. The designations up, down, right and left are arbitrary; the device may be operated in any relative spatial orientation.
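The up-down and right-left current differences can be turned into a spot position with a few lines of arithmetic. The sketch below is illustrative only: the function name is invented, and normalizing by the total photocurrent (a common practice, assumed here) makes the result independent of spot intensity.

```python
def psd_position(i_up, i_down, i_right, i_left):
    """Normalized light-spot position on a dual-axis PSD (sketch).

    Returns (x, y) in [-1, 1]; a real device would also need scaling by
    the active-area dimensions and linearity correction near the edges.
    """
    total = i_up + i_down + i_right + i_left
    if total <= 0:
        raise ValueError("no photocurrent: spot off the active area?")
    x = (i_right - i_left) / total  # -1 (left edge) .. +1 (right edge)
    y = (i_up - i_down) / total     # -1 (bottom edge) .. +1 (top edge)
    return x, y
```

A centred spot gives equal currents on all four electrodes and hence (0, 0), regardless of the total light level.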
Figure 3 shows schematically the new smart position sensor for light spot measurement in the context of 3D and colour measurement. In a monochrome range camera, a portion of the reflected radiation entering the system is split into two beams (Figure 3a). One portion is directed to a CRPSD that determines the location of the best window and sends that information to the DRPSD. In order to measure colour information, a different optical element is used to split the returned beam into four components, e.g. a diffractive optical element (Figure 3b). The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection (Figure 3c). The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e. (I1-I2)/(I1+I2), where I1 and I2 are the currents generated by that type of sensor. The weighted centroid value is fed to a control unit that will select a sub-set (window) of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak extraction can be applied to the portion of interest.
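A minimal numerical sketch of this coarse-to-fine scheme follows. The function and variable names are invented for illustration, the mapping from the CRPSD ratio to a DRPSD pixel index assumes the two sensors cover the same field of view, and a simple intensity-weighted centroid stands in for the "best algorithms for peak extraction".

```python
import numpy as np

def coarse_then_fine_peak(i1, i2, drpsd, window=16):
    """Locate a laser spot: CRPSD coarse estimate, then DRPSD window.

    i1, i2 : the two end currents of the CRPSD (continuous sensor)
    drpsd  : 1-D array of discrete-pixel readings (DRPSD)
    """
    n = len(drpsd)
    coarse = (i1 - i2) / (i1 + i2)                   # current-ratio method
    centre = int(round((coarse + 1) / 2 * (n - 1)))  # map [-1, 1] to a pixel
    start = max(0, min(centre - window // 2, n - window))
    sel = drpsd[start:start + window]
    # Fine estimate: intensity-weighted centroid inside the selected window.
    fine = start + float(np.sum(np.arange(window) * sel) / np.sum(sel))
    return start, fine
```

Because the fine centroid is computed only over the selected window, stray light outside the window (the situation of Figure 2) no longer biases the peak estimate.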
An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation entering the system is split into four components by a diffractive optical element, as shown in Figure 4b. The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSDs, which are used for colour detection. As before, the CRPSDs also locate the centroid of the light distribution impinging on them and estimate the total light intensity, and the weighted centroid value drives the selection of the DRPSD window on which peak extraction is performed.
Figure 2 A typical situation where stray light blurs the measurement of the real but much narrower peak. A CRPSD would provide A as an answer. But a DRPSD can provide B, the desired response peak.
ADVANTAGES

- Reduced size and cost
- Better resolution at a lower system cost
- High reliability, as required for high-accuracy 3D vision systems
- Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated
DISADVANTAGE The elimination of all stray light in an optical system requires sophisticated techniques.
Figure 3 Artistic view of a smart sensor for accurate, precise and rapid light position measurement.
The window selection logic, LOGIC_A, receives the address of the central pixel of the 16-pixel window and calculates the addresses of the starting and ending pixels. The analog value at the output of each CA within the addressed window is sequentially put on the bitline by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronisation and end-of-frame signals, which are used by the external processing units. LOGIC_B, instead, is devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches T0-T4. The chip has been tested and its functionality has been proven to be in agreement with specifications. The figures illustrate the experimental results for spectral responsivity and power responsivity, respectively.
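The address computation performed by LOGIC_A can be sketched in a few lines. The array size and the clamping behaviour at the array boundaries are assumptions, not taken from the chip's specification.

```python
def window_addresses(centre_px, n_pixels=64, window=16):
    """LOGIC_A-style address computation (illustrative sketch).

    From the address of the central pixel, return the starting and ending
    pixel addresses of the 16-pixel read-out window, clamped so that the
    window never runs off the array (clamping is an assumed behaviour).
    """
    start = max(0, min(centre_px - window // 2, n_pixels - window))
    return start, start + window - 1
```

Near the array edges the window simply slides inward, so the read-out always covers exactly 16 valid pixels.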
The spectral responsivity, obtained by measuring the response of the photoelements to a normally impinging beam of varying wavelength, is in agreement with data reported in the literature. A maximum value of ~0.25 A/W is found around λ ≈ 660 nm. The several peaks in the curve are due to multiple reflections of light passing through the oxide layers on top of the photosensitive area. The power responsivity has been measured by illuminating the whole array with a white light source and measuring the pixel response as the light power is increased. As expected, the curve can be divided into three main regions: a far-left region dominated by photoelement noise, a central region where the photoelement response is linear with the impinging power, and a saturation region. The values of the slope, linearity and dynamic range of the central region have been calculated for three chips and are shown in Table 1.
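Figures of merit of this kind can be extracted from a measured power-responsivity curve as sketched below. The definitions used here (least-squares slope, worst-case deviation as a percentage of full scale, and region extent in dB) are plausible assumptions, not necessarily the ones used for Table 1.

```python
import numpy as np

def linear_region_params(power_w, response, lin_lo, lin_hi):
    """Slope, linearity and dynamic range of the assumed linear region.

    power_w, response : measured curve (arrays); lin_lo, lin_hi : the
    assumed boundaries (in W) of the central, linear region.
    """
    mask = (power_w >= lin_lo) & (power_w <= lin_hi)
    p, r = power_w[mask], response[mask]
    slope, offset = np.polyfit(p, r, 1)          # least-squares line fit
    fit = slope * p + offset
    # Worst-case deviation from the fitted line, in percent of full scale.
    linearity_pct = 100 * np.max(np.abs(r - fit)) / (r.max() - r.min())
    dyn_range_db = 20 * np.log10(lin_hi / lin_lo)
    return slope, linearity_pct, dyn_range_db
```

On a perfectly linear synthetic curve the slope recovers the responsivity exactly and the linearity error is essentially zero; real data would show the few-percent deviations reported in Table 1.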
The capability of the sensor to detect the spot position has been measured by scanning a focused laser spot over the array at different speeds. Figure 7 illustrates the response of the sensor for a scanning speed of 12 m/s and a spot diameter of 1.8 mm. The sensor was operated in free-running mode with an effective frame period of 34 ms. Five successive frames are illustrated, showing the spot entering the array from the right side and exiting from the left side.
Table 1 Relevant electro-optical parameters as calculated from power responsivity measurements

Sample #    Power Resp. (A/W)    Linearity (%)    Useful Dynamic Range (dB)
2           0.167                2.9              47
3           0.176                2.9              45
4           0.177                2.8              50
APPLICATIONS
- Intelligent digitizers capable of measuring colour and 3D shape accurately and simultaneously
- Development of hand-held 3D cameras
- Multiresolution random-access laser scanners for fast search and tracking of 3D features
FUTURE SCOPE

Anti-reflection coating film deposition and RGB filter deposition can be used to enhance sensitivity and to enable colour sensing.
CONCLUSION
The results obtained so far have shown that optical sensors have reached a high level of development and reliability, making them suited for high-accuracy 3D vision systems. The availability of standard fabrication technologies and the acquired know-how in design techniques allow the implementation of application-specific optical sensors: OptoASICs. The trend shows that the use of low-cost CMOS technology leads to competitive optical sensors. Furthermore, post-processing modules, such as anti-reflection coating film deposition and RGB filter deposition to enhance sensitivity and to enable colour sensing, are at the final certification stage and will soon be available in standard fabrication technologies. The work on the colour range camera is being finalized, and work has started on a new, improved architecture.
REFERENCES
[1] L. Gonzo, A. Simoni, A. Gottardi, D. Stoppa, J.-A. Beraldin, "Sensors optimized for 3D digitization," IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 3, pp. 903-908, June 2003.
[2] P. Schafer, R. D. Williams, G. K. Davis, R. A. Ross, "Accuracy of position detection using a position-sensitive detector," IEEE Transactions on Instrumentation and Measurement, vol. 47, no. 4, pp. 914-918, August 1998.
[3] J.-A. Beraldin, "Design of Bessel type pre-amplifiers for lateral effect photodiodes," International Journal of Electronics, vol. 67, pp. 591-615, 1989.
[4] A. M. Dhake, Television and Video Engineering, Tata McGraw-Hill.
[5] X. Arreguit, F. A. van Schaik, F. V. Bauduin, M. Bidiville, E. Raeber, "A CMOS motion detector system for pointing devices," IEEE Journal of Solid-State Circuits, vol. 31, pp. 1916-1921, December 1996.
[6] P. Aubert, H. J. Oguey, R. Vuilleumier, "Monolithic optical position encoder with on-chip photodiodes," IEEE Journal of Solid-State Circuits, vol. 23, pp. 465-473, April 1988.
[7] K. Thyagarajan, A. K. Ghatak, Lasers: Theory and Applications, Plenum Press.
[8] N. H. E. Weste, K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Low Price Edition.