Improved Rover State Estimation in Challenging Environments
Abstract. Given ambitious mission objectives and long delay times between command-uplink/data-downlink sessions, increased autonomy is required for planetary rovers. Specifically, NASA's planned 2003 and 2005 Mars rover missions must incorporate increased autonomy if their desired mission goals are to be realized. Increased autonomy, including autonomous path planning and navigation to user designated goals, relies on good quality estimates of the rover's state, e.g., its position and orientation relative to some initial reference frame. The challenging terrain over which the rover must traverse tends to seriously degrade a dead-reckoned state estimate, given severe wheel slip and/or interaction with obstacles. In this paper, we present the implementation of a complete rover navigation system. First, the system is able to adaptively construct semi-sparse terrain maps based on the current ground texture and distances to possible nearby obstacles. Second, the rover is able to match successively constructed terrain maps to obtain a vision-based state estimate which can then be fused with wheel odometry to obtain a much improved state estimate. Finally, the rover makes use of this state estimate to perform autonomous real-time path planning and navigation to user designated goals. Reactive obstacle avoidance is also implemented for roaming in an environment in the absence of a user designated goal. The system is demonstrated in soft soil and relatively dense rock fields, achieving state estimates that are significantly improved with respect to dead reckoning alone (e.g., 0.38 m mean absolute error vs. 1.34 m), and successfully navigating in multiple trials to user designated goals.

Keywords: planetary rovers, range map registration, state estimation, vision-based navigation, real-time path planning
1. Introduction
The current planned objectives for NASA's 2003 and 2005 rover missions to Mars include traverses totalling tens of kilometers over time frames spanning several months to a year or more. During these long distance traverses, various science functions are to be performed, including sample acquisition and cache, spectroscopic imaging, and micro-camera imaging. In
order to adequately achieve these ambitious desired mission objectives within the specified time frames, increased autonomy must be introduced and incorporated into NASA's 2003 and 2005 rover platforms. Given the long delay times between command-uplink/data-downlink sessions, increased levels of autonomy are necessary as teleoperation is infeasible, and high average ground speeds are desired to maximize the amount of terrain covered per unit time.
Of paramount importance are the robustness and computational efficiency of the autonomous behaviors. Not only must these algorithms function with repeated high performance in varying terrains and lighting conditions, but they must also work well given the computational limitations imposed by radiation-hardened processor requirements and power constraints. Additionally, these behaviors must take into account and be capable of dealing with the possibilities of sensor and actuator degradation over time, during transit, landing, and surface traverse. Thus, we desire that the rover be capable of robustly and autonomously completing the following tasks:
- obstacle avoidance
- state estimation
- real-time path planning to a user designated goal
For the task of obstacle avoidance, we desire that the rover be capable of distinguishing hazardous terrain features with a minimum of false positives or undetected hazards, with the former requiring the rover to perform excessive, time-consuming avoidance maneuvers, and the latter being directly hazardous to the health and well-being of the rover. Our implementation of autonomous obstacle avoidance, using a texture-based interest operator (Espinal et al., 1998) coupled with stereo correlation and triangulation, is adaptive to varying surface and lighting conditions, as well as being computationally efficient. We achieve this adaptive behavior through application of the texture-based interest operator to the local vicinity of the rover wheels in the current images returned by the stereo cameras. Any changes in local terrain or lighting will automatically be incorporated into the newly computed texture signature, since the search for interesting points uses the texture signature as a baseline. In soft soil, wheel slip is significant, which results in the degradation of the dead-reckoned vehicle state estimate. Likewise, in terrain with dense rock fields, obstacle climbing and numerous rotations during obstacle avoidance maneuvers significantly degrade the dead-reckoned vehicle estimate. Thus, for the task of state estimation, we implement an extended Kalman filter (Baumgartner and Skaar, 1994) to fuse together two estimates of the rover's change in state. The first estimate is vision based, using terrain maps generated at each frame during the rover's traverse to derive the rover's between-frame rotation and translation (Zhang, 1994).
The second estimate is the dead-reckoned estimate, using the rover's wheel odometry. The combination of the two measurements gives us a much more accurate state estimate than relying on either measurement alone. Finally, using our current Kalman filtered state estimate, we replan our desired path to the user designated goal at each step, using a real-time 2D potential flow based path planning methodology (Feder and Slotine, 1997), addressing the problem of real-time path planning to a user designated goal. These approaches have been experimentally verified using NASA/JPL's Lightweight Survivable Rover (LSR), pictured in figure 12. Much work has been accomplished in range map generation, some using texture features such as corners or other interest point identification methods (Barnard and Fischler, 1982; Matthies, 1987). Some generation algorithms use parallel approaches (Fua, 1993). Given the computational limitations imposed by power and radiation-hardened processor requirements, we have developed a computationally efficient semi-sparse range map generation algorithm which is adaptive to both distance and texture. Likewise, much work has been done in the area of mobile vehicle state estimation, often using vision (Baumgartner and Skaar, 1994; Zhang, 1996; Matthies, 1987; Hebert et al., 1989; Olson, 1997; Gennery, 1989; Lacroix et al., 1993), or using statistical estimation techniques to fuse together measurements of the rover state from multiple sensors (Baumgartner and Skaar, 1994; Matthies, 1987; Roumeliotis and Bekey, 1997). However, experimental validation of statistical state estimation approaches such as Kalman filtering for mobile robotics has generally only been performed in laboratory settings or benign terrain (Baumgartner and Skaar, 1994; Matthies, 1987; Lacroix et al., 1993; Betgé-Brezetz et al., 1995), or only in simulation (Roumeliotis and Bekey, 1997). Some navigation performance evaluation has been pursued (Matthies et al., 1995); however, state estimation during these trials relied on dead reckoning only. We add experimental validation in very difficult terrain, as might be experienced by a rover performing traverses on the surface of Mars, and show the feasibility of such a sensor fusion approach to state estimation. It is important to note that the state estimation framework is not only valid for vision-based measurements as described in this paper, but for other sensor inputs as well. In fact, researchers have investigated the use of inertial navigation sensors (INS) in rover
systems (Barshan and Durrant-Whyte, 1995; Roberts and Bhanu, 1992). The fusion of vision-based map registration with wheel odometry provides an important capability for determining rover state estimates that can only be enhanced by the inclusion of additional independent measurement sources such as INS systems. Potential flow based path planning has been demonstrated extensively in simulation (Feder and Slotine, 1997; Kim and Khosla, 1991; Connolly et al., 1990; Quinlan and Khatib, 1993; Khatib, 1986); see (Feder and Slotine, 1997) for more references. Here, we demonstrate experimental validation of real-time path planning using harmonic potentials in dynamic environments. Additionally, the environment consists of soft soil and relatively dense rock fields, thus validating the potential flow based path planning approach in complex and challenging environments. Finally, we show the unification of these techniques in system level trials. This includes adaptive range map generation, successive range map registration, Kalman filtering of vision and dead reckoning for improved state estimation, and real-time path planning using potential flow based methodologies. Path planning relies on the state estimate, which relies on Kalman filtering of the registered range maps, which relies on range map registration, all within an integrated system. However, this also validates each of the system components, which could of course be used separately in another system. The following section presents the implementation of our vision-based rover state estimator. This includes a discussion of our high-speed texture-based adaptive range map generation algorithm in section 2.1. It also includes an overview of our implementation of a range map registration algorithm (Zhang, 1994) in section 2.2. We detail our approach to rover state estimation via Kalman filtering of vision (the output of the range map registration algorithm) and dead reckoning in section 3. An overview of the experimental system is presented in section 4, with the results presented in section 5. As part of the results, we discuss two navigation methodologies: reactive obstacle avoidance in section 5.1, and real-time path planning to a user designated goal in section 5.2. Finally, we present conclusions and plans for future efforts in section 6.
2.
To address the problem of robust and computationally efficient obstacle avoidance, we present the novel application of a texture-based interest operator in conjunction with stereo correlation and triangulation to generate range maps that are adaptive in refinement to both texture and distance.

2.1. Adaptive Range Map Generation
To adaptively generate a terrain map, we begin by acquiring a stereo image pair from the rover navigation cameras. A sample image pair may be seen in figure 1.

Fig. 1. Stereo camera pair with ground texture computation area highlighted.

2.1.1. Wavelet Segmentation
Once the images are acquired, we compute the wavelet decomposition of each image using the standard two dimensional Haar wavelet basis (Stollnitz et al., 1996). The wavelet transform gives a multi-resolution view of an image with an average channel and three detail channels corresponding to horizontal (H), vertical (V), and diagonal (D) features. A representative wavelet decomposition from one of the stereo images is shown in figure 2.

Fig. 2. Wavelet transform showing horizontal, vertical and diagonal channels for an l = 2 decomposition, with the l = 1 decomposition results shown in the upper left quadrant.

The ground texture is then computed in several known locations at the bottom of each image, between the rover wheels, as shown in figure 1, using a texture operator on the wavelet decompositions. We assume that the portion of the stereo images close to and in between the wheels of the rover is free of obstacles. The output of this operation is a texture signature, TS, in the space defined by the H, V, and D channels of the wavelet decomposition (Espinal et al., 1998) and, for each channel, is defined as
TS = \sum_{i=0}^{l} \frac{ \sum_{j=-n/2}^{n/2} \sum_{k=-m/2}^{m/2} \left| W_i(a+j,\, b+k) \right| }{ n\, m } \qquad (1)
where the parameter l refers to the level to which the wavelet decomposition, W, has been computed. The summation in the numerator of equation (1) effectively gives the texture energy of a local neighborhood at a given level of resolution (Laine and Fan, 1993). In our implementation, we use l = 2, as can be seen in figure 2. Thus, for each channel (H, V, D) in the wavelet decomposition, we sum the contributions to the texture signature from each level of the decomposition, by
summing the absolute value of the thresholded wavelet coefficients W over a window of size n × m centered on (a, b), as indicated by equation (1). The values of n and m are dependent upon the current level, and the coordinates (a, b) are the image coordinates (x, y) in the wavelet coefficient space at level l and channel (H, V, D). In our implementation, n = m = 8 at l = 0, n = m = 4 at l = 1, and n = m = 2 at l = 2. Once we have an average H, V, D texture signature, which we assume to be the ground, we search the wavelet coefficient space of the left image with a stepsize of 16 pixels in an area where we expect to find obstacles (i.e., slightly ahead of the rover wheels), looking for points which do not match the average ground texture signature that has been previously computed. These interesting points are marked, as shown in figure 3. A range map computed using a stepsize of 16 will be referred to as a Level 16 range map. Similarly, a range map computed with a stepsize of 8 will be referred to as a Level 8 range map, and so on.
This method of texture-based feature point selection is dependent on the initial ground signatures, in that interesting points are ones whose signatures fall outside of the bandwidth of the safe points. As such, it is an adaptive algorithm that is robust with respect to local changes in the terrain and with respect to lighting conditions. The density of the distribution of feature points is directly tied to the search radius (stepsize) in the wavelet coefficient space. If a dense map is needed for relatively rough terrain, a smaller stepsize is required. We are currently investigating adaptive stepsize generation based on the density of feature points returned by the algorithm (see section 2.1.4).
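As a concrete illustration, the texture operator of equation (1) and the interest test can be sketched as follows. This is an illustrative Python sketch, not the flight implementation (which was written in C); the single-level Haar step, its normalization, and the bandwidth factor k used to decide whether a signature falls outside the ground signatures are assumptions not specified in the text.

```python
import numpy as np

def haar_level(img):
    """One level of an (unnormalized) 2D Haar decomposition.
    Returns the average channel and the H, V, D detail channels."""
    a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
    avg = (a + b + c + d) / 4.0
    H = (a - b + c - d) / 4.0     # horizontal detail
    V = (a + b - c - d) / 4.0     # vertical detail
    D = (a - b - c + d) / 4.0     # diagonal detail
    return avg, H, V, D

def decompose(img, levels=2):
    """Repeatedly transform the average channel, keeping (H, V, D) at each level."""
    out, avg = [], img
    for _ in range(levels + 1):
        avg, H, V, D = haar_level(avg)
        out.append((H, V, D))
    return out

def texture_signature(channels, centers, windows):
    """Equation (1): per-channel texture energy, summed over decomposition levels.
    centers[l] is (a, b) in level-l coordinates; windows[l] is (n, m) for that level
    (e.g. 8, 4, 2 for l = 0, 1, 2 as in the text)."""
    sig = np.zeros(3)
    for (H, V, D), (a, b), (n, m) in zip(channels, centers, windows):
        for ch, W in enumerate((H, V, D)):
            patch = W[a - n // 2:a + n // 2 + 1, b - m // 2:b + m // 2 + 1]
            sig[ch] += np.abs(patch).sum() / (n * m)
    return sig

def is_interesting(sig, ground_sigs, k=2.0):
    """Flag a point whose signature lies outside the spread of the ground signatures.
    The bandwidth factor k is a hypothetical choice; the text does not give one."""
    mu = ground_sigs.mean(axis=0)
    sd = ground_sigs.std(axis=0) + 1e-9
    return bool(np.any(np.abs(sig - mu) > k * sd))
```

A Level 16 scan would evaluate texture_signature on a 16-pixel grid ahead of the wheels and mark the points for which is_interesting returns True.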
2.1.2. Correlation
We then attempt to correlate the interesting points found in the left stereo image with their corresponding points in the right stereo image, using the fundamental matrix (Zhang et al., 1994) to reduce this search from a 2D problem to a 1D problem. Although we are using 120° field-of-view lenses on the stereo camera pair, we have found that the error in using the fundamental matrix to compute a line of correspondence (rather than computing a curve of correspondence, as would be expected for such highly distorted images) is on the order of approximately one pixel (Zhang, 1995). This experimental result is corroborated by the investigation performed in (Zhang, 1996). In correlating, we search along the epipolar line of left image point m1 given by F m1, where F is the 3 × 3 fundamental matrix and m1 is a vector of the form [x y 1.0]^T, where x and y are the column and row pixel locations, respectively, in the left stereo image. We assume that the cameras are reasonably aligned to the same horizontal axis, and thus limit our search vertically to 10 rows. Figure 4 details a left image point of interest and its corresponding epipolar line and search band. To account for possible errors in the epipolar geometry, we also search left and right of each point on the epipolar line by 8 columns. However, if we use cameras which can reasonably be modeled as orthographic, then this error is greatly reduced and, correspondingly, the number of columns to the right or left of the epipolar line we bother searching is also reduced. For our stereo cameras, the F matrix is

F = [ 8.907e-06   1.159e-03   7.025e-02
      1.155e-03   2.457e-04   1.966e-01
      6.521e-02   2.082e-01   9.532e-01 ]

A valid match between a left image point and its corresponding point in the right image is defined by the greatest correlation score found during the search along the right epipolar line that is larger than some threshold. To compute the correlation scores between points in the left and right image, we employ the standard sum of mean-removed products divided by the standard deviations over an n × m box in each image. In this normalized definition of correlation, the maximum correlation value is +1 (identical) and the minimum value is -1 (totally uncorrelated). In our implementation the correlation threshold is chosen to be 0.80, following (Zhang et al., 1994).
Likewise, n and m are chosen to be 7, which has become the de facto standard correlation window size (Fua, 1993). The correlation score, S, between a left image point m1 and a right image point m2 is defined as

S(m_1, m_2) = \frac{ \sum_{i=-n}^{n} \sum_{j=-m}^{m} \left[ I_1(u_1+i,\, v_1+j) - \bar{I}_1(u_1, v_1) \right] \left[ I_2(u_2+i,\, v_2+j) - \bar{I}_2(u_2, v_2) \right] }{ (2n+1)(2m+1)\, \sqrt{ \sigma^2(I_1)\, \sigma^2(I_2) } } \qquad (2)

where \bar{I}_k(u, v) is defined by

\bar{I}_k(u, v) = \frac{ \sum_{i=-n}^{n} \sum_{j=-m}^{m} I_k(u+i,\, v+j) }{ (2n+1)(2m+1) } \qquad (3)

and represents the average image intensity over a window of size (2n+1) × (2m+1) centered on (u, v) in image I_k, where k = 1, 2. The variable σ(I_k), in equation (2), is defined by

\sigma(I_k) = \sqrt{ \frac{ \sum_{i=-n}^{n} \sum_{j=-m}^{m} I_k^2(u+i,\, v+j) }{ (2n+1)(2m+1) } - \bar{I}_k^2(u, v) } \qquad (4)

and is the standard deviation of the image intensity over a window of size (2n+1) × (2m+1) centered on (u, v) in image I_k, where k = 1, 2.
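A direct transcription of equations (2)-(4) is sketched below; the guard for textureless windows and the (2n+1) × (2m+1) parameterization (in which n = m = 3 yields a 7 × 7 window) are our choices for the sketch, not the paper's.

```python
import numpy as np

def correlation_score(I1, I2, p1, p2, n=3, m=3):
    """Normalized correlation of equations (2)-(4) between p1 = (u1, v1) in the left
    image and p2 = (u2, v2) in the right image over a (2n+1) x (2m+1) window.
    u is the column index and v the row index, as in the text."""
    (u1, v1), (u2, v2) = p1, p2
    w1 = I1[v1 - m:v1 + m + 1, u1 - n:u1 + n + 1].astype(float)
    w2 = I2[v2 - m:v2 + m + 1, u2 - n:u2 + n + 1].astype(float)
    N = (2 * n + 1) * (2 * m + 1)
    mean1, mean2 = w1.mean(), w2.mean()                              # equation (3)
    sig1 = np.sqrt(max((w1 ** 2).sum() / N - mean1 ** 2, 0.0))       # equation (4)
    sig2 = np.sqrt(max((w2 ** 2).sum() / N - mean2 ** 2, 0.0))
    if sig1 == 0.0 or sig2 == 0.0:
        return -1.0          # textureless window: treat as uncorrelated (assumption)
    return float(((w1 - mean1) * (w2 - mean2)).sum() / (N * sig1 * sig2))  # equation (2)
```

In the search described above, this score would be evaluated at candidate positions along the epipolar band in the right image, and the best match kept only if it exceeds the 0.80 threshold.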
2.1.3. Subpixel Correlation

Once we have an initial point match between the left and right images, we compute the subpixel location of the matching point in the right image, which is obtained by fitting a parabola to the pixel-level correlation scores in both the row and column directions, as given by (Xiong and Matthies, 1997)

\Delta_{col} = \frac{ S_{col-} - S_{col+} }{ 2\,( S_{col-} + S_{col+} - 2 S_{col} ) }, \qquad \Delta_{row} = \frac{ S_{row-} - S_{row+} }{ 2\,( S_{row-} + S_{row+} - 2 S_{row} ) } \qquad (5)

where S_{row} and S_{col} refer to the correlation score between a point in the left image m1 and the point in the right image m2 at position (col, row), where the maximum pixel-level correlation score has been found. S_{col-} is the score between m1 and the point in the right image one column to the left of m2; S_{row-}, S_{col+}, and S_{row+} are computed similarly. The right image row and column are then set equal to their integer values found during the pixel-level correlation, plus the delta row and column found during the subpixel correlation.
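The parabola fit of equation (5) reduces to a one-line computation per axis, sketched here:

```python
def subpixel_offset(s_minus, s_center, s_plus):
    """Equation (5): fractional offset of the correlation peak given the scores one
    step before, at, and one step after the pixel-level maximum along one axis."""
    denom = 2.0 * (s_minus + s_plus - 2.0 * s_center)
    if denom == 0.0:
        return 0.0           # flat fit: keep the integer location (assumed fallback)
    return (s_minus - s_plus) / denom

# e.g. col = col_int + subpixel_offset(S_col_minus, S_col, S_col_plus)
#      row = row_int + subpixel_offset(S_row_minus, S_row, S_row_plus)
```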
This procedure greatly improves the smoothness of the derived range map, with little extra computation. Using the set of matching image points computed to subpixel precision, we then triangulate to obtain 3D position information, using metrically calibrated camera models that take into account radial lens distortion (Gennery, 1993).

2.1.4. Adaptive Mesh Refinement
Using the computed 3D position information, we can further iteratively improve the density of our range map by returning to those interesting points computed in the previous iteration that are found to be within a threshold danger distance from the rover. Searching the image over an area of Stepsize × Stepsize centered on each interesting point, using a stepsize of Stepsize/2, we find a new set of interesting points based on texture and compute their 3D position information, as can be seen in figure 5. In this fashion we perform what we have dubbed adaptive mesh refinement, being somewhat akin to the process of adaptive mesh refinement in finite element analysis, in that we only bother computing the range map in areas where we care about what we are going to find, i.e., areas that are both texturally interesting and dangerous, or close to the rover. The density of the final range map is user specified. The initial Level 16 range map is computed with a pixel stepsize of 16, which at 2 meters from the rover corresponds to approximately 26 cm between adjacent range map points for our 256 × 243 stereo image pairs. Level 8 is the next step up in density, with Levels 4, 2, and finally 1 becoming increasingly more dense. Figure 6 shows how the range map has been computed adaptively such that the areas corresponding to the ground in texture are not computed. Likewise, it can be seen in the upper left corner that points initially interesting in texture are not further refined, as they are too far away to be either accurate in distance or immediately dangerous to the rover. Figures 7 and 8 offer some comparison between the density of the final range maps generated, between Level 4 and Level 1.

2.1.5. Adaptive Range Map Generation Summary
We can thus summarize the procedure of adaptively generating the range map based on texture and danger distance as follows:
1. Obtain a stereo image pair (256 × 243, 8-bit grayscale).
2. Compute the wavelet transform (2D Haar basis) using the procedure given by (Stollnitz et al., 1996).
3. Set the initial stepsize (Stepsize_0 = 16) for iteration i = 0 and set the convergence variable MIN_I.
4. Search for points of interest using equation (1) between some maximum and minimum row and column locations known to probably contain points of interest.
5. Correlate points using equation (2) and the fundamental matrix.
6. Compute subpixel matches (optional, recommended).
7. Compute 3D locations using triangulation.
8. Perform Adaptive Mesh Refinement (see below) on dangerous points.
9. i = i + 1.
10. If (i > MIN_I) go to step 5, else end.

Adaptive Mesh Refinement. For each point (x, y) preserved as interesting from iteration i:

1. Search for points of interest using equation (1) between x ± Stepsize_i and y ± Stepsize_i, using a stepsize of Stepsize_i/2.
2. Add any points found to the running list of new interesting points.
3. Stepsize_{i+1} = Stepsize_i/2.

Fig. 7. (Range map; axes X, Y, Z in meters.)

Fig. 8. (Range map; axes X, Y, Z in meters.)

2.1.6. Timing and Performance

We have time tested the adaptive range map generation algorithm using the 171 image pairs gathered during our first autonomous navigation exercise. At Level 4, the maximum time required to complete range map generation (AND path selection, as described in section 5.1) was 17 seconds (wall clock time) on a Sparc20, running as a user process, with an average time of approximately 9 seconds. At Level 8, the maximum time was approximately 5 seconds (wall clock time) on the same system, with an average time of approximately 2.5 seconds. The same code timed on a Pentium 166 MHz processor took 7.5 seconds at Level 4, and 1.9 seconds at Level 8, as measured by the profiler under Microsoft Visual C++. Computation time is linear in the number of points computed. The strength of the algorithm lies in the following features:

- Range maps are computed selectively, only for texturally interesting points, which automatically eliminates those areas where no correlation or an incorrect correlation would be found.
- Range maps are computed adaptively, only for points which are close enough to be accurately located and of possible near-term danger to the rover.
- De-warping and registering the image pairs is not required, as the epipolar geometry is used explicitly to find matching points.
- Easily implementable.
- Low memory requirements.
2.2. Vision-Based State Estimation
At each iteration through our navigation algorithm, we compute a range map, which can be used to identify obstacles in the field of view of the rover. We can also register these successive range maps such that they are all in a single frame of reference, effectively computing a global world map of the environment through which the rover is traversing. The process of registering these maps also has the advantage of providing an estimate of the rover's current state (position and orientation relative to its initial frame of reference).
2.2.1. Extraction of Inter-Frame Rotation and Translation

The algorithm we have implemented for the registration of two range maps is described in (Zhang, 1994) and (Zhang, 1996). Here, a brief overview of the algorithm is presented; the interested reader is referred to (Zhang, 1994) and (Zhang, 1996) for additional details. This iterative registration process minimizes a performance criterion, F, given by

F(R, T) = \frac{1}{\sum_{i=1}^{m} p_i} \sum_{i=1}^{m} p_i\, d^2(R x_i + T,\, S) \qquad (6)

which is related to the difference in distance between two clouds of 3D range points. In equation (6), R represents the rotation between the two range maps and T is the translation between the two range maps. Also in equation (6), the variable p_i is 1 if the point x_i is matched to a point on the surface S defined by the second set of 3D range points, and 0 otherwise, accounting for points which are visible only from one frame or the other. The variable m refers to the number of points in the first terrain map. The function d(x, S) is defined by

d(x, S) = \min_{j \in \{1, \ldots, n\}} d(x,\, x_j) \qquad (7)

where d(x, x_j) is the Euclidean distance between two points. We are then able to find a set of closest point matches between the two frames where the distance between the two points in a matched pair is less than an adaptively computed maximum distance tolerance D_max^I. The search for closest points is accomplished through the use of a k-dimensional binary search tree (Zhang, 1994) for speed. In each iteration we adapt the maximum distance tolerance based on the mean μ and standard deviation σ of the distances between the points in each set of matched pairs which satisfied the maximum distance tolerance D_max^{I-1} from the previous iteration:

if μ < D:        D_max^I = μ + 3σ
else if μ < 3D:  D_max^I = μ + 2σ
else if μ < 6D:  D_max^I = μ + σ
else:            D_max^I = ξ

Any points which do not satisfy the newly computed D_max^I distance tolerance are rejected, and the remaining matched pairs are utilized to compute the motion between the two frames. To compute the motion between the two frames, a dual quaternion methodology (Walker et al., 1991; Zhang, 1994) is used to solve optimally for the rotation and translation between the two frames simultaneously. Once the motion between the two frames has been determined, the computed motion is applied to the first frame. We then iterate, finding the next set of closest point matches, updating the point matches using the statistically computed distance tolerance D_max^I, computing the motion between the two frames, and applying that motion to the first set of points, until the change in the computed motion reaches a lower threshold or the algorithm reaches the maximum allowed number of iterations. The variable D referenced above is computed to be the average distance between points and their closest neighbors in a terrain map, and is effectively related to the resolution of the terrain map data. The second variable, ξ, is computed from the point match set distance histogram. The reader is referred to (Zhang, 1994) for a more thorough explanation of D and ξ, and their implications for the convergence of the algorithm.
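The registration loop can be sketched as follows. This is not the authors' implementation: it substitutes an SVD-based least-squares motion estimate for the dual-quaternion solution of (Walker et al., 1991), uses SciPy's k-d tree for the closest-point search, and takes ξ to be the median match distance, whereas the text only says ξ is derived from the distance histogram.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares R, T aligning matched points P to Q (rows are 3D points).
    SVD (Kabsch) form, used here in place of the dual-quaternion method cited in the text."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def register(map1, map2, D, max_iter=50, tol=1e-6):
    """Iterative registration of section 2.2.1: closest-point matching with an
    adaptively computed distance tolerance D_max, then motion re-estimation.
    D is the terrain map resolution; xi is taken here as the median match distance."""
    tree = cKDTree(map2)
    R, T = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        moved = map1 @ R.T + T
        dist, idx = tree.query(moved)
        mu, sigma, xi = dist.mean(), dist.std(), np.median(dist)
        if mu < D:
            d_max = mu + 3.0 * sigma
        elif mu < 3.0 * D:
            d_max = mu + 2.0 * sigma
        elif mu < 6.0 * D:
            d_max = mu + sigma
        else:
            d_max = xi
        keep = dist < d_max
        if keep.sum() < 3:                      # too few matches to estimate motion
            break
        R_new, T_new = best_rigid_transform(map1[keep], map2[idx[keep]])
        done = np.linalg.norm(R_new - R) < tol and np.linalg.norm(T_new - T) < tol
        R, T = R_new, T_new
        if done:
            break
    return R, T
```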
2.2.2. Experimental Validation
We now present experimental validation of the algorithm described above, using terrain maps generated by the adaptive range map generation algorithm presented in sections 2.1.1-2.1.6. Figure 9 shows a terrain map generated at one position by the rover. Figure 10 shows a terrain map generated by the rover after a nominal 10° counterclockwise rotation. Applying the range map registration algorithm to the above two range
maps, with the following initial rotation matrix R and translation vector T:

R_initial = [ 0.984807  0.173648  0.000000
              0.173648  0.984807  0.000000
              0.000000  0.000000  1.000000 ],   T_initial = [ 0.000000, 0.000000, 0.000000 ]^T

the final computed rotation matrix R and translation vector T are:

R_final = [ 0.989621  0.135732  0.047190
            0.136498  0.990550  0.013388
            0.044927  0.019690  0.998796 ],   T_final = [ 0.044082, 0.026828, 0.013626 ]^T

Fig. 9. (Terrain map at the initial rover position.)

Fig. 10. (Terrain map after a nominal 10° counterclockwise rotation.)

The vehicle rotation computed from R_final corresponds to an 8.26° counterclockwise rotation, which agrees well with the observed motion of the rover and with the commanded rotation of 10.0° counterclockwise. Figure 11 shows the initial registration between the two terrain maps, after the application of R_initial and T_initial to terrain map 1, and the final registration between the two terrain maps, after the application of R_final and T_final to terrain map 1.

Fig. 11. Initial registration vs. final registration; light points: terrain map 1, dark points: terrain map 2; computed rotation: 8.26 degrees counterclockwise.

Recent experimental trials indicate that the initial rotation matrix and translation vector can be in error by as much as 50% before an incorrect minimum is reached. Also, the accuracy of the map registration technique is heavily dependent on the quality of the range map, which in turn is a function of the stereo camera calibration. For the field of view of the stereo camera pair, range maps are accurate to within 1-2 centimeters. A formal error analysis will be conducted to determine how this range uncertainty propagates into the final rotation matrix and translation vector.

3. State Estimation
Due to various factors, the motion estimate that is produced by the map registration technique presented in the previous section may be in error. These factors include errors in the camera calibration from which the range maps are generated and the finite precision with which the performance criterion in equation (6) can be computed. Therefore, the vision estimate provides one means for establishing the position and orientation of the rover throughout its traverse. Likewise, the nominal wheel odometry can be utilized to estimate the rover's location. However, like the vision-based motion estimates, the dead-reckoned rover location estimates will be in error due to numerous factors, including wheel slip in soft terrain, incorrectly-modeled rover kinematics, and inaccuracies due to traverses over obstacles by the rover's rocker-bogie suspension. This section describes the application of an extended Kalman filter methodology (Baumgartner and Skaar, 1994) to the problem of fusing together the rover state estimates as produced by the vision system and dead-reckoning.

3.1. Dead-Reckoned Estimates
The nominal estimates of the rover motion due to wheel odometry alone (e.g., dead-reckoning) can be described
by the following state equations:

\frac{d x(\beta)}{d\beta} = f(x(\beta), u) + w(\beta) \qquad (8)

where x = [X Y θ]^T represents the in-plane position and orientation of the rover relative to some global reference frame, R is the nominal wheel radius, and B is half the distance between the wheels. The independent variable, β, is defined as

\beta = \frac{\beta_r + \beta_l}{2} \qquad (9)

where dβ_r and dβ_l are the differential wheel rotations of the right and left wheels, respectively. Finally, w(β) represents a Gaussian-distributed, white noise process known as the process noise associated with the state equations. The process noise accounts for random errors such as wheel slippage. For a skid-steered rover, the state equations given in equation (8) are valid for all maneuvers except turn-in-place, for which the independent variable, β, goes to zero and the state equations become singular. To handle this special case, β is re-defined as

\beta = \frac{\beta_r - \beta_l}{2} \qquad (11)

where β_r and β_l are the absolute wheel rotations of the right and left wheels, respectively. This results in the following state equations for a turn-in-place maneuver:

\frac{d x(\beta)}{d\beta} = \begin{bmatrix} dX(\beta)/d\beta \\ dY(\beta)/d\beta \\ d\theta(\beta)/d\beta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ R/B \end{bmatrix} + w(\beta) = f(x(\beta)) + w(\beta) \qquad (12)

In either case, with the real-time sensing of the wheel rotations via rotary encoders, the above state equations can be numerically integrated (e.g., propagated) in real time, thereby producing the dead-reckoned estimates of the rover motion. Likewise, the estimation error covariance matrix, P(β), associated with the Kalman filter is propagated in real time by the following:

\frac{d P(\beta)}{d\beta} = F(\beta) P(\beta) + P(\beta) F(\beta)^T + Q \qquad (13)

where F(β) is the Jacobian of the state equations and Q is the covariance matrix associated with the process noise. The process noise covariance matrix is assumed to be a diagonal matrix, i.e., Q = diag[Q_XX, Q_YY, Q_θθ], and, qualitatively, represents the confidence placed in the dead-reckoned estimates of the state.

3.2. Kalman Filter Update via Vision

As presented in section 2, the inter-frame rotation and translation of the rover can be determined (to within some level of accuracy) by registering two successive vision-based range maps. As an output from this map registration technique, we will consider the in-plane rotation and translation of the rover as a measurement of the rover state that will be fused together with the dead-reckoned state estimates via the Kalman filter approach. Therefore, the measurement of the rover state is

z(\beta_i) = \begin{bmatrix} X(\beta_{i-1}) + \Delta X(\beta_i) \\ Y(\beta_{i-1}) + \Delta Y(\beta_i) \\ \theta(\beta_{i-1}) + \Delta\theta(\beta_i) \end{bmatrix} + v(\beta_i) = h(x(\beta_i)) + v(\beta_i) \qquad (14)

where ΔX, ΔY, and Δθ are computed directly from R and T as detailed in section 2.2. Also, in equation (14), β_i represents the value of β at which a stereo pair is acquired and the map registration algorithm is completed, and β_{i-1} is the value of β at which the previous vision-based update to the Kalman filter was computed. Also, v(β_i) represents a Gaussian-distributed, white noise process known as the measurement noise. The update to the rover state is given by

\hat{x}(\beta_i) = \hat{x}(\beta_i | \beta_{i-1}) + K(\beta_i)\left[ z(\beta_i) - h(\hat{x}(\beta_i | \beta_{i-1})) \right] \qquad (15)

where the Kalman gain, K(β_i), is

K(\beta_i) = P(\beta_i | \beta_{i-1}) H^T \left[ H P(\beta_i | \beta_{i-1}) H^T + R \right]^{-1} \qquad (16)

and where x̂ refers to the Kalman filtered estimate of the rover state. In equation (16), H is the Jacobian of the measurement equation given by (14), which for this case is the 3 × 3 identity matrix, I. Also, the measurement noise covariance matrix is denoted by R and is assumed to be a diagonal matrix, i.e., R = diag[R_XX, R_YY, R_θθ], and, qualitatively, represents the confidence placed in the vision-based estimates of the state. The estimation error covariance matrix is updated as follows:

P(\beta_i) = \left( I - K(\beta_i) H \right) P(\beta_i | \beta_{i-1}) \qquad (17)

where, in the above equation and in equation (16), P(β_i | β_{i-1}) represents the propagated estimation error covariance matrix produced by the integration of equation (13). Likewise, in equation (15), x̂(β_i | β_{i-1}) represents the propagated state estimate produced by integrating forward the state equations given either by equation (8) or by equation (12). The performance of this state estimator is presented in section 5. As with any Kalman filter application, the performance of the filter is greatly affected by the assumed process noise covariance matrix, Q, and the measurement noise covariance matrix, R. This is especially true for the rover case where, for example, the type of terrain (e.g., soft soil, hard-pack ground, etc.) can change the quality of the state estimates produced by dead-reckoning. Therefore, in the results presented in this paper, the state-estimate gains are defined as

G_X = \frac{R_{XX}}{Q_{XX}}, \qquad G_Y = \frac{R_{YY}}{Q_{YY}}, \qquad G_\theta = \frac{R_{\theta\theta}}{Q_{\theta\theta}}

so that the relative weight given to the dead-reckoned estimates with respect to the vision-based estimates can be varied.
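The propagation and update steps of section 3 can be sketched as below. The explicit differential-drive form used for f() is an assumption: the paper states equation (8) only generically, so the wheel radius R_w, the half-wheelbase B, and the heading-rate model stand in for the authors' kinematics, and the covariance propagation is a simple discretization of equation (13).

```python
import numpy as np

def propagate(x, P, d_beta_r, d_beta_l, R_w, B, Q):
    """Dead-reckoning propagation between vision updates. The differential-drive
    form of f() is an assumed stand-in for equation (8); Q is the process noise
    covariance of equation (13), applied here in discretized form."""
    X, Y, th = x
    d_beta = 0.5 * (d_beta_r + d_beta_l)            # equation (9)
    ds = R_w * d_beta                               # distance rolled
    dth = R_w * (d_beta_r - d_beta_l) / (2.0 * B)   # heading change (assumed model)
    x_new = np.array([X + ds * np.cos(th), Y + ds * np.sin(th), th + dth])
    # Jacobian F of the state equations, discretized over d_beta (cf. equation (13))
    F = np.array([[1.0, 0.0, -ds * np.sin(th)],
                  [0.0, 1.0,  ds * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_new = F @ P @ F.T + Q * abs(d_beta)
    return x_new, P_new

def vision_update(x_pred, P_pred, z, R_meas):
    """Kalman update of equations (15)-(17); H is the identity since the vision
    measurement is the full in-plane pose (equation (14))."""
    H = np.eye(3)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R_meas)    # equation (16)
    x_upd = x_pred + K @ (z - H @ x_pred)                          # equation (15)
    P_upd = (np.eye(3) - K @ H) @ P_pred                           # equation (17)
    return x_upd, P_upd
```

In this parameterization, the diagonal entries of R_meas relative to those of Q play the role of the gains G_X, G_Y, and G_θ defined above.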
4. Experimental System

4.1.
The algorithms described in this paper were experimentally verified using NASA/JPL's Lightweight Survivable Rover (LSR) platform, which utilizes a six-wheeled, skid-steered, rocker-bogie rover design. MicroArm-1, mounted on LSR, is a 4 degree-of-freedom robotic manipulator arm, 0.7 m at full extent, driven by 1 inch-pound torque capability piezoelectric ultrasonic motor actuators through 200:1 harmonic gear reductions. The powered multifunction end effector has available a close-distance color imaging camera, a rotary abrading tool, and a clam-shell scoop/gripper. LSR is pictured in figure 12 and MicroArm-1 is pictured in figure 13 (Schenker et al., 1997).

4.2. Sensor Suite
LSR is outfitted with a black and white stereo CCD pair with a 10 cm baseline, mounted directly beneath MicroArm-1. Each camera is equipped with a 120° field-of-view lens. The stereo pair as mounted on LSR may be seen in figure 14. This camera pair was shown several calibration fixtures in 34 different positions to provide input data for a least-squares calibration technique which attempts to generate a camera model compensating for radial lens distortion (Gennery, 1993). Given the large field of view of our cameras, radial lens distortion was quite significant. Other than the stereo camera pair (or dead-reckoning), LSR has no method by which it may attempt to determine its change in rotation or translation, as it is not currently equipped with wheel encoders or gyros/accelerometers. However, the nominal rover translation and rotation of LSR is known via the bang-bang control of the rover wheels under a no-wheel-slip assumption.

4.3. Computing Environment
The computing environment for the LSR platform is currently a hybrid Sun Sparc-20 / Silicon Graphics Indigo-2 / VMEbus architecture. Computation is performed on the Sparc-20 using C and Java, with low-level commands being sent to the LSR vehicle via the VMEbus using socket connections. Frame grabbing is accomplished using the Indigo-2 video hardware. A Java-based graphical user interface is provided to remotely initiate autonomous routines, as well as to display images and data collected by the rover to the remote scientist/operator. A block diagram of the current computing environment for the LSR vehicle is pictured in figure 15.

5. Results
We now present the results for the two navigation modes in which our rover currently may operate: reactive obstacle avoidance, and potential flow based path planning for navigation to a user designated goal. All of the results presented in this section were carried out within the rover pit in the Jet Propulsion Laboratory's Planetary Robotics Laboratory. The 9.1 meter × 4.5 meter rover pit is filled with a soft sand material and is populated with a variety of rock types and sizes. Throughout each run, the rover experiences significant
wheel slip, including the occasional wheel hang-up on an obstacle. The rover also traverses over small rocks whose height is determined to be less than one rover wheel diameter. This environment is very indicative of the type of terrain that would be encountered by a rover during Martian surface operations.

5.1. Reactive Obstacle Avoidance

5.1.1.

Once we have adaptively computed the range map, we make use of the terrain information to perform obstacle avoidance. To do so, the sum of the number of terrain points encountered when sweeping a box having an outline of the frontal area of the rover through 1.5 meters along the Y axis (directly ahead of the rover) is determined. We choose 1.5 meters as our lookahead distance because of the accuracy limitations imposed by low resolution images taken through 120° field-of-view lenses. The path having both the least number of encountered points and also the least amount of required rotation is chosen. If the absolute value of the rotation requested is less than 5 degrees, then the rover proceeds forward for 5 seconds (approximately 20 cm). If the rotation is greater than +5 degrees, the rover rotates clockwise for 5 seconds; otherwise the rover rotates counterclockwise for the same period of time, which corresponds to a rotation of approximately 10 degrees. We find that this technique gives us adequate obstacle avoidance performance. Using only current information and the above strategy may lead to oscillatory behavior during obstacle avoidance. The rover thus maintains a history of its past executed motions and initiates an oscillation-breaking maneuver (a 20 degree rotation) if it detects oscillatory behavior.
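The path selection step just described can be sketched as follows; the candidate heading set and the corridor half-width are hypothetical parameters, since the text specifies only the 1.5 m lookahead and the 5° / 10° turn thresholds.

```python
import numpy as np

def choose_heading(points, candidates_deg, half_width=0.35, lookahead=1.5):
    """Sketch of the path selection described above: for each candidate heading,
    count range points inside a rover-width corridor swept 1.5 m ahead, and pick
    the candidate with the fewest points, breaking ties by least rotation.
    points is an N x 2 array of (X, Y) range points in the rover frame, with +Y
    directly ahead; positive angles are counterclockwise (a convention chosen here);
    half_width is a hypothetical half-width of the rover's frontal outline."""
    best = None
    for deg in sorted(candidates_deg, key=abs):      # small rotations win ties
        a = np.radians(deg)
        # express the range points in the candidate heading's frame
        c, s = np.cos(-a), np.sin(-a)
        x = c * points[:, 0] - s * points[:, 1]
        y = s * points[:, 0] + c * points[:, 1]
        hits = np.count_nonzero((np.abs(x) < half_width) & (y > 0.0) & (y < lookahead))
        if best is None or hits < best[0]:
            best = (hits, deg)
    return best[1]

# A caller would then drive forward if |chosen| < 5 degrees, otherwise rotate
# roughly 10 degrees toward the chosen heading, as described in the text.
```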
5.1.2. State Estimate During Reactive Obstacle Avoidance

As can be seen in figure 16, the Kalman filtered vehicle state estimate is quite good after approximately a 6 meter traverse. Also in figure 16, the path labeled vision refers to the rover state produced by the vision-based estimate of rover inter-frame rotation and translation alone, without knowledge of wheel odometry. The final error between the filtered state and the ground truth is approximately 0.22 meters, with a maximum absolute error over the whole traverse of 0.64 meters, and a mean of 0.38 meters. The vehicle made 79 iterations through the reactive obstacle avoidance algorithm, making 20 right turns (10° nominal), 11 left turns (10° nominal), and 48 forward motions (20 cm nominal). Figure 17 details the position error of the Kalman filtered estimate with respect to the ground truth. The gains used in the Kalman filter were G_X = 1.9, G_Y = 1.9, G_θ = 1.2. The ground truth in all trials is computed by selecting the cross-shaped target on the top of the rover in the view provided by a 120° field-of-view camera mounted on the ceiling of the rover workspace. This so-called sky camera has been calibrated by presenting it with a calibration fixture in multiple locations covering the floor of the workspace, and then fitting a third degree polynomial to the calibration points, relating pixel location in the image to ground location. A separate curve was fitted for the inverse relationship. The ground truth is accurate to within approximately 5 cm.
Fig. 16. (State estimates during the reactive obstacle avoidance run.)

Fig. 17. Position error (m); Kalman filtered vs. dead reckoned; reactive run.

5.2. Real-Time Path Planning to a User Designated Goal

As an alternative to simply performing reactive obstacle avoidance, we can also navigate to a user designated goal and avoid (either autonomously located or user designated) obstacles along the way. To do this, we have implemented a real-time path planner based on potential flow methodologies (Feder and Slotine, 1997). During each iteration of the navigation algorithm, this path planner computes the path from the rover's current estimated position to the desired goal. Currently, we limit path planning to 2D and obstacles to cylinders or ellipses at some specified angle, although arbitrarily shaped objects may be modeled as well.

Here we present a brief treatment of the potential flow based path planning methodology described more fully in (Feder and Slotine, 1997); the interested reader is referred there for more details. If we consider the rover workspace as the complex plane, the potential for a plane flow at an angle α with the real axis is given by

w = U z e^{-i\alpha}. \qquad (18)

The potential for a circular cylinder centered at z_0 = X_0 + iY_0 with radius r in a plane flow with velocity U at an angle α with the real axis is given by

w = U \left( (z - z_0) e^{-i\alpha} + \frac{r^2 e^{i\alpha}}{z - z_0} \right). \qquad (19)

Finally, the flow around an ellipse with major axis a and minor axis b may be obtained using a conformal mapping as given by

z = \zeta + \frac{r^2}{4\zeta}. \qquad (20)

Applying the conformal mapping given by equation (20) to equation (19), we obtain the equation for a cylinder in the ζ-plane, as may be seen in the following:

w = U \left( \zeta e^{-i\alpha} + \frac{(a+b)^2}{4\zeta} e^{i\alpha} \right). \qquad (21)

If we then solve for ζ, we obtain the following equation:

\zeta = \frac{1}{2}\left( z + \sqrt{z^2 - r^2} \right), \qquad r^2 = a^2 - b^2 \qquad (22)

whose solution describes the velocity field around an ellipse.
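Pulling equations (18)-(22) together with the velocity relation used later in section 5.2.3 (equation (24)), a streamline query at the rover's position can be sketched as follows. The aiming of the plane flow at the goal, the treatment of each obstacle as an additive disturbance on that flow, and the numerical derivative are our reading of the construction in sections 5.2.2-5.2.3 rather than the authors' code; ellipses are assumed axis-aligned for brevity.

```python
import numpy as np

def ellipse_potential(z, zc, a, b, U, alpha):
    """Equations (19)-(22) for an axis-aligned ellipse centered at zc with semi-axes
    a >= b, in a uniform flow of speed U at angle alpha (complex z = x + iy)."""
    zp = z - zc
    r2 = a * a - b * b                                   # equation (22)
    root = np.sqrt(zp * zp - r2 + 0j)
    zeta = 0.5 * (zp + root)
    if abs(zeta) < 0.5 * (a + b):                        # pick the exterior branch
        zeta = 0.5 * (zp - root)
    return U * (zeta * np.exp(-1j * alpha)
                + (a + b) ** 2 / 4.0 * np.exp(1j * alpha) / zeta)   # equation (21)

def streamline_direction(z, goal, obstacles, U=1.0, h=1e-5):
    """Flow direction at rover position z: plane flow aimed at the goal (eq. (18))
    plus each obstacle's disturbance term; (u, v) follow equation (24) via a
    numerical derivative. obstacles is a list of (zc, a, b) tuples."""
    alpha = np.angle(goal - z)                           # aim the plane flow at the goal
    def w(zz):
        total = U * zz * np.exp(-1j * alpha)             # equation (18)
        for zc, a, b in obstacles:
            total += ellipse_potential(zz, zc, a, b, U, alpha) \
                     - U * (zz - zc) * np.exp(-1j * alpha)
        return total
    dwdz = (w(z + h) - w(z - h)) / (2.0 * h)
    u, v = dwdz.real, -dwdz.imag                         # equation (24)
    return np.degrees(np.arctan2(v, u))                  # heading the rover should face
```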
5.2.2. Potential Construction

Here, X_f and Y_f represent the (X, Y) goal position, and X and Y are the rover's current position. An approximate harmonic potential solution to the continuous potential flow is achieved by the superposition of
the closed form potentials for the plane flow and all of the individual obstacles added to the flow. More details on the construction of this harmonic potential are provided in (Feder and Slotine, 1997).

5.2.3. Rover Motion

Once the harmonic potential is constructed, the u and v components of the flow can be found using the following:

u = \Re\left( \frac{dw}{dz} \right), \qquad v = -\Im\left( \frac{dw}{dz} \right). \qquad (24)

The u and v components of the flow define a vector in the rover workspace. The potential flow based path planning methodology was originally developed for an omni-directional rover; however, our rover may only move in the direction which it is facing. Thus, we must align the pointing angle of the rover with the current streamline vector. Once the rover has rotated to be nominally aligned with the streamline by making 10° nominal rotations (in our implementation, the pointing error must be less than 5°), it may proceed forward, in steps of 20 cm nominal. Currently, obstacle locations are defined a priori by the user in the global reference frame of the sky camera. However, given a manipulator-mounted or mast-mounted stereo pair on a rover platform, all obstacles could be located either by the user, or located automatically by processing the terrain map generated by this stereo pair, all in the reference frame of the rover. We plan to pursue this avenue in the future. As a point of note, it is explicitly the ability of the real-time path planner to deal with dynamic environments which allows its application to the context of rover navigation. If all obstacle locations are not known a priori, then incremental discovery of obstacles and their addition to the world potential flow requires a path planner capable of dealing with dynamic environments. Similarly, as the commanded motion from the path planner may not actually be the exact motion achieved, due to wheel slip and numerous other factors, we require a path planner that is capable of dealing with a dynamic environment. This can be seen by noting that if the rover were assumed to have achieved the commanded position exactly, then the obstacles must have moved so as to be in the same relative position with respect to the rover. Thus, the rover must be capable of replanning its path to the goal in real time in a dynamic environment, or in response to its current best position and orientation estimate.

5.2.4. State Estimates During Potential Flow-Based Path Planning

Fig. 18. State estimates during potential flow-based path planning run: unobstructed goal.

Fig. 19. State estimates during potential flow-based path planning: obstructed goal.

Figure 18 details the traverse of the LSR vehicle to a nearly directly accessible goal, through soft soil-simulant and over medium sized rocks. In this figure, the obstacles are user-selected and are bounded by the
ellipses shown. The final error between the filtered state and the ground truth was 0.63 meters, with a maximum absolute error over the whole traverse of 0.76 meters, and a mean of 0.40 meters. The vehicle made 125 iterations through the potential flow based navigation algorithm, making 41 right turns, 44 left turns, and 40 forward motions. The gains used in the Kalman filter for this first run were G_X = 1.9, G_Y = 1.9, G_θ = 1.2. Figure 19 details the traverse of the LSR vehicle around an obstruction to a goal. Again, in this figure, the obstacles are user-selected and bounded by the ellipses shown. The final error between the filtered state and the ground truth was 0.67 meters, with a maximum absolute error over the whole run of 1.09 meters and a mean of 0.59 meters. The vehicle made 69 motions, making 8 right turns, 22 left turns, and 39 forward motions. The gains used in the Kalman filter for the second run were G_X = 5.0, G_Y = 5.0, G_θ = 2.5. Table 1 summarizes our results for all of the experimental trials presented in this section.

Table 1. State estimation results for all experimental trials.

Trial                                 Kalman Filtered Max/Mean    Dead Reckoned Max/Mean
Reactive obstacle avoidance           0.68/0.38 m                 2.83/1.34 m
Path planning, unobstructed goal      0.76/0.40 m                 2.99/1.38 m
Path planning, obstructed goal        1.09/0.59 m                 1.34/0.70 m

6. Conclusions
In summary, this paper has shown that the fusion of multiple estimates of a rover's state via an extended Kalman filter framework can have a profound impact on the quality of the rover's state estimate as a whole, in some cases more than doubling the accuracy of the state estimate with respect to dead-reckoned estimates. Additionally, the paper has presented a viable methodology for the texture-based construction of semi-sparse range maps given computational resource restrictions, especially when made adaptive to the current ground texture and the distances to possible obstacles. Finally, this paper has successfully demonstrated the use of real-time, potential flow based path planning methodologies as applied to complex, dynamic environments, with very positive results. The described algorithms are currently being implemented on NASA's Sample Return Rover (SRR) platform (shown in figure 20), which is a 4-wheeled, skid-steered, split-differential rover design capable of a sustained top ground speed of approximately 35 cm/sec. This rover is dedicated to the development of advanced planetary rover technologies. SRR is equipped with wheel encoders, an INS system, and a full vision system. This rover will serve as the platform for future investigations regarding the development of state estimators that fuse INS measurements, vision measurements, and wheel odometry using the extended Kalman filter framework described in this paper.

Acknowledgments

The authors would like to acknowledge Hrand Aghazarian, Jet Propulsion Laboratory, for many insightful discussions and for his assistance in implementing the described algorithms on the SRR platform. Additionally, we would like to acknowledge Hans Jacob Feder, Massachusetts Institute of Technology, for the original version of the potential flow code, and Todd Litwin, Jet Propulsion Laboratory, for the original version of the stereo triangulation and camera calibration code. This research was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
References
Barshan, B. and Durrant-Whyte, H. F. 1995. Inertial Navigation Systems for Mobile Robots, IEEE Transactions on Robotics and Automation, Vol. 11, No. 3, pp. 328-342.
Baumgartner, E. T. and Skaar, S. B. 1994. An Autonomous Vision-Based Mobile Robot, IEEE Transactions on Automatic Control, Vol. 39, No. 3, pp. 493-502.
Barnard, S. T. and Fischler, M. A. 1982. Computational Stereo, Computing Surveys, 14(4):553-572.
Betgé-Brezetz, S., Chatila, R., and Devy, M. 1995. Object-based Modelling and Localization in Natural Environments, IEEE International Conference on Robotics and Automation, pp. 2920-2927.
Connolly, C. I., Burns, J. B., and Weiss, R. 1990. Path Planning Using Laplace's Equation, Proc. IEEE International Conference on Robotics and Automation, pp. 2101-2106.
Espinal, F., Huntsberger, T., Jawerth, B., and Kubota, T. 1998. Wavelet-Based Fractal Signature Analysis for Automatic Target Recognition, Optical Engineering, Vol. 37, No. 1, pp. 166-174.
Feder, H. J. S. and Slotine, J.-J. E. 1997. Real-Time Path Planning Using Harmonic Potentials in Dynamic Environments, IEEE International Conference on Robotics and Automation (ICRA), Vol. 1, pp. 874-881.
Fua, P. 1993. A parallel stereo algorithm that produces dense depth maps and preserves image features, Machine Vision and Applications, 6(1).
Gennery, D. B. 1989. Visual terrain matching for a Mars rover, Proc. International Conference on Computer Vision and Pattern Recognition, San Diego, CA, pp. 483-491.
Gennery, D. B. 1993. Least-Squares Camera Calibration Including Lens Distortion and Automatic Editing of Calibration Points, Calibration and Orientation of Cameras in Computer Vision, A. Grün and T. Huang, editors, Springer-Verlag.
Hebert, M., Caillas, C., Krotkov, E., Kweon, I. S., and Kanade, T. 1989. Terrain Mapping for a Roving Planetary Explorer, Proc. IEEE International Conference on Robotics and Automation, pp. 997-1002.
Khatib, O. 1986. Real-Time Obstacle Avoidance for Manipulators and Mobile Robots, The International Journal of Robotics Research, Vol. 5, No. 1, pp. 90-98.
Kim, J.-O. and Khosla, P. 1991. Real-Time Obstacle Avoidance Using Harmonic Potential Functions, IEEE International Conference on Robotics and Automation, Vol. 1, pp. 790-796.
Lacroix, S., Fillatreau, P., and Nashashibi, F. 1993. Perception for Autonomous Navigation in a Natural Environment, Workshop on Computer Vision for Space Applications, Antibes, France, Sept. 22-24.
Laine, A. and Fan, J. 1993. Texture Classification by Wavelet Packet Signatures, IEEE Trans. PAMI, Vol. 15, No. 11, pp. 1186-1191.
Matthies, L., Gat, E., Harrison, R., Wilcox, B., Volpe, R., and Litwin, T. 1995. Mars Microrover Navigation: Performance Evaluation and Enhancement, Autonomous Robots Journal, special issue on Autonomous Vehicles for Planetary Exploration, Vol. 2(4), pp. 291-312.
Matthies, L. 1987. Dynamic Stereo Vision, Ph.D. Thesis, Computer Science Department, Carnegie Mellon University.
Matthies, L., Olson, C., Tharp, G., and Laubach, S. 1997. Visual Localization Methods for Mars Rovers Using Lander, Rover, and Descent Imagery, International Symposium on Artificial Intelligence, Robotics, and Automation in Space (i-SAIRAS), Tokyo, Japan, pp. 413-418.
Olson, C. 1997. Mobile Robot Self-Localization by Iconic Matching of Range Maps, Proc. of the 8th International Conference on Advanced Robotics, pp. 447-452.
Quinlan, S. and Khatib, O. 1993. Elastic Bands: Connecting Path Planning and Control, Proc. IEEE International Conference on Robotics and Automation, Vol. 2, pp. 802-808.
Roberts, B. and Bhanu, B. 1992. Inertial Navigation Sensor Integrated Motion Analysis for Autonomous Vehicle Navigation, Journal of Robotic Systems, Special Issue on Passive Ranging for Robotic Systems, Vol. 9, No. 6, pp. 817-842.
Roumeliotis, S. and Bekey, G. 1997. An Extended Kalman Filter for frequent local and infrequent global sensor data fusion, Proc. SPIE, Vol. 3209, pp. 11-22.
Schenker, P. S., Baumgartner, E. T., Lee, S., Aghazarian, H., Garrett, M. S., Lindemann, R. A., Brown, D. K., Bar-Cohen, Y., Lih, S., Joffe, B., Kim, S. S., Hoffman, B. D., and Huntsberger, T. 1997. Dexterous robotic sampling for Mars in-situ science, Intelligent Robotics and Computer Vision XVI, Proc. SPIE 3208, Pittsburgh, PA, Oct. 14-17, pp. 170-185.
Stollnitz, E. J., DeRose, T. D., and Salesin, D. H. 1996. Wavelets for Computer Graphics: Theory and Applications, Morgan Kaufmann Publishers, Inc., San Francisco, CA.
Walker, M. W., Shao, L., and Volz, R. A. 1991. Estimating 3-D location parameters using dual number quaternions, CVGIP: Image Understanding, 54(3):358-367.
Xiong, Y. and Matthies, L. 1997. Error Analysis of a Real-Time Stereo System, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1087-1093.
Zhang, Z. 1994. Iterative Point Matching for Registration of Free-Form Curves and Surfaces, International Journal of Computer Vision, 13:2, 119-152.
Zhang, Z., Deriche, R., Faugeras, O., and Luong, Q.-T. 1995. A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry, Artificial Intelligence Journal, Vol. 78, pp. 87-119.
Zhang, Z. 1996. A Stereovision System for a Planetary Rover: Calibration, Correlation, Registration, and Fusion, Proc. IEEE Workshop on Planetary Rover Technology and Systems, Minneapolis, Minnesota.
Zhang, Z. 1996. On the Epipolar Geometry Between Two Images With Lens Distortion, Proc. Int'l Conf. Pattern Recognition (ICPR), Vol. I, Vienna, pp. 407-411.
Brian D. Hoffman received a bachelor's degree, 1997, and a master's degree, 1998, in Mechanical Engineering from the Massachusetts Institute of Technology in Cambridge, MA. His thesis titles were Scene Analysis Using Active Vision and Increased Autonomy for Planetary Rover Operations, respectively. He is currently employed by Silicon Graphics Computer Systems, in Mountain View, CA.
Eric T. Baumgartner is a group leader in the Mechanical and Robotics Technology Group and a senior member of engineering staff in the Science and Technology Development Section at NASA's Jet Propulsion Laboratory in Pasadena, CA. At JPL, he serves in a systems engineering capacity for the development of advanced planetary rovers and also contributes to technology developments in the areas of robotic sensing and control. Prior to his tenure at JPL, he was an Assistant Professor in the Mechanical Engineering-Engineering Mechanics Department at Michigan Technological University in Houghton, MI. He has published over 25 articles in the areas of robotics, controls, and state estimation and is active in SPIE and ASME. He received his B.S. degree in Aerospace Engineering from the University of Notre Dame in 1988, the M.S. degree in Aerospace Engineering from the University of Cincinnati in 1990, and a Ph.D. in Mechanical Engineering from the University of Notre Dame in 1993.
Terry Huntsberger is a member of engineering staff of the Mechanical and Robotics Technologies Group in the Science and Technology Development Section at the Jet Propulsion Laboratory in Pasadena, CA. Previously, he was an Associate Professor and Director of the Intelligent Systems Laboratory in the Department of Computer Science at the University of South Carolina. His research interests include rover control systems, image compression, neural networks, wavelets, and computer vision. He has published over 80 technical papers in these and related areas. He received a BA in Physics from Hofstra University (1971), an MA in Physics from Hofstra University (1973), and a Ph.D. in Physics from the University of South Carolina (1978). He is a member of SPIE, ACM, IEEE Computer Society, and INNS.
Paul S. Schenker is Supervisor, Mechanical and Robotics Technologies Group, Mechanical Systems Engineering and Research Division, Jet Propulsion Laboratory. His current work emphasizes planetary rover development and robotic sampling technologies; he is task manager for NASA/JPL's Sample Return Rover (SRR) and Exploration Technology Rover Design, Integration, and Field Test R&D efforts (ET Rover advanced rover prototypes supporting the NASA Mars '01-'05 missions); he also recently led JPL's Planetary Dexterous Manipulators R&D under NASA funding, producing therein a robotic sampling concept which flies on the Mars '98 Surveyor mission. Schenker's other recent robotics R&D activities include a role as founding co-PI for NASA/MicroDexterity Systems' development of a Robot Assisted Microsurgery high-dexterity teleoperative workstation and his longer standing involvement in various telerobotic technology and system developments for remote orbital servicing and autonomous robotic exploration. Schenker is a member of AAAI, IEEE, and SPIE; he is a Fellow of the latter and its '98 Vice-President / '99 President-Elect. Schenker is widely active in external technical meetings and publications in the areas of robotics and machine perception, having contributed about 100 archival articles to the same. Dr. Schenker received his B.S. in Engineering Physics from Cornell University, and completed his M.S., Ph.D., and postdoctoral studies in Electrical Engineering at Purdue University. Prior to joining the Jet Propulsion Laboratory in 1984, Schenker was a member of the Electrical Sciences faculty, Brown University, and Research Section Chief of Signal and Image Processing, Honeywell Corporation.