Interactive 3D Scanning Without Tracking
Course project for 3D Photography and Geometry Processing (EN 292 s34)
Presented at SIBGRAPI 2007
Using inexpensive and readily available materials (a calibrated pair of cameras and a laser line projector), we implement a 3D laser range scanner that requires no tracking. We introduce a planarity constraint for reconstruction, based on the fact that all points observed on a laser line in an image lie on the same plane of laser light in 3D. This plane of laser light linearly parametrizes the homography between a pair of images of the same laser line, and this homography can be recovered from point correspondences derived from epipolar geometry. Points visible from multiple views can be reconstructed via triangulation and projected onto this plane, while points visible in only one view can be recovered via ray-plane intersection. Using the planarity constraint during reconstruction increases the system's accuracy, and using the planes for reconstruction increases the number of points recovered. Additionally, we construct an interactive scanning environment in which incremental reconstruction provides continuous visual feedback. Reconstructions with this method are shown to have higher accuracy than standard triangulation.
This section gives a summary of the main ideas of this project. The important figures from the paper are reproduced and explained below. Please download the paper for details.
Figure 1. The scanning system, with a frog model in place
Figure 1 shows the physical setup of the scanning system. In this configuration we use two Point Grey Flea 2 1394b cameras, each pointed at the model to be scanned (here, a frog). The model sits on a simple calibration board, which is used to compute the extrinsic calibration parameters of the cameras. This calibration needs to be performed only once, after the cameras are positioned (refer to the Matlab Camera Calibration Toolbox for details). The calibration board may be removed once the calibration is computed; it is not needed for scanning.
Figure 1 also illustrates the hand-held projection of a plane of laser light. The user holds a standard laser pointer with a 6 mm diameter glass rod attached to the front. The glass rod acts as a cylindrical lens, diverging the laser beam into a plane of light. This light intersects the model surface, forming a curve that lies in the plane. Reconstruction is performed using the images of this curve as seen by the pair of cameras. While scanning, the user freely moves the light projector to sweep the plane across the model. The resulting reconstruction is computed incrementally in real time and displayed on the computer screen (bottom right of Figure 1), allowing the user to interactively move the light source to fill in gaps in the reconstruction.
Figure 2. A top view of the working volume and scannable surfaces of a T-shaped object
The hardware configuration we used is equivalent to that used by Davis and Chen [Davis and Chen 2001], but the method of reconstruction is different. Davis and Chen match points along epipolar lines and use standard stereo triangulation to reconstruct depths. As a result, the working volume is the intersection of the view volumes of the cameras (Figure 2). Only points visible in both cameras can be reconstructed. Also, some points may not have a unique match along epipolar lines. This ambiguity cannot be resolved by the method of Davis and Chen, so ambiguous points are discarded.
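The standard stereo triangulation used by Davis and Chen can be sketched with the linear (DLT) method: each image observation of a matched point contributes two linear constraints on its homogeneous 3D coordinates, and the point is recovered from the null space of the stacked system. A minimal numpy sketch on a synthetic rig; the function name and test setup are illustrative, not the paper's code:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from two 3x4 camera
    projection matrices and its (inhomogeneous) image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic rig: identity camera, plus a second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
# The point [0, 0, 5] projects to (0, 0) and (-0.2, 0) in the two images.
X = triangulate_dlt(P1, P2, np.array([0.0, 0.0]), np.array([-0.2, 0.0]))
# X recovers [0, 0, 5]
```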
The reconstruction method in this project uses the constraint that all observed points must lie on a plane in 3D. We first estimate this plane from the non-ambiguous matches. Once the plane is known, points seen in both views are triangulated and then projected to the closest point on the plane; the planar constraint makes the reconstruction of these points more accurate. Points seen in only one view are reconstructed by back-projecting a ray from the camera and intersecting it with the light plane. This extends the working volume of our method to the union of the camera view volumes, as illustrated in Figure 2.
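Both reconstruction paths reduce to simple plane geometry: projecting a triangulated point to its closest point on the plane, and intersecting a camera ray with the plane. A minimal numpy sketch, assuming the plane is given as a unit normal n and offset d with n·X + d = 0 (function names are illustrative):

```python
import numpy as np

def project_to_plane(X, n, d):
    """Closest point on the plane n.X + d = 0 to the 3D point X."""
    n = n / np.linalg.norm(n)
    return X - (np.dot(n, X) + d) * n

def ray_plane_intersect(origin, direction, n, d):
    """Intersect the ray origin + t*direction (t >= 0) with the plane."""
    denom = np.dot(n, direction)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to the laser plane
    t = -(np.dot(n, origin) + d) / denom
    return origin + t * direction if t >= 0 else None

# Example: the plane z = 2, i.e. n = [0, 0, 1], d = -2
n, d = np.array([0.0, 0.0, 1.0]), -2.0
P = project_to_plane(np.array([1.0, 1.0, 5.0]), n, d)                  # -> [1, 1, 2]
Q = ray_plane_intersect(np.zeros(3), np.array([0.0, 0.0, 1.0]), n, d)  # -> [0, 0, 2]
```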
Figure 3. Homographies between the laser plane and image planes
Knowing the plane of light induces a homography H that maps the image of the light curve in the first camera into its image in the second camera. As shown in Figure 3, this holds because the points lie on the world plane Π: there exist homographies H1 and H2 mapping Π to the two camera image planes, so H = H2H1^-1. As shown in the paper, H has a reduced linear parameterization in the four parameters of the plane Π. This allows us to estimate the plane using a least-squares formulation. In practice, we also use RANSAC to handle occasional outliers. Given the robust plane estimate (and therefore the estimate of H), we can resolve most matching ambiguities and recover the additional points discarded by the method of Davis and Chen.
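The paper estimates Π through the linear parameterization of H. As a simpler stand-in for that least-squares step, a plane can also be fit directly to triangulated 3D points via the SVD of the centered point cloud; wrapping such a fit in a RANSAC loop would reject outliers, as the paper does. A sketch under those assumptions (names are illustrative):

```python
import numpy as np

def fit_plane_lstsq(points):
    """Least-squares plane through an (N, 3) point array. Returns a unit
    normal n and offset d with n.X + d = 0, via SVD of the centered cloud."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                      # direction of least variance
    d = -np.dot(n, centroid)
    return n, d

# Noisy samples from the plane z = 1
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(100, 2))
pts = np.column_stack([xy, 1.0 + 1e-3 * rng.standard_normal(100)])
n, d = fit_plane_lstsq(pts)
# n is (close to) [0, 0, +/-1] and -d/n[2] is (close to) 1
```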
Figure 4. Histograms showing the distribution of distances from the corresponding fit cylinders
To evaluate the accuracy of the reconstructions we scanned a simple object of known dimensions: a cylinder. Using only points seen in both views, we reconstructed the cylinder with both standard triangulation and our plane-restricted method. Figure 4 shows the distribution of distances to the true surface as computed by both methods. Both methods produce errors that are roughly normally distributed about zero. However, applying the planar constraint reduces the variance of the error.
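The histograms in Figure 4 bin, for each reconstructed point, its signed distance to the surface of the fit cylinder, i.e. (distance to the cylinder axis) minus the radius. A minimal numpy sketch of that residual computation (the cylinder fitting itself is omitted, and the names are illustrative):

```python
import numpy as np

def cylinder_residuals(points, axis_point, axis_dir, radius):
    """Signed distance from each 3D point to the cylinder surface:
    positive outside the cylinder, negative inside."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    v = points - axis_point
    # radial component: remove the projection of v onto the axis
    radial = v - np.outer(v @ axis_dir, axis_dir)
    return np.linalg.norm(radial, axis=1) - radius

# Points at radius 1 (on the surface) and radius 2 around the z-axis
pts = np.array([[1.0, 0.0, 3.0], [0.0, 2.0, -1.0]])
res = cylinder_residuals(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0)
# residuals: 0 for the on-surface point, 1 for the point at radius 2
```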
Figure 5. Camera views and reconstruction results for the frog model. (b) contains 47,271 points, (c) contains 104,448 points
Results of the reconstruction of the frog are shown in Figure 5. The views of the model as seen from each camera under ambient light are shown in Figure 5(a). These images are also used to assign color to the reconstructed points. Figures 5(b) and 5(c) show the increase in the number of reconstructed points when points seen in only one view are considered. Using all points more than doubles the number of points in the reconstruction. The reconstruction using standard triangulation (as in [Davis and Chen 2001]) is similar to 5(b), but also contains several outliers from false matches.
Figure 6. A demo video of the software (high quality download in sidebar).
- [Davis and Chen 2001] J. Davis and X. Chen. A laser range scanner designed for minimum calibration complexity. In Proceedings of 3DIM, page 91, 2001.