A Bayesian Approach to Image-Based Visual Hull Reconstruction



Kristen Grauman, Gregory Shakhnarovich, Trevor Darrell




We present a Bayesian approach to image-based visual hull reconstruction.  The 3D shape of an object of a known class is represented by sets of silhouette views simultaneously observed from multiple cameras.  We show how the use of a class-specific prior in visual hull reconstruction can reduce the effect of segmentation errors in the silhouette extraction process.  In our representation, 3D information is implicit in the joint observation of multiple contours from known viewpoints.  We model the prior density with a technique based on probabilistic principal components analysis (PPCA) and estimate a maximum a posteriori (MAP) reconstruction of the multi-view contours.  The proposed method is applied to a dataset of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown.
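The core computation described above can be sketched in a few lines of NumPy.  This is an illustrative toy, not the paper's implementation: the training matrix here is random synthetic data standing in for concatenated multi-view contour vectors, and the dimensions (`n`, `d`, `q`) are arbitrary.  The PPCA fit follows the standard Tipping-and-Bishop maximum-likelihood solution, and the reconstruction is the posterior-mean (MAP, under the Gaussian model) estimate of a corrupted observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training shapes: n examples, each a d-dim vector
# (in the paper, sampled contour points from all views stacked together).
n, d, q = 200, 40, 5
latent = rng.normal(size=(n, q))
mixing = rng.normal(size=(q, d))
Y = latent @ mixing + 0.05 * rng.normal(size=(n, d))

mu = Y.mean(axis=0)
Yc = Y - mu

# ML PPCA fit (Tipping & Bishop): eigen-decompose the sample covariance.
_, s, Vt = np.linalg.svd(Yc / np.sqrt(n), full_matrices=False)
eigvals = s ** 2
sigma2 = eigvals[q:].mean()                        # noise variance from discarded modes
W = Vt[:q].T * np.sqrt(eigvals[:q] - sigma2)       # d x q loading matrix

# MAP reconstruction of a corrupted observation y:
# posterior mean of the latent x, mapped back to data space.
M = W.T @ W + sigma2 * np.eye(q)

def reconstruct(y):
    x_map = np.linalg.solve(M, W.T @ (y - mu))     # E[x | y]
    return mu + W @ x_map

y_clean = Y[0]
y_noisy = y_clean + rng.normal(scale=0.5, size=d)  # simulated segmentation error
y_rec = reconstruct(y_noisy)
```

Because the model confines reconstructions near a low-dimensional subspace learned from clean training shapes, `y_rec` lies closer to `y_clean` than the corrupted input does; this is the same mechanism that repairs undersegmented silhouettes in the paper.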

CVPR 2003 paper on this work: pdf
Further details in SM thesis: ps or pdf




[Embedded MPEG movie demonstrating the reconstruction results; if your browser lacks a plug-in for embedded MPEGs, download the file to view it.]



The above images show an example of visual hull segmentation improvement with the proposed PPCA-based Bayesian reconstruction scheme.  The four top-left silhouettes show the multi-view input, corrupted by segmentation errors.  The four silhouettes directly to their right show the corresponding Bayesian reconstructions formed using our statistical multi-view shape model.  In the gray sections below each set of silhouettes are their corresponding visual hulls; that is, the left VH is formed from the raw silhouettes, and the right VH is formed from the reconstructed silhouettes.  Note how undersegmentations in the raw input silhouettes cause portions of the approximate 3D volume to be missing (left, gray background), whereas the reconstructed silhouettes produce a fuller 3D volume more representative of the true object shape (right, gray background).
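The visual hulls shown in gray are formed by intersecting the silhouette cones: a voxel survives only if it projects inside the silhouette in every view.  The toy sketch below illustrates this carving step under simplifying assumptions not in the paper (orthographic axis-aligned cameras, a synthetic sphere as the true shape); the paper uses calibrated perspective views of real pedestrians.

```python
import numpy as np

# Toy setup: a 3D sphere observed by three axis-aligned orthographic "cameras".
N = 32
coords = np.linspace(-1, 1, N)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
truth = X**2 + Y**2 + Z**2 <= 0.6**2          # ground-truth occupancy grid

# Silhouettes = binary projections of the true shape along each axis.
silhouettes = {
    "x": truth.any(axis=0),                   # image in the (y, z) plane
    "y": truth.any(axis=1),                   # image in the (x, z) plane
    "z": truth.any(axis=2),                   # image in the (x, y) plane
}

# Visual hull: keep a voxel iff it projects inside EVERY silhouette.
hull = (
    silhouettes["x"][np.newaxis, :, :]
    & silhouettes["y"][:, np.newaxis, :]
    & silhouettes["z"][:, :, np.newaxis]
)

# Undersegmentation in one view (a hole in the silhouette) carves away
# true volume -- the failure mode the Bayesian reconstruction repairs.
bad = dict(silhouettes)
bad["z"] = silhouettes["z"].copy()
bad["z"][N // 2:, :] = False
hull_bad = (
    bad["x"][np.newaxis, :, :]
    & bad["y"][:, np.newaxis, :]
    & bad["z"][:, :, np.newaxis]
)
```

The hull is a conservative outer bound on the true shape (`hull` contains every true voxel), while the corrupted silhouette makes `hull_bad` lose part of the true volume, matching the missing regions visible in the left gray panel above.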



More example results like this >>>


