Fusion of intensity and range data

Recent work on improving the robustness of computer vision has investigated sensor fusion. The authors introduce a visual architecture in which several parallel processes operate concurrently and can be reconfigured. It consists of a conventional, intensity-based image-interpretation channel and a corresponding depth channel. Each channel may be implemented as a cascade of parallel processes, each of which has been realised on a processor farm. The architecture offers the potential for fusion at the pixel, primitive, and matching levels. To control the several processes and to determine at which level, if any, fusion should occur, a control process (or processes) must be included, with the explicit goals of identifying and locating objects as a pre-process for manipulation or inspection. The authors concentrate on studies of the three levels of fusion of depth and intensity data of a scene acquired from a single viewpoint.
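As an illustration of the lowest of the three levels, the sketch below shows one possible form of pixel-level fusion: edge evidence from the intensity image and discontinuity evidence from the registered range image are combined into a single edge map. The weighting scheme, function names, and toy data are assumptions for illustration, not the authors' method.

```python
import numpy as np

def pixel_level_fusion(intensity, depth, w_i=0.5, w_d=0.5, thresh=0.5):
    """Hypothetical pixel-level fusion: a weighted sum of the normalized
    gradient magnitudes of the intensity and range (depth) channels,
    thresholded into a boolean edge map."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        m = np.hypot(gx, gy)
        return m / m.max() if m.max() > 0 else m
    fused = w_i * grad_mag(intensity) + w_d * grad_mag(depth)
    return fused > thresh

# Toy scene: an intensity step edge coinciding with a depth discontinuity.
intensity = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))
depth = np.tile(np.concatenate([np.full(8, 2.0), np.full(8, 5.0)]), (16, 1))
edges = pixel_level_fusion(intensity, depth)
```

Fusing at this level exploits the pixel-wise registration of the two channels; the primitive and matching levels would instead combine extracted features or candidate object hypotheses.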