Multisensor Image Fusion Using Redundant Wavelet Transform
Faculty of Mathematics and Computer Science, University of Bremen
Given at Strobl07 (June 17, 2007)
Standard data fusion methods are often unsatisfactory for merging multisensor images. Wavelets, as powerful signal-processing tools for nonstationary signals, offer an alternative for data fusion. In this paper we propose a methodology for multisensor image fusion based on the redundant wavelet transform. The aim of our work is to use the redundant wavelet transform for a more effective extraction of the dominant features observed at different scales. We use the wavelet transform and the pyramid structure of wavelet analysis to represent signals in the time-scale domain and to compute dominant signal components at different time/scale resolutions. The redundancy of the transform, which avoids the downsampling of the standard discrete wavelet transform and thus yields shift-invariant, full-size coefficient planes, is exploited to enhance the extraction and fusion of all significant features in the multisensor images. We develop a fusion rule that takes the dominant features of the multisensor images into account at each scale, and we demonstrate the effectiveness of the proposed methodology in simulations.
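The scheme described above can be sketched in code. The following is a minimal illustration using a redundant à trous wavelet decomposition in NumPy; the B3-spline smoothing kernel, the maximum-absolute-value selection rule for detail coefficients, and the averaging of the coarsest approximation are illustrative assumptions, not necessarily the fusion rule developed in this work.

```python
import numpy as np

def _atrous_smooth(img, step):
    """Separable convolution with the B3-spline kernel [1,4,6,4,1]/16,
    dilated by `step` (the 'holes' of the a trous algorithm),
    with circular boundary handling."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    offsets = [-2 * step, -step, 0, step, 2 * step]
    for axis in (0, 1):
        out = np.zeros_like(img)
        for w, o in zip(kernel, offsets):
            out += w * np.roll(img, o, axis=axis)
        img = out
    return img

def atrous_decompose(img, levels=3):
    """Redundant (undecimated) decomposition: returns detail planes
    w_1..w_J and the coarse approximation c_J, all the same size as
    the input, with img == c_J + sum(w_j)."""
    c = img.astype(float)
    details = []
    for j in range(levels):
        c_next = _atrous_smooth(c, 2 ** j)
        details.append(c - c_next)  # detail plane at scale j
        c = c_next
    return details, c

def fuse(img_a, img_b, levels=3):
    """Fuse two registered images: keep the larger-magnitude detail
    coefficient at every pixel and scale (dominant-feature selection),
    average the coarse approximations, and reconstruct by simple
    summation -- possible because the transform is redundant."""
    det_a, app_a = atrous_decompose(img_a, levels)
    det_b, app_b = atrous_decompose(img_b, levels)
    fused = 0.5 * (app_a + app_b)
    for da, db in zip(det_a, det_b):
        fused += np.where(np.abs(da) >= np.abs(db), da, db)
    return fused
```

As a sanity check, fusing an image with itself returns the image, which confirms the perfect-reconstruction property of the redundant decomposition.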