Deep Learning Approach for SAR Image Fusion
Image fusion is an application of digital image processing: the process of combining the salient features of a pair of related source images into a single image, such that the fused image is of higher quality than either source image alone.
This research work proposes a pixel-level deep learning method that uses a 3-channel convolutional neural network (CNN) to fuse two multi-focus Synthetic Aperture Radar (SAR) images into a single high-quality fused image. Because the imaged scene is static while the radar platform is in motion, the SAR sensor captures the source images at different time stamps.
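The abstract does not specify how the two source images are arranged into the network's three input channels; one plausible sketch, with the third channel taken as the pixel-wise mean of the two sources (an assumption, not stated in the text), is:

```python
import numpy as np

def make_3channel_input(src_a, src_b):
    """Stack two SAR source images into a 3-channel input tensor.

    Channels: source A, source B, and their pixel-wise mean.
    The mean channel is an illustrative choice -- the paper does
    not say how the three channels are actually formed.
    """
    src_a = src_a.astype(np.float32)
    src_b = src_b.astype(np.float32)
    mean_ab = (src_a + src_b) / 2.0
    return np.stack([src_a, src_b, mean_ab], axis=0)  # shape (3, H, W)
```

A CNN would then consume this `(3, H, W)` tensor exactly as it would an RGB image, which is likely the motivation for a 3-channel architecture.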
The images are downsampled before fusion to substantially reduce computation time and to make the method more robust to noise. In the proposed method, the source images are decomposed at the pixel level using the deep learning framework.
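The text does not name the size-reduction scheme; a minimal sketch, assuming simple block averaging (which both shrinks the image and suppresses speckle-like noise, matching the stated motivation), could look like:

```python
import numpy as np

def downsample(img, factor=2):
    """Reduce image size by averaging non-overlapping factor x factor blocks.

    Block averaging is an assumed choice; averaging also attenuates
    per-pixel noise, which fits the robustness argument in the text.
    """
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h, :w].astype(np.float32)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```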
After feature extraction, appropriate weights are assigned to every pixel. The pixel values of the two source images are then combined by averaging and max pooling to obtain the features of the fused image, and a smoothing filter is applied to suppress noise in the fused result.
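The fusion and smoothing steps above can be sketched as follows. The exact weighting, the way averaging and max selection are combined, and the choice of a 3x3 mean filter are all assumptions for illustration; the text does not specify them:

```python
import numpy as np

def fuse_and_smooth(feat_a, feat_b, w_a=0.5):
    """Fuse two feature maps by weighted averaging and per-pixel max,
    then apply a 3x3 mean filter.

    w_a is a hypothetical per-image weight; equal blending of the
    average and max branches is likewise an illustrative choice.
    """
    avg = w_a * feat_a + (1.0 - w_a) * feat_b   # weighted average branch
    mx = np.maximum(feat_a, feat_b)             # per-pixel max branch
    fused = (avg + mx) / 2.0

    # 3x3 mean (smoothing) filter via edge padding and neighbourhood sums
    padded = np.pad(fused, 1, mode="edge")
    out = np.zeros_like(fused)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + fused.shape[0], dx:dx + fused.shape[1]]
    return out / 9.0
```

On a constant image the pipeline is identity-preserving, which is a quick sanity check that the smoothing step does not shift intensities.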