Research on Image Fusion Based on Deep Adversarial Learning
Keywords:
Image fusion, Dual-path dual-discriminator, Adversarial network, Master-auxiliary gradient

Abstract
Image fusion, an important task in computer vision, extracts complementary features from source images to generate a fused image with higher quality and richer information. Infrared and visible images carry different information because of their different imaging principles. The key to infrared and visible image fusion is to integrate the thermal radiation information extracted from infrared images with the detail and texture information captured in visible images, yielding a fused image with a complete structure and rich detail. Building on the generative adversarial network model, this paper proposes an infrared and visible image fusion method based on a dual-path dual-discriminator generative adversarial network, addressing problems in existing algorithms: inadequate extraction of feature information, inefficient feature transfer within the network, loss of shallow information in single-path feature extraction, limited fusion levels under sub-path feature extraction, and discriminator mode imbalance. On the generator side, a gradient path and a contrast path based on difference stitching of the source images are constructed to improve the detail and contrast of the fused image. The feature information of the infrared and visible images is extracted by multi-scale decomposition, resolving the incompleteness of feature extraction at a single scale. The source images are then injected into each layer of the dual-path dense network, which improves the efficiency of feature transmission and retains more source-image information. On the discriminator side, two discriminators estimate the regional distributions of the infrared and visible images respectively, avoiding the mode-imbalance problem of a single-discriminator network, in which the contrast information of the infrared image is lost.
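The abstract does not define the "difference stitching" operation precisely; as a rough illustration under that caveat, the two generator inputs could be assembled by pairing the source images with their pixel-wise difference map (contrast path) and with their gradient maps (gradient path). A minimal pure-Python sketch on grayscale patches (all function names are illustrative, not from the paper):

```python
# Illustrative sketch only: one plausible construction of the dual-path
# generator inputs from grayscale source images given as 2-D lists.

def difference_map(ir, vis):
    """Pixel-wise difference IR - VIS, highlighting thermal-contrast regions."""
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(ir, vis)]

def gradient_map(img):
    """Forward-difference gradient magnitude (|dx| + |dy|) per pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            dx = img[i][j + 1] - img[i][j] if j + 1 < w else 0.0
            dy = img[i + 1][j] - img[i][j] if i + 1 < h else 0.0
            out[i][j] = abs(dx) + abs(dy)
    return out

def build_generator_inputs(ir, vis):
    """Stack the per-path input channels: the contrast path receives both
    sources plus their difference map; the gradient path receives the
    sources' gradient maps."""
    contrast_path = [ir, vis, difference_map(ir, vis)]
    gradient_path = [gradient_map(ir), gradient_map(vis)]
    return contrast_path, gradient_path

ir  = [[0.9, 0.8], [0.7, 0.6]]   # toy infrared patch
vis = [[0.2, 0.4], [0.1, 0.3]]   # toy visible patch
contrast, gradient = build_generator_inputs(ir, vis)
```

In the paper's architecture these channel stacks would feed the two dense sub-networks of the generator; the dense connections and multi-scale decomposition are omitted here.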
Finally, we construct master-auxiliary gradient and master-auxiliary intensity loss functions to improve the information extraction ability of the network model. Experiments comparing the proposed method with other image fusion methods on public datasets show that it achieves good results on objective evaluation metrics (mean gradient, spatial frequency, structural similarity, and peak signal-to-noise ratio).
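Two of the evaluation metrics named above, mean (average) gradient and spatial frequency, have compact standard definitions; exact conventions vary slightly across papers, so the following is a reference sketch rather than the paper's evaluation code:

```python
import math

def mean_gradient(img):
    """Average gradient: mean of sqrt((dx^2 + dy^2) / 2) over pixels that
    have a right and a lower neighbor (forward differences)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
    return total / ((h - 1) * (w - 1))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): row frequency from horizontal differences,
    column frequency from vertical differences, each mean-squared over
    the image."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w))
    return math.sqrt(rf / (h * w) + cf / (h * w))

flat   = [[0.5, 0.5], [0.5, 0.5]]  # constant patch: no detail
detail = [[0.0, 1.0], [1.0, 0.0]]  # checkerboard patch: strong local change
```

Both metrics are zero on the constant patch and higher on the checkerboard, matching the intuition that they reward fused images with more gradient and texture detail.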