Method for cloud removal of optical remote sensing images using improved CGAN network
Citation: Pei Ao, Chen Guifen, Li Haoyue, Wang Bing. Method for cloud removal of optical remote sensing images using improved CGAN network[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020, 36(14): 194-202.
Authors: Pei Ao  Chen Guifen  Li Haoyue  Wang Bing
Institution: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Funding: Key Project of the Science and Technology Department of Jilin Province (20180201073SF); Key Project of the Education Department of Jilin Province (JJKH20200328KJ)
Abstract: Optical remote sensing images used in agricultural production are often affected by cloud cover during acquisition, which lowers image clarity and hampers both the interpretation of ground-feature information and subsequent use. To address this problem, a cloud removal method for optical remote sensing images based on an improved Conditional Generative Adversarial Network (CGAN) is proposed. First, a spatial pooling layer is introduced into the generator of the original CGAN to strengthen the network's multi-scale feature learning and enrich the detail of the generated images. Second, a regression loss is added to the improved CGAN so that the generated images more closely match the real images, further improving the generation quality. Experiments on an optical remote sensing image dataset show that the cloud-free images generated by the improved CGAN are closer to real cloud-free images than those of the original CGAN: on thin-cloud and thick-cloud images, the Peak Signal-to-Noise Ratio (PSNR) increases by 1.64 and 1.05 dB, respectively, and the Structural SIMilarity (SSIM) increases by 0.03 and 0.04, respectively. Compared with a traditional cloud removal method and the deep-learning Pix2Pix method, the proposed method also achieves better cloud removal and fidelity. These results demonstrate the feasibility of the improved CGAN for cloud removal from optical remote sensing images and provide a methodological reference for processing agricultural optical remote sensing images.
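The training objective outlined above can be sketched, loosely, as a standard conditional adversarial loss plus a regression term that pulls the generated image toward the real cloud-free image. The PyTorch sketch below is illustrative only: the abstract does not specify the form of the regression loss, its weight, or the discriminator interface, so the L1 norm, the lambda_reg value, and the two-argument discriminator(cloudy, candidate) call are assumptions.

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial real/fake criterion
l1 = nn.L1Loss()              # regression term (assumed L1; the abstract only states "regression loss")
lambda_reg = 100.0            # assumed weight balancing the two terms

def generator_loss(discriminator, cloudy, generated, real_cloudfree):
    # Conditional GAN: the discriminator judges a candidate output paired with the cloudy input.
    pred_fake = discriminator(cloudy, generated)
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # encourage "real" verdicts for generated images
    reg = l1(generated, real_cloudfree)                # pull the output toward the real cloud-free image
    return adv + lambda_reg * reg

def discriminator_loss(discriminator, cloudy, generated, real_cloudfree):
    pred_real = discriminator(cloudy, real_cloudfree)
    pred_fake = discriminator(cloudy, generated.detach())
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                  bce(pred_fake, torch.zeros_like(pred_fake)))

In a standard CGAN training loop, the discriminator would be updated with discriminator_loss and the generator with generator_loss in alternation.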

Keywords: remote sensing  optical image  cloud removal  conditional generative adversarial network  spatial pyramid pooling  multi-scale feature extraction
Received: 2020-03-25
Revised: 2020-07-03

Method for cloud removal of optical remote sensing images using improved CGAN network
Pei Ao, Chen Guifen, Li Haoyue, Wang Bing. Method for cloud removal of optical remote sensing images using improved CGAN network[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020, 36(14): 194-202.
Authors:Pei Ao  Chen Guifen  Li Haoyue  Wang Bing
Institution: College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Abstract: Optical remote sensing images used in agricultural production are often affected by clouds during acquisition. Cloud cover reduces image clarity, makes ground-feature information difficult to interpret, and hampers subsequent applications in agricultural production such as crop growth monitoring, crop classification, and yield prediction. In this study, a cloud removal method based on an improved Conditional Generative Adversarial Network (CGAN) was proposed to restore the quality of cloud-contaminated remote sensing images. By training the CGAN, a pixel-level mapping was established between cloudy and cloud-free images, so that a cloudy remote sensing image could be transformed into a cloud-free one and the cloud component effectively removed. The method also restored some of the image detail lost during imaging, and a single network structure was able to remove both thin and thick clouds from optical remote sensing images. The generator was modified to improve the quality of the generated images, which was limited by the single-scale feature extraction of the original CGAN. The generator worked as follows: first, a series of convolutions extracted feature information from the input image; then, a spatial pyramid pooling operation produced multi-scale feature maps from this information; finally, the feature maps of different sizes were restored to the original size and fused to generate the final cloud-free optical remote sensing image. This design substantially enlarged the range of scales at which the generator extracted features and, accordingly, improved the quality of the generated images. To verify the proposed cloud removal method, experiments were conducted on the same optical remote sensing image datasets against three baselines: the original CGAN, a traditional cloud removal algorithm, and the deep-learning Pix2Pix method. Two indicators, the Peak Signal-to-Noise Ratio (PSNR) and the Structural SIMilarity (SSIM), were used for quantitative evaluation. The experimental results show that: 1) the proposed method removed both thin and thick clouds from optical remote sensing images and produced high-quality results in both cases. 2) Compared with the original CGAN, the generated cloud-free images were closer to the real cloud-free remote sensing images: for thin clouds, the PSNR of the improved CGAN increased by 1.64 dB and the SSIM by 0.03; for thick clouds, the PSNR increased by 1.05 dB and the SSIM by 0.04. 3) Compared with the traditional cloud removal method, the improved CGAN removed the cloud layer more thoroughly and reproduced ground-feature colors more realistically; compared with the Pix2Pix method, it recovered details better, particularly the landscape in the generated cloud-free optical remote sensing images.
After removal of thin and thick clouds, the PSNR of the generated remote sensing images increased by 1.24 and 0.89 dB, respectively, and the SSIM also increased. These results demonstrate that the improved CGAN is suitable for removing clouds from optical remote sensing images. The findings can provide an insightful idea and a promising method for remote sensing image processing in modern agriculture.
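As a rough illustration of the generator pipeline described in the abstract (convolutional feature extraction, spatial pyramid pooling into multi-scale feature maps, restoration to the original size, and fusion into the cloud-free output), a minimal PyTorch sketch is given below. The channel counts, pooling scales, and layer arrangement are assumptions for illustration, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPGenerator(nn.Module):
    """Minimal sketch of a spatial-pyramid-pooling generator for cloud removal (illustrative only)."""
    def __init__(self, in_ch=3, feat_ch=64, pool_sizes=(1, 2, 4, 8)):
        super().__init__()
        # Feature extraction with a few convolutions (depth and channel counts are assumptions).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool_sizes = pool_sizes
        # 1x1 convolutions compress each pooled branch before fusion.
        self.branches = nn.ModuleList([
            nn.Conv2d(feat_ch, feat_ch // len(pool_sizes), 1) for _ in pool_sizes
        ])
        # Fuse the original features with the restored multi-scale maps and map back to an image.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, in_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        feat = self.encoder(x)
        h, w = feat.shape[2:]
        pyramid = [feat]
        for size, branch in zip(self.pool_sizes, self.branches):
            pooled = F.adaptive_avg_pool2d(feat, output_size=size)              # multi-scale pooling
            restored = F.interpolate(branch(pooled), size=(h, w),
                                     mode='bilinear', align_corners=False)      # restore to original size
            pyramid.append(restored)
        return self.fuse(torch.cat(pyramid, dim=1))  # fused multi-scale features -> cloud-free image

Passing a batch of cloudy images of shape (N, 3, H, W) through this module returns a tensor of the same shape as the candidate cloud-free output; the adaptive pooling keeps the multi-scale branches independent of the input resolution.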
Keywords: remote sensing  optical image  cloud removal  conditional generative adversarial network  spatial pyramid pooling  multi-scale feature extraction