Thesis Information

Thesis title (Chinese):

 基于强化学习的图像恢复方法研究 (Research on Image Recovery Algorithm Based on Reinforcement Learning)

Name:

 席润韬    

Student ID:

 19208208045    

Confidentiality level:

 Classified (open after 1 year)

Language:

 chi (Chinese)

Discipline code:

 085212    

Discipline:

 Engineering - Engineering - Software Engineering

Student type:

 Master's

Degree level:

 Master of Engineering

Degree year:

 2022    

Degree-granting institution:

 西安科技大学 (Xi'an University of Science and Technology)

School/Department:

 College of Computer Science and Technology

Major:

 Software Engineering

Research area:

 Media Computing and Visualization

First supervisor:

 马天    

First supervisor's institution:

 西安科技大学    

Second supervisor:

 陈小林    

Submission date:

 2022-06-19    

Defense date:

 2022-06-06    

Thesis title (English):

 Research on Image Recovery Algorithm Based on Reinforcement Learning    

Keywords (Chinese):

 Image restoration; image denoising; image retouching; personalized image retouching; deep reinforcement learning

Keywords (English):

 Image recovery; image denoising; image retouching; personalized image retouching; reinforcement learning

Abstract (Chinese):

Digital images inevitably suffer distortion and degradation during transmission, and human experts typically combine several different restoration tools when repairing an image. Inspired by this, this thesis uses reinforcement learning to integrate mature image processing tools and, through reasonable decision-making, searches for new combinations of these tools that restore an image globally or locally. The main work and contributions of this thesis are as follows:

(1) Most existing image denoising methods are built on convolutional neural networks whose architectures are diverse and complex, and whose denoising process is hard to interpret. To address this, an image denoising method based on reinforcement learning and a generative adversarial network is proposed. To achieve high-quality, interpretable denoising, every pixel is treated as a reinforcement learning agent; the pixel agents choose denoising operations with interpretable computation and clear physical meaning, such as the Gaussian, bilateral, box, and median filters, as actions, and denoise the image step by step at the pixel level. To further improve the final result, adversarial learning is used to strengthen the model's ability to recover image detail, and a fully convolutional discriminator is designed to supply the pixel-level rewards that pixel-level actions require. Experiments show that, compared with CBDNet, the proposed method raises the average PSNR by 1.12 dB on the 1280 test images of the SIDD Medium test set and by 0.17 dB on the Nam test set, while its parameter count and computational cost are 33% lower than DnCNN's.
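
A minimal sketch (not the thesis code) of how one such pixel-level denoising step could apply a per-pixel action map, with OpenCV standing in for the filters; the policy network that produces action_map and every filter parameter here are illustrative assumptions:

```python
import cv2
import numpy as np

def apply_pixel_actions(img: np.ndarray, action_map: np.ndarray) -> np.ndarray:
    """img: HxW float32 grayscale in [0, 1]; action_map: HxW ints in {0..4}."""
    candidates = [
        img,                                  # 0: keep the pixel unchanged
        cv2.GaussianBlur(img, (5, 5), 0.5),   # 1: Gaussian filter
        cv2.bilateralFilter(img, 5, 0.1, 5),  # 2: bilateral filter
        cv2.boxFilter(img, -1, (5, 5)),       # 3: box filter
        cv2.medianBlur(img, 5),               # 4: median filter
    ]
    out = np.empty_like(img)
    for action, filtered in enumerate(candidates):
        mask = action_map == action
        out[mask] = filtered[mask]  # each pixel agent takes its own action's output
    return out
```

Repeating such a step a few times would reproduce the step-by-step, interpretable denoising described above.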

(2) Existing reinforcement learning methods for image retouching can only apply global adjustments. To address this, a pixel-level image retouching method based on reinforcement learning is proposed. Inspired by how human experts retouch images in software such as Photoshop, the method adjusts different regions of an image with different retouching tools. First, region-specific exposure and contrast adjustments are applied to local areas; then, the exposure fusion algorithm from high-dynamic-range imaging is used to avoid the stitching boundaries that direct region-by-region retouching would create. The method thus achieves region-specific exposure and contrast retouching with good results. Experiments show that on the 500 test images of MIT-Adobe-5K the average SSIM is 0.018 higher than DeepUPE's; on the LOL dataset the average PSNR is 0.14 dB higher than EnlightenGAN's and the average SSIM is 0.057 higher than Zero-DCE's.
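
The boundary-free blending in (2) rests on Mertens exposure fusion [63], which OpenCV implements. The sketch below is a toy example with assumed gain/contrast settings rather than the thesis pipeline: several globally adjusted variants are fused so that well-exposed pixels are blended smoothly instead of being stitched along hard region borders:

```python
import cv2
import numpy as np

def fuse_exposure_variants(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 uint8 BGR image. Returns a seamlessly fused uint8 image."""
    base = img.astype(np.float32) / 255.0
    variants = []
    for gain, contrast in [(0.8, 1.0), (1.0, 1.2), (1.3, 1.0)]:  # illustrative settings
        v = np.clip((base - 0.5) * contrast + 0.5, 0.0, 1.0)     # contrast about mid-gray
        v = np.clip(v * gain, 0.0, 1.0)                          # exposure gain
        variants.append((v * 255).astype(np.uint8))
    fused = cv2.createMergeMertens().process(variants)  # per-pixel quality-weighted blend
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```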

(3) Single-style image retouching algorithms based on reinforcement learning cannot effectively handle multi-style, personalized image retouching. To address this, a personalized image retouching method based on reinforcement learning and a generative adversarial network is proposed. First, a style encoder is pre-trained with contrastive learning to extract an image's filter style. Then, the image to be retouched and the reference image are fed into the style encoder to obtain their respective filter-style codes, and the mixed features are passed to the policy network through adaptive instance normalization. Finally, by controlling 12 non-differentiable filter tools such as exposure, contrast, highlights, and shadows, the filter style of the target image is brought closer to that of the reference image. Experiments show that for single-style retouching the proposed method is 0.62 dB higher in average PSNR and 0.056 higher in average SSIM than Pix2Pix; for multi-style retouching, its PSNR is 0.66 dB higher and its ΔE*₀₀ color difference is 3.797 lower than WCT2's.
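
The mixing step in (3) relies on adaptive instance normalization (AdaIN) [72]: the feature of the image being retouched is re-normalized to carry the reference image's filter-style statistics before entering the policy network. A minimal sketch of that operation follows, with tensor shapes assumed for illustration:

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """content, style: (N, C, H, W) feature maps from the style encoder."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Strip the content's own channel statistics, then impose the style's.
    return (content - c_mean) / c_std * s_std + s_mean
```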

Abstract (English):

Digital images inevitably suffer distortion and degradation during transmission, and a variety of different restoration tools can be used to repair them. Inspired by this, this thesis integrates mature image processing tools under a reinforcement learning framework and, through reasonable decision-making, searches for new combinations of these tools that restore an image globally or locally. The main work and contributions of this thesis are as follows:

(1) Most existing image denoising methods are based on convolutional neural networks with varied and complex architectures, and their denoising process is not easily interpreted. An image denoising method based on reinforcement learning and a generative adversarial network is therefore proposed. To achieve high-quality, interpretable denoising, each pixel is regarded as a reinforcement learning agent; the pixel agents select the Gaussian filter, bilateral filter, box filter, and median filter as actions and carry out pixel-level denoising step by step. To improve the final denoising effect, adversarial learning is used to strengthen the model's ability to recover image details, and a fully convolutional discriminator is designed to provide the pixel-level rewards required by pixel-level operations. Experiments show that, compared with the CBDNet algorithm, the average PSNR of the proposed method is 1.12 dB higher on the 1280 test images of the SIDD Medium test set and 0.17 dB higher on the Nam test set, while its parameter count and computational cost are 33% lower than DnCNN's.
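
Because pixel-level actions need pixel-level rewards, the discriminator must preserve spatial resolution. The following is a speculative sketch of such a fully convolutional discriminator (layer count and widths are guesses, not the thesis architecture); its output map can be read as one realness score, and hence one reward, per pixel:

```python
import torch
import torch.nn as nn

class FCDiscriminator(nn.Module):
    """Fully convolutional discriminator: no pooling or flattening, so the
    output keeps the input's HxW layout and yields a per-pixel score map."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, 3, padding=1),  # (N, 1, H, W) score map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# reward = FCDiscriminator()(denoised).detach()  # one reward per pixel agent
```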

(2) To solve the problem that existing reinforcement learning methods can only retouch an image globally, a pixel-level image retouching method based on reinforcement learning is proposed. Inspired by the way images are retouched in Photoshop and similar software, different retouching tools are applied to different regions of the image. First, specific exposure and contrast adjustments are made to local regions; then, the exposure fusion algorithm from high-dynamic-range imaging removes the stitching boundaries that direct region-wise retouching would otherwise create. Experiments show that on the 500 test images of MIT-Adobe-5K the average SSIM is 0.018 higher than that of DeepUPE, and on the LOL dataset the average PSNR is 0.14 dB higher than that of EnlightenGAN and the average SSIM is 0.057 higher than that of Zero-DCE.

(3) Single-style image retouching algorithms based on reinforcement learning cannot solve the problem of multi-style personalized image retouching, so a personalized image retouching method based on reinforcement learning and a generative adversarial network is proposed. First, a style encoder is pre-trained by contrastive learning to extract the filter style of an image. Then the image to be retouched and the reference image are fed into the style encoder to obtain their respective filter-style codes, and the mixed features are passed to the policy network through adaptive instance normalization. Finally, by controlling 12 kinds of non-differentiable filter tools, such as exposure, contrast, highlights, and shadows, the filter style of the target image is made closer to that of the reference image. Experiments show that in single-style retouching the proposed method's average PSNR is 0.62 dB higher and its average SSIM 0.056 higher than Pix2Pix's; in multi-style retouching, its PSNR is 0.66 dB higher and its ΔE*₀₀ is 3.797 lower than WCT2's.
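
All of the gains above are reported as PSNR and SSIM [44] differences. As a minimal sketch of how such scores are typically computed (using scikit-image, not the thesis's evaluation code, with placeholder arrays):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored = np.random.rand(256, 256, 3)   # placeholder for a restored image in [0, 1]
reference = np.random.rand(256, 256, 3)  # placeholder for its ground-truth image

psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)  # in dB
ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```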


References:

[1] 何龙军. Research on remote real-time post-processing methods for medical images[D]. 华中科技大学, 2015.

[2] 李雪薇. Research on aesthetics-based image quality assessment and enhancement algorithms[D]. 北京邮电大学, 2021. DOI:10.26969/d.cnki.gbydu.2021.000103.

[3] 申红. An improved wavelet-domain threshold denoising method for underground video surveillance images[J]. 金属矿山, 2017(07): 151-154.

[4] 刘帅奇, 扈琪, 刘彤, 赵杰. A survey of synthetic aperture radar image denoising algorithms[J]. 兵器装备工程学报, 2018, 39(12): 106-112+252.

[5] Mnih V, Kavukcuoglu K, Silver D, et al. Playing atari with deep reinforcement learning[J]. arXiv preprint arXiv:1312.5602, 2013.

[6] Yu K, Dong C, Lin L, et al. Crafting a toolchain for image restoration by deep reinforcement learning[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2443-2452.

[7] Liao X, Li W, Xu Q, et al. Iteratively-refined interactive 3D medical image segmentation with multi-agent reinforcement learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 9394-9402.

[8] Zhang K, Zuo W, Chen Y, et al. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising[J]. IEEE transactions on image processing, 2017, 26(7): 3142-3155.

[9] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.

[10] Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//International conference on machine learning. PMLR, 2015: 448-456.

[11] Tian C, Xu Y, Zuo W. Image denoising using deep CNN with batch renormalization[J]. Neural Networks, 2020, 121: 461-473.

[12] Ioffe S. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models[J]. Advances in neural information processing systems, 2017, 30.

[13] Guo S, Yan Z, Zhang K, et al. Toward convolutional blind denoising of real photographs[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 1712-1722.

[14] Huang T, Li S, Jia X, et al. Neighbor2neighbor: Self-supervised denoising from single noisy images[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 14781-14790.

[15] Moran S, Marza P, McDonagh S, et al. Deeplpf: Deep local parametric filters for image enhancement[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 12826-12835.

[16] He J, Liu Y, Qiao Y, et al. Conditional sequential modulation for efficient global image retouching[C]//European Conference on Computer Vision. Springer, Cham, 2020: 679-695.

[17] Kim H U, Koh Y J, Kim C S. Global and local enhancement networks for paired and unpaired image enhancement[C]//European Conference on Computer Vision. Springer, Cham, 2020: 339-354.

[18] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015: 234-241.

[19] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in neural information processing systems, 2014, 27.

[20] Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 4401-4410.

[21] Shi W, Caballero J, Huszár F, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 1874-1883.

[22] Gulrajani I, Ahmed F, Arjovsky M, et al. Improved training of wasserstein gans[J]. Advances in neural information processing systems, 2017, 30.

[23] Mao X, Li Q, Xie H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2794-2802.

[24] Chen J, Chen J, Chao H, et al. Image blind denoising with generative adversarial network based noise modeling[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 3155-3164.

[25] Isola P, Zhu J Y, Zhou T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 1125-1134.

[26] Mirza M, Osindero S. Conditional generative adversarial nets[J]. arXiv preprint arXiv:1411.1784, 2014.

[27] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2223-2232.

[28] Choi Y, Choi M, Kim M, et al. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 8789-8797.

[29] Mnih V, Kavukcuoglu K, Silver D, et al. Playing atari with deep reinforcement learning[J]. arXiv preprint arXiv:1312.5602, 2013.

[30] Schulman J, Wolski F, Dhariwal P, et al. Proximal policy optimization algorithms[J]. arXiv preprint arXiv:1707.06347, 2017.

[31] Yu K, Dong C, Lin L, et al. Crafting a toolchain for image restoration by deep reinforcement learning[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2443-2452.

[32] Furuta R, Inoue N, Yamasaki T. Pixelrl: Fully convolutional network with reinforcement learning for image processing[J]. IEEE Transactions on Multimedia, 2019, 22(7): 1704-1719.

[33] Park J, Lee J Y, Yoo D, et al. Distort-and-recover: Color enhancement using deep reinforcement learning[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 5928-5936.

[34] Hu Y, He H, Xu C, et al. Exposure: A white-box photo post-processing framework[J]. ACM Transactions on Graphics (TOG), 2018, 37(2): 1-17.

[35] 郭业才, 周腾威. An image enhancement method based on deep reinforcement adversarial learning[J]. 扬州大学学报(自然科学版), 2020, 23(02): 42-46+51. DOI:10.19411/j.1007-824x.2020.02.009.

[36] Kosugi S, Yamasaki T. Unpaired image enhancement featuring reinforcement-learning-controlled image editing software[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(07): 11296-11303.

[37] Yang H, Wang B, Vesdapunt N, et al. Personalized exposure control using adaptive metering and reinforcement learning[J]. IEEE transactions on visualization and computer graphics, 2018, 25(10): 2953-2968.

[38] Bellman R. A Markovian decision process[J]. Journal of mathematics and mechanics, 1957: 679-684.

[39] Abdelhamed A, Lin S, Brown M S. A high-quality denoising dataset for smartphone cameras[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1692-1700.

[40] Nam S, Hwang Y, Matsushita Y, et al. A holistic approach to cross-channel image noise modeling and its application to image denoising[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 1683-1691.

[41] Bychkovsky V, Paris S, Chan E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs[C]//CVPR 2011. IEEE, 2011: 97-104.

[42] Wei C, Wang W, Yang W, et al. Deep retinex decomposition for low-light enhancement[J]. arXiv preprint arXiv:1808.04560, 2018.

[43] De Boer J F, Cense B, Park B H, et al. Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography[J]. Optics letters, 2003, 28(21): 2067-2069.

[44] Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE transactions on image processing, 2004, 13(4): 600-612.

[45] Ma K, Duanmu Z, Wu Q, et al. Waterloo exploration database: New challenges for image quality assessment models[J]. IEEE Transactions on Image Processing, 2016, 26(2): 1004-1016.

[46] Agustsson E, Timofte R. Ntire 2017 challenge on single image super-resolution: Dataset and study[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017: 126-135.

[47] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.

[48] Watkins C J C H, Dayan P. Q-learning[J]. Machine learning, 1992, 8(3): 279-292.

[49] Cho K, Van Merriënboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[J]. arXiv preprint arXiv:1406.1078, 2014.

[50] Mnih V, Badia A P, Mirza M, et al. Asynchronous methods for deep reinforcement learning[C]//International conference on machine learning. PMLR, 2016: 1928-1937.

[51] Miyato T, Kataoka T, Koyama M, et al. Spectral normalization for generative adversarial networks[J]. arXiv preprint arXiv:1802.05957, 2018.

[52] Dabov K, Foi A, Katkovnik V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering[J]. IEEE Transactions on image processing, 2007, 16(8): 2080-2095.

[53] Gu S, Zhang L, Zuo W, et al. Weighted nuclear norm minimization with application to image denoising[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2014: 2862-2869.

[54] Zoran D, Weiss Y. From learning models of natural image patches to whole image restoration[C]//2011 International Conference on Computer Vision. IEEE, 2011: 479-486.

[55] Schmidt U, Roth S. Shrinkage fields for effective image restoration[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2014: 2774-2781.

[56] Zhang K, Zuo W, Zhang L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising[J]. IEEE Transactions on Image Processing, 2018, 27(9): 4608-4622.

[57] Lebrun M, Colom M, Morel J M. Multiscale image blind denoising[J]. IEEE Transactions on Image Processing, 2015, 24(10): 3149-3161.

[58] Zhang K, Zuo W, Gu S, et al. Learning deep CNN denoiser prior for image restoration[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 3929-3938.

[59] Tai Y, Yang J, Liu X, et al. Memnet: A persistent memory network for image restoration[C]//Proceedings of the IEEE international conference on computer vision. 2017: 4539-4547.

[60] Zhang Y, Li K, Zhong B, et al. Residual Non-local Attention Networks for Image Restoration[C]//International Conference on Learning Representations. 2019.

[61] Anwar S, Barnes N. Real image denoising with feature attention[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 3155-3164.

[62] Fan Z, Su R, Zhang W, et al. Hybrid actor-critic reinforcement learning in parameterized action space[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. 2019.

[63] Mertens T, Kautz J, Van Reeth F. Exposure fusion[C]//15th Pacific Conference on Computer Graphics and Applications (PG'07). IEEE, 2007: 382-390.

[64] Guo X, Li Y, Ling H. LIME: Low-light image enhancement via illumination map estimation[J]. IEEE Transactions on image processing, 2016, 26(2): 982-993.

[65] Wang R, Zhang Q, Fu C W, et al. Underexposed photo enhancement using deep illumination estimation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 6849-6857.

[66] Chen Y S, Wang Y C, Kao M H, et al. Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6306-6314.

[67] Jiang Y, Gong X, Liu D, et al. Enlightengan: Deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349.

[68] Moran S, McDonagh S, Slabaugh G. Curl: Neural curve layers for global image enhancement[C]//2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021: 9796-9803.

[69] Guo C, Li C, Guo J, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 1780-1789.

[70] Kim H U, Koh Y J, Kim C S. PieNet: Personalized image enhancement network[C]//European Conference on Computer Vision. Springer, Cham, 2020: 374-390.

[71] Schroff F, Kalenichenko D, Philbin J. Facenet: A unified embedding for face recognition and clustering[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 815-823.

[72] Huang X, Belongie S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the IEEE international conference on computer vision. 2017: 1501-1510.

[73] Arjovsky M, Bottou L. Towards principled methods for training generative adversarial networks[J]. arXiv preprint arXiv:1701.04862, 2017.

[74] Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution[C]//European conference on computer vision. Springer, Cham, 2016: 694-711.

[75] Van der Maaten L, Hinton G. Visualizing data using t-SNE[J]. Journal of machine learning research, 2008, 9(11).

[76] Yoo J, Uh Y, Chun S, et al. Photorealistic style transfer via wavelet transforms[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 9036-9045.

[77] Reinhard E, Adhikhmin M, Gooch B, et al. Color transfer between images[J]. IEEE Computer graphics and applications, 2001, 21(5): 34-41.

CLC number:

 TP391.4    

Open access date:

 2023-06-28    
