Thesis title (Chinese): | Research on Low-Light Image Enhancement Methods Based on Generative Adversarial Networks |
Name: | |
Student ID: | 19208049013 |
Confidentiality level: | Confidential (open after 1 year) |
Thesis language: | chi |
Discipline code: | 0812 |
Discipline name: | Engineering - Computer Science and Technology (degrees may be conferred in engineering or science) |
Student type: | Master's |
Degree level: | Master of Engineering |
Degree year: | 2022 |
Degree-granting institution: | Xi'an University of Science and Technology |
Department: | |
Major: | |
Research direction: | Media Computing and Visualization |
First supervisor name: | |
First supervisor institution: | |
Thesis submission date: | 2022-06-21 |
Thesis defense date: | 2022-06-06 |
Thesis title (English): | Research on low-light image enhancement method based on generative adversarial networks |
Keywords (Chinese): | |
Keywords (English): | Low-Light Image Enhancement; Generative Adversarial Networks; Retinex; UNet |
Abstract (Chinese): |
With the rapid development of science and technology, images and videos have become indispensable tools for obtaining information about the outside world. They carry great practical significance and commercial value in fields such as the photography industry, industrial safety, and computer vision. In daily life, however, images captured under poor ambient lighting often suffer from low brightness, low contrast, and loss of detail. Such low-quality images not only fail to satisfy human visual perception but also pose great difficulties for subsequent image processing. This thesis therefore studies classic low-light image enhancement techniques and generative adversarial network models of recent years, and designs two low-light image enhancement methods based on generative adversarial networks. The main work and innovations of this thesis are as follows: (1) To address the complex network models and the difficulty of constructing paired datasets in deep-learning-based low-light image enhancement, a structurally simpler low-light image enhancement method, RetinexGAN, is proposed. Based on generative adversarial networks and Retinex theory, it consists of one generator network and two discriminator networks and is trained in an unsupervised manner on unpaired datasets. In addition, to reduce the number of network parameters, only two convolutional layers are used in the generator, which greatly increases computational speed. In this network, the generator produces a reflectance map and an illumination map, the discriminators judge whether the reflectance and illumination maps are real or fake, and the two maps are then recombined into the enhanced image through the linearly corrected Retinex formula improved in this thesis. In the experiments, tests on several common low-light datasets show, in both quantitative evaluation and visual quality, that RetinexGAN is effective for low-light image enhancement. (2) To address the local noise that remains after low-light image enhancement, a two-stage training scheme is proposed, with the stages devoted to brightening the image and removing noise, respectively. In the first stage, the generator and discriminator of RetinexGAN are improved to build a simpler, end-to-end low-light image enhancement network that raises image brightness and contrast. In the second stage, a D-UNet architecture is proposed by combining the DropBlock strategy with the UNet network, and adversarial loss, color-consistency loss, and perceptual loss are used to restore the color and texture of the first-stage enhanced image, thereby removing noise. In the experiments, quantitative evaluation and visual results show that the proposed two-stage low-light enhancement method is not inferior to current state-of-the-art methods. Moreover, the second-stage training scheme also performs well on image denoising and deraining tasks. |
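To make the RetinexGAN pipeline described in the abstract more concrete, below is a minimal PyTorch sketch, not the thesis implementation: it assumes a two-convolution generator that predicts a reflectance map and an illumination map and then recombines them with the classical element-wise Retinex model I = R × L. The layer widths, activations, output-channel split, and the plain recomposition are illustrative assumptions; the thesis's improved linear-correction Retinex formula and its two discriminator networks are not reproduced here.

```python
# Minimal sketch (not the thesis code). TinyRetinexGenerator, its layer widths,
# and the plain element-wise recomposition I = R * L are illustrative assumptions;
# the thesis uses an improved linear-correction Retinex formula and trains the
# generator against two discriminators, which are omitted here.
import torch
import torch.nn as nn


class TinyRetinexGenerator(nn.Module):
    """Two convolution layers that predict a 3-channel reflectance map R and a
    1-channel illumination map L from a low-light RGB image."""

    def __init__(self, width: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(3, width, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(width, 4, kernel_size=3, padding=1)  # 3 ch for R + 1 ch for L
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor):
        feat = self.act(self.conv1(x))
        out = torch.sigmoid(self.conv2(feat))   # keep R and L in [0, 1]
        return out[:, :3], out[:, 3:]           # reflectance, illumination


def retinex_recompose(reflectance: torch.Tensor, illumination: torch.Tensor) -> torch.Tensor:
    # Classical Retinex model: image = reflectance * illumination (element-wise).
    # The thesis replaces this step with its linear-correction variant.
    return reflectance * illumination


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 256, 256)      # stand-in for a low-light image
    reflectance, illumination = TinyRetinexGenerator()(low_light)
    enhanced = retinex_recompose(reflectance, illumination)
    print(enhanced.shape)                       # torch.Size([1, 3, 256, 256])
```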
Abstract (English): |
With the rapid development of science and technology, images and videos have become indispensable tools for people to obtain external information. They have great practical significance and commercial value in fields such as the photography industry, industrial safety and computer vision. In daily life, however, the captured images, affected by ambient light, often present problems such as low brightness, low contrast and loss of image detail. Such low-quality images not only fail to satisfy the visual perception of the human eye, but also bring great difficulties to subsequent image processing. Therefore, this paper studies classic low-light image enhancement techniques and Generative Adversarial Network models of recent years, and designs two low-light image enhancement methods based on Generative Adversarial Networks. The major work and innovations of this paper are as follows: (1) In low-light image enhancement, deep learning methods suffer from complex network models and the difficulty of obtaining paired datasets. To solve these problems, a simpler low-light image enhancement method, RetinexGAN, is proposed in this paper. Based on Generative Adversarial Networks and Retinex theory, the proposed method consists of one generator network and two discriminator networks, and uses unpaired datasets for unsupervised training. Besides, in order to reduce the parameters of the network, only two convolution layers are used in the generator, which greatly improves the calculation speed. In this network, the generator generates the reflectance map and the illumination map, the discriminators judge whether the reflectance map and the illumination map are real or fake, and the two maps are then reconstructed into the enhanced image through the improved linear-correction Retinex formula proposed in this paper. In the experimental part, tests have been run on challenging datasets, and both quantitative evaluation and visual results demonstrate the effectiveness of RetinexGAN in low-light image enhancement. (2) Aiming at the problem that enhanced low-light images contain local noise, a two-stage training method is proposed, whose stages are dedicated to increasing image brightness and removing noise, respectively. In the first stage, the generator and discriminator in RetinexGAN are improved, and a simpler, end-to-end low-light image enhancement network is constructed to increase the brightness and contrast of the image. In the second stage, a D-UNet network is devised by combining the DropBlock strategy and the UNet network. Furthermore, to remove noise, the color and texture of the first-stage enhanced image are recovered by means of adversarial loss, color-consistency loss and perceptual loss. In the experimental part, both quantitative evaluation and visual results show that the proposed two-stage low-light enhancement method is not inferior to current state-of-the-art methods. In addition, the second-stage training method also performs well in the domains of image denoising and deraining. |
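As an illustration of the DropBlock strategy that the second-stage D-UNet combines with a UNet, here is a minimal PyTorch sketch of a DropBlock layer. The drop probability, block size, channel-shared mask, and the place where such a layer would sit inside the UNet are all illustrative assumptions; the thesis's actual D-UNet architecture and its adversarial, color-consistency and perceptual losses are not reproduced here.

```python
# Minimal sketch (not the thesis code) of the DropBlock idea used in D-UNet.
# drop_prob, block_size, and the channel-shared mask are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DropBlock2d(nn.Module):
    """Drops contiguous block_size x block_size regions of a feature map during
    training, a structured alternative to per-pixel dropout."""

    def __init__(self, drop_prob: float = 0.1, block_size: int = 5):
        super().__init__()
        self.drop_prob = drop_prob
        self.block_size = block_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob == 0.0:
            return x
        # Seed probability chosen so roughly drop_prob of the units end up dropped.
        gamma = self.drop_prob / (self.block_size ** 2)
        # One mask per sample, shared across channels for simplicity.
        seeds = (torch.rand_like(x[:, :1]) < gamma).float()
        # Grow each seed into a block_size x block_size region via max pooling
        # (assumes an odd block_size so the spatial size is preserved).
        dropped = F.max_pool2d(seeds, kernel_size=self.block_size,
                               stride=1, padding=self.block_size // 2)
        keep_mask = 1.0 - dropped
        # Rescale so the expected activation magnitude stays unchanged.
        return x * keep_mask * keep_mask.numel() / keep_mask.sum().clamp(min=1.0)


if __name__ == "__main__":
    features = torch.rand(1, 64, 128, 128)            # stand-in encoder features
    dropblock = DropBlock2d(drop_prob=0.1, block_size=5)
    dropblock.train()
    print(dropblock(features).shape)                  # torch.Size([1, 64, 128, 128])
```

In practice such a layer would be placed after convolutional blocks of the UNet encoder or decoder, so that contiguous feature regions are dropped during training, which regularizes more strongly than per-pixel dropout.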
CLC number: | TP391.41 |
Open-access date: | 2023-06-28 |