Multi-scale Attention Dilated Residual Image Denoising Network based on Skip Connection

Keywords

image denoising; deep learning; dilated residual block; sparse residual block

How to Cite

Du, Z., Zhou, X., Lü, M., Chen, Y., & Tang, B. (2024). Multi-scale Attention Dilated Residual Image Denoising Network based on Skip Connection. Instrumentation, 11(3), 41–53. https://doi.org/10.15878/j.instr.202400187

Abstract

In the field of image denoising, deep learning holds a dominant position. However, current network models tend to lose fine-grained information as network depth increases. To address this issue, this paper proposes a Multi-scale Attention Dilated Residual Image Denoising Network (MADRNet) based on skip connections, which consists of a Dense Interval Transmission Block (DTB), a Sparse Residual Block (SRB), a Dilated Residual Attention Reconstruction Block (DRAB) and a Noise Extraction Block (NEB). The DTB enhances the classical dense layer by reducing information redundancy and extracting more accurate feature information. Meanwhile, the SRB improves feature information exchange and model generalization through a sparse mechanism and a skip-connection strategy with different dilation factors. The NEB is primarily responsible for extracting and estimating noise; its output, together with that of the SRB, feeds into the DRAB to effectively prevent the loss of shallow feature information and improve the denoising effect. Furthermore, the DRAB integrates a dilated residual block with an attention mechanism to extract hidden noise information, while using residual learning to reconstruct clear images. We respectively examine the performance of MADRNet on grayscale image denoising, color image denoising and real image denoising. The experimental results demonstrate that the proposed network outperforms several excellent image denoising networks in terms of peak signal-to-noise ratio, structural similarity index measure and denoising time, and effectively addresses the loss of detail information.
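The DRAB described above combines dilated convolution, channel attention and a residual skip connection. The following NumPy sketch illustrates that combination in a minimal form; it is an illustrative reconstruction, not the authors' implementation — the function names, kernel shapes and the parameter-free sigmoid gate are assumptions for demonstration only.

```python
import numpy as np

def dilated_conv2d(x, w, dilation=2):
    """'Same'-padded 2-D dilated convolution.
    x: (C_in, H, W) feature map, w: (C_out, C_in, k, k) kernels."""
    c_out, c_in, k, _ = w.shape
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for a in range(k):          # dilation spaces the kernel taps,
                for b in range(k):      # enlarging the receptive field
                    out[o] += w[o, i, a, b] * xp[i,
                                                 a * dilation:a * dilation + H,
                                                 b * dilation:b * dilation + W]
    return out

def channel_attention(x):
    """Gate each channel by a sigmoid of its global average
    (a simplified, parameter-free squeeze-and-excitation step)."""
    s = x.mean(axis=(1, 2))             # squeeze: per-channel statistic
    g = 1.0 / (1.0 + np.exp(-s))        # excite: sigmoid gate in (0, 1)
    return x * g[:, None, None]

def dilated_residual_attention_block(x, w1, w2, dilation=2):
    """Conv -> ReLU -> Conv -> channel attention, plus a skip connection
    so shallow feature information bypasses the block unchanged."""
    y = np.maximum(dilated_conv2d(x, w1, dilation), 0.0)
    y = dilated_conv2d(y, w2, dilation)
    y = channel_attention(y)
    return x + y                        # residual learning
```

The skip connection makes the block learn a residual correction: with all-zero kernels the block reduces to the identity, which is exactly what lets shallow features survive deep stacks of such blocks.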


This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2024 Zhiting Du, Xianchun Zhou, Mengnan Lü, Yuze Chen, Binxin Tang
