Academic Journal of Computing & Information Science, 2025, 8(2); doi: 10.25236/AJCIS.2025.080202.
Xianwei Qiu1, Jun Li1, Wuyang Shan1, Wuyang Fan2
1College of Computer Science and Cybersecurity, Chengdu University of Technology, Chengdu, China
2College of Architecture and Civil Engineering, Chengdu University, Chengdu, China
This paper presents a No-Reference Image Quality Assessment (NR-IQA) method inspired by human visual perception. It is built on a dual-branch multi-level residual network that combines two complementary streams: one processes HSV images to capture content features aligned with human perception, while the other uses contrast-sensitive weighted gradient (CSG) images to extract structural and texture features. This dual-branch architecture assesses image quality comprehensively from both content and structural perspectives. An advanced feature fusion strategy is introduced: a dedicated weight module assigns different importance to the content and structural features, yielding an accurate final quality score. Experimental results on benchmark datasets, including LIVE, CSIQ, TID2013, LIVEC, and KonIQ-10k, demonstrate that our method outperforms existing state-of-the-art NR-IQA techniques on key metrics such as PLCC and SROCC. Our work has significant practical value for advancing image quality assessment, with potential applications in multimedia compression, image restoration, enhancement, and related image processing technologies.
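The two ideas sketched in the abstract can be outlined in code: a contrast sensitivity function (e.g. the classic Mannos–Sakrison model [13]) weights spatial frequencies the eye is most sensitive to, and a weight module blends the content-branch and structure-branch predictions into one score. The following is a minimal illustrative sketch only; the function names and the fixed logits are hypothetical stand-ins for the paper's trained weight module, not its actual implementation.

```python
import math

def csf_mannos_sakrison(f):
    """Classic contrast sensitivity function (Mannos & Sakrison, 1974):
    sensitivity peaks at mid spatial frequencies (around 8 cycles/degree)
    and falls off at both low and high frequencies."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

def softmax(logits):
    """Numerically stable softmax, used to turn raw weights into
    importance values that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_scores(content_score, structure_score, weight_logits):
    """Weighted fusion of the two branch predictions. In the paper the
    weights come from a learned module; here they are fixed logits."""
    w_content, w_structure = softmax(weight_logits)
    return w_content * content_score + w_structure * structure_score

# Equal logits reduce the fusion to a plain average of the two branches.
score = fuse_scores(0.8, 0.6, [0.0, 0.0])
print(round(score, 3))  # 0.7
```

With unequal logits the module can emphasize whichever branch is more informative for a given distortion type, which is the role the paper assigns to its weight module.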
NR-IQA, Dual-Branch Network, Feature Fusion Strategy
Xianwei Qiu, Jun Li, Wuyang Shan, Wuyang Fan. No-Reference Image Quality Assessment Based on Human Visual System and Dual-Branch Multi-Level Residual Network. Academic Journal of Computing & Information Science (2025), Vol. 8, Issue 2: 7-19. https://doi.org/10.25236/AJCIS.2025.080202.
[1] Li, F., Shuang, F., Liu, Z., Qian, X.: A cost-constrained video quality satisfaction study on mobile devices. IEEE Transactions on Multimedia 20(5), 1154–1168 (2018)
[2] Moorthy, A.K., Bovik, A.C.: Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Transactions on Image Processing 20(12), 3350–3364 (2011)
[3] Saad, M.A., Bovik, A.C., Charrier, C.: A DCT statistics-based blind image quality index. IEEE Signal Processing Letters 17(6), 583–586 (2010)
[4] Saad, M.A., Bovik, A.C., Charrier, C.: Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Transactions on Image Processing 21(8), 3339–3352 (2012)
[5] Mittal, A., Moorthy, A.K., Bovik, A.C.: No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing 21(12), 4695–4708 (2012)
[6] Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters 20(3), 209–212 (2013)
[7] Kang, L., Ye, P., Li, Y., Doermann, D.: Convolutional neural networks for no-reference image quality assessment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1733–1740 (2014)
[8] Bosse, S., Maniry, D., Müller, K.-R., Wiegand, T., Samek, W.: Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing 27(1), 206–219 (2018)
[9] Kim, J., Nguyen, A.-D., Lee, S.: Deep CNN-based blind image quality predictor. IEEE Transactions on Neural Networks and Learning Systems 30(1), 11–24 (2019)
[10] Su, S., Yan, Q., Zhu, Y., Zhang, C., Ge, X., Sun, J., Zhang, Y.: Blindly assess image quality in the wild guided by a self-adaptive hyper network. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3667–3676 (2020)
[11] Zhang, W., Ma, K., Yan, J., Deng, D., Wang, Z.: Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology 30(1), 36–47 (2020) https://doi.org/10.1109/TCSVT.2018.2886771
[12] Yan, Q., Gong, D., Zhang, Y.: Two-stream convolutional networks for blind image quality assessment. IEEE Transactions on Image Processing 28(5), 2200–2211 (2019)
[13] Mannos, J., Sakrison, D.: The effects of a visual fidelity criterion of the encoding of images. IEEE Transactions on Information Theory 20(4), 525–536 (1974)
[14] Campbell, F.W., Robson, J.G.: Application of Fourier analysis to the visibility of gratings. The Journal of Physiology 197(3), 551–566 (1968)
[15] Gao, X., Lu, W., Tao, D., Li, X.: Image quality assessment based on multiscale geometric analysis. IEEE Transactions on Image Processing 18(7), 1409–1423 (2009)
[16] Saha, A., Wu, Q.M.J.: Utilizing image scales towards totally training free blind image quality assessment. IEEE Transactions on Image Processing 24(6), 1879–1892 (2015)
[17] Sheikh, H.R., Bovik, A.C., De Veciana, G.: An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Transactions on Image Processing 14(12), 2117–2128 (2005)
[18] Yue, G., Hou, C., Zhou, T., Zhang, X.: Effective and efficient blind quality evaluator for contrast distorted images. IEEE Transactions on Instrumentation and Measurement 68(8), 2733–2741 (2019)
[19] Shen, L., Zhang, C., Hou, C.: Saliency-based feature fusion convolutional network for blind image quality assessment. Signal, Image and Video Processing 16(2), 419–427 (2022)
[20] Liu, C., Zheng, Y., Liao, K., Chen, B., Wang, K., Zhong, C., Xie, B., Miao, Y.: No-reference image quality assessment of multi-level residual feature augmentation. Signal, Image and Video Processing 17(4), 1275–1283 (2022)
[21] Ye, Z., Wu, Y., Liao, D., Yu, T., Yang, J., Hu, J.: DRIQA-NR: no-reference image quality assessment based on disentangled representation. Signal, Image and Video Processing, 1–9 (2022)
[22] Zhu, P., Liu, S., Liu, Y., Yap, P.-T.: METER: Multi-task efficient transformer for no-reference image quality assessment. Applied Intelligence 53, 29974–29990 (2023)
[23] Chen, Y., Chen, Z., Yu, M., Tang, Z.: Dual-feature aggregation network for no-reference image quality assessment. MultiMedia Modeling, 149–161 (2023)
[24] Zhang, Y., Wang, C., Lv, X., Song, Y.: Attention-driven residual-dense network for no-reference image quality assessment. Signal, Image and Video Processing 18(Suppl 1), 537–551 (2024)
[25] Woo, S., Park, J., Lee, J.-Y., Kweon, I.S.: CBAM: Convolutional block attention module. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19 (2018)
[26] He, Y., Xin, M., Wang, Y., Xu, C.: Automatic edge detection method of power chip packaging defect image based on improved Canny algorithm (2024). IEEE Xplore
[27] Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1597–1604 (2009). IEEE
[28] Jayaraman, D., Mittal, A., Moorthy, A.K., Bovik, A.C.: Objective quality assessment of multiply distorted images. In: 2012 Conference Record of the Forty-Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pp. 1693–1697 (2012). IEEE
[29] Larson, E.C., Chandler, D.M.: Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging 19(1) (2010)
[30] Ponomarenko, N., Jin, L., Ieremeiev, O., Lukin, V., Egiazarian, K., Astola, J., Vozel, B., Chehdi, K., Carli, M., Battisti, F., et al.: Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication 30, 57–77 (2015)
[31] Ghadiyaram, D., Bovik, A.C.: Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing 25(1), 372–387 (2016)
[32] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
[33] Kim, J., Lee, S.: Fully deep blind image quality predictor. IEEE Journal of Selected Topics in Signal Processing 11(1), 206–220 (2017)
[34] Yan, Q., Gong, D., Zhang, Y.: Two-stream convolutional networks for blind image quality assessment. IEEE Transactions on Image Processing 28(5), 2200–2211 (2019)