Academic Journal of Computing & Information Science, 2023, 6(13); doi: 10.25236/AJCIS.2023.061327.

CandyCycleGAN: Candy Color Coloring Algorithm Based on Chromaticity Verification

Author(s)

Shisong Zhu, Mei Xu, Bibo Lu, Huan Xu

Corresponding Author:
Mei Xu
Affiliation(s)

School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo City, Henan Province, 454003, China

Abstract

Candy color is an emerging style in photography; its high brightness, low saturation, and low contrast offer viewers a distinctive color experience. However, no algorithm dedicated to candy color processing has been reported so far, so we propose CandyCycleGAN, a network based on chromaticity verification, to achieve candy color recoloring. Building on the CycleGAN network, we introduce multi-scale fusion to enhance the detailed features and overall quality of the output image; we design a chromaticity verification process that constrains the range of generated chromaticity values so that the final result meets expectations; we adopt Smooth L1 Loss as the chromaticity-verification loss to measure the gap between the generated image and the expected image, and we compare the coloring quality obtained with different loss functions; we add a gradient penalty that constructs a new data distribution between the generated image and the expected image and is applied to every input sample, which changes the way the discriminator's gradients are constrained and improves training stability; and the discriminator outputs decision matrices at two different scales, which drives the generator to produce images with higher resolution and finer detail. In comparison experiments against five algorithms, including CycleGAN and AdaAttN, CandyCycleGAN reduces computation by 37.95% and improves PSNR by 49.83%, SSIM by 54.77%, and COLORFUL by 29.09% relative to the baseline CycleGAN model; compared with the second-best AdaAttN model, computation rises by 0.93%, but PSNR improves by 7.36%, SSIM by 7.14%, and COLORFUL by 17.30%. These results show that the proposed CandyCycleGAN achieves the high-brightness, low-saturation, low-contrast look of candy color better than existing algorithms, further validating the effectiveness of the method.
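To make the two loss terms described above concrete, the following is a minimal PyTorch sketch of a Smooth L1 chromaticity-verification loss and a gradient penalty computed on interpolated samples. It is an illustrative assumption rather than the authors' released implementation; the function names, the channel-ratio definition of chromaticity, and the discriminator interface are all hypothetical.

```python
# Hypothetical sketch of the two loss terms named in the abstract.
# Not the authors' code: names and the chromaticity definition are assumptions.
import torch
import torch.nn.functional as F


def chroma_verification_loss(fake_rgb: torch.Tensor,
                             target_rgb: torch.Tensor) -> torch.Tensor:
    """Smooth L1 penalty on chromaticity (per-pixel channel ratios) rather
    than raw RGB values, so brightness differences are ignored and only the
    color balance of the generated image is constrained."""
    eps = 1e-6
    fake_chroma = fake_rgb / (fake_rgb.sum(dim=1, keepdim=True) + eps)
    target_chroma = target_rgb / (target_rgb.sum(dim=1, keepdim=True) + eps)
    return F.smooth_l1_loss(fake_chroma, target_chroma)


def gradient_penalty(discriminator, real: torch.Tensor,
                     fake: torch.Tensor) -> torch.Tensor:
    """WGAN-GP-style penalty: score samples interpolated between real and
    generated images and push the discriminator's gradient norm toward 1,
    instead of clipping its weights."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    scores = discriminator(mixed)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed,
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```

In a training loop, both terms would be weighted and added to the usual CycleGAN adversarial and cycle-consistency losses; the weights are hyperparameters not specified here.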

Keywords

CandyCycleGAN; candy color; chromaticity verification; multi-scale fusion; CycleGAN

Cite This Paper

Shisong Zhu, Mei Xu, Bibo Lu, Huan Xu. CandyCycleGAN: Candy Color Coloring Algorithm Based on Chromaticity Verification. Academic Journal of Computing & Information Science (2023), Vol. 6, Issue 13: 196-207. https://doi.org/10.25236/AJCIS.2023.061327.

References

[1] Liqin Cao, Yongxing Shang, and Tingting Liu. Novel image colorization of a local adaptive weighted average filter [J]. Journal of Image and Graphics, 2019, 24(08): 1249-1257. [DOI: 10.11834/jig.180608]

[2] Zhang R, Zhu J Y and Isola P. Real-time user-guided image colorization with learned deep priors[J]. ACM Transactions on Graphics, 2017, 36(4): Article No.119. [DOI: 10.1145/3072959.3073703]

[3] Sangkloy P, Lu J W, Fang C. Scribbler: controlling deep image synthesis with sketch and color[C] //Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Los Alamitos: IEEE Computer Society Press, 2017: 6836-6845. [DOI: 10.1109/CVPR.2017.723]

[4] Yuechuan Zhou, Jianxun Zhang, and Wenxin Dong. 2022. Unsupervised Landscape Painting Style Transfer Network with Multiscale Semantic Information[J/OL]. Computer Engineering and Applications: 1-15[2023-07-05].

[5] Shengxiong Wang, Ruian Liu, and Da Yan. Image Style Transfer Algorithm Based on Improved Generative Adversarial Network[J/OL]. Electronic Science and Technology, 2023: 1-9[2023-07-05].

[6] Wenhua Ding, Junwei Du, and Lei Hou. Fashion Content and Style Transfer based on Generative Adversarial Network[J/OL]. Computer Engineering and Applications, 2023: 1-11.

[7] Wenhui Tan and Yilai Zhang. Design and Implementation of Multi Collection Style Transfer Algorithm[J]. Journal of Fujian Computer, 2023, 39(04): 42-48.

[8] J. Y. Zhu, T. Park, P. Isola and A. A. Efros. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 2242-2251, [DOI: 10.1109/ICCV.2017.244]. 

[9] Y. Peng and Y. Zhou. Specific Emitter Identification via Squeeze-and-Excitation Neural Network in Frequency Domain. 2021 40th Chinese Control Conference (CCC), Shanghai, China, pp. 8310-8314. [DOI: 10.23919/CCC52363.2021.9549470]. 

[10] Dunlang Luo, Min Jiang, Linjun Yuan, et al. Research on Image Coloring Based on Conditional Generative Adversarial Network[J]. Computer Engineering and Applications, 2021, 57(13): 193-198.

[11] P. Isola, J.-Y. Zhu, T. Zhou and A. A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 5967-5976. [DOI: 10.1109/CVPR.2017.632].

[12] Arjovsky M, Chintala S, and Bottou L. Wasserstein GAN[OL]. [2020-08-20]. https://arxiv.org/pdf/1701.07875.pdf

[13] Gulrajani I, Ahmed F, and Arjovsky M. Improved training of Wasserstein GANs[C] //Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM Press, 2017: 5769-5779.

[14] Zhou Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. in IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004. [DOI: 10.1109/TIP.2003.819861].

[15] David Hasler and Sabine E. Suesstrunk. Measuring colorfulness in natural images[C]// Proceedings of SPIE, 2003, 5007: 87-95. [DOI: 10.1117/12.477378].

[16] Songhua Liu, Tianwei Lin, Dongliang He, et al. AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer. IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 2021, pp. 6629-6638. [DOI: 10.1109/ICCV48922.2021.00658].

[17] Nguyen R, Kim S J, and Brown M S. Illuminant Aware Gamut-Based Color Transfer[J]. Computer Graphics Forum, 2015, 33(7): 319-328.

[18] Hongan Li, Qiaoxue Zheng, Jing Zhang, et al. Pix2Pix-Based Grayscale Image Coloring Method[J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(06): 929-938.

[19] Zhang R, Zhu Junyan, Isola P, et al. Real-time user-guided image colorization with learned deep priors[J]. ACM Transactions on Graphics, 2017, 36(4): 1-11. https://doi.org/10.1145/3072959.3073703

[20] Vitoria P, Raad L, Ballester C. ChromaGAN: Adversarial Picture Colorization with Semantic Class Distribution[C]// IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2020: 2434-2443. [DOI: 10.1109/WACV45572.2020.9093389].