Reinforcement Learning–Guided Hyperparameter Tuning for U-Net-Based Super-Resolution of Brain MRI Under Synthetic Degradation

Authors

  • Suci Ramadini Sriwijaya University, Indonesia
  • Julian Supardi Sriwijaya University, Indonesia

DOI:

https://doi.org/10.63158/journalisi.v8i2.1565

Keywords:

Medical image super-resolution, Brain MRI, Synthetic degradation, U-Net, Reinforcement learning, Hyperparameter optimization, PSNR, SSIM

Abstract

Low-resolution magnetic resonance imaging (MRI) can obscure fine anatomical details, motivating computational super-resolution (SR) to enhance perceived image quality. This study proposes an SR pipeline for 2D brain MRI using a U‑Net baseline model and a reinforcement learning (RL) agent that automates hyperparameter tuning. Because the selected public dataset does not provide paired low-resolution/high-resolution (LR–HR) images, LR inputs are generated synthetically through a controlled degradation process (blur–downsample–upsample–noise), with deterministic degradation for validation and testing to ensure stable evaluation. The baseline U‑Net is trained with an L1 objective (optionally mixed with a differentiable SSIM loss), the AdamW optimizer, and a ReduceLROnPlateau scheduler guided by validation PSNR. A Double Deep Q‑Network (Double DQN) agent then selects discrete action combinations of learning rate and SSIM-loss mixing weight to fine-tune the baseline. On the held-out test set (n=60), the baseline improves degraded inputs from 27.04±3.21 dB to 30.10±3.59 dB in PSNR and from 0.706±0.132 to 0.875±0.064 in SSIM. RL fine-tuning yields a modest additional PSNR gain to 30.20±3.58 dB, while SSIM remains comparable at 0.873±0.066. Paired statistical tests confirm that the PSNR improvement is significant (p<0.01), whereas the change in SSIM is not, suggesting that in this synthetic degradation setting RL provides reliable but incremental refinement when the baseline is already strong.
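The degradation pipeline described in the abstract (blur, downsample, upsample, add noise, with a fixed seed for deterministic evaluation splits) and the PSNR metric can be sketched as below. The kernel width, scale factor, and noise level here are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Normalized 1D Gaussian kernel of length 2*radius + 1.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma=1.0):
    # Separable Gaussian blur: filter along rows, then along columns.
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, tmp)

def degrade(hr, scale=2, sigma=1.0, noise_std=0.01, seed=0):
    """Blur -> downsample -> upsample -> noise; seeded RNG makes it deterministic."""
    rng = np.random.default_rng(seed)
    lr = blur(hr, sigma)[::scale, ::scale]             # blur + decimate
    up = np.repeat(np.repeat(lr, scale, 0), scale, 1)  # nearest-neighbour upsample
    return np.clip(up + rng.normal(0.0, noise_std, up.shape), 0.0, 1.0)

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images in [0, data_range].
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```

Applying deterministic degradation (fixed seed) to validation and test images, as the abstract describes, guarantees that PSNR/SSIM differences between runs reflect the model rather than the random degradation draw.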


References

[1] T. C. Arnold, C. W. Freeman, B. Litt, and J. M. Stein, “Low-field MRI: Clinical promise and challenges,” J. Magn. Reson. Imaging, vol. 57, no. 1, pp. 25–44, Jan. 2023, doi: 10.1002/jmri.28408.

[2] M. W. Haskell, J. F. Nielsen, and D. C. Noll, “Off-resonance artifact correction for MRI: A review,” NMR Biomed., vol. 36, no. 5, May 2023, doi: 10.1002/nbm.4867.

[3] C. Sarasaen, S. Chatterjee, M. Breitkopf, G. Rose, A. Nürnberger, and O. Speck, “Fine-tuning deep learning model parameters for improved super-resolution of dynamic MRI with prior-knowledge,” Artif. Intell. Med., vol. 121, Nov. 2021, doi: 10.1016/j.artmed.2021.102196.

[4] K. Chauhan et al., “Deep Learning-Based Single-Image Super-Resolution: A Comprehensive Review,” IEEE Access, vol. 11, pp. 21811–21830, 2023, doi: 10.1109/ACCESS.2023.3251396.

[5] H. Su et al., “A review of deep-learning-based super-resolution: From methods to applications,” Pattern Recognit., vol. 157, Jan. 2025, doi: 10.1016/j.patcog.2024.110935.

[6] D. Passos and P. Mishra, “A tutorial on automatic hyperparameter tuning of deep spectral modelling for regression and classification tasks,” Chemom. Intell. Lab. Syst., vol. 223, Apr. 2022, doi: 10.1016/j.chemolab.2022.104520.

[7] F. M. Talaat and S. A. Gamel, “RL based hyper-parameters optimization algorithm (ROA) for convolutional neural network,” J. Ambient Intell. Humaniz. Comput., vol. 14, no. 10, pp. 13349–13359, Oct. 2023, doi: 10.1007/s12652-022-03788-y.

[8] P. Nandal, S. Pahal, A. Khanna, and P. Rogerio Pinheiro, “Super-Resolution of Medical Images Using Real ESRGAN,” IEEE Access, vol. 12, pp. 176155–176170, 2024, doi: 10.1109/ACCESS.2024.3497002.

[9] J. Y. Lee, M. I. Hussain, K. H. Lee, H. S. Shim, S. H. Han, and D. Yang, “Transfer Learning-Based Super-Resolution for High-Precision Medical Imaging,” IEEE Access, vol. 13, pp. 124776–124791, 2025, doi: 10.1109/ACCESS.2025.3587263.

[10] Y. Yu, Y. Liu, J. Wang, N. Noguchi, and Y. He, “Obstacle avoidance method based on double DQN for agricultural robots,” Comput. Electron. Agric., vol. 204, Art. no. 107546, 2023, doi: 10.1016/j.compag.2022.107546.

[11] A. Ullah, I. Ullah, Q. M. ul Haq, S. Rubab, J. Baili, and M. A. Khan, “SDN-driven multi-objective task offloading in IoT-enabled UAVs in edge-cloud computing using double DQN,” IEEE Trans. Consum. Electron., 2025.

[12] Y. Chen, R. Xia, K. Yang, and K. Zou, “MICU: Image super-resolution via multi-level information compensation and U-net,” Expert Syst. Appl., vol. 245, p. 123111, Jul. 2024, doi: 10.1016/j.eswa.2023.123111.

[13] A. Ly, R. Dazeley, P. Vamplew, F. Cruz, and S. Aryal, “Elastic step DQN: A novel multi-step algorithm to alleviate overestimation in Deep Q-Networks,” Neurocomputing, vol. 576, Apr. 2024, doi: 10.1016/j.neucom.2023.127170.

[14] B. Luo, Z. Wu, F. Zhou, and B.-C. Wang, “Human-in-the-loop reinforcement learning in continuous-action space,” IEEE Trans. Neural Netw. Learn. Syst., vol. 35, no. 11, pp. 15735–15744, 2023.

[15] A. M. Hashan, “Brain MRI Images,” Kaggle, doi: 10.34740/KAGGLE/DS/1250260.

[16] S. Ye, S. Zhao, Y. Hu, and C. Xie, “Single-Image Super-Resolution Challenges: A Brief Review,” Electronics, vol. 12, no. 13, Jul. 2023, doi: 10.3390/electronics12132975.

[17] B. Hemanth Sai, S. Mukherjee, and S. R. Dubey, “Adaptive adam-based optimizers using second-order weight decoupling and gradient-aware weight decay for vision transformer,” Mach. Vis. Appl., vol. 36, no. 3, p. 68, May 2025, doi: 10.1007/s00138-025-01686-9.

[18] Y. Chen et al., “Deep-learning–based optical coherence tomography reconstruction for high-speed and contrast morphology and vasculature imaging,” J. Biomed. Opt., vol. 31, no. 2, Art. no. 025001, 2026, doi: 10.1117/1.JBO.31.2.025001.

[19] N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access, vol. 9, pp. 82031–82057, 2021.

[20] I. Boucherit and H. Kheddar, “Reinforced Residual Encoder–Decoder Network for Image Denoising via Deeper Encoding and Balanced Skip Connections,” Big Data Cogn. Comput., vol. 9, no. 4, Apr. 2025, doi: 10.3390/bdcc9040082.

[21] S. H. Park, Y. S. Moon, and N. I. Cho, “Perception-oriented single image super-resolution using optimal objective estimation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2023, pp. 1725–1735, doi: 10.1109/CVPR52729.2023.00172.

[22] L. Lin, H. Chen, E. E. Kuruoglu, and W. Zhou, “Robust structural similarity index measure for images with non-Gaussian distortions,” Pattern Recognit. Lett., vol. 163, pp. 10–16, 2022.

[23] Q. Ning, W. Dong, X. Li, J. Wu, and G. Shi, “Uncertainty-driven loss for single image super-resolution,” Adv. Neural Inf. Process. Syst., vol. 34, pp. 16398–16409, 2021.

[24] J. Feng, Y. Shi, G. Qu, S. H. Low, A. Anandkumar, and A. Wierman, “Stability constrained reinforcement learning for decentralized real-time voltage control,” IEEE Trans. Control Netw. Syst., vol. 11, no. 3, pp. 1370–1381, 2023.

[25] Y. Y. Tsai, B. Xiao, E. Johns, and G. Z. Yang, “Constrained-Space Optimization and Reinforcement Learning for Complex Tasks,” IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 682–689, Apr. 2020, doi: 10.1109/LRA.2020.2965392.

[26] R. Yu et al., “Improved double DQN with deep reinforcement learning for UAV indoor autonomous obstacle avoidance,” Sci. Rep., vol. 15, no. 1, p. 28133, 2025.

[27] C. Chen, Y. Wang, N. Zhang, Y. Zhang, and Z. Zhao, “A Review of Hyperspectral Image Super-Resolution Based on Deep Learning,” Remote Sens., vol. 15, no. 11, Jun. 2023, doi: 10.3390/rs15112853.

[28] L. Zhu, B. Zhong, and K.-K. Ma, “APSNR: Artifact Peak Signal-to-Noise Ratio for Image Quality Assessment,” IEEE Trans. Image Process., vol. 34, pp. 7180–7192, 2025.

Published

2026-04-12

Section

Articles

How to Cite

[1]
S. Ramadini and J. Supardi, “Reinforcement Learning–Guided Hyperparameter Tuning for U-Net-Based Super-Resolution of Brain MRI Under Synthetic Degradation”, journalisi, vol. 8, no. 2, pp. 1693–1713, Apr. 2026, doi: 10.63158/journalisi.v8i2.1565.