Rethinking Image Evaluation in
Super-Resolution

1Computer Vision Center, 2Universitat Autònoma de Barcelona, 3INSAIT, Sofia University

In a nutshell: try RQI to evaluate Super-Resolution models
and get fair comparisons

We show that even Ground Truth (GT) images (middle) in existing SR datasets can show relatively poor quality. As a result, image metrics tend to favor outputs that more closely resemble the reference GTs, even when they are perceptually poorer (left side), leading to evaluations that contradict human preferences (right side). Here, we analyze how GT quality affects the evaluation of SR models and propose RQI to fairly assess SR models with imperfect GTs.
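
To make the bias concrete, here is a minimal sketch (not the paper's code) that scores two outputs with the pyiqa package; "gt.png", "faithful.png", and "perceptual.png" are hypothetical example files. A full-reference metric (LPIPS) rewards similarity to the possibly imperfect GT, while a no-reference metric (MUSIQ) scores each output on its own, so the two can disagree exactly as in the figure above.

    # Minimal sketch (not the paper's code): FR vs. NR scoring with pyiqa.
    # "gt.png", "faithful.png", "perceptual.png" are hypothetical files.
    import pyiqa
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    lpips = pyiqa.create_metric("lpips", device=device)  # FR: lower is better
    musiq = pyiqa.create_metric("musiq", device=device)  # NR: higher is better

    for name in ("faithful.png", "perceptual.png"):
        fr = lpips(name, "gt.png").item()  # distance to the (possibly imperfect) GT
        nr = musiq(name).item()            # quality of the output alone
        print(f"{name}: LPIPS={fr:.4f} (vs GT), MUSIQ={nr:.2f} (GT-free)")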

Abstract

While recent advances in image super-resolution (SR) continually improve the perceptual quality of model outputs, these models can still fail in quantitative evaluations. This inconsistency leads to a growing distrust of existing image metrics for SR evaluation. Although image evaluation depends on both the metric and the reference ground truth (GT), researchers rarely inspect the role of GTs, as they are generally accepted as 'perfect' references. However, because much of this data was collected in the early years of SR research, without careful control of other types of distortion, we point out that GTs in existing SR datasets can exhibit relatively poor quality, which leads to biased evaluations. Following this observation, we are interested in the following questions in this paper: Are GT images in existing SR datasets 100% trustworthy for model evaluations? How does GT quality affect this evaluation? And how can we make fair evaluations when GTs are imperfect? To answer these questions, this paper presents two main contributions. First, by systematically analyzing seven state-of-the-art SR models on three real-world SR datasets, we show that low-quality GTs consistently affect SR performance across models, and that models can perform quite differently once GT quality is controlled. Second, we propose a novel perceptual quality metric, the Relative Quality Index (RQI), which measures the relative quality discrepancy of image pairs, thus resolving the biased evaluations caused by unreliable GTs. Our proposed metric achieves significantly better consistency with human opinions. We expect our work to provide insights for the SR community on how future datasets, models, and metrics should be developed.

How does GT quality affect SR model evaluations?

By gradually discarding low-quality GTs from the test datasets (images are from DIV2K, RealSR, and DRealSR) and evaluating on the remaining high-quality GTs, we make several observations: 1. Challenging images remain challenging: for all models, similar performance fluctuations occur when the same image is discarded. 2. High-quality GTs are more challenging for SR models: we observe a consistent performance drop for all models and on all metrics. 3. Evaluation results can change when GT quality is controlled: for example, SeeSR moves from rank #6 to rank #2 under LPIPS and from rank #6 to rank #1 under DISTS when only high-quality GTs are considered. 4. The perception-distortion tradeoff also persists.
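
As a rough illustration of this protocol (our own sketch under stated assumptions, not the paper's evaluation script), the snippet below ranks GTs by a hypothetical per-image no-reference quality score, keeps only the top fraction, and re-averages each model's precomputed per-image FR scores on the surviving subset. In the toy numbers, model A wins on the full set but model B wins on the high-quality subset, mirroring the ranking flips described above.

    # Sketch of the discard-low-quality-GT protocol (our own assumptions).
    # gt_quality: hypothetical NR scores for the GTs; model_scores: per-model,
    # per-image FR scores (e.g. LPIPS, lower is better) computed beforehand.
    import numpy as np

    def rescored_means(gt_quality, model_scores, keep_ratio):
        """Re-average each model's scores on the top-quality fraction of GTs."""
        n_keep = max(1, int(len(gt_quality) * keep_ratio))
        keep = np.argsort(gt_quality)[::-1][:n_keep]  # indices of the best GTs
        return {m: float(s[keep].mean()) for m, s in model_scores.items()}

    gt_quality = np.array([0.9, 0.2, 0.7, 0.4, 0.8])  # toy NR scores of 5 GTs
    model_scores = {
        "A": np.array([0.30, 0.10, 0.28, 0.15, 0.31]),
        "B": np.array([0.25, 0.20, 0.26, 0.22, 0.27]),
    }
    for ratio in (1.0, 0.6):  # full set vs. high-quality subset
        print(ratio, rescored_means(gt_quality, model_scores, ratio))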

What about GT quality in existing SR datasets?

We compare GT images and the outputs of various models on four SR test sets, and show statistics of the perceptually best images from a well-controlled user study. Three observations can be made: 1. Some model outputs are better than the GTs, and the percentage increases when the dataset contains poorer GTs (e.g., a small percentage for DIV2K but more than half for Set5 & Set14). 2. Diffusion-based models generally perform better, as most of the preferred model outputs come from diffusion-based models. 3. SeeSR is generally preferred by users, which is consistent with the observation in the study above.

How to make fair evaluations with imperfect GTs?

Our solution is straightforward: since we do not regard GTs as perfect references, we allow cases in which model outputs achieve better quality than the GT. We therefore propose RQI (Relative Quality Index) to measure the relative quality of target images with respect to their GTs. RQI differs from the traditional FR-IQA scheme in three aspects: 1. RQI depends on the input order. 2. We substitute the reference image $I_0$ with any image $I_i$ in the distorted image sequence, covering complicated cases where GTs contain varying distortions. 3. We use the relative quality discrepancy as the training label.
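
The following PyTorch sketch illustrates the relative-quality idea under our own simplifications (the real RQI architecture and training data differ; RelativeQualityNet is a toy stand-in). An order-sensitive network takes (target, reference) pairs and regresses the quality difference q(target) - q(reference), so a positive score means the target beats its reference, and the "reference" slot may hold any distorted image $I_i$, not only the GT $I_0$.

    # PyTorch sketch of the relative-quality idea (not the actual RQI model).
    import torch
    import torch.nn as nn

    class RelativeQualityNet(nn.Module):
        def __init__(self, feat_dim=128):
            super().__init__()
            self.backbone = nn.Sequential(  # toy feature extractor
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(2 * feat_dim, 1)  # order-sensitive head

        def forward(self, target, ref):
            # Concatenation (not a symmetric distance) makes the input order
            # matter, so the score can be positive when the target beats the ref.
            f = torch.cat([self.backbone(target), self.backbone(ref)], dim=1)
            return self.head(f).squeeze(1)

    model = RelativeQualityNet()
    target, ref = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
    q_target, q_ref = torch.rand(4), torch.rand(4)  # per-image quality labels
    loss = nn.functional.mse_loss(model(target, ref), q_target - q_ref)
    loss.backward()  # train so the output matches the quality discrepancy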

Quantitative Results

Qualitative Results

BibTeX


        @article{su2025rethinking,
          title={Rethinking Image Evaluation in Super-Resolution},
          author={Su, Shaolin and Rocafort, Josep M and Xue, Danna and Serrano-Lozano, David and Sun, Lei and Vazquez-Corral, Javier},
          journal={arXiv preprint arXiv:2503.13074},
          year={2025}
        }

Acknowledgements

This work was supported by the HORIZON MSCA Project funded by the European Union (project number 101152858), Grant PID2021-128178OB-I00 funded by MCIN/AEI/10.13039/501100011033 and ERDF "A way of making Europe", and the Departament de Recerca i Universitats from the Generalitat de Catalunya with ref. 2021SGR01499. Shaolin Su was supported by the HORIZON MSCA Postdoctoral Fellowships. Danna Xue was supported by the grant Càtedra ENIA UAB-Cruïlla (TSI-100929-2023-2) from the Ministry of Economic Affairs and Digital Transition of Spain. David Serrano-Lozano was supported by the FPI grant from the Spanish Ministry of Science and Innovation (PRE2022-101525). Lei Sun was partially funded by the Ministry of Education and Science of Bulgaria’s support for INSAIT as part of the Bulgarian National Roadmap for Research Infrastructure.