Abstract
Underwater images often suffer from luminance attenuation, structural degradation, and color distortion due to light absorption and scattering in water. The variations in illumination and color distribution across different water bodies further increase the uncertainty of these degradations, making traditional enhancement methods that rely on fixed parameters, such as underwater dark channel prior (UDCP) and histogram equalization (HE), unstable in such scenarios. To address these challenges, this paper proposes a multi-operator underwater image enhancement framework with adaptive parameter optimization. To achieve luminance compensation, structural detail enhancement, and color restoration, a collaborative enhancement pipeline was constructed using contrast-limited adaptive histogram equalization (CLAHE) with highlight protection, texture-gated and threshold-constrained unsharp masking (USM), and mild saturation compensation. Building upon this pipeline, an adaptive multi-operator parameter optimization strategy was developed, where a unified scoring function jointly considers feature gains, geometric consistency of feature matches, image quality metrics, and latency constraints to dynamically adjust the CLAHE clip limit, USM gain, and Gaussian scale under varying water conditions. Subjective visual comparisons and quantitative experiments were conducted on several public underwater datasets. Compared with conventional enhancement methods, the proposed approach achieved superior structural clarity and natural color appearance on the EUVP and UIEB datasets, and obtained higher quality metrics on the RUIE dataset (Average Gradient (AG) = 0.5922, Underwater Image Quality Measure (UIQM) = 2.0895).
On the UVE38K dataset, the proposed adaptive optimization method improved the oriented FAST and rotated BRIEF (ORB) feature counts by 12.5%, inlier matches by 9.3%, and UIQM by 3.9% over the fixed-parameter baseline, while the adjacent-frame matching visualization and stability metrics such as inlier ratio further verified the geometric consistency and temporal stability of the enhanced features.
Keywords: underwater image enhancement, adaptive parameter optimization, multi-operator enhancement framework, feature robustness
1. Introduction
Underwater imaging technologies play a crucial role in acquiring detailed information about submarine environments and have been widely adopted in applications such as ocean exploration, biological research, and underwater robotics [1]. At present, the two primary approaches for underwater environment perception are acoustic sensing and optical vision. The former relies on sonar and other acoustic devices, offering advantages such as long detection range and strong penetrability, but it suffers from low resolution [2]. The latter typically employs underwater cameras or other optical sensors to obtain high-resolution images, making it well-suited for short-range object localization and environmental understanding [3].
Vision-based underwater object detection and recognition are essential for enabling intelligent underwater robotic systems; however, the quality of underwater optical imaging is significantly degraded due to wavelength-dependent attenuation and path scattering [4], as illustrated in Figure 1. Different wavelengths of light exhibit varying attenuation rates in water: red light, which has the longest wavelength, attenuates the fastest, whereas blue and green wavelengths attenuate more slowly, resulting in the bluish–green color cast commonly observed in underwater images [5]. Meanwhile, suspended particles in the water induce both forward and backward scattering, which further reduces image contrast and blurs fine details [6]. These physical degradation phenomena often lead to color distortion, contrast reduction, and texture weakening in underwater images, severely impairing visual perception and subsequent high-level applications.
Figure 1.
Underwater optical imaging model.
Extensive research has been conducted on underwater image enhancement and degradation compensation, particularly exploring image processing methods based on deep learning, physical models, and non-physical models [7]. Deep learning methods include end-to-end enhancement models based on CNN [8], GAN [9], and Transformer [10], as well as more recent learning-based approaches such as Ucolor [11] and DeepSeeColor [12], which can generate high-quality images in specific scenarios; however, their performance heavily depends on large-scale training data and computational resources, and they often exhibit limited generalization across different water domains and illumination conditions. Physical-model-based methods such as UDCP [13], GDCP [14], and Red Channel [15] restore image clarity by estimating the underwater attenuation model, but they rely on prior information such as water type, transmission estimation, and illumination direction. When these priors are unknown or vary significantly, color distortions are likely to occur. Non-physical-model-based methods such as HE [16], CLAHE [17], Retinex [18], and fusion-based methods [19] do not require such priors and are computationally efficient. Nevertheless, their enhancement strength is controlled by fixed parameters, which often leads to over-enhancement, noise amplification, or color shifts in complex underwater environments. Overall, existing methods commonly suffer from reliance on a single type of degradation assumption, a lack of effective mechanisms for adaptive parameter tuning, and unstable performance across diverse scenes, making it challenging to maintain consistent visual quality and feature usability under different water conditions.
To address these issues, the main contributions of this study are as follows:
A multi-operator collaborative enhancement pipeline tailored to the degradation characteristics of underwater optical images is proposed. This pipeline addresses various degradation effects caused by light absorption and scattering from three dimensions—luminance compensation, structural detail restoration, and color correction. It employs contrast-limited adaptive histogram equalization (CLAHE), a local contrast enhancement technique designed to improve image details under non-uniform illumination while mitigating over-enhancement and noise amplification, combined with highlight protection to suppress excessive local brightness enhancement; applies controlled unsharp masking (USM), a classical image sharpening technique for enhancing local contrast and edge details, together with texture gating to reinforce true structural details; and incorporates mild saturation compensation to alleviate bluish–green color bias, thereby producing enhanced images that appear more natural and are more suitable for feature extraction.
An adaptive parameter optimization mechanism based on a unified scoring function is developed. By incorporating four constraints—ORB (Oriented FAST and Rotated BRIEF) feature gain, inlier matching gain, improvement in UIQM (Underwater Image Quality Measure), and latency penalty—a scoring scheme associated with real-time scene conditions is constructed. Using this scheme, global parameter search and real-time neighborhood refinement are performed for the three key operators—CLAHE clip limit, USM gain, and Gaussian scale—enabling the enhancement parameters to dynamically and robustly adapt to variations in water conditions, illumination, and degradation severity.
Extensive experiments on public underwater datasets are conducted to evaluate both subjective visual quality and objective metrics. The superiority of the proposed enhancement method in terms of visual perception is validated on the EUVP and UIEB datasets. Quality indicators such as AG and UIQM are assessed on the RUIE dataset. The effectiveness of the adaptive multi-operator parameter tuning mechanism is verified on the UVE38K dataset, and its stability and consistency are further demonstrated through adjacent-frame matching visualizations and metrics including the two-hop survival ratio.
The remainder of this paper is organized as follows. Section 2 provides a detailed description of the proposed underwater image enhancement framework, including the multi-operator collaborative enhancement pipeline and the adaptive parameter adjustment mechanism. Section 3 presents the experimental design and evaluates visual quality, objective metrics, and dynamic adaptability across multiple public datasets. Section 4 concludes the paper and discusses potential directions for future research.
2. Methodology
2.1. Overall Framework
The overall structure of the proposed underwater image enhancement framework is illustrated in Figure 2. First, the input underwater image is converted from the RGB color space to the LAB space, where the luminance channel (L) and chromatic channels (a and b) are separated, allowing brightness, structural, and chromatic information to be processed independently in different color domains. The enhancement pipeline then sequentially applies CLAHE together with highlight protection in the L channel to prevent excessive amplification in bright regions, followed by controlled USM with texture gating and threshold constraints to recover structural details while suppressing noise. Mild saturation compensation is applied in the ab channels to mitigate the bluish–green color bias caused by wavelength-dependent attenuation. Through this pipeline, image quality is comprehensively improved across the luminance, structure, and color dimensions.
Figure 2.
Overall framework diagram of underwater image enhancement based on adaptive parameter tuning mechanism.
To accommodate varying water conditions and scene dynamics, an adaptive parameter adjustment mechanism based on a unified scoring function is introduced. This mechanism performs offline global optimization and online neighborhood refinement of key parameters, including the CLAHE clip limit, USM gain, and Gaussian scale, enabling dynamic adjustment of the multi-operator enhancement parameters. Ultimately, the enhanced images produced by the framework exhibit more stable visual quality and feature usability, thereby providing a reliable input for subsequent vision-based tasks.
2.2. Image Enhancement Method
2.2.1. Local Contrast Enhancement and Highlight Protection
In the luminance channel, this study introduces CLAHE into local regions to achieve contrast-limited histogram equalization [20], thereby enhancing contrast in low-luminance areas while preventing over-amplification in bright regions. Unlike global histogram equalization, it operates on small local regions rather than the entire image, enabling more effective enhancement of local details under non-uniform illumination conditions. By introducing a clip limit into the histogram redistribution process, CLAHE effectively suppresses noise amplification and over-enhancement in homogeneous regions.
Specifically, the luminance channel is partitioned into 8 × 8 sub-blocks, and in each sub-block, a monotonic mapping function is constructed based on the clip limit c. Subsequently, bilinear interpolation is applied to the CLAHE output to obtain the locally enhanced result as defined in Equation (1):
| (1) |
A highlight protection factor is then constructed on the CLAHE output to modulate the enhancement strength, preventing excessive stretching in bright regions. In underwater images, extreme luminance values are frequently caused by specular reflections, backscatter, or sensor noise, so using the maximum luminance value as a threshold would be highly sensitive to such outliers and lead to unstable highlight suppression. Instead, a high-percentile luminance value provides a robust estimate of the highlight threshold by attenuating the influence of isolated extreme values while preserving representative bright regions; therefore, the 97th percentile of the luminance distribution is computed to obtain the highlight threshold [21], as shown in Equation (2):
| (2) |
Next, a smooth protection function is defined based on the relative position of the pixel luminance with respect to the threshold [22], as expressed in Equation (3):
| (3) |
where the width parameter controls the extent of the smooth transition region [23]. The final highlight protection factor is defined in Equation (4):
| (4) |
where the suppression coefficient determines the attenuation strength in bright regions, yielding the final protection factor. By applying this factor to modulate the CLAHE output, the enhancement strength in high-luminance regions can be effectively suppressed, avoiding artifacts caused by excessive amplification while preserving the contrast enhancement effect of CLAHE in non-bright regions.
This luminance enhancement module provides a more stable luminance distribution for the subsequent structural enhancement and color correction steps and ensures more consistent enhancement performance across regions with different luminance levels.
2.2.2. Controlled Sharpening and Edge Gating
In underwater imaging, scattering effects attenuate the high-frequency responses of true structural edges, while random noise can be erroneously amplified and misinterpreted as fine textures after CLAHE enhancement. Therefore, a controlled sharpening module [24] based on residual suppression and second-order gradient constraints is constructed on the highlight-protected luminance channel, and a dual gating mechanism combining luminance and edge information is incorporated to achieve robust structural enhancement.
First, high-frequency residuals are extracted from the luminance channel to obtain structural details, and a luminance-based threshold is used to suppress noise-dominant regions. The residual map and the residual mask are defined in Equation (5):
| (5) |
where the residual map represents the high-frequency component of the CLAHE-enhanced luminance image, obtained by subtracting a Gaussian-smoothed version of the image; the standard deviation of the Gaussian kernel determines the spatial scale for separating meaningful structural components from high-frequency noise. The binary mask is an indicator function that equals 1 when the residual magnitude exceeds the threshold and 0 otherwise; this threshold suppresses noise-dominant residual responses while preserving meaningful structural details. Next, the second-order gradient [25] is used to construct a noise-suppression term. The second-order gradient energy is defined in Equation (6):
| (6) |
Its physical meaning lies in the fact that real edges produce stable second-order responses, while noise exhibits random fluctuations in the second-order gradient; thus, noise can be effectively suppressed through this term. To further limit excessive enhancement, a normalized and truncated energy map is constructed, as shown in Equation (7):
| (7) |
An edge gating weight is then constructed using smoothstep [22], as defined in Equation (8):
| (8) |
where the lower and upper bounds of the smooth transition interval define the activation range of structural enhancement (set to 0.2 and 0.5, respectively, in the experiments). The smoothstep operator generates a continuous weight in the range [0, 1], enabling gradual activation of enhancement in regions with sufficient structural energy while suppressing enhancement in texture-poor regions. This edge gating is combined with the highlight protection and the residual mask to form the overall gain control. The combined gain function is given in Equation (9):
| (9) |
Finally, the luminance sharpening output is defined in Equation (10):
| (10) |
where the USM gain coefficient regulates the overall strength of structural amplification, balancing detail enhancement and noise suppression; this parameter is optimized within a predefined range during the adaptive parameter optimization stage. The result is the sharpened luminance output. This controlled sharpening module achieves a balance between luminance protection, noise suppression, and edge response, effectively suppressing noise-induced artifacts in underwater imaging while enhancing true structural details.
2.2.3. Mild Saturation Compensation
Due to the wavelength-dependent selective absorption of long wavelengths in underwater environments, images often exhibit an overall bluish–green color cast [26], even after the luminance channel has been restored using CLAHE and USM. To achieve natural color reproduction, a small-gain chromaticity adjustment is applied to the color channels in the Lab space [27], as shown in Equation (11):
| (11) |
where the chromaticity values are scaled by a mild saturation gain coefficient and then clipped to the valid chromaticity range [28]. The gain-adjusted chromaticity channels are then fused with the previously enhanced luminance channel, followed by inverse color-space conversion to produce the final enhanced image, as defined in Equation (12):
| (12) |
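In OpenCV's uint8 Lab representation the neutral (gray) chroma point sits at 128, so the mild saturation compensation can be sketched as below; the gain value `s` and the rounding step are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def mild_saturation_gain(a, b, s=1.12):
    """Small-gain chromaticity adjustment for OpenCV-style uint8 Lab
    a/b channels. Chroma is scaled away from the neutral axis (128)
    and clipped back to the valid uint8 range."""
    a_f = a.astype(np.float32) - 128.0
    b_f = b.astype(np.float32) - 128.0
    a_out = np.clip(np.rint(128.0 + s * a_f), 0, 255).astype(np.uint8)
    b_out = np.clip(np.rint(128.0 + s * b_f), 0, 255).astype(np.uint8)
    return a_out, b_out
```

The adjusted channels would then be merged with the enhanced L channel and converted back, e.g. via `cv2.merge` followed by `cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)`, to obtain the final image.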
2.3. Adaptive Parameter Tuning Mechanism
The underwater image enhancement method proposed above can effectively improve luminance layering, structural details, and chromatic balance. However, its core operator parameters—the CLAHE clip limit, USM sharpening strength, and Gaussian smoothing scale—are fixed, which limits the ability to maintain consistent performance under varying water conditions, color casts, and changes in turbidity and illumination. Therefore, this study introduces an adaptive parameter optimization method that employs a unified scoring function integrating feature gains, geometric matching consistency, quality improvement, and latency constraints. This function guides both global parameter search and local refinement of the CLAHE clip limit, USM gain, and Gaussian scale, enabling dynamic adjustment of multiple operator parameters in response to scene variations.
Figure 3 illustrates the process framework of the adaptive parameter tuning mechanism, which consists of two stages: global parameter optimization and real-time local parameter refinement. The former evaluates candidate parameter configurations within a predefined parameter grid to determine a robust initialization, while the latter updates parameters gradually during runtime based on neighborhood search strategies to accommodate changes in water conditions, illumination, and scene texture characteristics.
Figure 3.
Process diagram of adaptive parameter tuning mechanism.
2.3.1. Global Parameter Optimization
In this stage, a global search is performed over a predefined parameter grid to determine robust initialization parameters. The parameter vector comprises the CLAHE clip limit controlling brightness and contrast, the USM gain factor regulating sharpening strength, and the Gaussian smoothing scale determining the structural response range. A discrete candidate set is defined for each of the three dimensions, forming a parameter grid whose Cartesian product enumerates all candidate parameter combinations, as shown in Equation (13):
| (13) |
The interval ranges of the three candidate sets were chosen based on the typical parameter settings reported in [17,25,29], covering enhancement strengths from mild to strong to ensure robustness and representativeness.
A neighbor-pair set was selected uniformly from a continuous sequence of frames. For the raw image sequence (raw) and the enhanced sequence (enh), three relative improvements were defined: the ORB feature gain, the inlier match gain, and the image quality gain. Specifically, ORB features were extracted using the OpenCV implementation, and inlier correspondences were identified via RANSAC-based fundamental matrix estimation after descriptor matching. The image quality gain was computed on a frame-wise basis using the UIQM metric, and all relative gains were obtained by comparing enhanced images with the corresponding raw images under identical settings. These three terms are further mapped by tanh to obtain the overall gain term, as shown in Equations (14) and (15):
| (14) |
| (15) |
where the preference weights for the three improvement terms were fixed at 0.5, 0.3, and 0.2, respectively, in the experiments, and two scale parameters controlling the saturation ranges of the tanh functions were set to 0.5 and 0.1, respectively [30]. Considering that the enhancement pipeline introduces additional processing time, the per-frame enhancement time and its ratio to the time budget are used to compute the delay ratio. Applying a delay penalty to the gain term yields the final scoring function, as defined in Equations (16) and (17):
| (16) |
| (17) |
where the predefined maximum allowable per-frame enhancement time was set to 45 ms in the experiments, the delay penalty weight scales the penalty term, and a tolerance exponent of 1.2 controls the strength and curvature of the time-delay penalty. For each neighbor pair, the comprehensive score is computed for every candidate parameter. To evaluate the expected performance under an “average scene”, the scores over the pair set are aggregated by averaging, as shown in Equation (18):
| (18) |
Finally, the parameter with the maximum average score is selected as the global optimal parameter, as shown in Equation (19):
| (19) |
This global optimization stage provides a principled baseline for parameter initialization and serves as a warm-start for the subsequent online tuning mechanism, ensuring stability and convergence under complex and dynamic underwater scenes.
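The scoring and grid-search procedure of this stage can be sketched as follows. The composition of the tanh gain term and the latency penalty is a hedged reading of Equations (14)–(17); the candidate grids, the penalty weight `LAMBDA`, and the `evaluate` callback (which would run the enhancement pipeline and measure ORB/inlier/UIQM gains and timing per frame pair) are illustrative placeholders:

```python
import itertools
import math

# Preference weights and tanh saturation scales (Section 2.3.1).
W_N, W_I, W_Q = 0.5, 0.3, 0.2
S1, S2 = 0.5, 0.1
LAMBDA, GAMMA, T_MAX = 0.5, 1.2, 45.0  # penalty weight assumed; T_max = 45 ms

def score(r_n, r_i, r_q, t_enh_ms):
    """Unified score: tanh-saturated weighted gains minus a latency penalty."""
    gain = (W_N * math.tanh(r_n / S1)
            + W_I * math.tanh(r_i / S1)
            + W_Q * math.tanh(r_q / S2))
    rho = t_enh_ms / T_MAX  # delay ratio against the time budget
    return gain - LAMBDA * max(0.0, rho) ** GAMMA

def grid_search(pairs, evaluate):
    """Enumerate the Cartesian parameter grid and return the triple with
    the highest score averaged over the neighbor-pair set.

    pairs:    list of frame-pair identifiers.
    evaluate: callable(theta, pair) -> (r_n, r_i, r_q, t_enh_ms).
    """
    clips  = [1.5, 2.0, 2.5, 3.0]   # CLAHE clip limits (illustrative grid)
    gains  = [0.5, 0.8, 1.1]        # USM gains
    sigmas = [1.0, 1.5, 2.0]        # Gaussian scales
    best_theta, best_avg = None, -math.inf
    for theta in itertools.product(clips, gains, sigmas):
        avg = sum(score(*evaluate(theta, p)) for p in pairs) / len(pairs)
        if avg > best_avg:
            best_theta, best_avg = theta, avg
    return best_theta, best_avg
```

Averaging over the pair set before taking the argmax matches the "average scene" aggregation of Equations (18) and (19).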
2.3.2. Real-Time Neighborhood Refinement
Based on the globally optimal parameter obtained in the previous stage, this section introduces an adaptive local refinement mechanism during runtime to accommodate gradual variations in water turbidity, illumination, and texture distribution. For the current parameter, a neighborhood set is defined by perturbing each dimension by a fixed step size. A local search is then performed within this neighborhood, and the scoring function is evaluated to estimate the optimal parameter in real time, as shown in Equations (20) and (21):
| (20) |
| (21) |
where the per-dimension step sizes define the adjustment range for local refinement, and the search yields the optimal parameter within the current neighborhood.
To balance responsiveness and parameter stability, a cooldown interval H = 30 frames was set, such that neighborhood evaluation is performed once every H frames and the parameter update is frozen until the next evaluation to avoid excessive oscillation. Additionally, a trigger condition is introduced before updating to ensure stable refinement: an update is performed only when the score improvement exceeds a threshold and the cooldown interval has elapsed. This threshold is estimated from the distribution of score differences observed during the global search stage; specifically, the 75th percentile of the absolute score differences is taken as the update boundary. This percentile serves as a conservative update gate for the adaptive tuning process, such that only statistically significant score improvements trigger parameter updates, while minor fluctuations caused by noise or transient scene variations are suppressed. Compared with fixed absolute thresholds, this percentile-based criterion provides a data-driven and distribution-aware mechanism to balance adaptation responsiveness and temporal stability. Thus, the real-time parameter update rule is defined in Equation (22):
| (22) |
In summary, the real-time neighborhood refinement mechanism enables dynamic parameter adjustment in response to scene variations, complementing the global optimization stage. The global stage ensures robust initialization, while the refinement stage provides continuous adaptability during runtime.
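The cooldown-gated neighborhood update can be sketched as below; the step sizes and the way the score function is queried are illustrative assumptions, with `score_fn` standing in for the unified scoring function of Section 2.3.1 evaluated on recent frames:

```python
import itertools

def neighborhood(theta, steps=(0.5, 0.1, 0.25)):
    """Candidate set: the current theta perturbed by +/- one step
    per dimension (clip limit, USM gain, Gaussian scale)."""
    axes = [(v - d, v, v + d) for v, d in zip(theta, steps)]
    return list(itertools.product(*axes))

def refine_step(theta, score_fn, tau, frame_idx, last_update, cooldown=30):
    """One refinement step: evaluate the neighborhood and switch only if
    the best candidate beats the current score by more than tau (the
    75th-percentile update gate), and the cooldown has elapsed."""
    if frame_idx - last_update < cooldown:
        return theta, last_update  # frozen until the cooldown elapses
    current = score_fn(theta)
    best = max(neighborhood(theta), key=score_fn)
    if score_fn(best) - current > tau:
        return best, frame_idx     # accept the update, reset the cooldown
    return theta, last_update      # keep the current parameters
```

The two-stage design then amounts to calling `grid_search`-style initialization once, and `refine_step` on every frame of the running sequence.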
3. Experimental Results and Analysis
3.1. Experimental Setup
The experiments in this study were implemented using Python 3.8.10 and open-source image processing libraries such as OpenCV 4.12.0, and were conducted on a system equipped with an Intel i7-12700 CPU and 16 GB of memory running Ubuntu 22.04. To verify the practicality and generalizability of the proposed enhancement method, all experiments were conducted under the same parameter configurations and hardware conditions.
3.2. Comparative Experiments on Enhancement Methods
To comprehensively evaluate the effectiveness of the proposed enhancement pipeline, experiments were conducted from two complementary perspectives: visual comparison and quantitative metric evaluation. The visual comparison focused on typical degraded underwater scenes in the EUVP and UIEB datasets, examining improvements in brightness, structural clarity, and color appearance produced by the enhancement method. The quantitative comparison, based on the RUIE dataset, evaluated standardized metrics to assess improvements in image sharpness, structural consistency, and overall visual quality. Together, these two experimental components complement each other and validate the effectiveness of the proposed method from both subjective and objective viewpoints.
3.2.1. Visual Comparison on Public Datasets
The visual quality evaluation in this section uses two publicly available underwater image datasets: the EUVP dataset created by Islam et al. [31] and the UIEB dataset constructed by Li et al. [32]. Eight representative images covering diverse underwater scenes were selected for comparison. The proposed enhancement method was compared against several existing enhancement approaches, including Shades of gray [33], UDCP [13], WB_stretch, HE [16], CLAHE [17], and CLAHE + USM.
As shown in Figure 4, Shades of gray [33] can remove part of the color cast, but its overall visual improvement is limited. UDCP [13] and WB_stretch can enhance brightness and contrast to a certain degree; however, they often introduce cold tones, resulting in bluish or greenish color shifts. HE [16] enhances brightness but tends to produce over-enhancement and color distortion. CLAHE [17] and CLAHE + USM both improve the brightness distribution and sharpness, but the latter may exhibit slight color shifts due to over-sharpening in bright regions. In contrast, the proposed method effectively suppresses noise and color bias while preserving structural details, resulting in an overall visual appearance that is more natural and closer to real underwater illumination conditions.
Figure 4.
Qualitative comparison results of eight underwater images. From left to right: (a) Raw; (b) Shades of gray; (c) UDCP; (d) WB_stretch; (e) HE; (f) CLAHE; (g) CLAHE + USM; (h) ours.
In summary, the visual comparison results indicate that the proposed method achieves more consistent and natural visual performance across various underwater scenes, providing higher-quality inputs for subsequent feature extraction and matching modules.
3.2.2. Quantitative Evaluation on the RUIE Dataset
In this section, we evaluate the image quality using the RUIE dataset [34], which consists of three subsets: UCCS, UIQS, and UTTS. The UCCS subset provides various images with challenging color casts; the UIQS subset serves as a benchmark for assessing image enhancement performance under standardized conditions; and the UTTS subset contains underwater target images for evaluating detection-oriented enhancement methods [35]. Both qualitative and quantitative evaluations were conducted based on these datasets.
To objectively evaluate the effectiveness of the proposed method, quantitative experiments were conducted on the RUIE dataset. Specifically, 100 underwater images with diverse degradation characteristics were selected from the RUIE dataset, and commonly used objective image quality metrics, including Average Gradient (AG) [36], Perceptual Contrast Quality Index (PCQI) [37], Underwater Image Quality Measure (UIQM) [38], and Underwater Color Image Quality Evaluation (UCIQE) [39], were computed for each method. The results were then statistically averaged, and the corresponding quantitative comparisons are summarized in Table 1.
Table 1.
Average quantitative evaluation results on 100 underwater images from the RUIE dataset (The best performance for each metric is highlighted in bold).
| Method | AG [36] | PCQI [37] | UIQM [38] | UCIQE [39] |
|---|---|---|---|---|
| Raw | 0.2102 | 0.9974 | 1.0449 | 0.3649 |
| Shades of gray [33] | 0.2337 | 0.9606 | 1.1643 | 0.2821 |
| HE [16] | 0.4038 | 0.6428 | 1.9616 | 0.5024 |
| CLAHE [17] | 0.4100 | 0.7285 | 1.3717 | 0.4182 |
| CLAHE + USM | 0.5281 | 0.5997 | 1.6224 | 0.4607 |
| UDCP [13] | 0.2651 | 0.7231 | 1.9641 | 0.4506 |
| Ours | 0.5922 | 0.5185 | 2.0895 | 0.4367 |
As shown in Table 1, the proposed method achieved the highest scores on both AG and UIQM (0.5922 and 2.0895, respectively), indicating significant improvement in image sharpness, structural clarity, and overall perceptual quality while maintaining a stable and natural appearance. Although the proposed method showed a decrease in the UCIQE score (0.4367), it preserved the original color relationships more effectively in most scenes and avoided the color shifts caused by excessive enhancement. In contrast, HE achieved the highest UCIQE score (0.5024), but Figure 5 clearly shows that it introduced severe over-enhancement and color distortion. With respect to PCQI, the Shades of gray method achieved the highest score (0.9606), whereas the proposed method obtained a lower value (0.5185) compared with the other methods. This is mainly because the proposed enhancement process systematically reconstructs local contrast structures of the original image to emphasize structural details and improve feature discriminability. Since PCQI evaluates the consistency of contrast structures between enhanced and original images, significant changes to the original contrast relationships naturally lead to a decrease in PCQI values. Therefore, this result reflects differences in structural enhancement orientation rather than a degradation in image quality.
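For reference, AG admits several closely related definitions; the sketch below uses one common formulation (the mean magnitude of first-order horizontal/vertical differences) and is indicative rather than the exact implementation behind Table 1:

```python
import numpy as np

def average_gradient(img):
    """Average Gradient (AG), one common formulation: the mean of
    sqrt((gx^2 + gy^2) / 2) over a grayscale image, where gx and gy
    are forward differences. Higher AG indicates sharper detail."""
    g = np.asarray(img, dtype=np.float64)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(g, axis=0)[:, :-1]   # vertical differences, cropped to match
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A constant image yields AG = 0, while a linear ramp of unit slope yields AG = 1/sqrt(2), which matches the intuition that AG rewards local intensity variation.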
Figure 5.
Visual comparison of five representative images selected from the RUIE dataset. From left to right: (a) Raw; (b) Shades of gray; (c) HE; (d) CLAHE; (e) CLAHE + USM; (f) UDCP; (g) ours. From top to bottom: (1) bluish-biased, (2) bluish–green biased, and (3) greenish-biased images from the UCCS subset; (4) underwater image quality data from the UIQS subset, which contains underwater images of various qualities; and (5) underwater target mission data from the UTTS subset.
On this basis, to provide an intuitive visual illustration of the performance of different enhancement methods under typical degradation scenarios, five representative images were selected from the above image set for qualitative comparison, as shown in Figure 5, which compares the proposed method with several representative underwater image enhancement approaches on the RUIE dataset [34]. The five samples from top to bottom correspond to bluish images, mixed bluish–green images, greenish images, task-oriented underwater quality data, and underwater target-oriented data, respectively. From left to right, the comparison methods include Raw, Shades of gray [33], HE [16], CLAHE [17], CLAHE + USM, UDCP [13], and the proposed method (Ours). The proposed method effectively enhances underwater visibility, suppresses scene haze, preserves fine structural details, and maintains clearer edge information in target regions.
3.3. Validation of the Adaptive Parameter Tuning Mechanism
3.3.1. Overall Performance Analysis
To comprehensively evaluate the effectiveness of the proposed adaptive parameter tuning mechanism, seven representative underwater sequences (turtle_2, mobula_4, marine_r10, marine_r8, marine_r3, marine_r2, coral_1) from the UVE38K dataset [40] were selected for experimentation. For each sequence, an initial segment was first processed using the global parameter optimization stage to determine a unified globally optimal initial parameter vector, which served as the starting configuration for the subsequent enhancement process. Based on this initialization, a comparative analysis was conducted between the proposed adaptive parameter tuning mechanism (Ours) and a fixed-parameter strategy (Baseline) to verify the practical role of the adaptive tuning mechanism under scene variations along video sequences. This comparison is explicitly formulated as an ablation study on the adaptive parameter optimization module.
Specifically, the proposed method (Ours) and Baseline shared an identical enhancement operator structure, image enhancement pipeline, and scoring function definition. Both methods were implemented within the same multi-operator collaborative enhancement framework, and their only difference was in the parameter update strategy. The Baseline method adopts the globally optimal initial parameter vector obtained via grid search in the global optimization stage and keeps the parameters fixed throughout the entire enhancement process for each sequence. In contrast, the proposed method dynamically updates the enhancement parameters based on the same optimal initialization through neighborhood search combined with scoring feedback, enabling adaptive parameter adjustment in response to scene variations along the video sequence.
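The online update described above, neighborhood search combined with scoring feedback, can be sketched as follows. This is a minimal Python sketch: the step sizes, parameter bounds, and single-round structure are illustrative assumptions, not the exact values used in the paper.

```python
import itertools

def neighborhood_search(params, score_fn,
                        steps=(0.2, 0.1, 0.1),
                        bounds=((1.0, 4.0), (0.2, 1.5), (0.5, 2.0))):
    """One refinement round around the current (c, k, sigma) vector.

    Evaluates every neighbor obtained by moving each parameter one step
    down, keeping it fixed, or moving it one step up, and returns the
    best-scoring candidate together with its score.
    """
    best, best_score = tuple(params), score_fn(params)
    for deltas in itertools.product((-1, 0, 1), repeat=len(params)):
        # Clamp each perturbed parameter to its admissible range.
        cand = tuple(
            min(max(p + d * s, lo), hi)
            for p, d, s, (lo, hi) in zip(params, deltas, steps, bounds)
        )
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```

Repeating this round whenever an update is scheduled lets the parameters drift toward scene-appropriate values while never leaving the admissible range.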
Table 2 presents the results of executing the adaptive parameter tuning strategy (Ours) and the fixed-parameter strategy (Baseline) on each sequence. For each sequence, the table reports the optimal initial parameter vector, followed by the relative improvements in the number of ORB keypoints, the number of geometrically consistent inlier matches, and the image quality score, denoted by ΔN, ΔI, and ΔQ, respectively, which quantify the performance gains of the adaptive strategy over the fixed-parameter baseline. As shown in Table 2, when both strategies started from the same optimal initial parameter vector, the adaptive tuning strategy achieved an average improvement of approximately 12.5% in ORB keypoint count, 9.3% in geometric inlier count, and 3.9% in the UIQM image-quality metric. These results demonstrate the effectiveness and robustness of the proposed adaptive optimization strategy in enhancing feature detectability, geometric consistency, and overall visual quality under diverse underwater conditions.
Table 2.
Quantitative comparison between the adaptive parameter tuning strategy (Ours) and the fixed-parameter baseline (Baseline) on seven sequences from the UVE38K dataset. (The ↑ indicates relative improvement, where higher values represent better performance).
| Sequence | Strategy | Initial Parameters (c, k, σ) | Kp_enh | I_enh | Q_enh | ΔN ↑ | ΔI ↑ | ΔQ ↑ |
|---|---|---|---|---|---|---|---|---|
| turtle_2 | Ours | (2.4, 0.8, 1.0) | 1.193 | 1.023 | 0.515 | 7.3% | 4.2% | 0.7% |
| | Baseline | (2.4, 0.8, 1.0) | 1.112 | 0.981 | 0.511 | | | |
| mobula_4 | Ours | (1.8, 0.4, 1.2) | 0.554 | 0.464 | 0.042 | 22.2% | 14.9% | 2.4% |
| | Baseline | (1.8, 0.4, 1.2) | 0.454 | 0.404 | 0.041 | | | |
| marine_r10 | Ours | (2.2, 0.8, 1.0) | 4.379 | 3.035 | 0.538 | 13.6% | 8.4% | 7.2% |
| | Baseline | (2.2, 0.8, 1.0) | 3.856 | 2.798 | 0.502 | | | |
| marine_r8 | Ours | (2.0, 0.8, 1.2) | 12.730 | 9.017 | 0.822 | 20.6% | 20.6% | 5.7% |
| | Baseline | (2.0, 0.8, 1.2) | 10.555 | 7.477 | 0.778 | | | |
| marine_r3 | Ours | (1.8, 0.8, 1.2) | 6.368 | 4.860 | 0.815 | 8.0% | 6.7% | 10.1% |
| | Baseline | (1.8, 0.8, 1.2) | 5.895 | 4.552 | 0.740 | | | |
| marine_r2 | Ours | (1.8, 0.8, 1.2) | 21.295 | 15.495 | 1.025 | 10.3% | 6.4% | 1.7% |
| | Baseline | (1.8, 0.8, 1.2) | 19.298 | 14.569 | 1.008 | | | |
| coral_1 | Ours | (2.2, 0.8, 1.0) | 0.332 | 0.252 | 0.002 | 5.7% | 4.1% | 0% |
| | Baseline | (2.2, 0.8, 1.0) | 0.314 | 0.242 | 0.002 | | | |
| Overall average improvement | | | | | | 12.5% | 9.3% | 3.9% |
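The unified score behind Table 2 jointly considers feature gains, geometric consistency, image quality, and latency. A minimal sketch of such a scoring function is given below; the weights, normalization constants, and latency budget are illustrative assumptions only, not the paper's exact values.

```python
def frame_score(n_kp, n_inliers, uiqm, latency_ms,
                weights=(0.3, 0.3, 0.3, 0.1),
                n_ref=3000.0, i_ref=2500.0, q_ref=3.0, t_budget=50.0):
    """Unified per-frame score combining feature gain, geometric
    consistency, image quality, and a latency penalty.

    All reference values (n_ref, i_ref, q_ref, t_budget) and the weight
    vector are illustrative assumptions.
    """
    wn, wi, wq, wt = weights
    feat = min(n_kp / n_ref, 1.0)        # normalized keypoint count
    geom = min(n_inliers / i_ref, 1.0)   # normalized inlier count
    qual = min(uiqm / q_ref, 1.0)        # normalized image quality
    penalty = max(latency_ms / t_budget - 1.0, 0.0)  # only over-budget frames
    return wn * feat + wi * geom + wq * qual - wt * penalty
```

Because each term is capped at 1, no single metric can dominate the score, and the latency term only activates once the frame exceeds its time budget.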
3.3.2. Computational Efficiency Analysis
To evaluate the computational efficiency of the proposed adaptive enhancement framework, we further analyzed the runtime overhead introduced by the adaptive parameter tuning mechanism. Figure 6 presents a comparison of the average per-frame processing time between the fixed-parameter baseline strategy (Baseline) and the proposed adaptive parameter tuning mechanism (Ours) on seven representative sequences listed in Table 2.
Figure 6.
Average runtime per-frame comparison between the adaptive parameter tuning strategy (Ours) and the fixed-parameter baseline on seven sequences from the UVE38K dataset. From left to right: Seq 1-turtle_2; Seq 2-mobula_4; Seq 3-marine_r10; Seq 4-marine_r8; Seq 5-marine_r3; Seq 6-marine_r2; Seq 7-coral_1.
As shown in Figure 6, the baseline strategy required approximately 29–40 ms per frame on average, while the proposed method (Ours) required approximately 31–44 ms per frame; the adaptive tuning mechanism thus introduces an additional average overhead of approximately 2–4 ms per frame. Moreover, the adaptive parameter optimization is executed at a controlled interval and is further constrained by a score-based update threshold, as described in Section 2.3. These design choices effectively limit unnecessary parameter updates and prevent excessive runtime fluctuations. This runtime analysis demonstrates that the proposed adaptive enhancement framework achieves a good balance between performance improvement and computational cost, making it suitable for real-time or near-real-time underwater vision applications.
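The interval- and threshold-based update control described above can be sketched as follows; the specific interval and threshold values here are illustrative assumptions, with the actual settings following Section 2.3.

```python
class UpdateScheduler:
    """Throttles adaptive parameter updates: a candidate re-optimization
    runs only every `interval` frames, and the new parameter vector is
    accepted only if its score improves on the current one by more than
    `threshold`.  Both values below are illustrative assumptions.
    """

    def __init__(self, interval=10, threshold=0.02):
        self.interval = interval
        self.threshold = threshold

    def due(self, frame_idx):
        # Only attempt a parameter search on scheduled frames,
        # bounding the average per-frame overhead.
        return frame_idx % self.interval == 0

    def accept(self, current_score, candidate_score):
        # Reject marginal gains to avoid oscillating parameters
        # and unnecessary runtime fluctuation.
        return (candidate_score - current_score) > self.threshold
```

In a processing loop, `due()` gates whether the neighborhood search runs at all, and `accept()` gates whether its result replaces the active parameters.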
3.3.3. Validation of Parameter Dynamics and Scoring Response
To verify whether the proposed adaptive tuning strategy can achieve genuine dynamic responsiveness during runtime, this section analyzes parameter variation and scoring trends on two representative sequences, turtle_2 and marine_r2, selected from the UVE38K dataset [40]. Figure 7 illustrates the sequential variation in the enhancement parameters c, k, and σ, as well as the comprehensive scoring curve throughout the frame sequence.
Figure 7.
Trends of the enhancement parameters c, k, and σ and the comprehensive score with frame sequence variation. (a) Trend of parameter c; (b) trend of parameter k; (c) trend of parameter σ; (d) comprehensive scoring curve.
From the parameter variations shown in Figure 7, it can be observed that all parameters adaptively adjust during sequential processing in response to changes in underwater illumination and environmental conditions. The turtle_2 sequence corresponds to a deep-water distant scene, where the overall illumination exhibits a pronounced bluish cast, and the distant region shows noticeable color attenuation and low contrast. Under such conditions, it is necessary to increase the contrast-enhancement parameter c and the sharpening weight k to compensate for brightness loss and color bias, while a relatively larger σ helps enhance edge clarity. However, strong enhancement operations may cause slight over-enhancement and amplified noise in local regions, leading to a lower final score. The marine_r2 sequence represents a mildly greenish underwater scene with relatively balanced brightness distribution and stable chromaticity. In this case, natural color restoration can be achieved within a lower parameter range, resulting in an average score of approximately 0.70. Overall, the amplitude of parameter variation reflects the strength of the adaptive response required for different environments, whereas the scoring curve reflects the quality and stability of the enhancement results.
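To make the roles of k and σ concrete, the texture-gated, highlight-protected USM stage can be sketched in NumPy as follows. The texture-gating and highlight thresholds are illustrative assumptions, and the CLAHE stage controlled by c is omitted for brevity.

```python
import numpy as np

def gaussian_kernel(sigma):
    # Normalized 1-D Gaussian kernel truncated at ~3 sigma.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable Gaussian blur with reflective borders (NumPy only).
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.pad(img.astype(float), pad, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    return out

def unsharp_mask(img, k=0.8, sigma=1.2, texture_eps=0.01, highlight=0.9):
    """Texture-gated, highlight-protected USM on a [0, 1] grayscale image.

    k is the sharpening gain and sigma the Gaussian scale; texture_eps
    and highlight are illustrative thresholds (assumptions).
    """
    base = gaussian_blur(img, sigma)
    detail = img - base
    # Texture gate: skip sharpening in near-flat regions to avoid noise boost.
    gate = (np.abs(detail) > texture_eps).astype(float)
    # Highlight protection: never push already-bright pixels further up.
    protect = np.where((img > highlight) & (detail > 0), 0.0, 1.0)
    return np.clip(img + k * detail * gate * protect, 0.0, 1.0)
```

Raising k amplifies the detail layer everywhere the gate is open, while raising σ widens the blur and thus treats coarser structures as "detail", which matches the parameter behavior discussed above.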
3.3.4. Feature Matching Visualization and Stability Metrics
To further verify the effectiveness of the proposed method in improving feature stability and matching quality from multiple perspectives, feature matching visualizations and stability metrics were analyzed across all test sequences.
Table 3 reports, for each sequence, the number of matched keypoints, the inlier ratio, the median Sampson error, and the two-hop survival ratio. The number of matched keypoints exceeded 3000 for most sequences, indicating that the enhanced images provide abundant and stable features across various scenes. The inlier ratio remained between 0.78 and 0.89, showing that the enhanced features exhibited strong geometric consistency during matching and rarely produced large-scale mismatches. The median Sampson error serves as a geometric error indicator, where smaller values indicate more precise matches; across the seven sequences it lay between 0.16 and 0.33, indicating that the matched features obtained after enhancement were not only sufficient in quantity but also maintained relatively high matching accuracy. The two-hop survival ratio remained between approximately 0.50 and 0.71, which demonstrates that the enhanced features can still maintain strong continuity under cross-frame propagation.
Table 3.
Stability evaluation of seven sequences from the UVE38K dataset.
| Sequence | Matched Keypoints | Inlier Ratio | Sampson Error Median | Two-Hop Survival Ratio |
|---|---|---|---|---|
| turtle_2 | 3991 | 0.8251 | 0.2535 | 0.5432 |
| mobula_4 | 3399 | 0.8914 | 0.1662 | 0.7055 |
| marine_r10 | 3477 | 0.8160 | 0.3064 | 0.5366 |
| marine_r8 | 2458 | 0.7843 | 0.2981 | 0.4998 |
| marine_r3 | 3385 | 0.8143 | 0.2918 | 0.5305 |
| marine_r2 | 2053 | 0.7820 | 0.2830 | 0.5079 |
| coral_1 | 14,052 | 0.8515 | 0.2555 | 0.5696 |
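The stability metrics reported in Table 3 can be computed as sketched below: a NumPy sketch using the standard definition of the Sampson error with respect to a fundamental matrix F, and a dictionary-based representation of match chains for the two-hop survival ratio. The inlier threshold `tau` is an illustrative assumption.

```python
import numpy as np

def sampson_errors(F, x1, x2):
    """Sampson error per correspondence for fundamental matrix F.
    x1, x2: (N, 2) arrays of matched pixel coordinates."""
    ones = np.ones((len(x1), 1))
    p1 = np.hstack([np.asarray(x1, float), ones])   # homogeneous points
    p2 = np.hstack([np.asarray(x2, float), ones])
    Fx1 = p1 @ F.T          # row i is F @ p1[i]
    Ftx2 = p2 @ F           # row i is F.T @ p2[i]
    num = np.sum(p2 * Fx1, axis=1) ** 2             # (p2^T F p1)^2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

def inlier_ratio(F, x1, x2, tau=1.0):
    """Fraction of matches whose Sampson error falls below tau."""
    return float(np.mean(sampson_errors(F, x1, x2) < tau))

def two_hop_survival(matches_01, matches_12):
    """matches_01 maps keypoint ids in frame t to ids in frame t+1, and
    matches_12 maps frame t+1 to t+2; returns the fraction of frame-t
    matches whose chain survives both hops."""
    if not matches_01:
        return 0.0
    survived = sum(1 for k1 in matches_01.values() if k1 in matches_12)
    return survived / len(matches_01)
```

The median of `sampson_errors` gives the Sampson error median column, and chaining per-frame match dictionaries gives the two-hop survival ratio.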
Furthermore, the two sequences turtle_2 and marine_r2 mentioned above were selected for a visualization comparison of feature matching before and after enhancement, as shown in Figure 8. The number of feature correspondences increased significantly after enhancement, and their distribution became more uniform. In the bluish underwater scene turtle_2, enhancement yielded clearer textures on distant rocks, and the feature distribution expanded over a wider region. In the weak-texture greenish scene marine_r2, the algorithm balanced brightness suppression and structural enhancement, enabling the previously indistinct sandy-bottom textures to be successfully identified. As a result, the matching correspondences became more continuous and stable across the entire image region. Combining the quantitative metrics and visualization results, it can be concluded that the proposed method consistently improved feature density, geometric consistency, and temporal stability across various typical underwater scenes, providing more reliable feature inputs for subsequent vision-based SLAM front-end processing.
Figure 8.
Matching visualization of turtle_2 and marine_r2 sequences. (a) turtle_2-Raw; (b) marine_r2-Raw; (c) turtle_2-Enh; (d) marine_r2-Enh.
Despite the stable performance observed in most typical underwater scenarios, the proposed method still exhibits limitations under certain challenging conditions. Since the enhancement framework is primarily constructed upon traditional image processing operators, its behavior essentially relies on redistributing brightness, contrast, and structural details. In underwater environments involving artificial light sources or reflective surfaces, strong local highlights, glare, and severely non-uniform illumination may occur. Such degradations are characterized by extreme local intensity stretching rather than simple global brightness shifts. Under these conditions, enhancement strategies based on thresholding or percentile statistics are required to balance highlight suppression and dark-region detail recovery. When luminance distributions become excessively extreme, a single-parameter adjustment may struggle to simultaneously satisfy both objectives, potentially leading to insufficient local structure restoration or unstable structural responses. Although the proposed method introduces highlight suppression and constrains enhancement strength to mitigate these effects in common scenarios, performance degradation may still arise when high-intensity regions dominate the scene or illumination conditions change abruptly. These failure cases highlight the need for more adaptive and context-aware enhancement strategies, which will be explored in future work.
4. Conclusions and Future Work
4.1. Conclusions
(1) This paper proposes an underwater image enhancement method tailored for diverse aquatic environments. By integrating brightness enhancement, structural detail recovery, and chromaticity compensation into a collaborative multi-operator pipeline, the method effectively addresses degradation phenomena caused by underwater light absorption and scattering, including brightness attenuation, structural weakening, and color shifts.
(2) To further improve the robustness of the enhancement pipeline under varying water conditions, this paper introduces an adaptive parameter optimization mechanism based on multi-metric evaluation. Through offline global parameter search and online neighborhood refinement, key parameters including the CLAHE clip limit, USM gain, and Gaussian scale are dynamically adjusted according to scene variations, thereby overcoming the limitations of fixed-parameter strategies in multi-scene environments.
(3) Experimental results on public datasets such as RUIE demonstrate that the proposed method achieved the best performance in structural sharpness (AG = 0.5922) and overall visual quality (UIQM = 2.0895). On the UVE38K multi-sequence benchmark, compared with fixed-parameter baselines, the proposed approach yielded an average improvement of 12.5% in ORB keypoint count, 9.3% in inlier matches, and 3.9% in UIQM, while also exhibiting higher geometric consistency and temporal stability in correspondence visualization and inlier-ratio analysis. These results validate the effectiveness of the proposed method in improving both visual quality and feature stability, thereby providing more reliable input images for vision-based SLAM front-end processing.
4.2. Future Work
Although the proposed method demonstrated stable performance across diverse underwater conditions, several limitations remain and motivate future research directions. First, the current enhancement pipeline primarily relies on traditional image-processing operators and manually designed parameter optimization rules. While this design ensures efficiency and interpretability, it may limit adaptability under highly complex or previously unseen underwater environments. A promising direction for future work is to incorporate learning-based or hybrid approaches for adaptive parameter estimation. For example, lightweight deep models may be introduced to learn underwater optical properties and scene distributions, enabling more accurate structural restoration and color correction. In addition, learning-based or reinforcement-learning strategies could be explored to replace or complement the manually designed scoring function, allowing for more flexible and data-driven parameter adjustment while maintaining interpretability. Future studies will also investigate the framework's performance in real-world deployments, where enhanced visual inputs may support more stable three-dimensional reconstruction and improve the robustness of SLAM front-end feature tracking. Finally, further research may investigate temporal consistency within enhanced sequences, constructing cross-frame constraints to strengthen geometric continuity and structural stability.
Author Contributions
Conceptualization, Z.Y.; methodology, Z.Y.; software, S.Y.; formal analysis, Y.F.; investigation, Y.F. and H.J.; resources, Y.F.; writing—original draft preparation, S.Y.; writing—review and editing, S.Y.; visualization, S.Y. and H.J.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy restrictions.
Conflicts of Interest
The authors declare no conflicts of interest.
Funding Statement
This work was funded by (1) Key Projects of Hubei Provincial Department of Education, D20221404; (2) Hubei Provincial Talent Plan for Scientific and Technological Innovation Project, Grant No. 2023DJC092; (3) Hubei Provincial Natural Science Foundation Innovation Team Project, Grant No. 2023AFA037; and (4) National Natural Science Foundation of China, 51907055, 52075152.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1.Raveendran S., Patil M.D., Birajdar G.K. Underwater image enhancement: A comprehensive review, recent trends, challenges and applications. Artif. Intell. Rev. 2021;54:5413–5467. doi: 10.1007/s10462-021-10025-z. [DOI] [Google Scholar]
- 2.Zhao L., Ren X., Fu L., Yun Q., Yang J. UWS-YOLO: Advancing underwater sonar object detection via transfer learning and orthogonal-snake convolution mechanisms. J. Mar. Sci. Eng. 2025;13:1847. doi: 10.3390/jmse13101847. [DOI] [Google Scholar]
- 3.Xu R., Zhu D., Pang W., Chen M. An underwater low-light image enhancement algorithm based on image fusion and color balance. J. Mar. Sci. Eng. 2025;13:2049. doi: 10.3390/jmse13112049. [DOI] [Google Scholar]
- 4.Han M., Lyu Z., Qiu T., Xu M. A review on intelligence dehazing and color restoration for underwater images. IEEE Trans. Syst. Man Cybern. Syst. 2020;50:1820–1832. doi: 10.1109/TSMC.2017.2788902. [DOI] [Google Scholar]
- 5.Zheng L., Wang Y., Ding X., Mi Z., Fu X. Single underwater image enhancement by attenuation map guided color correction and detail preserved dehazing. Neurocomputing. 2021;425:160–172. doi: 10.1016/j.neucom.2020.03.091. [DOI] [Google Scholar]
- 6.Qiang H., Zhong Y., Zhu Y., Zhong X., Xiao Q., Dian S. Underwater image enhancement based on multichannel adaptive compensation. IEEE Trans. Instrum. Meas. 2024;73:1–10. doi: 10.1109/TIM.2024.3378290. [DOI] [Google Scholar]
- 7.Huang Y., Yuan F., Xiao F., Lu J., Cheng E. Underwater image enhancement based on zero-reference deep network. IEEE J. Ocean. Eng. 2023;48:903–924. doi: 10.1109/JOE.2023.3245686. [DOI] [Google Scholar]
- 8.Wang Y., Zhang J., Cao Y., Wang Z. A deep CNN method for underwater image enhancement; Proceedings of the IEEE International Conference on Image Processing (ICIP); Beijing, China. 17–20 September 2017; pp. 1382–1386. [DOI] [Google Scholar]
- 9.Li J., Skinner K.A., Eustice R.M., Johnson-Roberson M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2018;3:387–394. doi: 10.1109/LRA.2017.2730363. [DOI] [Google Scholar]
- 10.Peng L., Zhu C., Bian L. U-Shape transformer for underwater image enhancement. IEEE Trans. Image Process. 2023;32:3066–3079. doi: 10.1109/TIP.2023.3276332. [DOI] [PubMed] [Google Scholar]
- 11.Li C., Anwar S., Hou J., Cong R., Guo C., Ren W. Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding. IEEE Trans. Image Process. 2021;30:4985–5000. doi: 10.1109/TIP.2021.3076367. [DOI] [PubMed] [Google Scholar]
- 12.Jamieson S., How J.P., Girdhar Y. DeepSeeColor: Realtime Adaptive Color Correction for Autonomous Underwater Vehicles via Deep Learning Methods; Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); London, UK. 29 May–2 June 2023; pp. 3095–3101. [DOI] [Google Scholar]
- 13.Drews P., Jr., do Nascimento E., Moraes F., Botelho S., Campos M. Transmission estimation in underwater single images; Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW); Sydney, Australia. 2–8 December 2013; pp. 825–830. [DOI] [Google Scholar]
- 14.Peng Y.-T., Cao K., Cosman P.C. Generalization of the dark channel prior for single image restoration. IEEE Trans. Image Process. 2018;27:2856–2868. doi: 10.1109/TIP.2018.2813092. [DOI] [PubMed] [Google Scholar]
- 15.Galdrán A., Pardo D., Picón A., Alvarez-Gila A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015;26:132–145. doi: 10.1016/j.jvcir.2014.11.006. [DOI] [Google Scholar]
- 16.Pizer S.M., Amburn E.P., Austin J.D., Cromartie R., Geselowitz A., Greer T., ter Haar Romeny B., Zimmerman J.B., Zuiderveld K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987;39:355–368. doi: 10.1016/S0734-189X(87)80186-X. [DOI] [Google Scholar]
- 17.Zuiderveld K. Contrast limited adaptive histogram equalization. In: Heckbert P.S., editor. Graphics Gems IV. Academic Press; Cambridge, MA, USA: 1994. pp. 474–485. [Google Scholar]
- 18.Land E.H. The Retinex theory of color vision. Sci. Am. 1977;237:108–129. doi: 10.1038/scientificamerican1277-108. [DOI] [PubMed] [Google Scholar]
- 19.Ancuti C., Ancuti C.O., Haber T., Bekaert P. Enhancing underwater images and videos by fusion; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Providence, RI, USA. 16–21 June 2012; pp. 81–88. [DOI] [Google Scholar]
- 20.Reza A.M. Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004;38:35–44. doi: 10.1023/B:VLSI.0000028532.53893.82. [DOI] [Google Scholar]
- 21.Eilertsen G., Mantiuk R.K., Unger J. A comparative review of tone-mapping algorithms for high dynamic range video. Comput. Graph. Forum. 2017;36:565–592. doi: 10.1111/cgf.13148. [DOI] [Google Scholar]
- 22.Lischinski D., Farbman Z., Uyttendaele M., Szeliski R. Interactive local adjustment of tonal values. ACM Trans. Graph. 2006;25:646–653. doi: 10.1145/1141911.1141936. [DOI] [Google Scholar]
- 23.Reinhard E., Stark M., Shirley P., Ferwerda J. Photographic tone reproduction for digital images. ACM Trans. Graph. 2002;21:267–276. doi: 10.1145/566654.566575. [DOI] [Google Scholar]
- 24.Gonzalez R.C., Woods R.E. Digital Image Processing. 4th ed. Pearson International; London, UK: 2017. [(accessed on 1 July 2025)]. Available online: https://elibrary.pearson.de/book/99.150005/9781292223070. [Google Scholar]
- 25.Marr D., Hildreth E. Theory of edge detection. Proc. R. Soc. Lond. B Biol. Sci. 1980;207:187–217. doi: 10.1098/rspb.1980.0020. [DOI] [PubMed] [Google Scholar]
- 26.Chiang J.Y., Chen Y.-C. Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans. Image Process. 2012;21:1756–1769. doi: 10.1109/TIP.2011.2179666. [DOI] [PubMed] [Google Scholar]
- 27.Zhang W., Dong L., Zhang T., Xu W. Enhancing underwater image via color correction and bi-interval contrast enhancement. Signal Process. Image Commun. 2021;90:116030. doi: 10.1016/j.image.2020.116030. [DOI] [Google Scholar]
- 28.Fu X., Zhuang P., Huang Y., Liao Y., Zhang X.-P., Ding X. A Retinex-based enhancing approach for single underwater image; Proceedings of the IEEE International Conference on Image Processing (ICIP); Paris, France. 27–30 October 2014; pp. 4572–4576. [DOI] [Google Scholar]
- 29.Fattal R., Lischinski D., Werman M. Edge-preserving decompositions for tone and detail manipulation. ACM Trans. Graph. 2007;26:67. [Google Scholar]
- 30.Ma K., Zeng K., Wang Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015;24:3345–3356. doi: 10.1109/TIP.2015.2442920. [DOI] [PubMed] [Google Scholar]
- 31.Islam M.J., Xia Y., Sattar J. Fast underwater image enhancement for improved visual perception. IEEE Robot. Autom. Lett. 2020;5:3227–3234. doi: 10.1109/LRA.2020.2974710. [DOI] [Google Scholar]
- 32.Li C., Guo J., Guo C., Cong R., Wan S., Hou J., Kwong S., Li J. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2020;29:4376–4389. doi: 10.1109/TIP.2019.2955241. [DOI] [PubMed] [Google Scholar]
- 33.Finlayson G.D., Trezzi E. Shades of gray and colour constancy; Proceedings of the Color and Imaging Conference (CIC12); Scottsdale, AZ, USA. 9–12 November 2004; pp. 37–41. [Google Scholar]
- 34.Liu R., Fan X., Zhu M., Hou M., Luo Z. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Trans. Circuits Syst. Video Technol. 2020;30:4861–4875. doi: 10.1109/TCSVT.2019.2963772. [DOI] [Google Scholar]
- 35.Gao X., Jin J., Lin F., Huang H., Yang J., Xie Y., Zhang B. Enhancing underwater images through multi-frequency detail optimization and adaptive color correction. J. Mar. Sci. Eng. 2024;12:1790. doi: 10.3390/jmse12101790. [DOI] [Google Scholar]
- 36.Zhang W., Dong L., Pan X., Zou P., Qin L., Xu W. A survey of restoration and enhancement for underwater images. IEEE Access. 2019;7:182259–182279. doi: 10.1109/ACCESS.2019.2959560. [DOI] [Google Scholar]
- 37.Wang S., Ma K., Yeganeh H., Wang Z., Lin W. A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Process. Lett. 2015;22:2387–2390. doi: 10.1109/LSP.2015.2487369. [DOI] [Google Scholar]
- 38.Panetta K., Gao C., Agaian S. Human-visual-system-inspired underwater image quality measures. IEEE J. Ocean. Eng. 2016;41:541–551. doi: 10.1109/JOE.2015.2469915. [DOI] [Google Scholar]
- 39.Yang M., Sowmya A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015;24:6062–6071. doi: 10.1109/TIP.2015.2491020. [DOI] [PubMed] [Google Scholar]
- 40.Zhang Y., Qi Q., Li K., Liu D. Underwater video consistent enhancement: A real-world dataset and solution with progressive quality learning. Multimed. Tools Appl. 2024;83:7335–7361. doi: 10.1007/s11042-023-15542-3. [DOI] [Google Scholar]