. 2025 Sep 18;20(9):e0332195. doi: 10.1371/journal.pone.0332195

Visual edge feature enhancement of product appearance design images based on improved retinex algorithm

Cheng-jie Chen 1,*, Guo-rui Tang 2
Editor: Yongjie Li3
PMCID: PMC12445546  PMID: 40966214

Abstract

Under the influence of complex factors such as lighting, color distortion, and suspended particles, product appearance design images suffer from lost edge feature information and blurred edges. To improve the clarity and visual effect of product appearance design, a visual edge feature enhancement method for product appearance design images based on an improved Retinex algorithm is proposed. A color correction method based on depth-of-field estimation removes the blue cast from the product appearance design image and applies color correction and contrast enhancement. The Gray World algorithm is improved and an edge attenuation compensation method is designed to solve the problem of edge color attenuation under noise interference, yielding clearer product appearance design images. On the basis of this clarification, the original RGB image is converted to HSV. Building on the Retinex model, the brightness component is decomposed at multiple levels with different filtering parameters to obtain several illumination and reflection images carrying different scale information. An exponential function and a Sigmoid function then process the reflection and illumination images separately, reducing external interference at each scale and addressing the difficulty of enhancing images with uneven illumination, high noise, low illumination, and lost details. At the same time, adaptive nonlinear correction is applied to the saturation component, and the corrected saturation, brightness, and hue are fused and converted back to RGB, expanding the edge grayscale feature information in each spatial domain. The weights of the traditional bilateral filtering method are improved to reduce the depth difference between information at different scales and to enhance the visual edge features of product appearance design images. Experimental results show that the proposed method achieves a PCQI of 1.033, an IQE of 0.610, an IQM of 1.830, and an information entropy above 0.7. These data show that the enhanced images retain rich edge feature information, significantly improving the visual edge feature enhancement effect for product appearance design images.

1. Introduction

In today’s visual dominated era, product appearance design is not only a manifestation of functionality, but also an intuitive display of brand philosophy and aesthetic pursuit. With the rapid development of technology and the increasing aesthetic demands of consumers, how to make product appearance design stand out among many competitors has become the focus of attention for enterprises. The visual effect of product appearance design images, as an important medium for conveying design concepts and product characteristics, directly affects consumers’ first impression and purchasing decisions.

Among numerous visual elements, edge features play an indispensable role in the expressive power of product appearance design as the basis for constructing object contours and forms. The clarity, continuity, and uniqueness of the edges directly affect the refinement and recognition of the product’s appearance. Therefore, enhancing the visual edge features of product appearance design images has become an important means to improve design quality and enhance market competitiveness.

By using advanced image processing techniques to finely enhance the edge features in the image [1,2], the details of product design can be made more vivid and the form more three-dimensional, thereby generating stronger visual impact and attraction. The above processing methods not only help designers better express their design concepts, but also facilitate consumers to more intuitively understand product features and enhance user experience.

Researchers have conducted extensive work on enhancing visual edge features in images [3,4] and have achieved notable results. Ren and Liu [5] found that polarimetric SAR images contain information redundancy and feature correlation in the spatial neighborhood, and that fully exploiting neighborhood information helps improve the discriminability and robustness of sample features; by introducing an adaptive superpixel clustering algorithm based on polarization-statistics HSV color features, they proposed a method that enhances image features using neighborhood correlation. However, the noise and interference present in polarimetric SAR images attenuate edge color features, destabilizing the color features and ultimately degrading the enhancement. Liu et al. [6] used a recurrent generative adversarial network to recast underwater image enhancement as a style-transfer problem, achieving unsupervised learning; they also combined feature decoupling to extract the style and structural features of the image separately, ensuring structural consistency before and after enhancement. When processing underwater images, however, the model is prone to getting stuck in local optima under complex factors such as lighting, color distortion, and suspended particles, resulting in poor image quality. Pang et al. [7] designed a structural feature mapping network and a dual-scale feature extraction network: the former establishes global structural feature weights and preserves the spatial structure of the original image, while the latter uses multi-scale convolutional layers and fused dilated convolutions to strengthen the network's attention to contextual information, improve feature extraction in regions of interest, and exchange information between the two scales to generate target enhancement maps, achieving adaptive enhancement of detail and texture in target regions. Applied to images with uneven lighting or lost details, however, this method is easily disturbed by redundant reflection images, and its edge enhancement is insufficient. Zhao et al. [8] proposed a shared edge low-resolution feature extraction network that extracts features from two edge-decoded images with the same content but different details, designed a residual recursive compensation structure applied to the edge and central low-resolution feature extraction networks, and built a multi-description edge upsampling reconstruction network with partial parameter sharing to reduce the number of model parameters; a multi-description central upsampling reconstruction network then combines the low-resolution features of the two edges with those of the central path for deep feature fusion to enhance multi-description compressed images. Because this method only combines the edge and central low-resolution features, it cannot optimize the grayscale features of multiple spatial domains and struggles to preserve image edge details, so its enhancement effect needs improvement.

Beyond the processing of product design images, processed images can also be used to predict photovoltaic-system illuminance and ultraviolet irradiance, detect solar cell defects, and detect dust on photovoltaic panel surfaces. For example, Abdelsattar et al. [9] compared machine learning models such as support vector classification, linear regression, extreme gradient boosting, gradient boosting, random forest, and CatBoost, evaluating their performance in estimating illuminance and ultraviolet irradiance in photovoltaic systems; this approach applies machine learning models directly to the images without a clarification stage, so its application effect is limited. Abdelsattar et al. [10] applied extensive deep learning techniques to automatic defect recognition in solar cell images, using 24 different convolutional neural network architectures to classify solar cells into defective and non-defective categories, but did not enhance the images before classification. Abdelsattar et al. [11] studied how well three MobileNet variants, trained on image data, classify dusty versus clean photovoltaic surfaces; the approach is limited to those MobileNet variants, and its enhancement effect is limited.

Building on this prior work, a visual edge feature enhancement method for product appearance design images based on an improved Retinex algorithm is proposed. The Gray World algorithm is improved and an edge attenuation compensation method is designed to solve the problem of edge color attenuation under noise interference, yielding clearer product appearance design images. The Retinex algorithm is introduced to decompose the image into two parts that are fused and decomposed separately; multi-scale enhancement of the reflection and illumination images reduces the different types of interference at different scales and addresses the enhancement of images with uneven illumination, high noise, low illumination, and lost details. The grayscale features of each spatial domain are optimized simultaneously to achieve edge feature enhancement, and traditional bilateral filtering is improved to better preserve image edge detail information.

2. Method for enhancing visual edge features of product appearance design images

2.1. Product appearance design image clarity scheme design

Different product materials have different optical properties, such as the high reflectivity of metals, the absorption and scattering of light by wood grain, and the transparency of plastics, so lighting is uneven and behaves differently on different materials. For example, the enamel surface of ceramic products may produce specular reflection, while textiles reflect mostly diffusely. Complex factors such as lighting, color distortion, and suspended particles must therefore be considered when processing such images. In general, because the distance between the object and the camera is small, the forward-scattering component can be ignored, so only color correction and contrast enhancement need to be integrated into a new scheme for clarifying product appearance design images. The image clarification process is shown in Fig 1.

Fig 1. Image Clarity Processing Flow.


Simplify the imaging model into the form of formula (1):

I(x) = J(x)·t(x) + B·(1 − t(x))  (1)

In the above equation, J(x)·t(x) represents the direct component; B·(1 − t(x)) represents the background scattering component; I(x) represents the original image; J(x) represents the clear image; B represents the background light; t(x) represents the transmittance, which is expressed in the form of formula (2):

t(x) = exp(−c·d(x))  (2)

In the above equation, c represents the attenuation coefficient of the medium; d(x) represents the distance between scene point x and the camera.
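As an illustration of how formulas (1)–(2) would be inverted once the background light and transmittance are known, the following minimal numpy sketch recovers the direct component; the function name, interface, and clipping bounds are our own illustrative assumptions, since the paper estimates B, c, and d(x) in the steps below.

```python
import numpy as np

def restore_direct_component(I, B, c, d):
    """Invert the simplified imaging model I = J*t + B*(1 - t),
    with transmittance t(x) = exp(-c * d(x)) (formulas 1-2).
    I: HxWx3 observed image in [0, 1]; B: length-3 background light;
    c: medium attenuation coefficient; d: HxW scene-distance map."""
    t = np.exp(-c * d)[..., None]          # per-pixel transmittance
    t = np.clip(t, 0.1, 1.0)               # guard against division blow-up
    J = (I - B[None, None, :] * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```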

Due to the color cast in product appearance design images, if the background has complex textures or is similar in color to the product, the edges of the product appearance design image will attenuate. To address this, an improved Gray World algorithm is introduced to design a compensation method. The main ideas are as follows:

The attenuation process of product appearance design images is represented in the form of formula (3):

Iλ(x) = J(x)·exp(−zλ·d(x))  (3)

In the above equation, exp(−zλ·d(x)) represents the attenuation factor of light with a wavelength of λ.

Thus, color correction of product appearance design images can be achieved by estimating and removing the color of the light source.

Calculate the color attenuation factors of each channel and compensate for the color attenuation caused by the medium: that is, treat the blue tone in the image as the color of the light source, use the Gray World algorithm to estimate the color of the light source and remove it. The estimation of light source color can be expressed in the form of formula (4):

∫(Iλ(x)/attλ) dx / ∫Iλ(x) dx = aλ·e  (4)

In the above equation, attλ represents the attenuation factor; Iλ(x) represents the attenuated image; aλ represents the output gain of each color channel; e represents the color of the light source.

In equation (4), the key to obtaining the attenuation factors of each channel lies in the depth of field of the scene and the estimation of the attenuation coefficient. Therefore, based on the characteristics of the product appearance design image, a depth of field function is proposed to estimate the attenuation differences of each channel, and the attenuation factors of the three channels are calculated separately.

For different scenarios, due to the severe attenuation of red light, the intensity of the red channel in the background is very low, and the scattering of light will result in relatively high intensity of the blue or green channel. Considering the difference in attenuation among the three channels, based on the maximum prior value of the red, blue, and green dark channels, calculate the background light as shown in formula (5):

C = argmaxₓ ( Idark(R)(x) − max(Idark(G)(x), Idark(B)(x)) )  (5)

In the above equation, Idark(R)(x), Idark(G)(x), and Idark(B)(x) represent the dark channel values of the R, G, and B channels of image I, respectively.

In order to make the depth of field estimation more accurate, the ratio βλ(x) of the global depth of field function R(x) to the depth of field functions Rλ(x) of the R, G, and B component maps is first obtained, which is called the channel depth ratio, that is:

βλ(x) = C·R(x)/Rλ(x)  (6)

Since βλ(x) is not a constant in practical applications and its values differ across the R, G, and B channels, a larger variance indicates more relative information. To make the depth of field estimation more accurate, more information should be obtained from the input image. Therefore, the channel with the largest βλ(x) variance is selected as λ, and the depth of field dλ(x) is obtained by combining formula (6):

dλ(x) = zλ·d(x) ≈ Rλ(x) = R(x)/βλ(x)  (7)

In order to restore more details, guided filtering is used to optimize the obtained depth of field function dλ(x) and obtain the attenuation factor of the λ channel. Taking blue light as an example, there are:

dB(x) = dλ(x),  attB = exp(−dB(x))  (8)

Based on relevant prior knowledge, the attenuation ratio of red and green light relative to blue light can be calculated, as shown in formula (9):

zR/zB = (bR·AB)/(bB·AR),  zG/zB = (bG·AB)/(bB·AG)  (9)

In the above formula, zR, zG, and zB represent the attenuation factors corresponding to each color channel; bR, bG, and bB represent the color components corresponding to the different color channels; AR, AG, and AB represent the attenuation ratios of the different color channels.

The attenuation factors attR and attG of the other two channels can be obtained from formula (10):

attR = attB^(zR/zB),  attG = attB^(zG/zB)  (10)

Substituting formula (10) into formula (4), color compensation is performed on the product appearance design image [12], and the light source color e is estimated and removed to obtain the color-corrected product appearance design image.
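A minimal sketch of this color correction step, combining formulas (4), (9), and (10): the blue-channel attenuation factor and the z-ratios are assumed given, and the function name and numerical guards are illustrative rather than the paper's implementation.

```python
import numpy as np

def gray_world_correct(I, att_B, zR_over_zB, zG_over_zB):
    """Gray World correction with attenuation compensation (formulas
    4, 9, 10). att_B: HxW blue-channel attenuation factor from the
    depth-of-field estimate; the z-ratios come from prior knowledge."""
    att = np.stack([att_B ** zR_over_zB,    # att_R (formula 10)
                    att_B ** zG_over_zB,    # att_G (formula 10)
                    att_B], axis=-1)
    comp = I / np.clip(att, 1e-3, 1.0)      # compensate medium attenuation
    e = comp.reshape(-1, 3).mean(axis=0)    # Gray World light-source estimate
    gain = e.mean() / np.clip(e, 1e-6, None)
    return np.clip(comp * gain, 0.0, 1.0)   # light-source color removed
```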

In most product appearance design images, certain pixels always have at least one color channel with a very low intensity value, even approaching 0, that is:

Idark(x) = min_{y∈Ω(x)} ( min_{λ∈{R,G,B}} Iλ(y) ) → 0  (11)

In the above equation, Ω(x) represents a local block centered around x; Idark(x) represents the dark channel value of image I. Due to the neglect of the influence of the red channel when estimating the depth of field using dark channel priors, the dark channel value of the product appearance design image is relatively small, which affects the depth of field estimation. Therefore, it is proposed to set a reasonable threshold for the red channel to determine whether to add red channel information to the dark channel calculation.

Firstly, for the intensity of the red channel, a threshold is set, and then the mean of the red channel is calculated. If the mean is greater than the set threshold, the red channel information is added to the dark channel prior. Otherwise, only the blue-green channel is considered, and the dark channel is corrected to:

Idark(x) = min( min_{y∈Ω(x)} p·IR(y), min_{y∈Ω(x)} p·IG(y), min_{y∈Ω(x)} p·IB(y) ) → 0  (12)

In the above equation, y ∈ Ω(x) represents a pixel within the image block centered around x; IR, IG, and IB represent the R, G, and B channels of the observed image, respectively; p represents the threshold.
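A compact sketch of the corrected dark-channel computation of formulas (11)–(12); the patch size and threshold value here are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def corrected_dark_channel(I, patch=15, red_threshold=0.15):
    """Dark channel with a red-channel gate (formulas 11-12): the red
    channel joins the computation only when its mean exceeds the
    threshold; otherwise only the blue-green channels are used."""
    R, G, B = I[..., 0], I[..., 1], I[..., 2]
    channels = [G, B]
    if R.mean() > red_threshold:
        channels.append(R)
    per_pixel_min = np.minimum.reduce(channels)        # min over lambda
    return minimum_filter(per_pixel_min, size=patch)   # min over Omega(x)
```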

During the transmission of light, red light attenuates the most severely, while blue light attenuates relatively less, resulting in product design images often presenting a blue tone. Usually, the output outR of the red channel can be expressed in the form of formula (13):

kR = MN·Mean/ΣR,  Mean = (ΣR + ΣG + ΣB)/(3MN),  outR = R·kR = R·(ΣR + ΣG + ΣB)/(3ΣR)  (13)

In the above equation, kR represents the output gain of the red channel; Mean represents the mean pixel value of the product appearance design image. Analyzing formula (13), as R approaches 0, outR tends towards infinity, over-amplifying the red channel of the restored image and causing the processed product appearance design image to appear light red.

The gain of each channel is therefore made positively correlated with its ratio to the total intensity of the three channels in the image; at the same time, to fully utilize the information of the red channel, its gain is made inversely correlated with that channel's proportion.

First, calculate the total intensity Sum of the three channels in the image:

Sum = ΣR + ΣG + ΣB  (14)

Then, calculate the ratios ϕR, ϕG, and ϕB of channels R, G, and B respectively:

ϕR = ΣR/Sum,  ϕG = ΣG/Sum,  ϕB = ΣB/Sum  (15)

Finally, calculate the gains kR, kG, and kB for channels R, G, and B respectively:

kR = 1 − ϕR = 1 − ΣR/Sum = (ΣG + ΣB)/Sum,  kG = ΣG/Sum,  kB = ΣB/Sum  (16)

The outputs outR, outG, and outB of channels R, G, and B can be obtained from formula (16):

outR = R·kR = R·(ΣG + ΣB)/(ΣR + ΣG + ΣB)
outG = G·kG = G·ΣG/(ΣR + ΣG + ΣB)
outB = B·kB = B·ΣB/(ΣR + ΣG + ΣB)  (17)
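The gain computation of formulas (14)–(17) can be sketched as follows; the final mean-preserving rescale is our own normalization assumption, added so the sketch keeps overall brightness stable.

```python
import numpy as np

def channel_gain_compensation(I):
    """Channel gains of formulas (14)-(17): the red gain grows as the
    red channel's share of the total intensity shrinks, lifting the
    attenuated channel without letting it blow up."""
    sums = I.reshape(-1, 3).sum(axis=0)          # Sigma_R, Sigma_G, Sigma_B
    total = sums.sum()                           # Sum (formula 14)
    k = np.array([(sums[1] + sums[2]) / total,   # k_R = 1 - phi_R
                  sums[1] / total,               # k_G
                  sums[2] / total])              # k_B
    k = k / k.mean()                             # our own mean-preserving rescale
    return np.clip(I * k, 0.0, 1.0)
```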

In order to further improve the quality of product appearance design image restoration, feature information of the two input images is extracted [13, 14], and fusion weight maps are defined, namely brightness map, saliency map, local contrast map, and saturation map. The input image information can be adaptively preserved according to the local features of the image, ensuring that the fused image has high brightness, saliency, local contrast, saturation, etc.

Perform adaptive histogram equalization on the three channels of the product appearance design image to obtain input image I2. Features are then extracted from input images I1 and I2 using formula (18) [15, 16], yielding the brightness map WLk, saliency map WNk, contrast map WLC, and saturation map WS, respectively:

WLk = [(Rk − Lk)² + (Gk − Lk)² + (Bk − Lk)²]/3
WNk = [(Lk − Lmk)² + (ak − amk)² + (bk − bmk)²]/3
WLC(x,y) = Lk − Lbck
WS = exp(−(S − 1)²/(2σ²))  (18)

In the above formula, L represents brightness; k represents the input image number; a and b represent the values of the input image in the a and b color channels; amk and bmk represent the means of the a and b color channels; Lmk represents the average brightness in Lab space; Lbck represents the brightness channel obtained after low-pass filtering; S represents the saturation value of each pixel; σ represents the standard deviation.

After completing the above operations, obtain the standardized weight maps Wk and W¯k of the two input images using formula (19):

Wk = WLk + WNk + WLC + WS,  W̄k = Wk / Σ_{k=1}^{2} Wk  (19)

Based on the standardized weight map obtained, perform multi-scale fusion on two input images [17, 18] to obtain a clear product appearance design image J(x,y):

J(x,y) = Σk W̄k·Ik(x,y)  (20)

This procedure mitigates the uneven lighting caused by light refraction, removes noise, and preserves the sharpness of product appearance design image edges.
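A simplified sketch of the weight-map fusion of formulas (18)–(20), reduced to the local-contrast and saturation terms for brevity (the full method also uses the brightness and saliency maps; all parameter values here are illustrative):

```python
import numpy as np
import cv2

def fuse_inputs(I1, I2, sigma=0.3):
    """Simplified weight-map fusion (formulas 18-20). I1: the
    color-corrected image; I2: its histogram-equalized variant;
    both float32 RGB in [0, 1]."""
    def weight(I):
        L = I.mean(axis=-1).astype(np.float32)                 # brightness
        W_LC = np.abs(L - cv2.GaussianBlur(L, (0, 0), 3))      # contrast map
        S = I.std(axis=-1)                                     # saturation proxy
        W_S = np.exp(-((S - 1.0) ** 2) / (2.0 * sigma ** 2))   # formula (18)
        return W_LC + W_S + 1e-6
    W1, W2 = weight(I1), weight(I2)
    W1n, W2n = W1 / (W1 + W2), W2 / (W1 + W2)                  # formula (19)
    return W1n[..., None] * I1 + W2n[..., None] * I2           # formula (20)
```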

2.2. Enhancement algorithm based on improved Retinex algorithm

The edge enhancement of product appearance images aims to reduce the grayscale variation in the spatial domain. Therefore, the Retinex algorithm is introduced to decompose the image into a reflection image and an illumination image [19,20], which are processed separately, fused, and decomposed again. Enhancement functions matched to the required sharpness enhance each component from its own perspective, while each spatial domain is optimized to complete the enhancement.

According to the Retinex model [21, 22], the product appearance design image is divided into two parts, corresponding to the following expression:

U(x,y) = J(x,y) = R(x,y) ⊗ L(x,y)  (21)

In the above formula, U(x,y) represents the original product appearance design image; R(x,y) and L(x,y) represent the reflection and illumination images, respectively; ⊗ represents the convolution operation.

In reality, there is no situation where all light is absorbed or reflected by objects. In order to meet the saturation, brightness, and color tone requirements of product appearance design images, the range of the reflection component is set to R(x,y)[0,1], and the dynamic range of the illumination component is obtained as follows:

L(x,y) = U(x,y)/R(x,y) ≥ U(x,y)  (22)

The enhancement model for Retinex images is represented in the form of formula (23):

U′(x,y) = l(R(x,y)) ⊗ g(L(x,y))  (23)

In the above formula, U′(x,y) represents the processed product appearance design image; l(·) and g(·) represent the functions used to enhance the reflection and illumination images, respectively. The Retinex algorithm applies to various situations such as uneven lighting, high noise, low illumination, and loss of details, and different enhancement functions can be selected according to the requirements. The original image is decomposed into illumination and reflection images, each is expanded and enhanced to obtain the enhanced illumination and reflection images [1, 23], the redundant interference of illumination and reflection images carrying different scale information is reduced, and the results are fused to obtain the final enhanced image.
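In practice, the decomposition of formula (21) is commonly realized by low-pass filtering the image to estimate the illumination and dividing it out to obtain the reflection. A minimal single-scale sketch follows; the Gaussian surround is a standard Retinex choice and not necessarily the paper's exact filter.

```python
import numpy as np
import cv2

def retinex_decompose(V, sigma):
    """Single-scale Retinex split of the brightness channel (formula
    21): the illumination L is a Gaussian-smoothed copy of V and the
    reflection R = V / L carries the detail. Calling this with several
    sigma values yields the multi-level R1, R2, R3 and L3 of the text."""
    L = cv2.GaussianBlur(V, (0, 0), sigma) + 1e-6   # illumination estimate
    R = np.clip(V / L, 0.0, 1.0)                    # reflection in [0, 1]
    return R, L
```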

On the basis of traditional bilateral filtering, the weight ω is improved to enhance the ability to preserve image edge details. The improved ω is represented in the form of formula (24):

ω = 1 − √((i − m)² + (j − n)²)/r  when √((i − m)² + (j − n)²) ≤ r;  ω = 0 otherwise  (24)

In the above equation, r represents the size of the filtering window; m represents the size of the product’s appearance design image.
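The spatial term of the improved weight in formula (24) can be precomputed as a window mask, as in the following sketch: a linear fall-off that reaches zero at radius r.

```python
import numpy as np

def spatial_weight(r):
    """Spatial term of the improved bilateral weight (formula 24):
    a linear fall-off from 1 at the window centre to 0 at radius r,
    so distant, unrelated pixels do not dilute edge pixels."""
    ax = np.arange(-r, r + 1)
    jj, ii = np.meshgrid(ax, ax)
    dist = np.sqrt(ii ** 2 + jj ** 2)
    return np.where(dist <= r, 1.0 - dist / r, 0.0)
```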

Expand the visual edge feature enhancement of product appearance design images [24, 25], and the specific steps are as follows:

(1) Convert the input raw image from the RGB color model to the HSV model, then extract the hue H, saturation S, and brightness V components.

(2) Using improved bilateral filtering, perform multi-level decomposition on the brightness V to obtain the first-, second-, and third-layer reflection images R1, R2, and R3, each carrying different scale information, as well as the third-layer illumination image L3.

(3) The reflection image decomposed by Retinex may have an insufficient dynamic range due to noise, so the dynamic range of the reflection components is expanded through a nonlinear transformation while sharpening details and improving local contrast to restore more realistic surface characteristics. The exponential function stretches the reflection component non-uniformly, which maintains a natural transition after enhancement. Expanding and enhancing the decomposed reflection images at each level with an exponential function yields the processed reflection image f(R1, R2, R3):

f(R1, R2, R3) = θp^V  (25)

In the above equation, θp represents the indicator function.

(4) The third layer illumination image decomposed by Retinex contains global illumination components with a large dynamic range, but visually sensitive areas are often concentrated in the medium brightness range. The Sigmoid function is a smooth S-shaped curve that can nonlinearly compress a wide range of input brightness values into the [0,1] interval, while enhancing the contrast of the middle brightness region, flexibly controlling the intensity and range of enhancement, and adapting to different lighting conditions. Therefore, using the Sigmoid function to enhance the third layer illumination image L3, the enhanced illumination image g(L3) is obtained:

g(L3) = 1 / (1 + α·(1 − L3)/(L3 + ρ))  (26)

In the above equation, α represents the illuminance adjustment coefficient; ρ represents the slack variable.

(5) Fuse the enhanced reflection and illumination images to obtain the enhanced brightness component V′:

V′ = f(R1, R2, R3) ⊗ g(L3)  (27)

(6) Apply adaptive nonlinear correction to obtain the processed saturation S′:

S′ = Rn ⊗ Ln  (28)

(7) Fuse the hue H, corrected saturation S′, and enhanced brightness V′ to obtain the enhanced image, then convert it back to RGB to obtain the final visual edge feature enhancement result Z:

Z = l(R1) ⊗ l(R2) ⊗ ⋯ ⊗ l(Rn) ⊗ g(Ln) ⊗ (H + S′ + V′)  (29)

The core steps of the above visual edge feature enhancement method for product appearance design images are summarized as pseudocode in Fig 2.

Fig 2. Algorithm pseudocode.

Fig 2
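As a complement to the pseudocode in Fig 2, the following Python sketch strings steps (1)–(7) together; the scale set, α, ρ, the stretch exponent θ, and the saturation gamma are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np
import cv2

def enhance(img_bgr, alpha=2.0, rho=0.05, theta=1.5):
    """End-to-end sketch of steps (1)-(7)."""
    # (1) RGB -> HSV, extract H, S, V
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    H, S, V = hsv[..., 0], hsv[..., 1] / 255.0, hsv[..., 2] / 255.0

    # (2) multi-level decomposition of the brightness V
    Rs, L = [], V
    for sigma in (2, 8, 32):                        # fine -> coarse scales
        L = cv2.GaussianBlur(V, (0, 0), sigma) + 1e-6
        Rs.append(np.clip(V / L, 0.0, 1.0))         # R1, R2, R3

    # (3) exponential stretch of the reflection images (formula 25)
    f = np.mean([R ** (1.0 / theta) for R in Rs], axis=0)
    # (4) Sigmoid compression of the coarsest illumination (formula 26)
    g = 1.0 / (1.0 + alpha * (1.0 - L) / (L + rho))
    # (5) enhanced brightness component (formula 27)
    V_new = np.clip(f * g, 0.0, 1.0)
    # (6) adaptive nonlinear saturation correction (gamma curve here)
    S_new = np.clip(S ** 0.8, 0.0, 1.0)
    # (7) fuse H, S', V' and convert back to RGB/BGR
    out = np.stack([H, S_new * 255.0, V_new * 255.0], axis=-1)
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_HSV2BGR)
```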

3. Experiment

Using a product appearance design image acquisition device, 500 product appearance design images were sampled for training and 100 images were selected for testing, as shown in Fig 3.

Fig 3. Product appearance design image collection environment.


The 600 images in the dataset come from the industrial-grade product appearance image acquisition device shown in Fig 3, which uses a 25-megapixel CMOS sensor. The images cover bottled, bagged, barreled, boxed, and other product types, with materials including paper and plastic, giving the data differentiated and diverse characteristics. Each product is captured from the front, oblique, side, and other angles, covering the complete appearance features of the product design. The original images contain interference factors such as reflective spots, occlusion, and texture blur, which makes them suitable for demonstrating the robustness of the proposed enhancement method.

In the Matlab platform, the software and hardware development environment is as follows:

  • (1)

The experimental hardware platform is a desktop computer with an Intel(R) Core(TM) i7-8700K CPU @ 3.70 GHz processor, 16.00 GB RAM, and a 64-bit operating system.

  • (2)

The image size is 256 × 256 × 3, with a sensor resolution of 25 megapixels, a frame rate of 20 FPS, an exposure time of 1/60 s, and a focal length of 30 mm.

  • (3)

    Based on the number of images and resolution characteristics, in order to fully train the algorithm, the number of iterations is set to 200.

  • (4)

Improved Gray World algorithm parameters: a Gaussian attenuation function with standard deviation σ = 1.5 and compensation radius r = 3.

Retinex multi-scale decomposition: 3 levels. Gaussian filtering parameters per level: layer 1 (coarse scale), window size 31 × 31, capturing global illumination; layer 2 (medium scale), window size 15 × 15, medium structure; layer 3 (fine scale), window size 7 × 7, retaining high-frequency details.

  • (5)

Running time (single 1080P image): approximately 0.8 s in CPU mode and about 0.3 s with GPU acceleration (accelerating the bilateral filtering and Retinex decomposition). The multi-scale decomposition is multi-threaded, with each level of filtering computed independently.

Based on existing experience, bilateral filtering weights ∈ {3, 5, 7, 9, 11, 13, 15} were set and verified. By calculating the feature retention rate under different filtering weights, it was found that the feature retention rate stabilized once the weight exceeded 5, effectively suppressing the detail blurring caused by large-window filtering. Considering the computational complexity, a bilateral filtering weight of 7 was chosen.
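The weight sweep described above can be reproduced in outline as follows; since the paper does not define its feature retention rate, the sketch proxies it with the ratio of Laplacian energy after versus before filtering, which is our own assumption.

```python
import cv2

def edge_retention_sweep(img_gray, windows=(3, 5, 7, 9, 11, 13, 15)):
    """Sweep bilateral-filter window sizes and report an edge-retention
    proxy: Laplacian energy after filtering relative to before."""
    base = cv2.Laplacian(img_gray, cv2.CV_64F).var()
    for d in windows:
        out = cv2.bilateralFilter(img_gray, d, 75, 75)
        ratio = cv2.Laplacian(out, cv2.CV_64F).var() / base
        print(f"window {d:2d}: edge retention {ratio:.3f}")
```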

The experimental sample is shown in Fig 4.

Fig 4. Sample of product appearance design images.


Experiment 1: Clarity Performance Test.

To verify the superiority of the proposed method, it was compared with a physical-model-based method and a Tetrolet-transform-based method. Four different product appearance design images were selected for testing, and the experimental results are shown in Fig 5.

Fig 5. Comparison of image clarity results of product appearance design using different methods.


From Fig 5, it can be seen that the backgrounds of the original image samples have complex textures with similar colors. During image transmission, red light therefore attenuates more severely, and the product appearance design images present a blue color cast, with considerable noise and blurred edges. The proposed method corrects the red and blue channel values using a depth-of-field estimation function and designs an edge attenuation compensation method based on the Gray World algorithm. By estimating the color of the light source, the color attenuation of the product appearance design image is compensated and corrected, removing the blue cast and noise and making the texture edges of the image clearer. The other methods neglect the depth differences between color channels and do not compensate for color attenuation, so their image clarity is relatively poor. The clarification results therefore demonstrate the effectiveness of fusing depth-of-field estimation with the Gray World algorithm.

Four evaluation metrics are used to evaluate the performance of each method: the Patch-based Contrast Quality Index (PCQI), Image Quality Evaluation (IQE), Image Quality Measurement (IQM), and clarity time. PCQI measures the contrast of generally degraded images; IQE is a comprehensive indicator of color cast, blur, and contrast; IQM is a comprehensive indicator of color, clarity, and contrast. The higher the PCQI, IQE, and IQM values, the higher the image quality; the lower the clarity time, the higher the computational efficiency of the method. Table 1 presents the mean, median, and variance of the evaluation indicators for the four images processed by each method.

Table 1. Comparison of PCQI, IQE, and IQM experimental results using different methods.

Test image | Indicator | Proposed method | Physical-model-based method | Tetrolet-transform-based method
01 | PCQI | 1.033 (Median: 1.032, Variance: 0.0001) | 0.889 (Median: 0.888, Variance: 0.0002) | 0.782 (Median: 0.781, Variance: 0.0001)
01 | IQE | 0.602 (Median: 0.601, Variance: 0.0001) | 0.595 (Median: 0.594, Variance: 0.0002) | 0.579 (Median: 0.578, Variance: 0.0002)
01 | IQM | 1.640 (Median: 1.639, Variance: 0.0003) | 1.633 (Median: 1.631, Variance: 0.0001) | 1.604 (Median: 1.602, Variance: 0.0001)
01 | Clarity time/ms | 13.671 (Median: 13.669, Variance: 0.0002) | 19.392 (Median: 19.395, Variance: 0.0002) | 23.341 (Median: 23.340, Variance: 0.0002)
02 | PCQI | 1.007 (Median: 1.006, Variance: 0.0001) | 0.920 (Median: 0.923, Variance: 0.0001) | 0.885 (Median: 0.883, Variance: 0.0001)
02 | IQE | 0.610 (Median: 0.609, Variance: 0.0002) | 0.597 (Median: 0.593, Variance: 0.0002) | 0.591 (Median: 0.590, Variance: 0.0001)
02 | IQM | 1.773 (Median: 1.772, Variance: 0.0001) | 1.760 (Median: 1.757, Variance: 0.0001) | 1.748 (Median: 1.747, Variance: 0.0002)
02 | Clarity time/ms | 11.892 (Median: 11.891, Variance: 0.0002) | 17.983 (Median: 17.979, Variance: 0.0002) | 20.194 (Median: 20.192, Variance: 0.0002)
03 | PCQI | 1.020 (Median: 1.021, Variance: 0.0001) | 0.923 (Median: 0.922, Variance: 0.0001) | 0.891 (Median: 0.889, Variance: 0.0001)
03 | IQE | 0.598 (Median: 0.597, Variance: 0.0002) | 0.590 (Median: 0.591, Variance: 0.0004) | 0.582 (Median: 0.581, Variance: 0.0004)
03 | IQM | 1.830 (Median: 1.831, Variance: 0.0002) | 1.757 (Median: 1.756, Variance: 0.0001) | 1.719 (Median: 1.717, Variance: 0.0002)
03 | Clarity time/ms | 12.006 (Median: 12.008, Variance: 0.0001) | 18.692 (Median: 18.691, Variance: 0.0002) | 21.359 (Median: 21.357, Variance: 0.0001)
04 | PCQI | 1.005 (Median: 1.003, Variance: 0.0002) | 0.884 (Median: 0.882, Variance: 0.0001) | 0.803 (Median: 0.801, Variance: 0.0002)
04 | IQE | 0.577 (Median: 0.576, Variance: 0.0001) | 0.523 (Median: 0.521, Variance: 0.0002) | 0.504 (Median: 0.501, Variance: 0.0001)
04 | IQM | 1.652 (Median: 1.651, Variance: 0.0003) | 1.536 (Median: 1.535, Variance: 0.0001) | 1.515 (Median: 1.511, Variance: 0.0002)
04 | Clarity time/ms | 12.837 (Median: 12.838, Variance: 0.0001) | 19.252 (Median: 19.250, Variance: 0.0002) | 21.015 (Median: 21.014, Variance: 0.0001)

From Table 1, it can be seen that the PCQI, IQE, and IQM values of the proposed method are higher than those of the other two methods, indicating that its clarification performance is the best: it significantly improves the clarity of product appearance design images, makes image details more prominent, and produces good visual effects. Its clarity time is also the lowest, indicating higher computational efficiency than the other methods.

Experiment 2: Performance testing of enhancing visual edge features in product appearance design images:

In order to further verify the effectiveness of the proposed method, different methods were used in the experiment to enhance the visual edge features of product appearance design images. The experimental results obtained are shown in Fig 6.

Fig 6. Comparison of visual edge feature enhancement results of product appearance design images using different methods.


As shown in Fig 6, after the proposed method enhances the visual edge features of the product appearance design image, the result is more consistent with human vision and the image edges are clearer, effectively preserving more detail information. After enhancement by the other two methods, the images still show blur and color cast, and the overall enhancement effect is unsatisfactory. The proposed method therefore enhances the edge features of product appearance images more effectively.

The richness of the enhanced visual edge features in product appearance design images is measured using information entropy: the larger the entropy value, the more information the image contains. A modern deep-learning approach that currently leads image enhancement, a GAN-based enhancement method, is added as a comparison method. The specific experimental results are shown in Fig 7.
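For reference, the information entropy used here is the Shannon entropy of the grayscale histogram; a minimal sketch follows (the reported values below 1 suggest a normalized variant, while this sketch returns raw bits).

```python
import numpy as np

def image_entropy(img_gray):
    """Shannon entropy of an 8-bit grayscale histogram, in bits."""
    hist, _ = np.histogram(img_gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins
    return float(-(p * np.log2(p)).sum())
```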

Fig 7. Comparison of experimental results of information entropy using different methods.


From Fig 7, it can be seen that the information entropy of the proposed method is the highest among the three methods, indicating that the proposed method can achieve more satisfactory image visual edge feature enhancement effects and significantly improve the richness of information in the image.

Ablation experiments were designed to verify the contribution of each key step: depth-based color correction, weight-map fusion, the improved bilateral filter, and the exponential and Sigmoid functions. The information entropy obtained with different combinations of steps is shown in Table 2.

Table 2. Ablation Experiment.

Method steps | Information entropy
Color correction + weight-map fusion + improved bilateral filter + exponential and Sigmoid functions | 0.9
Color correction + weight-map fusion + improved bilateral filter | 0.7
Color correction + weight-map fusion | 0.6
Color correction | 0.3
Color correction + improved bilateral filter | 0.5

According to Table 2, removing any key step leads to the loss of image edge feature information and a corresponding decrease in information entropy. The depth-based color correction, weight-map fusion, improved bilateral filter, and exponential and Sigmoid functions are therefore all necessary to the final result.

4. Conclusion

Considering the clarity and visual-effect defects of traditional product appearance design images, a method for enhancing the visual edge features of product appearance design images based on an improved Retinex algorithm is proposed. The following conclusions are obtained:

  • (1)

By using a color correction method based on depth-of-field estimation, the processed saturation, brightness, and hue are fused and converted to RGB, reducing the redundancy of illumination and reflection images carrying different scale information while reducing the depth differences between scales. Clarity is significantly improved, and the Patch-based Contrast Quality Index (PCQI), Image Quality Evaluation (IQE), and Image Quality Measurement (IQM) all increase markedly, giving better overall clarification performance.

  • (2)

By processing the reflection and illumination images with the exponential and Sigmoid functions respectively, and applying adaptive nonlinear correction to the saturation component, overall quality is significantly improved while the loss of edge feature information is reduced. More detail information is effectively preserved in the image, the overall enhancement effect is more stable, and the increase in information entropy is also significant.

Although the proposed method has achieved significant research results, there are still a series of shortcomings. The following provides future work prospects for the proposed method:

  • (1)

    In the application process of Retinex algorithm, it is of great research value to achieve a balance between filtering effect, refinement of enhancement processing, and computation time, while ensuring that all indicators are relatively optimal. In the future, further research will be conducted in this area.

  • (2)

The human visual system is currently the best object-imaging processing system in the field of image processing, and the Retinex algorithm used in the proposed method imitates only the part of its features that can currently be applied and integrated in practice. How to simulate the human visual system more accurately and meet the needs of digital image processing in various scenarios is the primary focus of future research.

Data Availability

All relevant data are within the manuscript.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Kang Y, Gupta S. Retinex algorithm and mathematical methods based texture detail enhancement method for panoramic images. Mathematical Problems in Engineering: Theory, Methods and Applications. 2022;2022(Pt.42):1.1-1.8. doi: 10.1155/2022/6490393 [DOI] [Google Scholar]
  • 2.Cai C, Qiang F, Fu-Cheng B, Xian-Song G, You-Fei H, Yong Z, et al. Joint polarization detection and degradation mechanisms for underwater image enhancement. Appl Opt. 2023;62(24):6389–400. doi: 10.1364/AO.496014 [DOI] [PubMed] [Google Scholar]
  • 3.Zhang J, Zhang Y. Research on algorithm for unordered enhancement of edge details of machine vision images. Computer Simulation. 2023;40(8):211–4. [Google Scholar]
  • 4.Peng Y, Yan Y, Chen G, Feng B, Gao X. An underwater attenuation image enhancement method with adaptive color compensation and detail optimization. J Supercomputing. 2023;79(2):1544–70. [Google Scholar]
  • 5.Ren J, Liu C. An adaptive superpixel-based polarimetric feature enhancement method for polarimetric SAR image classification with limited labeled data. Application of Electronic Technique. 2022;48(10):144–9. [Google Scholar]
  • 6.Liu Y, Dong Z, Zhu P, Liu S. Unsupervised underwater image enhancement based on feature disentanglement. Journal of Electronics & Information Technology. 2022;44(10):3389–98. [Google Scholar]
  • 7.Pang Z, Liu X, Liu G, Gong L, Zhou H, Luo H. Parallel multifeature extracting network for infrared image enhancement. Infrared and Laser Engineering. 2022;51(08):297–305. [Google Scholar]
  • 8.Zhao L, Cao C, Zhang J. Multiple description coded image enhancement method with joint learning of side-decoding and central-decoding features. Appl Res Comput. 2022;39(09):2873–80. [Google Scholar]
  • 9.Abdelsattar M, AbdelMoety A, Emad-Eldeen A. Machine learning-based prediction of illuminance and ultraviolet irradiance in photovoltaic systems. Int J Holistic Res. 2024;1–14. doi: 10.21608/ijhr.2024.308523.1025 [DOI] [Google Scholar]
  • 10.Abdelsattar M, AbdelMoety A, Ismeil MA. Automated defect detection in solar cell images using deep learning algorithms. IEEE Access. 2025;13(3):2169–3536. doi: 10.1109/ACCESS.2024.3525183 [DOI] [Google Scholar]
  • 11.Abdelsattar M, Rasslan AAMA, Emad-Eldeen A. Detecting dusty and clean photovoltaic surfaces using MobileNet variants for image classification. SVU-International J Engineering Sci Appls. 2025;6(1):9–18. doi: 10.21608/svusrc.2024.308832.1232 [DOI] [Google Scholar]
  • 12.Hu Z, Wang D, Shen Y. Reactive power compensation method in modern power system facing energy conversion. IET Generation, Transmission & Distribution. 2022;16(8):1582–91. doi: 10.1049/gtd2.12374 [DOI] [Google Scholar]
  • 13.Liu Y, Shang H, Zhu Q, Wang F. A color restoration technique in low-light image enhancement processing name. Computer Informatization and Mechanical System. 2024;7(2):17–9. [Google Scholar]
  • 14.Zhang R, Chen C, Peng J. Multi-scale graph feature extraction network for panoramic image saliency detection. Vis Comput. 2023;40(2):953–70. doi: 10.1007/s00371-023-02825-x [DOI] [Google Scholar]
  • 15.Su J, Lu S, Li L. A dual quantum image feature extraction method: PSQIFE. IET Image Processing. 2022;16(13):3529–43. doi: 10.1049/ipr2.12561 [DOI] [Google Scholar]
  • 16.Liang W. Feature extraction method of art visual communication image based on 5G intelligent sensor network. Journal of Sensors. 2022;2022(Pt.13):8545345-1-8545345–11. [Google Scholar]
  • 17.Lu R, Gao F, Yang X, Fan J, Li D. A novel infrared and visible image fusion method based on multi-level saliency integration. Vis Comput. 2023;39(6):2321–35. doi: 10.1007/s00371-022-02438-w [DOI] [Google Scholar]
  • 18.Ma N, Cao Y, Zhang Z, Fan Y, Ding M. A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid. Aeronautical Journal. 2024;128(Mar. TN.1321):489–503. doi: 10.1017/aer.2023.51 [DOI] [Google Scholar]
  • 19.Li W, Wozniak M. A hole filling and optimization algorithm of remote sensing image based on bilateral filtering. Mobile Netw Appl. 2022;27(2):743–51. doi: 10.1007/s11036-021-01904-4 [DOI] [Google Scholar]
  • 20.Zhang C, Li X. De-noising method of aquatic product image with foreign matters based on improved wavelet transform and bilateral filter. Journal of Food Process Engineering. 2023;46(9):e14412.1-e14412.10. [Google Scholar]
  • 21.Chen F, Zhu Y. A variant of two-step modulus-based matrix splitting iteration method for Retinex problem. Computational and Applied Mathematics. 2022;41(6):244-1-244–13. doi: 10.1007/s40314-022-01952-w [DOI] [Google Scholar]
  • 22.Li N, Gong X, Jia P. Segmentation method for low-quality images of coal and gangue based on Retinex and local texture features with multifractal. J Electronic Imaging. 2022;31(6):61820.1-61820.18. [Google Scholar]
  • 23.Tang Z, Wang J, Yuan B, Li H, Zhang J, Wang H. Markov-GAN: Markov image enhancement method for malicious encrypted traffic classification. IET Information Security. 2022;16(6):442–58. doi: 10.1049/ise2.12071 [DOI] [Google Scholar]
  • 24.Xu S, Wang J, He N, Hu X, Sun F. Underwater image enhancement method based on a cross attention mechanism. Multimedia Systems. 2024;30(1):26.1-26.13. doi: 10.21203/rs.3.rs-3285291/v1 [DOI] [Google Scholar]
  • 25.Jia F, Wang T, Zeng T. Low-light image enhancement via dual reflectance estimation. J Scientific Comput. 2024;98(2):36.1-36.32. doi: 10.1007/s10915-023-02431-y [DOI] [Google Scholar]

Decision Letter 0

Yongjie Li

15 May 2025

PONE-D-25-02402

Visual Edge Feature Enhancement of Product Appearance Design Images Based on Improved Retinex Algorithm

PLOS ONE

Dear Dr. Chen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 28 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Yongjie Li

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1.Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

Additional Editor Comments:

Comments from the Editorial Office: One or more of the reviewers has recommended that you cite specific previously published works. Members of the editorial team have determined that the works referenced are not directly related to the submitted manuscript. As such, please note that it is not necessary or expected to cite the works requested by the reviewer.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. Abstract: The abstract is overly technical and lacks clarity on the novelty of the method. It should include quantifiable improvements and avoid subjective terms like "better and more beautiful."

2. Introduction: The research gap is unclear. A critical comparison with existing methods is missing. Additionally, cite these papers to highlight the broader impact of image processing beyond product design:

1- Machine Learning-Based Prediction of Illuminance and Ultraviolet Irradiance in Photovoltaic Systems., 2- Automated Defect Detection in Solar Cell Images Using Deep Learning Algorithms., 3- Detecting Dusty and Clean Photovoltaic Surfaces Using MobileNet Variants for Image Classification.

3. Related Work: The literature review is descriptive rather than analytical. It lacks a quantitative comparison of previous approaches and their limitations.

4. Methodology (Section 2.1):

The depth of field estimation and Gray World algorithm are not well justified.

There is no clear explanation or diagram for the algorithm’s workflow.

Mathematical derivations lack validation or comparative analysis.

5. Methodology (Section 2.2):

The Retinex modification is unclear—how is it improved over standard methods?

The choice of exponential and Sigmoid functions for enhancement is not justified.

The bilateral filtering modification lacks empirical validation.

6. Experimental Setup:

No computational efficiency analysis or comparison with existing benchmarks.

The choice of 200 iterations is arbitrary and should be explained.

Reviewer #2: 1. Baseline choices such as “physical modelling method”, “Tetrolet transform” are outdated. I suggest the authors include state-of-the-art learning-based methods a few of which include RetinexNet, Zero-DCE, SCI, MIRNet-v2, U-Retinex.

2. There is no ablation study verifying the contribution of each component (depth-based color correction, weight-map fusion, improved bilateral filter, exponential vs. sigmoid functions, etc.). Including such results may improve the results.

3. The manuscript lacks comparison with modern deep-learning approaches (e.g., GAN-based or CNN-based enhancement) that currently dominate image enhancement literature. Without such benchmarks, it is unclear if the proposed algorithm advances the state of the art beyond classical methods

4. The origin, diversity, and availability of the 500/100 image dataset are not described. The authors should include such descriptions in the manuscript.

5. Claims such as “significantly improve … bring better and more beautiful product appearance design results” in the Abstract are qualitative and unsupported.

6. Key hyper-parameters such as window size , decomposition levels etc., are not reported; neither are hardware, running time, nor implementation details

7. Please also correct grammatical errors in the paper as there were a few.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean? ). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy .

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.

PLoS One. 2025 Sep 18;20(9):e0332195. doi: 10.1371/journal.pone.0332195.r002

Author response to Decision Letter 1


9 Jun 2025

Modification instructions:

Reviewer Modification instructions for #1

1. Abstract: The abstract is overly technical and lacks clarity on the novelty of the method. It should include quantifiable improvements and avoid subjective terms like "better and more beautiful."

Re: The abstract has been revised to highlight the innovative improvements, such as improving the Gray World algorithm and designing an edge attenuation compensation method to solve the problem of edge color attenuation under noise interference, obtaining clearer product appearance design images, and improving the weights of traditional bilateral filtering to reduce the depth-of-field differences between information at different scales. Quantitative data have also been added to reflect the advantages of the proposed method: the enhanced image PCQI reaches 1.033, IQE reaches 0.610, IQM reaches 1.830, and the information entropy is higher than 0.7.

2. Introduction: The research gap is unclear. A critical comparison with existing methods is missing. Additionally, cite these papers to highlight the broader impact of image processing beyond product design:

1- Machine Learning-Based Prediction of Illuminance and Ultraviolet Irradiance in Photovoltaic Systems., 2- Automated Defect Detection in Solar Cell Images Using Deep Learning Algorithms., 3- Detecting Dusty and Clean Photovoltaic Surfaces Using MobileNet Variants for Image Classification.

Re: The three studies you listed have been compared with the method proposed in this paper. These existing methods use processed images to predict the illuminance and ultraviolet irradiance of photovoltaic systems, detect defects in solar cells, and detect dust on photovoltaic surfaces. However, all three focus on the application of images and neglect the image enhancement process itself, which is where the method in this paper differs from them.

3. Related Work: The literature review is descriptive rather than analytical. It lacks a quantitative comparison of previous approaches and their limitations.

Re: The analytical content of the literature review has been supplemented to examine the limitations of existing methods. The noise and interference factors present in polarimetric SAR images can attenuate edge color features, destabilizing the color features and ultimately weakening the feature enhancement method proposed in reference [5]. When processing underwater images, the model proposed in reference [6] is prone to getting stuck in local optima because of complex factors such as lighting, color distortion, and suspended solids, resulting in poor image quality. When the method in reference [7] is applied to images with uneven lighting or loss of details, it is easily disturbed by redundant reflection images, and its edge enhancement effect is insufficient. The method in reference [8] only combines the low-resolution features of two edges with the low-resolution characteristics of the middle path; it cannot optimize the grayscale features across multiple spatial domains or effectively preserve image edge details, so its enhancement effect needs improvement.

In response to the shortcomings of the existing methods above, this paper proposes a solution. The difference between our method and existing methods lies in the improvement of the Gray World algorithm, the design of an edge attenuation compensation method to solve the problem of edge color attenuation under noise interference, and the acquisition of clearer product appearance design images. With the Retinex algorithm, the image is decomposed into illumination and reflection components, which are enhanced separately and then fused. Multi-scale enhancement of the reflection and illumination images reduces the different types of interference at each scale and addresses the enhancement of images with uneven illumination, high noise, low illumination, and loss of details. The grayscale features of each spatial domain are optimized simultaneously to achieve edge feature enhancement, and, building on traditional bilateral filtering, the ability to preserve image edge detail information is strengthened.

4. Methodology (Section 2.1):

The depth of field estimation and Gray World algorithm are not well justified.

Re: This paper applied the depth of field estimation function and the Gray World algorithm to clarify the image, and the experimental results in Figure 3 demonstrate their effectiveness. As shown in Figure 3, the background of the original image sample has complex textures with similar colors. During image transmission, the red light attenuates more severely, so the product appearance design image presents a blue color cast, with considerable noise as well as blurred edges. The method proposed in this paper corrects the red and blue channel values using a depth estimation function and designs an edge attenuation compensation method based on the Gray World algorithm. By estimating the color of the light source, the color attenuation of the product appearance design image is compensated and corrected to remove the blue color cast and noise, making the texture edges of the image clearer.
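For concreteness, a minimal Python sketch of a gray-world correction combined with a depth-weighted red-channel compensation is given below. The blue-minus-red depth proxy, the 0.3 compensation factor, and the function name are illustrative assumptions for this sketch, not the authors' exact formulation.

    import numpy as np

    def gray_world_depth_correct(img):
        """Sketch: gray-world white balance plus depth-weighted
        compensation of the attenuated red channel.
        img: float32 RGB array with values in [0, 1]."""
        # Gray-world light-source estimate: per-channel means
        means = img.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / (means + 1e-6)
        out = np.clip(img * gains, 0.0, 1.0)
        # Crude depth proxy: blue dominance over red marks "far" pixels
        depth = np.clip(img[..., 2] - img[..., 0], 0.0, 1.0)
        # Compensate red attenuation proportionally to estimated depth
        out[..., 0] = np.clip(out[..., 0] + 0.3 * depth * out[..., 2], 0.0, 1.0)
        return out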

There is no clear explanation or diagram for the algorithm’s workflow.

Re: The workflow diagram of the algorithm has been supplemented clearly. Please see Figure 1 for details.

Mathematical derivations lack validation or comparative analysis.

Re: The advantages of the mathematical derivation combining depth estimation and the Gray World algorithm over other methods have been demonstrated through Figure 3, which compares the clarity of product appearance design images processed by different methods. This paper uses a depth estimation function to correct the red and blue channel values and designs an edge attenuation compensation method based on the Gray World algorithm. By estimating the color of the light source, the color attenuation of the product appearance design image is compensated and corrected to remove the blue color cast and noise, making the image texture edges clearer. Other methods neglect the depth differences between color channels and do not compensate for color attenuation, resulting in relatively poor image clarity. The image clarity results therefore validate the mathematical derivation combining depth estimation and the Gray World algorithm in this paper.

5. Methodology (Section 2.2):

The Retinex modification is unclear—how is it improved over standard methods?

Re: The improvement of the Retinex algorithm over conventional standard methods is that it can selectively apply different enhancement functions according to the processing requirements. The original image is decomposed into illumination and reflection images, which are enhanced separately; this reduces the redundant interference between illumination and reflection information at different scales. After the enhanced illumination and reflection images are fused, the enhancement problems of uneven illumination, high noise, low illumination, and loss of details can be solved.

In addition to the Retinex algorithm, this method improves the filtering weights on the basis of traditional bilateral filtering, obtaining information at different scales and enhancing the ability to preserve image edge details.

The choice of exponential and Sigmoid functions for enhancement is not justified.

Re: The reflection image decomposed by Retinex may have an insufficient dynamic range due to noise, so the dynamic range of the reflection component must be expanded through a nonlinear transformation while sharpening details and improving local contrast to restore more realistic surface characteristics. The exponential function stretches the reflection component non-uniformly, which maintains a natural transition in the enhanced result; therefore, an exponential function is used for reflection enhancement. The third-layer illumination image decomposed by Retinex contains global illumination components with a large dynamic range, but visually sensitive areas are often concentrated in the medium brightness range. The Sigmoid function is a smooth S-shaped curve that nonlinearly compresses a wide range of input brightness values into the [0,1] interval while enhancing the contrast of the middle brightness region, flexibly controlling the intensity and range of enhancement and adapting to different lighting conditions. Therefore, the Sigmoid function is applied to enhance the third-layer illumination image after Retinex decomposition.
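The two enhancement curves can be sketched in a few lines of Python; the gain a, slope k, and midpoint m below are illustrative values for this sketch, not the parameters used in the paper.

    import numpy as np

    def enhance_reflection(R, a=2.0):
        # Exponential stretch: non-uniform expansion of the reflection
        # component, normalized so [0, 1] maps back onto [0, 1]
        R = np.clip(R, 0.0, 1.0)
        return (np.exp(a * R) - 1.0) / (np.exp(a) - 1.0)

    def enhance_illumination(L, k=8.0, m=0.5):
        # Sigmoid compression: boosts contrast around mid-brightness m
        # while squeezing a wide input range into [0, 1]
        s = 1.0 / (1.0 + np.exp(-k * (np.clip(L, 0.0, 1.0) - m)))
        s0 = 1.0 / (1.0 + np.exp(k * m))         # curve value at L = 0
        s1 = 1.0 / (1.0 + np.exp(-k * (1 - m)))  # curve value at L = 1
        return (s - s0) / (s1 - s0)              # rescale to [0, 1]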

The bilateral filtering modification lacks empirical validation.

Re: Additional empirical verification has been added. Based on existing experience, bilateral filtering window sizes ∈ {3, 5, 7, 9, 11, 13, 15} were tested. By calculating the feature retention rate under each setting, it was found that the retention rate changed once the window size exceeded 5, effectively suppressing the detail blurring caused by large-window filtering. Considering the computational complexity, a bilateral filtering window size of 7 was chosen.
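A sweep of this kind can be reproduced with a short OpenCV script; the file name, the sigma values, and the Canny-based edge-retention proxy below are illustrative assumptions rather than the authors' exact protocol.

    import cv2
    import numpy as np

    img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
    edges_ref = cv2.Canny(img, 50, 150) > 0  # edges of the unfiltered image

    for d in (3, 5, 7, 9, 11, 13, 15):
        out = cv2.bilateralFilter(img, d, sigmaColor=25, sigmaSpace=25)
        edges = cv2.Canny(out, 50, 150) > 0
        # Fraction of reference edge pixels surviving the filter
        retention = (edges & edges_ref).sum() / max(edges_ref.sum(), 1)
        print(f"d={d:2d}  edge retention = {retention:.3f}")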

6. Experimental Setup:

No computational efficiency analysis or comparison with existing benchmarks.

Re: The computational efficiency of each method has been verified through the clarification-time evaluation indicator: the lower the clarification time, the higher the computational efficiency of the method. The comparison results are shown in Table 1.

The choice of 200 iterations is arbitrary and should be explained.

Re: The selection of 200 iterations is not arbitrary. Based on the number of images and their resolution characteristics, the number of iterations was set to 200 to fully train the algorithm.

Modification instructions for Reviewer #2

1. Baseline choices such as “physical modelling method” and “Tetrolet transform” are outdated. I suggest the authors include state-of-the-art learning-based methods, a few of which include RetinexNet, Zero-DCE, SCI, MIRNet-v2, and U-Retinex.

Re: This paper adopts the Retinex algorithm, which is in line with state-of-the-art learning-based methods. In addition, this paper innovatively improves the Gray World algorithm and the traditional bilateral filtering method. To demonstrate the advancement of this method, the results in Figure 5 show that its image enhancement effect is better than that of the latest classical methods.

2. There is no ablation study verifying the contribution of each component (depth-based color correction, weight-map fusion, improved bilateral filter, exponential vs. sigmoid functions, etc.). Including such results would strengthen the manuscript.

Re: Ablation experiments were designed to verify the contributions of the key steps in the paper, namely depth-based color correction, weight-map fusion, the improved bilateral filter, the exponential function, and the Sigmoid function. The information entropy comparison across these settings is shown in Table 2. According to Table 2, removing any key step leads to the loss of image edge feature information and a corresponding decrease in information entropy. Therefore, depth-based color correction, weight-map fusion, the improved bilateral filter, the exponential function, and the Sigmoid function are all necessary for improving the results.

3. The manuscript lacks comparison with modern deep-learning approaches (e.g., GAN-based or CNN-based enhancement) that currently dominate the image enhancement literature. Without such benchmarks, it is unclear whether the proposed algorithm advances the state of the art beyond classical methods.

Re: To examine whether the method proposed in this paper advances the state of the art beyond classical methods, we added a modern deep learning method that currently dominates image enhancement, namely a GAN-based enhancement method, as a comparison method. The specific experimental results are shown in Figure 5, which shows that the image enhancement effect of the proposed method is better than that of this latest method.

4. The origin, diversity, and availability of the 500/100 image dataset are not described. The authors should include such descriptions in the manuscript.

Re: The 600 images in the dataset were acquired with the industrial-grade image acquisition device for product appearance design shown in Figure 2, which uses a 25-megapixel CMOS sensor. The 600 images cover bottled, bagged, barreled, boxed, and other product types, with materials including paper and plastic, giving the data differentiated and diverse characteristics. Each product includes front, oblique, side, and other viewing angles, covering the complete appearance features of the product's design image. The original images contain interference factors such as reflective spots, occlusion, and texture blur, which demonstrates the robustness of our method and its suitability for application.

5. Claims such as “significantly improve … bring better and more beautiful product appearance design results” in the Abstract are qualitative and unsupported.

Re: Specific qualitative descriptions and quantitative data support have been added to the abstract: The experimental results show that the proposed method enhances the image with a PCQI of 1.033, an IQE of 0.610, an IQM of 1.830, and an information entropy higher than 0.7. The above data proves that this method has a high richness of edge feature information after image enhancement, significantly improving the visual edge feature enhancement effect of product appearance design images.

6. Key hyper-parameters, such as window size and decomposition levels, are not reported; neither are hardware, running time, nor implementation details.

Re: Experimental hardware settings: The experimental hardware platform is a desktop computer with an Intel(R) Core(TM) i7-8700K CPU @ 3.70 GHz processor, 16.00 GB RAM, and a 64-bit operating system.

Algorithm implementation details: Improved Gray World algorithm parameters: a Gaussian attenuation function with standard deviation σ = 1.5 and compensation radius r = 3.

Decomposition level setting: Retinex multi-scale decomposition with 3 levels.

Window sizes (Gaussian filtering parameters at each level): Layer 1 (coarse scale): 31 × 31 window (captures global illumination). Layer 2 (medium scale): 15 × 15 window (medium structures). Layer 3 (fine scale): 7 × 7 window (retains high-frequency details).

Running time (single 1080p image): CPU mode: approximately 0.8 s; GPU acceleration: approximately 0.3 s (accelerating the bilateral filtering and Retinex decomposition). The multi-scale decomposition uses multi-threading, with each level's filtering computed independently.
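Given the parameters above, the three-level decomposition can be sketched as follows. The Gaussian-blur illumination estimate and the element-wise division are a common Retinex formulation used here for illustration; they may differ in detail from the authors' implementation.

    import cv2
    import numpy as np

    def multiscale_decompose(v):
        """Sketch: decompose the HSV V channel into three
        (illumination, reflection) pairs using the reported
        window sizes 31, 15, and 7."""
        v = np.clip(v.astype(np.float32), 1e-3, 1.0)
        layers = []
        for k in (31, 15, 7):
            L = cv2.GaussianBlur(v, (k, k), 0)   # illumination estimate
            R = v / np.clip(L, 1e-3, None)       # reflection = I / L
            layers.append((L, np.clip(R, 0.0, 1.0)))
        return layers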

7. Please also correct grammatical errors in the paper as there were a few.

Re: The entire text has been checked and grammar errors have been corrected.

Attachment

Submitted filename: Response.docx

pone.0332195.s002.docx (22.7KB, docx)

Decision Letter 1

Yongjie Li

19 Aug 2025

PONE-D-25-02402R1: Visual Edge Feature Enhancement of Product Appearance Design Images Based on Improved Retinex Algorithm. PLOS ONE

Dear Dr. Chen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 03 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Yongjie Li

Academic Editor

PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. 

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

As suggested by one reviewer, please include pseudo-code summarizing the core steps of the algorithm in the Methods section.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: (No Response)

Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

Reviewer #4: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

Reviewer #4: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

Reviewer #4: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: I would like to thank the authors for submitting this outstanding paper, which presents an innovative method for enhancing image clarity and improving edge features in product appearance design using an improved Retinex algorithm. The proposed method appears effective and yields impressive results based on the metrics evaluated, such as PCQI, IQE, and IQM.

Regarding research ethics, I did not notice any concerns related to dual publication or violations of research ethics in this work. Upon reviewing the paper, it seems that all data collection and analysis were conducted transparently and in adherence to academic standards. There is no indication of this work being published elsewhere, and I believe the research makes a valuable contribution to the field of image enhancement and analysis.

I recommend the acceptance of this paper, with the continued commitment to transparency in future related publications.

Reviewer #4: I appreciate the substantial improvements you have made in revising the manuscript. For reproducibility, please include pseudo-code summarizing the core steps of your improved algorithm in the Methods section.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Reviewer #4: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Sep 18;20(9):e0332195. doi: 10.1371/journal.pone.0332195.r004

Author response to Decision Letter 2


25 Aug 2025

Amendment notes

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

As suggested by one reviewer, please include pseudo-code summarizing the core steps of the algorithm in the Methods section.

Re: The core steps of the proposed visual edge feature enhancement method for product appearance design images have been summarized as pseudo-code, as shown in Figure 2.
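Since the Figure 2 pseudo-code itself is not reproduced in this history, a high-level skeleton consistent with the steps described in the responses above is sketched here. Every function name is illustrative, and the sketch is pseudo-code rather than runnable code.

    # Pseudo-code skeleton (illustrative names, not the authors' Figure 2)
    def enhance_product_image(img_rgb):
        img = depth_based_color_correction(img_rgb)   # remove blue color cast
        img = gray_world_edge_compensation(img)       # compensate edge attenuation
        h, s, v = rgb_to_hsv(img)
        layers = multiscale_retinex_decompose(v)      # 3 levels: 31/15/7 windows
        v_new = fuse([exp_enhance(R) * sigmoid_enhance(L) for L, R in layers])
        s_new = adaptive_nonlinear_correction(s)      # saturation correction
        rgb = hsv_to_rgb(h, s_new, v_new)
        return improved_bilateral_filter(rgb, d=7)    # edge-preserving smoothing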


Reviewers' comments:

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: (No Response)

Reviewer #4: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

Reviewer #4: (No Response)

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

Reviewer #4: I Don't Know

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

Reviewer #4: No

Re: The statistics in Table 1 are the mean values of the PCQI, IQE, and IQM data for the different methods. Based on your feedback, we have added the median and variance of the measured data points, as shown in Table 1.

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

Reviewer #4: Yes

Re: Thank you for your evaluation. I have thoroughly reviewed the paper and addressed each point of your feedback above.

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: I would like to thank the authors for submitting this outstanding paper, which presents an innovative method for enhancing image clarity and improving edge features in product appearance design using an improved Retinex algorithm. The proposed method appears effective and yields impressive results based on the metrics evaluated, such as PCQI, IQE, and IQM.

Regarding research ethics, I did not notice any concerns related to dual publication or violations of research ethics in this work. Upon reviewing the paper, it seems that all data collection and analysis were conducted transparently and in adherence to academic standards. There is no indication of this work being published elsewhere, and I believe the research makes a valuable contribution to the field of image enhancement and analysis.

I recommend the acceptance of this paper, with the continued commitment to transparency in future related publications.

Re: Thank you for your review and affirmation.

Reviewer #4: I appreciate the substantial improvements you have made in revising the manuscript. For reproducibility, please include pseudo-code summarizing the core steps of your improved algorithm in the Methods section.

Re: The core steps of the proposed visual edge feature enhancement method for product appearance design images have been summarized as pseudo-code, as shown in Figure 2.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Reviewer #4: No

Attachment

Submitted filename: Response_auresp_2.docx

pone.0332195.s003.docx (47.7KB, docx)

Decision Letter 2

Yongjie Li

28 Aug 2025

Visual Edge Feature Enhancement of Product Appearance Design Images Based on Improved Retinex Algorithm

PONE-D-25-02402R2

Dear Dr. Chen,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Yongjie Li

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Yongjie Li

PONE-D-25-02402R2

PLOS ONE

Dear Dr. Chen,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Yongjie Li

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response.docx

    pone.0332195.s002.docx (22.7KB, docx)
    Attachment

    Submitted filename: Response_auresp_2.docx

    pone.0332195.s003.docx (47.7KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript.

