Abstract
Purpose
This study aims to estimate regional choroidal thickness from color fundus images using convolutional neural networks with different network structures and task-learning models.
Method
1276 color fundus photographs and their corresponding choroidal thickness values from healthy subjects were obtained with a Topcon DRI Triton optical coherence tomography machine. Ten commonly used convolutional neural networks were first compared to identify the most accurate model, which was then selected for further training. The selected model was combined with single-, multi-, and auxiliary-task training to predict the average and sub-regional choroidal thickness in both ETDRS (Early Treatment Diabetic Retinopathy Study) grids and 100-grid subregions. Mean absolute error and the coefficient of determination (R2) were used to evaluate model performance.
Results
The Efficientnet-b0 network outperformed the other networks, with the lowest mean absolute error (25.61 μm) and the highest R2 (0.7817) for average choroidal thickness. Incorporating diopter spherical, anterior chamber depth, and lens thickness as auxiliary tasks improved prediction accuracy (p-value = , , respectively). For ETDRS regional choroidal thickness estimation, the multi-task model achieved better results than the single-task model (lowest mean absolute error = 31.10 μm vs. 33.20 μm). Multi-task training could also simultaneously predict the choroidal thickness of 100 grids, with a minimum mean absolute error of 33.86 μm.
Conclusions
Efficientnet-b0, combined with multi-task and auxiliary-task models, achieves high accuracy in estimating average and regional macular choroidal thickness directly from color fundus photographs.
Keywords: choroid, Optical imaging, Deep learning, Fundus oculi
1. Introduction
The choroid is important for maintaining the integrity of the photoreceptors, working as the vasculature that supplies the outer retina and removes waste products [1]. Choroidal thickness, the distance between Bruch's membrane and the choroidal-scleral interface [2,3], has been identified as a biomarker for various eye diseases. For instance, both high myopia [4,5] and age-related macular degeneration [6] are associated with thinner choroids, while thicker choroids may be a risk factor for polypoidal choroidal vasculopathy [1,6] and central serous chorioretinopathy [7,8]. Therefore, assessment of choroidal thickness is important for the diagnosis and monitoring of these diseases.
Choroidal thickness can be measured using optical coherence tomography (OCT) with techniques that increase penetration, such as enhanced depth imaging OCT (EDI-OCT) [9] and swept-source OCT (SS-OCT) [10]. In practice, however, the use of EDI-OCT or SS-OCT can be limited by their high cost and lack of availability in remote or impoverished regions. Furthermore, manual measurement of choroidal thickness on OCT images is a laborious and time-intensive task for ophthalmologists, while automatic measurement on OCT is prone to segmentation errors. We previously employed a convolutional neural network (CNN) to directly estimate choroidal thickness without segmentation and achieved comparable precision [11].
Color fundus photography is more cost-effective and widely available, and it can provide information on choroidal thickness. It is well known that fundus tessellation is associated with a thinner choroid. Previous studies used subjective grading [12], image analysis [13], and deep learning [14] to assess the severity of fundus tessellation. However, color fundus photographs may contain information beyond tessellation that is associated with choroidal thickness. Therefore, deep learning methods have been used to estimate choroidal thickness directly from color fundus photographs [15,16]. However, these studies concentrated solely on the subfoveal and mean choroidal thickness, disregarding the considerable variability in choroidal thickness distribution within the macular region [[17], [18], [19]]. Moreover, prior studies employed only a single neural network, and the estimation accuracy was inadequate.
In this study, we evaluated ten CNNs with varied structures and implemented different task-learning models to enhance CNN performance in accurately estimating regional choroidal thickness from color fundus photographs.
2. Methods
2.1. Data collection
This is a retrospective study incorporating data from 2016 to 2020 obtained from the Joint Shantou International Eye Center (JSIEC) of Shantou University and the Chinese University of Hong Kong. It was approved by the Institutional Review Board (IRB) of JSIEC, which waived the need for informed consent due to the retrospective design.
Medical outpatient records with OCT examinations were screened to select healthy individuals without systemic diseases, chorioretinal diseases, or a history of ocular surgery or treatment. The fundus photographs and optical coherence tomography were obtained using an SS-OCT device (DRI-OCT Triton; Topcon, Japan). The OCT imaging protocol used a mm macular scan mode with a resolution of . The included images were required to be devoid of any pathology in the macular region and to have an OCT image quality score exceeding 40. For every included OCT image, a doctor in clinical training corrected segmentation errors, specifically on the inner and outer borders of the choroid, i.e., the outermost hyperreflective line of the RPE (retinal pigment epithelium) layer and the choroid-sclera junction line. Then, the average choroidal thickness of the mm region, of the nine subregions defined by the Early Treatment Diabetic Retinopathy Study (ETDRS) (Fig. 1(b)) [20], and of the one hundred mm sub-regions (Fig. 1(c)) were computed from the values generated automatically by the Triton inbuilt algorithm and set as the ground truth. The corresponding fundus images (Fig. 1(a)) were also collected to train the CNN models.
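For illustration, the 100-grid ground truth can be thought of as block averages over the macular thickness map. The sketch below makes this concrete under the assumption that a per-pixel thickness map is available as a square array; in the study, the sub-region values were produced by the Triton inbuilt algorithm, not computed this way:

```python
import numpy as np

def grid_thickness_means(thickness_map, n=10):
    """Average a square per-pixel choroidal thickness map over an n x n grid.

    thickness_map: 2D array (H, W) of thickness values in micrometers.
    Returns an (n, n) array of sub-region mean thicknesses.
    """
    h, w = thickness_map.shape
    means = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Block of pixels belonging to sub-region (i, j).
            block = thickness_map[i * h // n:(i + 1) * h // n,
                                  j * w // n:(j + 1) * w // n]
            means[i, j] = block.mean()
    return means
```

The same function with `n=3`-style masks would not reproduce the ETDRS grid, whose rings and quadrants are circular rather than rectangular; it only illustrates the 100-grid case.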
Fig. 1.
(a) Original image of color fundus photograph; (b) Color fundus photograph with nine subregions in ETDRS grids; (c) Color fundus photograph with one hundred mm subregions. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Ophthalmic parameters of all enrolled patients were also collected, including uncorrected visual acuity, intraocular pressure measured by non-contact tonometer, diopter spherical, diopter cylindrical, the axis of astigmatism, and spherical equivalent acquired from a fully automatic optometer (KP8800, TOPCON, Japan), axial length, anterior chamber depth, lens thickness and central corneal thickness examined by ocular biometry (OA 2000; Tomey Corp, Japan).
2.2. Convolutional neural networks designs
2.2.1. Experiment settings
The images were randomly divided into a training set, a validation set, and a test set in a proportion of 8:1:1. Data augmentation techniques, including center crop and random vertical and horizontal flips, were applied during training. L1 loss was employed for training the CNN models. We used the Adam optimizer [21] for mini-batch optimization with an initial learning rate of 0.001, and the learning rate was updated by cosine annealing with a period of 5 and a period multiplier of 2 [[22], [23]]. The CNN models were trained for 150 epochs with a batch size of 16. All experiments were conducted with the PyTorch library on a TITAN Xp GPU.
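The learning-rate schedule described above (cosine annealing with warm restarts, period 5, multiplier 2, initial rate 0.001) can be sketched in plain Python; in PyTorch it corresponds to `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=5, T_mult=2)`. The minimum learning rate of 0 is an assumption, as the paper does not report it:

```python
import math

def cosine_annealing_warm_restarts(epoch, lr_max=0.001, lr_min=0.0,
                                   T_0=5, T_mult=2):
    """Learning rate at a given epoch under cosine annealing with warm
    restarts: cycles of length T_0, T_0*T_mult, T_0*T_mult**2, ...
    lr_min=0.0 is an assumed floor, not a value reported in the paper."""
    T_i, t = T_0, epoch
    # Locate the current restart cycle and the position t within it.
    while t >= T_i:
        t -= T_i
        T_i *= T_mult
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T_i))
```

The rate starts at 0.001, decays along a cosine curve within each cycle, and resets to 0.001 at each restart (epochs 5, 15, 35, ...).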
2.2.2. Convolutional neural networks structures
To explore the impact of network structure on performance, we employed ten commonly used networks, namely Alexnet [24], Resnet-18 [25], Resnet-34, Resnet-50, Vgg-16 [26], GoogLeNet [27], Efficientnetb0 [28], Efficientnetb1, Efficientnetb2, and Efficientnetv2-small [29], to estimate the average choroidal thickness in the whole region ( mm) from the fundus images. The CNN with the best performance was selected as the backbone for further exploration of different task models and regional thickness estimation.
2.2.3. Different task models approaches
The single-task learning model of the chosen CNN was applied to estimate the average choroidal thickness and the thickness of each individual sub-region within the nine ETDRS grids separately. The multi-task learning model [30] involves simultaneous estimation of regional choroidal thickness in all subregions, i.e., the nine subregions within the ETDRS grids and the one hundred mm subregions within the 100 grids. Multi-task learning is an approach that utilizes the domain information present in the training signals of related tasks to facilitate inductive transfer [30]: the related tasks share a representation, thereby aiding each other and enhancing the model's generalization capability. Thus, the motivation for incorporating multi-task learning in this study is to investigate whether the potential relatedness among choroidal thicknesses in various regions can enhance the performance of the CNN.
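The hard parameter sharing behind the multi-task model can be sketched as follows. This is a minimal NumPy illustration, with a random linear map standing in for the shared Efficientnet-b0 backbone and one regression head per ETDRS subregion; all names and sizes are illustrative, not the study's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a shared CNN backbone: one linear map from flattened
# image features to a shared representation (the real model is a CNN).
W_shared = rng.normal(size=(128, 64))
# One linear regression head per ETDRS subregion (9 heads), all reading
# the same shared representation -- the essence of hard parameter sharing.
W_heads = rng.normal(size=(9, 64))

def predict(x):
    shared = np.maximum(x @ W_shared, 0.0)  # shared features with ReLU
    return shared @ W_heads.T               # (batch, 9) thickness estimates

def multitask_l1_loss(pred, target):
    # Sum of per-region mean L1 losses, matching the L1 training objective.
    return np.abs(pred - target).mean(axis=0).sum()

x = rng.normal(size=(4, 128))      # 4 dummy "images"
y = rng.normal(size=(4, 9))        # 4 dummy 9-region thickness targets
pred = predict(x)
loss = multitask_l1_loss(pred, y)
```

Because every head backpropagates through `W_shared`, gradients from all nine regions shape the same representation, which is the mechanism the paper credits for the multi-task gains.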
The auxiliary-task learning model was also employed to predict the average choroidal thickness across the entire mm region, which helps to identify how clinical index inputs affect the prediction accuracy of the CNN. In auxiliary-task learning, the primary task's performance takes precedence, while the other tasks serve as aids to enhance the CNN model's performance on the primary task [31]. In this study, we applied different clinical indexes as auxiliary references to train the CNN models alongside the primary task of predicting the average choroidal thickness in the entire region. In clinical practice, choroidal thickness correlates with certain clinical indexes; for instance, axial length [32] and spherical equivalent [33] are correlated with choroidal thickness. Clinical indexes that exhibit a positive or negative correlation with choroidal thickness may either facilitate or impede the performance of CNNs on the primary task. We therefore used twelve clinical indexes (uncorrected visual acuity, intraocular pressure, diopter spherical, diopter cylindrical, axis of astigmatism, spherical equivalent, axial length, anterior chamber depth, lens thickness, central corneal thickness, age, and gender) as auxiliary references, and a CNN model was trained to accomplish the primary task and each auxiliary task simultaneously.
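A common way to combine a primary regression task with an auxiliary one is a weighted sum of their losses. The sketch below shows this for a single sample with L1 losses; the 0.1 auxiliary weight is an assumed value for illustration, as the paper does not report its weighting:

```python
def auxiliary_task_loss(primary_pred, primary_true,
                        aux_pred, aux_true, aux_weight=0.1):
    """Combined objective for auxiliary-task learning on one sample:
    L1 loss on the primary task (average choroidal thickness) plus a
    down-weighted L1 loss on the auxiliary task (a clinical index).
    aux_weight=0.1 is an assumed value, not taken from the paper."""
    primary = abs(primary_pred - primary_true)
    auxiliary = abs(aux_pred - aux_true)
    return primary + aux_weight * auxiliary
```

The down-weighting keeps the primary task dominant, matching the stated precedence of the primary task over its auxiliaries.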
2.3. CNN performances evaluations
The performance of the CNNs in the various experiments was evaluated using the mean absolute error (MAE), the coefficient of determination (R2) [34], the Pearson correlation coefficient (PCC), and the p-value [35]. MAE is the average of the absolute errors |ŷi − yi|, where ŷi is the prediction and yi is the ground truth; a smaller MAE indicates better performance. R2 gives the proportion of variance in the dependent variable that can be explained by the independent variable; a value close to 1 indicates a reliable estimation. The PCC measures the strength of the relationship between two variables and ranges between −1 and 1: a PCC below 0 indicates a negative correlation between prediction and ground truth, a PCC above 0 a positive correlation, and an absolute value approaching 1 a strong correlation. The p-value indicates the likelihood of observing a specific set of data if the null hypothesis is true; here, it tests the hypothesis of no correlation against the alternative of a non-zero correlation, and a small p-value, e.g. less than 0.001, means the correlation differs significantly from zero. All data were collected and statistically analyzed with SPSS Statistics 25 for Windows (SPSS Inc., IBM, Somers, NY, USA).
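The three metrics can be written out directly; the following plain-Python sketch follows the standard definitions used above:

```python
import math

def mae(pred, true):
    """Mean absolute error between predictions and ground truth."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def r2(pred, true):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for p, t in zip(pred, true))
    ss_tot = sum((t - mean_t) ** 2 for t in true)
    return 1 - ss_res / ss_tot

def pcc(pred, true):
    """Pearson correlation coefficient: covariance over the product
    of standard deviations; ranges from -1 to 1."""
    n = len(true)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)
```

A perfect prediction gives MAE = 0, R2 = 1, and PCC = 1; in practice `scipy.stats.pearsonr` also returns the p-value of the no-correlation test.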
3. Results
3.1. Data analysis
In total, 1276 color fundus images from 1276 individuals were collected, of which 52.16% were from female and 47.84% from male subjects, aged 5 to 82 years (30 ± 22). The mean choroidal thickness of all samples was 209.39 ± 72.03 μm.
Supplementary Table 1 summarizes the statistical results of all ophthalmic examinations. Based on linear regression analysis of each parameter against the average choroidal thickness in the entire mm region, we found a positive correlation between the average choroidal thickness and uncorrected visual acuity, diopter spherical, diopter cylindrical, spherical equivalent, and gender. Note that male and female were coded as 1 and 0, respectively, when analyzing the correlation between choroidal thickness and gender. Conversely, average choroidal thickness was negatively correlated with axial length, lens thickness, and age, while the remaining parameters (intraocular pressure, axis of astigmatism, anterior chamber depth, and central corneal thickness) were not correlated with it, as shown in Supplementary Table 1.
3.2. Average choroidal thickness estimation
3.2.1. Performance of different networks
Among the ten CNN models, the Efficientnetb0 model achieved the lowest MAE (25.61 μm) and the highest R2 (0.7817) and PCC (0.8985), outperforming the other networks (Table 1). In addition, the scatter plots (Supplementary Fig. 1) and Bland-Altman plots (Supplementary Fig. 2) of all networks visually demonstrate that the predictions of the Efficientnetb0 model are more consistent with the ground-truth choroidal thickness values. Thus, the Efficientnetb0 model was chosen as the backbone for further experiments.
Table 1.
Average choroidal thickness estimation performances of different CNN models.
| CNNa model | MAEb (μm) | R2c | PCCd | p-value |
|---|---|---|---|---|
| Alexnet | 42.28 | 0.4928 | 0.7049 | |
| Resnet 18 | 34.79 | 0.6451 | 0.8033 | |
| Resnet 34 | 32.44 | 0.6703 | 0.8985 | |
| Resnet 50 | 31.53 | 0.6970 | 0.8362 | |
| Vgg16 | 62.20 | – | – | – |
| GoogLeNet | 27.53 | 0.7611 | 0.8725 | |
| Efficientnetb0 | 25.61 | 0.7817 | 0.8985 | |
| Efficientnetb1 | 27.11 | 0.7752 | 0.8805 | |
| Efficientnetb2 | 27.64 | 0.7706 | 0.8786 | |
| Efficientnetv2-s | 31.22 | 0.7029 | 0.8440 | |
The bolded numbers indicate that Efficientnetb0 outperforms the other networks.
CNN: convolutional neural network.
MAE: mean absolute error.
R2: coefficient of determination.
PCC: Pearson correlation coefficient.
3.2.2. Performance of auxiliary task learning
Owing to the retrospective design, some parameter data were missing, and deleting these samples might reduce generalization performance. Therefore, to ensure a fair comparison, we treated each parameter as an individual sample set and assessed the prediction accuracy of the model before and after incorporating the corresponding auxiliary task. By comparing the performance of the CNN model before and after, we can determine the extent to which each parameter contributes to improving prediction accuracy.
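This per-parameter protocol amounts to filtering, for each clinical index, the samples where that index is recorded, and then evaluating both the single-task and the auxiliary-task model on the same subset. A minimal sketch, with hypothetical field names that are not the study's data schema:

```python
def parameter_subset(records, parameter):
    """Keep only the samples in which the given clinical parameter is
    present, so that the models with and without the auxiliary task are
    compared on an identical subset. Field names are illustrative."""
    return [r for r in records if r.get(parameter) is not None]

records = [
    {"image": "a.jpg", "axial_length": 23.1},
    {"image": "b.jpg", "axial_length": None},   # missing measurement
    {"image": "c.jpg", "axial_length": 24.7},
]
subset = parameter_subset(records, "axial_length")
```

Evaluating on these per-parameter subsets (rather than deleting incomplete samples globally) is what makes the before/after comparison in Table 2 fair.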
From Table 2, we observed that among all the auxiliary tasks, only diopter spherical (MAE = 26.25 μm, R2 = 0.7819, PCC = 0.8862), anterior chamber depth (MAE = 28.81 μm, R2 = 0.7334, PCC = 0.8581), and lens thickness (MAE = 30.73 μm, R2 = 0.7174, PCC = 0.8487) helped the CNN models improve performance on the primary task. The auxiliary tasks of diopter cylindrical (MAE = 32.96 μm, R2 = 0.7056, PCC = 0.8717), axis of astigmatism (MAE = 28.83 μm, R2 = 0.7424, PCC = 0.8633), spherical equivalent (MAE = 29.47 μm, R2 = 0.7377, PCC = 0.8592), axial length (MAE = 31.17 μm, R2 = 0.7060, PCC = 0.8424), central corneal thickness (MAE = 32.23 μm, R2 = 0.6760, PCC = 0.8329), age (MAE = 29.98 μm, R2 = 0.6980, PCC = 0.8369), and gender (MAE = 33.77 μm, R2 = 0.7005, PCC = 0.8449) performed poorly, with MAE values exceeding 28 μm; in particular, diopter cylindrical, axial length, central corneal thickness, and gender yielded MAE values above 30 μm.
Table 2.
Efficientnetb0 with auxiliary task learning in average choroidal thickness estimation.
| Ocular Parameters | MAEa (μm), single task | R2b, single task | PCCc, single task | p-value, single task | MAEa (μm), with auxiliary task | R2b, with auxiliary task | PCCc, with auxiliary task | p-value, with auxiliary task |
|---|---|---|---|---|---|---|---|---|
| uncorrected visual acuity (LogMAR) | 25.07 | 0.8100 | 0.9001 | | 26.46 | 0.7679 | 0.8774 | |
| intraocular pressure (mmHg) | 25.47 | 0.8250 | 0.9084 | | 26.14 | 0.7785 | 0.8827 | |
| diopter spherical (D) | 28.27 | 0.7594 | 0.8735 | | 26.25 | 0.7819 | 0.8862 | |
| diopter cylindrical (D) | 27.31 | 0.7729 | 0.8814 | | 32.96 | 0.7056 | 0.8717 | |
| axis of astigmatism (/) | 25.33 | 0.7982 | 0.8938 | | 28.83 | 0.7424 | 0.8633 | |
| spherical equivalent (D) | 25.54 | 0.8064 | 0.8988 | | 29.47 | 0.7377 | 0.8592 | |
| axial length (mm) | 29.24 | 0.7371 | 0.8619 | | 31.17 | 0.7060 | 0.8424 | |
| anterior chamber depth (mm) | 30.84 | 0.7159 | 0.8491 | | 28.81 | 0.7334 | 0.8581 | |
| lens thickness (mm) | 31.49 | 0.6912 | 0.8338 | | 30.73 | 0.7174 | 0.8487 | |
| central corneal thickness (μm) | 29.75 | 0.7410 | 0.8631 | | 32.23 | 0.6760 | 0.8329 | |
| age | 28.00 | 0.7290 | 0.8598 | | 29.98 | 0.6980 | 0.8369 | |
| gender | 30.19 | 0.7215 | 0.8505 | | 33.77 | 0.7005 | 0.8449 | |
The bolded numbers indicate auxiliary tasks that improve the performance of the CNNs in choroidal thickness estimation compared with the corresponding single-task learning.
MAE: Mean absolute error.
R2: coefficient of determination.
PCC: Pearson correlation coefficient.
3.3. Regional choroidal thickness estimation in ETDRS grids
3.3.1. Performance of single-task learning
Regional choroidal thickness estimation in the ETDRS grids is summarized in Table 3, which shows that the Efficientnetb0 CNN yielded better performance in the outer temporal (MAE = 33.23 μm, R2 = 0.6914, PCC = 0.8432) and outer inferior (MAE = 33.20 μm, R2 = 0.7405, PCC = 0.8644) sub-regions. However, the performance of the Efficientnetb0 model in estimating the average choroidal thickness (MAE = 25.61 μm, R2 = 0.7817, PCC = 0.8985) surpassed that in each individual sub-region.
Table 3.
Performances of Efficientnetb0 single- and multi-task learning in regional choroidal thickness estimation of ETDRS grids.
| ETDRSa grids | MAEb (μm), single-task | R2c, single-task | PCCd, single-task | p-value, single-task | MAEb (μm), multi-task | R2c, multi-task | PCCd, multi-task | p-value, multi-task |
|---|---|---|---|---|---|---|---|---|
| Central | 40.48 | 0.6399 | 0.8041 | | 37.71 | 0.6853 | 0.8282 | |
| Inner superior | 41.07 | 0.5755 | 0.7655 | | 39.51 | 0.6171 | 0.7965 | |
| Inner temporal | 39.92 | 0.6512 | 0.8134 | | 35.54 | 0.7249 | 0.8523 | |
| Inner inferior | 40.49 | 0.6488 | 0.8088 | | 40.10 | 0.6742 | 0.8251 | |
| Inner nasal | 38.42 | 0.6811 | 0.8319 | | 36.11 | 0.7114 | 0.8444 | |
| Outer superior | 41.91 | 0.5315 | 0.7527 | | 40.52 | 0.5974 | 0.7877 | |
| Outer temporal | 33.23 | 0.6914 | 0.8432 | | 31.10 | 0.7650 | 0.8751 | |
| Outer inferior | 33.20 | 0.7405 | 0.8644 | | 31.68 | 0.7504 | 0.8713 | |
| Outer nasal | 35.52 | 0.6711 | 0.8382 | | 32.77 | 0.7256 | 0.8526 | |
The bolded numbers indicate the regions in which the CNNs estimated choroidal thickness with an MAE of less than 35 μm.
ETDRS: Early Treatment Diabetic Retinopathy Study.
MAE: Mean absolute error.
R2: coefficient of determination.
PCC: Pearson correlation coefficient.
3.3.2. Performance of multi-task learning
For all nine subregions in the ETDRS grids, we implemented multi-task learning in the CNN model to enable simultaneous prediction. The performance in each subregion predicted by the multi-task model improved compared with single-task learning (Table 3). For example, in the outer temporal region, the MAE decreased from 33.23 μm to 31.10 μm and the R2 increased from 0.6914 to 0.7650 with the multi-task model. It is worth noting that the performance in the ETDRS regions was still not as good as the estimation of the average choroidal thickness.
3.4. Regional choroidal thickness estimation in 100 grids
For the one hundred mm grids of choroidal thickness estimation, we implemented multi-task learning to train the CNN model to simultaneously predict the choroidal thickness values of all one hundred sub-regions. Fig. 2 shows the MAE for each sub-region. The MAE of every subregion was less than 45.90 μm, and the minimum was only 33.86 μm (R2 = 0.7606, PCC = 0.8726). Estimation performance was relatively better in the temporal and inferior areas (MAE < 40 μm). The estimation results for the 100 grids were suboptimal compared with the metrics for the mean choroidal thickness within the entire mm region (MAE = 25.61 μm, R2 = 0.7817, PCC = 0.8985).
Fig. 2.
The mean absolute error values of choroidal thickness in 100 sub-regions obtained by the Efficientnetb0 model with multi-task learning. (All values are in μm; red numbers indicate MAE >40 μm, blue numbers MAE <40 μm.) (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
4. Discussion
In this study, we evaluated various CNN models and identified that the Efficientnetb0 network exhibited the highest performance in directly estimating choroidal thickness from color fundus photographs. Additionally, we introduced novel models employing single-task, multi-task, and auxiliary task learning approaches to enhance the estimation of regional choroidal thickness. Our results demonstrated accurate predictions for average choroidal thickness, as well as regional choroidal thickness within ETDRS grids and 100 grids.
Previous studies have employed CNNs to estimate choroidal thickness from fundus images [15,16], but each focused on the performance of a single CNN. We evaluated several neural networks and found that Efficientnetb0 achieves the best performance in estimating the average choroidal thickness across the entire mm region (MAE = 25.61 μm, R2 = 0.7817, PCC = 0.8985), demonstrating higher accuracy than the findings of Komuku et al. [15] (R2 = 0.68) and Dong et al. [16] (MAE = 49.20 μm, R2 = 0.62). Moreover, introducing diopter spherical, anterior chamber depth, and lens thickness as auxiliary tasks improved the performance of the CNN models on the primary task. Theoretically, deep learning can systematically exploit relationships between features to enhance performance [36]. By incorporating auxiliary tasks involving various clinical indexes, additional information can be obtained; for instance, these tasks can reveal positive or negative correlations between choroidal thickness and ocular parameters, thereby affecting CNN performance. Interestingly, diopter spherical (PCC = 0.3480) and spherical equivalent (PCC = 0.3501) both correlate positively with choroidal thickness, yet their auxiliary-task models perform differently: the model using diopter spherical achieves an MAE of only 26.25 μm, whereas that using spherical equivalent rises to 29.47 μm. Thus, despite nearly identical positive correlations, the prediction MAE can differ by over 3 μm. Hence, whether a clinical parameter enhances or hinders prediction performance cannot be concluded solely from the correlation between the main task and the auxiliary task; further investigation into the correlation between the features of the two tasks is necessary.
Identifying the optimal auxiliary tasks that can effectively enhance the accuracy of choroidal thickness estimation still requires further investigation.
The thickness of the choroid varies notably across different regions of the fundus in healthy individuals [37,38], suggesting that investigating only the mean and subfoveal choroidal thickness is inadequate for a comprehensive choroidal assessment. For instance, in early-stage diabetic retinopathy, choroidal thickness is significantly increased in the outer nasal and outer inferior regions of the ETDRS grid [17]. Patients with idiopathic macular holes exhibit a thinner choroid than the unaffected fellow eye, specifically in the area nasal 3 mm to the fovea [18]. Choroidal thickness decreases in all sub-regions in aging and myopic populations, with the most pronounced thinning occurring in the outer inferior region of the ETDRS grid for the aged group and in the inner inferior region for the myopic group [19]. Previous studies applied CNNs only to estimate the subfoveal and mean choroidal thickness, while our research introduces multi-task learning models to estimate the regional thickness values in the ETDRS and 100 grids with a high level of accuracy. Interestingly, the Efficientnetb0 multi-task learning model performs better in each subregion than single-task learning. This can be attributed to the sharing of valuable choroidal information across tasks, enabling the model to extract details that enhance training and ultimately boost prediction accuracy. However, as the number of tasks increased from the nine ETDRS subregions to one hundred mm subregions, both predicted accuracy and efficiency decreased. Therefore, there is no "the more, the better" principle in multi-task learning, and determining the optimal number of tasks that yields the highest prediction accuracy warrants further exploration.
This study marks the first investigation into the performance of CNN for estimating regional choroidal thickness from fundus photography. Moreover, our results surpass those of prior studies in accurately estimating the mean choroidal thickness. Estimating choroidal thickness based on color fundus images provides an economical, convenient, and accessible approach for clinical doctors to assess the choroid, which may have a great application prospect.
We acknowledge that the precision of the present algorithm still falls short of clinical applicability, particularly since our investigation focused solely on healthy subjects and excluded patients with pachychoroid diseases or uveitis, whose scleral-choroidal junction may be poorly distinguishable. The current algorithm therefore may not be applicable to pachychoroid diseases or uveitis, and further studies are needed to investigate how to estimate choroidal thickness in these patients. As discussed above, there are several avenues for enhancing the performance of the CNN models in future research, including devising suitable auxiliary tasks, exploring the optimal number of tasks in multi-task learning, and incorporating diverse disease samples for validation.
In conclusion, the Efficientnet-b0 CNN model can directly predict the average and regional choroidal thickness from color fundus images with high accuracy and efficiency, and the incorporation of multi-task and auxiliary-task learning further enhances its performance.
Funding statement
This work was supported in part by the Shantou Science and Technology Project (190917085269835), in part by the STU Scientific Research Initiation Grant, China (NTF20021), in part by the Guangdong Natural Science Foundation (2022A1515011396), and in part by the Science and Technology Planning Project of Guangdong Province of China (180917144960530).
Declaration of patients' informed consent
Our research adheres to the tenets of the Declaration of Helsinki. Institutional Review Board (IRB) approval was obtained from the Joint Shantou International Eye Center (JSIEC) of Shantou University and the Chinese University of Hong Kong. (Approval ID 190917085269835)
Informed consent was waived by the Institutional Review Board of the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong owing to the retrospective nature of the study. All patients' names and dates of birth were removed from the images and data before analysis, so no identifiable patient information is included in this publication.
CRediT authorship contribution statement
Yibiao Rong: Writing – original draft, Visualization, Validation, Software, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Qifeng Chen: Writing – original draft, Visualization, Validation, Software, Formal analysis. Zehua Jiang: Writing – review & editing, Resources, Investigation, Data curation. Zhun Fan: Writing – review & editing, Supervision, Project administration, Funding acquisition, Conceptualization. Haoyu Chen: Writing – review & editing, Supervision, Project administration, Funding acquisition, Conceptualization.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Footnotes
Supplementary data to this article can be found online at https://doi.org/.
Contributor Information
Zhun Fan, Email: zfan@stu.edu.cn.
Haoyu Chen, Email: drchenhaoyu@gmail.com.
Appendix A. Supplementary data
The following are the Supplementary data to this article.
Supplementary Fig. 1.
Supplementary Fig. 2.
References
- 1. Lee S.S.Y., et al. Choroidal thickness in young adults and its association with visual acuity. Am. J. Ophthalmol. 2020;214:40–51. doi: 10.1016/j.ajo.2020.02.012.
- 2. Sekiryu T. Choroidal imaging using optical coherence tomography: techniques and interpretations. Jpn. J. Ophthalmol. 2022;66:213–226. doi: 10.1007/s10384-022-00902-7.
- 3. Huynh E., et al. Past, present, and future concepts of the choroidal scleral interface morphology on optical coherence tomography. Asia Pac J Ophthalmol (Phila). 2017;6:94–103. doi: 10.22608/apo.201698.
- 4. Fujiwara T., Imamura Y., Margolis R., Slakter J.S., Spaide R.F. Enhanced depth imaging optical coherence tomography of the choroid in highly myopic eyes. Am. J. Ophthalmol. 2009;148:445–450. doi: 10.1016/j.ajo.2009.04.029.
- 5. Ikuno Y., Tano Y. Retinal and choroidal biometry in highly myopic eyes with spectral-domain optical coherence tomography. Invest. Ophthalmol. Vis. Sci. 2009;50:3876–3880. doi: 10.1167/iovs.08-3325.
- 6. Chung S.E., Kang S.W., Lee J.H., Kim Y.T. Choroidal thickness in polypoidal choroidal vasculopathy and exudative age-related macular degeneration. Ophthalmology. 2011;118:840–845. doi: 10.1016/j.ophtha.2010.09.012.
- 7. Kim Y.T., Kang S.W., Bai K.H. Choroidal thickness in both eyes of patients with unilaterally active central serous chorioretinopathy. Eye (Lond). 2011;25:1635–1640. doi: 10.1038/eye.2011.258.
- 8. Cheung C.M.G., et al. Pachychoroid disease. Eye (Lond). 2019;33:14–33. doi: 10.1038/s41433-018-0158-4.
- 9. Laviers H., Zambarakji H. Enhanced depth imaging-OCT of the choroid: a review of the current literature. Graefes Arch. Clin. Exp. Ophthalmol. 2014;252:1871–1883. doi: 10.1007/s00417-014-2840-y.
- 10. Adhi M., et al. Choroidal analysis in healthy eyes using swept-source optical coherence tomography compared to spectral domain optical coherence tomography. Am. J. Ophthalmol. 2014;157:1272–1281.e1271. doi: 10.1016/j.ajo.2014.02.034.
- 11. Rong Y., et al. Direct estimation of choroidal thickness in optical coherence tomography images with convolutional neural networks. J. Clin. Med. 2022;11. doi: 10.3390/jcm11113203.
- 12. Yan Y.N., et al. Fundus tessellation: prevalence and associated factors: the Beijing eye study 2011. Ophthalmology. 2015;122:1873–1880. doi: 10.1016/j.ophtha.2015.05.031.
- 13. Neelam K., Chew R.Y., Kwan M.H., Yip C.C., Au Eong K.G. Quantitative analysis of myopic chorioretinal degeneration using a novel computer software program. Int. Ophthalmol. 2012;32(3):203–209. doi: 10.1007/s10792-012-9542-4.
- 14. Huang D., Li R., Qian Y., Ling S., Dong Z., Ke X., Yan Q., Tong H., Wang Z., Long T., Liu H., Zhu H. Fundus tessellated density assessed by deep learning in primary school children. Transl Vis Sci Technol. 2023;12(6):11. doi: 10.1167/tvst.12.6.11.
- 15. Komuku Y., et al. Choroidal thickness estimation from colour fundus photographs by adaptive binarisation and deep learning, according to central serous chorioretinopathy status. Sci. Rep. 2020;10:5640. doi: 10.1038/s41598-020-62347-7.
- 16. Dong L., et al. Deep learning-based estimation of axial length and subfoveal choroidal thickness from color fundus photographs. Front. Cell Dev. Biol. 2021;9. doi: 10.3389/fcell.2021.653692.
- 17. Wang W., et al. Choroidal thickness in diabetes and diabetic retinopathy: a swept source OCT study. Invest. Ophthalmol. Vis. Sci. 2020;61:29. doi: 10.1167/iovs.61.4.29.
- 18. Zeng J., et al. Choroidal thickness in both eyes of patients with unilateral idiopathic macular hole. Ophthalmology. 2012;119:2328–2333. doi: 10.1016/j.ophtha.2012.06.008.
- 19. Bartol-Puyal F.A., et al. Distribution of choroidal thinning in high myopia, diabetes mellitus, and aging: a swept-source OCT study. J Ophthalmol. 2019;2019. doi: 10.1155/2019/3567813.
- 19.Bartol-Puyal F.A., et al. Distribution of choroidal thinning in high myopia, diabetes mellitus, and aging: a swept-source OCT study. J Ophthalmol. 2019;2019 doi: 10.1155/2019/3567813. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Flynn H.W., Jr., et al. Pars plana vitrectomy in the early treatment diabetic retinopathy study: ETDRS report number 17. Ophthalmology. 1992;99:1351–1357. doi: 10.1016/S0161-6420(92)31779-8. [DOI] [PubMed] [Google Scholar]
- 21.Kingma D., Ba J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014. [Google Scholar]
- 22.Loshchilov I., Hutter F. SGDR: stochastic gradient descent with warm restarts. 5th International Conference on Learning Representations (ICLR). 2017. [Google Scholar]
- 23.He T., Zhang Z., Zhang H., Zhang Z., Li M. Bag of tricks for image classification with convolutional neural networks. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019. [Google Scholar]
- 24.Krizhevsky A., Sutskever I., Hinton G. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. 2012;25. [Google Scholar]
- 25.He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:770–778. [Google Scholar]
- 26.Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint. 2014. [Google Scholar]
- 27.Szegedy C., Liu W., Jia Y., Sermanet P., Rabinovich A. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015. [Google Scholar]
- 28.Tan M., Le Q.V. EfficientNet: rethinking model scaling for convolutional neural networks. Proceedings of the 36th International Conference on Machine Learning. 2019:6105–6114. [Google Scholar]
- 29.Tan M., Le Q.V. EfficientNetV2: smaller models and faster training. Proceedings of the 38th International Conference on Machine Learning. 2021:10096–10106. [Google Scholar]
- 30.Zhang Y., Yang Q. A survey on multi-task learning. IEEE Trans. Knowl. Data Eng. 2021. doi: 10.1109/TKDE.2021.3070203. [DOI] [Google Scholar]
- 31.Liu S., Davison A.J., Johns E. Self-supervised generalisation with meta auxiliary learning. Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019:1679–1689. [Google Scholar]
- 32.Flores-Moreno I., Lugo F., Duker J.S., Ruiz-Moreno J.M. The relationship between axial length and choroidal thickness in eyes with high myopia. Am. J. Ophthalmol. 2013;155:314–319.e311. doi: 10.1016/j.ajo.2012.07.015. [DOI] [PubMed] [Google Scholar]
- 33.Xue J., Ji X., Yu Y., Wu C. Choroidal thickness and correlations with intraocular pressure and spherical equivalent in young myopic eyes without maculopathy. International Journal of Ophthalmology & Visual Science. 2021;6(1):22–28. [Google Scholar]
- 34.Menard S. Coefficients of determination for multiple logistic regression analysis. Am. Statistician. 2000;54:17–24. [Google Scholar]
- 35.Rodgers J.L., Nicewander W.A. Thirteen ways to look at the correlation coefficient. Am. Statistician. 1988;42:59–66. [Google Scholar]
- 36.He S., Feng Y., Ellen Grant P., Ou Y. Deep relation learning for regression and its application to brain age estimation. IEEE Trans. Med. Imag. 2022 doi: 10.1109/tmi.2022.3161739. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Ouyang Y., et al. Spatial distribution of posterior pole choroidal thickness by spectral domain optical coherence tomography. Invest. Ophthalmol. Vis. Sci. 2011;52:7019–7026. doi: 10.1167/iovs.11-8046. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Hwang S., Kong M., Song Y.M., Ham D.I. Choroidal spatial distribution indexes as novel parameters for topographic features of the choroid. Sci. Rep. 2020;10:574. doi: 10.1038/s41598-019-57211-2. [DOI] [PMC free article] [PubMed] [Google Scholar]