Computational Intelligence and Neuroscience
2022 Jun 29; 2022:9063880. doi: 10.1155/2022/9063880

Classification of Transgenic Mice by Retinal Imaging Using SVMs

Farrukh Sayeed 1,, K Rafeeq Ahmed 2, M S Vinmathi 3, A Indira Priyadarsini 4, Charles Babu Gundupalli 5, Vikas Tripathi 6, Wesam Shishah 7, Venkatesa Prabhu Sundramurthy 8,
PMCID: PMC9259271  PMID: 35814547

Abstract

Alzheimer's disease is a neurological disorder characterized by the accumulation of amyloid-β (Aβ) in the brain. However, accurate detection of this disease is a challenging task, since the pathological changes in the brain are difficult to identify. In this paper, the retinal imaging changes associated with Alzheimer's disease are classified into two classes: wild-type (WT) and transgenic mice model (TMM). For testing, optical coherence tomography (OCT) images are used for the two-group classification. The classification is implemented by support vector machines (SVMs), with the optimum kernel selected using a genetic algorithm. Among the several SVM kernel functions, the radial basis kernel function provides the best classification result. To enable effective classification with the SVM, texture features of the retinal images are extracted and selected. The overall accuracy reached 92%, with 91% precision, for the classification of transgenic mice.

1. Introduction

The most common form of disability is neurodegenerative disease [1, 2]. Because Alzheimer's disease has such a long development period, patients can benefit from frequent testing and receive early treatment. However, due to their high cost and limited availability, current clinical diagnostic imaging techniques do not match the specific needs of screening methods [3, 4]. In this study, we prioritized assessing the retina, particularly the retinal vasculature, as a potential means of performing dementia assessments in chronic Alzheimer's conditions. Inflammatory alterations may begin more than 20 years before neurological dysfunction manifests, and by the time neurotoxic effects appear, cerebral deterioration has already progressed gradually. The Alzheimer's Society, the National Institutes of Health, and the Global Advisory Committee on AD have suggested a study paradigm based on a set of confirmed indicators connected with both kinds of abnormalities, which serve as proxies for AD, in order to identify AD in living persons [5–7]. Flexible, scalable neural networks were used throughout the process, which obtained an overall accuracy of 82.44 percent using data from the UK Biobank. It included a saliency analysis of the pipeline's interpretability in addition to a high-performing classifier, as shown in Figure 1. The detection of transgenic mice is carried out from the input fundus image, but the existing approaches have a high false detection rate, which degrades the accuracy of the system. Additionally, the following problems arise in optimal detection of the classes:

  1. Difficulty in feature differentiation: the detection of transgenic mice is based on various features such as texture, color, and intensity, but differentiating these minute features from one another is a hard task that degrades accurate disease detection

  2. Class overlapping: the class of the input image is also determined by the existing approaches, but the limited training data for each severity level results in a class imbalance problem that affects the classification accuracy

  3. Improper preprocessing: the lack of proper preprocessing and effective contrast enhancement in the existing approaches makes it difficult to distinguish the features from the background

Figure 1. Typical flow for transgenic mice.

The major objective of this study is to provide precise classification between the WT and TMM classes and to compute the disease classification accuracy. This objective is achieved by fulfilling the following subobjectives:

  1. To minimize the level of artifacts in the input image by performing effective preprocessing of the image

  2. To maximize the precise identification of features from the preprocessed image by performing enhancement of contrast level

  3. To effectively classify the images into two classes based on the extraction of significant features

  4. To determine the features related to the disease based on the variation in the intensity of the features for the purpose of diagnosis

2. Related Work

In [8], the authors investigated alterations in the optic disc linked with Alzheimer's disease, using the retina as a window into the central and peripheral nervous system. Optical coherence tomography was used to analyse the retinas of transgenic mice models (TMM) and wild-type (WT) mice, and support vector machines with the radial basis function kernel were used to categorize the retinas into TMM and WT classes. At the age of four months, predictions were over 80% accurate, and at the age of eight months, they were over 90% accurate. In line with these results, feature extraction from the acquired fundus images shows a much more diverse retinal architecture in the mouse models at the age of eight months.

In [9], coregistered angle-resolved low-coherence interferometry (a/LCI) and optical coherence tomography (OCT) were used to obtain depth-resolved light scattering data from the retinas of triple-transgenic Alzheimer's disease (3xTg-AD) mice and age-matched wild-type (WT) controls. Visual guidance and segmentation depths supplied by the OCT B-scans were used to obtain scattering data from the nerve fibre layer and deeper retinal layers. When comparing the in vivo AD mouse retinas to the WT controls, OCT imaging revealed substantial thinning of the nerve fibre layer. The a/LCI scattering measures offered additional information that helps to differentiate the AD mice by quantifying tissue heterogeneity. Compared to the WT mice, the AD mice's eyes demonstrated an increased range of values in nerve fibre layer scattering intensity.

In [10], the authors describe the relationship between retinal image characteristics and cerebral amyloid-β (Aβ) load, in the hope of establishing a non-invasive method for predicting Aβ deposition in Alzheimer's illness. Compared to Aβ− individuals, a substantial variation in texture across the retinal capillaries and their neighbouring areas was detected in Aβ+ participants. Using the collected characteristics, classifiers are trained to classify new individuals. With an accuracy of 85 percent, the classification can distinguish Aβ+ patients from Aβ− patients.

3. Proposed Work

This section presents the description of the proposed model for the classification of transgenic mice using SVMs.

3.1. Preprocessing

For enhancing the information available to the disease diagnosis system, the following preprocessing steps are used:

  • (i)

    Artifacts removal: blurriness, poor edges, and uneven illumination are called artefacts; they are removed using a nonlinear diffusion filtering algorithm, which eliminates these artefacts and ensures image quality in terms of illumination correction and edge preservation (a minimal code sketch of this and the contrast step is given after this list)

  • (ii)

    Contrast enhancement: low contrast is one of the important issues in image classification. In this work, we treat contrast enhancement as an optimization problem whose intention is to optimize the pixel values based on the contrast level of the input image.

  • (iii)
    Image normalization: normalization adjusts the pixel intensities or RGB color values of the retinal images, improving the quality of the acquired fundus images by reducing equipment-related and other undesired noise. The misrepresentations and fluctuations that occur in the retinal images because of inexact image acquisition are then identified. During normalization, the acquired image is transformed into a predetermined range of values; the formulas for image normalization are denoted below. Image normalization is a preprocessing technique that maps the given inputs into an expected output range, which is useful for prediction and forecasting, since normalizing the values keeps large variations closer together. The following existing normalization techniques are used for image normalization:
    1. Min-max normalization
    2. Z-score normalization
    3. Decimal scaling
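The following minimal Python sketch illustrates the artifact-removal and contrast-enhancement steps. Perona-Malik diffusion is used as one common nonlinear diffusion scheme, and CLAHE stands in for the contrast optimization, since the paper poses contrast enhancement as an optimization problem without naming a specific optimizer; all function names and parameter values are illustrative assumptions, not the authors' exact settings.

import numpy as np
from skimage import io, img_as_float, exposure

def nonlinear_diffusion(img, n_iter=20, kappa=0.05, gamma=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while preserving edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance (exponential variant)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def preprocess(path):
    gray = img_as_float(io.imread(path, as_gray=True))
    smoothed = nonlinear_diffusion(gray)
    # CLAHE as an assumed stand-in for the contrast optimization step
    return exposure.equalize_adapthist(np.clip(smoothed, 0, 1))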

Figure 2 describes the proposed work. In the following, the description of these normalization techniques is given in detail.

  • (i)
    Min-max normalization: this technique provides a linear transformation of the original data values and is known as the min-max normalization technique. It uses a predefined boundary for the specific retinal images. The min-max normalization used in the proposed technique is estimated as follows:
    \hat{A} = \dfrac{A - A_{\min}}{A_{\max} - A_{\min}} \times (D - C) + C, (1)
      where \hat{A} represents the min-max normalized value of A and [C, D] is the predefined boundary. Agreement between the original and normalized value ranges is used for result validation.
  • (ii)
    In general, unstructured data can be normalized using Z-score normalization, which is represented as follows:
    V_i' = \dfrac{V_i - E}{STD_E}, (2)
     where V_i' is the Z-score normalized value of the input V_i, E is the mean of the inputs, and STD_E is their standard deviation:
    STD_E = \sqrt{\dfrac{1}{n-1} \sum_{i=1}^{n} (v_i - E)^2}, (3)
    E = \dfrac{1}{n} \sum_{i=1}^{n} v_i. (4)
     The technique can be applied row-wise: for example, with rows X, Y, Z, U, and V, each of N columns, the Z-score computation is applied to each row separately. If the standard deviation of a row is equal to zero, then all values of that row are fixed to zero. Z-score normalization centres the values around zero, whereas decimal scaling maps the values into the range −1 to 1. Decimal scaling for image normalization is computed by the following equation:
    v_i' = \dfrac{v}{10^j}, (5)
  • where v_i' represents the scaled value, v the original value, and j the smallest integer such that \max(|v_i'|) < 1. A minimal code sketch of these three techniques is given below.
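The sketch below implements the three classical normalizations of equations (1)-(5); the target boundary [C, D] defaults and the function names are illustrative assumptions.

import numpy as np

def min_max_norm(a, c=0.0, d=1.0):
    # equation (1): map the values of a linearly into the boundary [c, d]
    a = np.asarray(a, dtype=np.float64)
    return (a - a.min()) / (a.max() - a.min()) * (d - c) + c

def z_score_norm(v):
    # equations (2)-(4): subtract the mean E and divide by the standard deviation
    v = np.asarray(v, dtype=np.float64)
    e, std_e = v.mean(), v.std(ddof=1)
    if std_e == 0:
        return np.zeros_like(v)   # degenerate case: all values fixed to zero
    return (v - e) / std_e

def decimal_scaling_norm(v):
    # equation (5): divide by 10**j, with j the smallest integer giving max(|v_i|) < 1
    v = np.asarray(v, dtype=np.float64)
    m = np.abs(v).max()
    j = int(np.floor(np.log10(m))) + 1 if m > 0 else 0
    return v / (10 ** j)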

Figure 2. Proposed work.

The combination of the above three techniques helps in producing the proposed result, that is, improved min-max decimal scaling with Z-normalization. The proposed retinal image normalization technique is an advanced and effective normalization technique that accepts various types of input images and produces outputs in the range 0 to 1. The normalization can also take the average value as a threshold and then normalize or replace the pixel values on either side of it using the mean and standard deviation.

Compared to the min-max, Z-score, and decimal scaling techniques for image normalization, the proposed advanced technique produces more effective results and offers the following advantages over the existing methods:

  1. Suited for any volume of datasets (large, small, or medium size datasets)

  2. Individual pixel-based scaling and transformation are possible

  3. Makes the result independent of the data size

  4. Sets the normalized values in the range between 0 and 1

  5. Easy to apply to all numerical data values

The proposed innovative normalization technique is mathematically expressed as follows:

Y = \dfrac{X}{10^{\,n-1}} \times \dfrac{A}{10^{\,n-1}}, (6)

where X represents a particular data element, n represents the number of digits in X, A represents the pixel element of the first digit of X, and Y represents the scaled value between 0 and 1. The proposed model is applicable to inputs of any length over the full range of integers. This technique differs from the existing normalization approaches as follows (a code sketch of the normalization is given after this list):

  1. It converts the data from an unstructured to a structured form.

  2. It is formulated for scaling purposes.

  3. All the inputs are numerical data only.
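The sketch below is a literal reading of equation (6) as reconstructed above, taking n as the digit count of X and A as its leading digit; because the source formula is partly garbled, treat this as an assumption rather than the authors' exact implementation.

import numpy as np

def proposed_norm(x):
    # literal reading of equation (6): Y = (X / 10**(n-1)) * (A / 10**(n-1))
    x = int(x)
    digits = str(abs(x)) if x != 0 else "0"
    n = len(digits)            # number of digits in X
    a = int(digits[0])         # leading (first) digit of X
    scale = 10 ** (n - 1)
    return (x / scale) * (a / scale)

# example: vectorised application to an 8-bit image (values 0..255)
image = np.random.randint(0, 256, size=(4, 4))
normalized = np.vectorize(proposed_norm)(image)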

Low light enhancement: recent low light enhancement methods are not guaranteed to work well in low light environments. A new low light enhancement method should therefore focus on the following:

  1. Enhance the efficiency and robustness of the low-light image enhancement algorithms; the previous methods are insufficient to meet the needs of current applications.

  2. The method should be able to adjust to different types of images at different scales to produce good results.

  3. Minimize the overall computational complexity (time and space) so that the method satisfies practical applications and supports real-time images.

  4. Most of the existing techniques require long chains of operations and hence more processing time, which leads to two problems: detail ambiguity and color deviation.

  5. Establish higher-quality image evaluation, in which image information recovery and color recovery functions are used to adjust the low light enhancement.

To address these issues, multiscale Retinex theory is adopted: a color restoration method that enhances image quality using single-scale or multiscale Retinex processing. The algorithm is applied to the three color channels (R, G, and B) separately; the original image is thus split into its channels, which avoids color distortion. For each channel, a color recovery factor C is computed, which captures the proportional relationship between the R, G, and B channels. It is expressed mathematically as follows:

C_i(x,y) = F\!\left[\dfrac{I_i(x,y)}{\sum_{i=1}^{3} I_i(x,y)}\right], (7)

where F represents the function mapping the color values; a logarithmic mapping performs best for color intensity restoration and recovery, so the color recovery factor is computed as:

C_i(x,y) = \beta \log\!\left[\dfrac{\alpha\, I_i(x,y)}{\sum_{i=1}^{3} I_i(x,y)}\right], (8)

where α and β are gain and scale constants of the logarithmic mapping. The multiscale Retinex output with color restoration is then written as follows:

MRT_i(x,y) = \log R_i(x,y), (9)
\log R_i(x,y) = \sum_{k=1}^{N} C_i\, w_k \left[ \log I_i(x,y) - \log\!\big( G_k(x,y) * I_i(x,y) \big) \right], (10)
where G_k(x,y) is the Gaussian surround function at scale k and w_k is the corresponding weight.

This algorithm exploits the merits of convolution with Gaussian surround functions; using multiple scales, that is, small, medium, and large patches, yields good enhancement effects. The performance of color restoration is further improved through the color recovery factor, which is updated over successive iterations. A minimal code sketch of this step is given below.
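The following sketch implements multiscale Retinex with color restoration as in equations (7)-(10); the scales, weights w_k, and the α and β constants are common defaults from the Retinex literature, not values reported in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, eps=1e-6):
    # img: float RGB array in [0, 1]; returns the colour-restored Retinex output
    img = img.astype(np.float64) + eps
    weights = np.full(len(sigmas), 1.0 / len(sigmas))   # equal weights w_k
    channel_sum = img.sum(axis=2)                       # sum over R, G, B
    out = np.zeros_like(img)
    for i in range(3):                                  # each colour channel separately
        # multiscale Retinex: log I - log(G_k * I), weighted over scales (eq. 10)
        msr = np.zeros_like(img[..., i])
        for w, sigma in zip(weights, sigmas):
            blurred = gaussian_filter(img[..., i], sigma) + eps
            msr += w * (np.log(img[..., i]) - np.log(blurred))
        # colour recovery factor C_i from equation (8)
        c = beta * np.log(alpha * img[..., i] / channel_sum)
        out[..., i] = c * msr
    # rescale to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min() + eps)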

3.2. Feature Extraction

The gray level co-occurrence matrix (GLCM) captures numerical features of a texture using spatial relations of similar gray tones. Table 1 lists the features derivable from a normalized co-occurrence matrix; their formulas are given in Table 2, and a code sketch of the feature computation follows Table 3.

Table 1.

Statistical GLCM features (22).

Autocorrelation, contrast, correlation 1 & 2, cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity 1 & 2, maximum probability, sum of squares (variance), sum average, sum variance, sum entropy, difference variance, difference entropy, information measure of correlation 1 & 2, inverse difference normalized (IDN), and inverse difference moment normalized (IDMN).

3.2.1. Computation of Textural Features from Normalized GLCM

  • Energy: measures the uniformity (or orderliness) of the gray level distribution of the image; range = [0, 1]
    \sum_{i,j} p(i,j)^2 = 0.166^2 + 0.083^2 + 0.042^2 + 0.083^2 + 0.166^2 + 0 + 0.042^2 + 0 + 0.025^2 + 0.042^2. (11)
  • Homogeneity: measures the smoothness (homogeneity) of the gray level distribution of the image; range = [0, 1]
    \sum_{i,j} \dfrac{p(i,j)}{1 + |i - j|} = 0.8155. (12)
  • Contrast: gives a measure of the intensity contrast between a pixel and its neighbor over the whole image (see Tables 2 and 3); range = [0, (size(GLCM,1) − 1)^2]

Table 2.

List of features.

S.No Feature Formula Description
1 Autocorrelation \sum_{i,j} (i \cdot j)\, p(i,j) It measures the coarseness of an image and evaluates the linear spatial relationships between texture primitives.
2 Contrast \sum_{i,j} |i-j|^2\, p(i,j) Represents the amount of local gray level variation in an image; a high value of this parameter may indicate the presence of edges, noise, or wrinkled textures in the image.
3 Correlation 1 \sum_{i,j} \dfrac{(i-\mu_x)(j-\mu_y)\, p(i,j)}{\sigma_x \sigma_y} Gives a measure of how correlated a pixel is to its neighbor over the whole image.
4 Correlation 2 \dfrac{\sum_{i,j} (i \cdot j)\, p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y} Gives a measure of gray level linear dependence between the pixels at the specified positions relative to each other.
5 Cluster shade \sum_{i,j} (i + j - \mu_x - \mu_y)^3\, p(i,j) Cluster shade and cluster prominence are measures of the skewness of the matrix, in other words the lack of symmetry.
6 Cluster prominence \sum_{i,j} (i + j - \mu_x - \mu_y)^4\, p(i,j) Gives a measure of local intensity variation.
7 Dissimilarity \sum_{i,j} |i-j|\, p(i,j) Dissimilarity belongs to the contrast group of texture metrics; gives a measure of dissimilarity.
8 Energy \sum_{i,j} p(i,j)^2 Measures the uniformity (or orderliness) of the gray level distribution of the image; images with a smaller number of gray levels have larger uniformity.
9 Entropy -\sum_{i,j} p(i,j) \log(p(i,j)) Inhomogeneous images have a low entropy, while a homogeneous scene has high entropy.
10 Homogeneity 1 \sum_{i,j} \dfrac{p(i,j)}{1 + |i-j|} Gives a value that measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.
11 Homogeneity 2 \sum_{i,j} \dfrac{p(i,j)}{1 + (i-j)^2} Measures the smoothness (homogeneity) of the gray level distribution of the image; it is inversely correlated with contrast: if contrast is small, homogeneity is usually large.
12 Maximum probability \max_{i,j} p(i,j) Gives a measure of the maximum frequency of occurrence of pixel pairs.
13 Sum of squares: variance \sum_{i,j} (i - \mu)^2\, p(i,j) Measures the dispersion (with regard to the mean) of the gray level distribution.
14 Sum average \sum_{i=2}^{2N_g} i\, p_{x+y}(i) Measures the mean of the gray level sum distribution of the image.
15 Sum variance \sum_{i=2}^{2N_g} \big(i - \sum_{i=2}^{2N_g} i\, p_{x+y}(i)\big)^2 p_{x+y}(i) Measures the dispersion (with regard to the mean) of the gray level sum distribution of the image.
16 Sum entropy -\sum_{i=2}^{2N_g} p_{x+y}(i) \log\{p_{x+y}(i)\} Measures the disorder related to the gray level sum distribution of the image.
17 Difference variance \sum_{i=0}^{N_g-1} \big(i - \sum_{i=0}^{N_g-1} i\, p_{x-y}(i)\big)^2 p_{x-y}(i) Measures the dispersion (with regard to the mean) of the gray level difference distribution of the image.
18 Difference entropy -\sum_{i=0}^{N_g-1} p_{x-y}(i) \log\{p_{x-y}(i)\} Measures the disorder related to the gray level difference distribution of the image.
19 Information measure of correlation 1 \dfrac{HXY - HXY1}{\max\{HX, HY\}} H is the entropy; HXY1 = -\sum_{i,j} p(i,j) \log(p_x(i)\, p_y(j)).
20 Information measure of correlation 2 \sqrt{1 - e^{-2(HXY2 - HXY)}} HXY2 = -\sum_{i,j} p_x(i)\, p_y(j) \log(p_x(i)\, p_y(j)).
21 Inverse difference normalized (IDN) \sum_{i,j} \dfrac{p(i,j)}{1 + |i-j|/N_g} IDMN and IDN measure image homogeneity, assuming larger values for smaller gray tone differences in pair elements. They are more sensitive to the presence of near-diagonal elements in the GLCM and reach their maximum when all elements in the image are the same.
22 Inverse difference moment normalized (IDMN) \sum_{i,j} \dfrac{p(i,j)}{1 + (i-j)^2/N_g^2}
Table 3.

List of shape features.

Sl.No Feature Formula Description
1 Circularity C = \dfrac{4\pi \cdot \text{Area}}{\text{Perimeter}^2} A measure of roundness or circularity (area-to-perimeter ratio), obtained as the ratio of the area of an object to the area of a circle with the same convex perimeter; it equals 1 for a circular object and is <1 or >1 for an object that departs from circularity.
2 Eccentricity E = \dfrac{\text{axis length}_{\text{short}}}{\text{axis length}_{\text{long}}} Eccentricity is the ratio of the length of the short (minor) axis to the length of the long (major) axis of an object. Range: 0 to 1.
3 Orientation \theta = \dfrac{1}{2} \tan^{-1}\!\left(\dfrac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right) The orientation is the angle between the horizontal line and the major axis. It indicates the overall direction of the shape. Range: −90° to 90°.
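As a code sketch of the feature-extraction stage, the snippet below computes a subset of the GLCM features from Tables 1 and 2 with scikit-image; the distances, angles, and grey-level count are assumptions, only the properties exposed by graycoprops are shown, and entropy is computed directly from the matrix.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    # GLCM over one distance and four angles (assumed offsets)
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    feats = {p: graycoprops(glcm, p).mean() for p in props}
    # entropy is not returned by graycoprops, so compute it from the averaged matrix
    p = glcm.mean(axis=(2, 3))
    feats["entropy"] = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return feats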

3.3. Classification

For the classification of retinal images into the two classes, WT and TMM, SVMs are used, and the optimum kernel function is selected from a set of candidate kernel functions. Figure 3 gives a pictorial representation of the classification using SVMs, and a minimal code sketch follows the figure.

Figure 3. SVM for classification of transgenic mice.
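A minimal sketch of the SVM stage is given below. The paper selects the kernel with a genetic algorithm; here a cross-validated grid search over candidate kernels stands in for that selection, so the search strategy, parameter grid, and data split are assumptions.

import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm(features, labels):
    # features: (n_samples, n_texture_features); labels: 0 = WT, 1 = TMM
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, stratify=labels, random_state=0)
    pipe = make_pipeline(StandardScaler(), SVC())
    grid = {"svc__kernel": ["rbf", "linear", "poly", "sigmoid"],  # candidate kernels
            "svc__C": [0.1, 1, 10, 100],
            "svc__gamma": ["scale", 0.01, 0.1, 1]}
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    print("selected kernel:", search.best_params_["svc__kernel"])
    print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
    return search.best_estimator_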

4. Experimental Results and Discussion

The proposed work is implemented to provide precise classification between WT and TMM and to obtain the disease classification accuracy. The proposed work involves preprocessing and feature extraction. Preprocessing is performed to minimize the level of artifacts in the input image, and contrast enhancement is performed to improve the precise identification of features from the preprocessed image. Then, feature extraction is used to classify the images into two classes based on the extraction of significant features in an effective manner.

In this section, the performance of the proposed model is evaluated on the set of images in the dataset; sample images are shown in Figure 4. Table 4 describes the confusion matrix for the two classes using four kinds of counts. The definition of each count is given below, and classifier performance is shown in Table 5.

  1. True positive (TP) is the no. of candidates correctly identified as TMM

  2. False positive (FP) is the no. of candidates incorrectly identified as TMM

  3. True negative (TN) is the no. of candidates correctly identified as non-TMM

  4. False negative (FN) is the no. of candidates incorrectly identified as non-TMM

\text{Sensitivity} = \dfrac{TP}{TP + FN} = \dfrac{18}{18 + 06} = 75\%, (13)
\text{Specificity} = \dfrac{TN}{TN + FP} = \dfrac{1643}{2256} = 72.82\%, (14)
FPR = 1 - \text{Specificity} = \dfrac{FP}{FP + TN} = \dfrac{613}{2256} = 0.272, (15)
\text{Accuracy} = \dfrac{TP + TN}{TP + FN + TN + FP} = \dfrac{1661}{2280} = 72.85\%. (16)
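The snippet below reproduces equations (13)-(16) directly from the confusion-matrix counts in Table 4.

TP, FP, FN, TN = 18, 613, 6, 1643

sensitivity = TP / (TP + FN)                 # 18 / 24 = 0.75
specificity = TN / (TN + FP)                 # 1643 / 2256 ≈ 0.7282
fpr = 1 - specificity                        # 613 / 2256 ≈ 0.272
accuracy = (TP + TN) / (TP + FN + TN + FP)   # 1661 / 2280 ≈ 0.7285

print(f"Sensitivity {sensitivity:.2%}, Specificity {specificity:.2%}, "
      f"FPR {fpr:.3f}, Accuracy {accuracy:.2%}")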

Figure 4. (a), (b) Retinal images; (c), (d) OCT images.

Table 4.

Confusion matrix.

Total candidates (2280) True class
Positive Negative
Predicted class Positive TP (18) FP (613) TP + FP (631)
Negative FN (06) TN (1643) TN + FN (1647)
TP + FN (24) TN + FP (2256)

Table 5.

Classifier performance.

Classifier Accuracy (%) Sensitivity (%) Specificity (%)
Decision tree 96 94 97
Neural network 74 98 73
Random forest 98 65 98
SVM 99 98 99

5. Conclusion

Alzheimer's disease is a progressive neurodegenerative illness defined by the presence of amyloid-β (Aβ) in the brain. Nevertheless, because the degenerative changes of the brain are complicated to characterize, precise detection of this condition is a difficult process. The abnormalities in retinal fundus images for Alzheimer's disease are divided into two categories in this paper: wild-type (WT) and transgenic mice model (TMM). Optical coherence tomography (OCT) images are utilised to classify the subjects into the two categories for assessment. SVMs are used to classify the data, with the best kernel selected via a genetic algorithm. The RBF kernel function outperforms the other SVM kernel functions in terms of accuracy. Textural properties of the retinal fundus images are extracted to enable efficient categorization using the SVM. The overall accuracy reached 92%, with 91% precision, for the classification of transgenic mice.

Contributor Information

Farrukh Sayeed, Email: sayeed.farrukh@gmail.com.

Venkatesa Prabhu Sundramurthy, Email: venkatesa.prabhu@aastu.edu.et.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Chibhabha F., Yang Y., Ying K., et al. Non-invasive optical imaging of retinal Aβ plaques using curcumin loaded polymeric micelles in APPswe/PS1ΔE9 transgenic mice for the diagnosis of Alzheimer's disease. Journal of Materials Chemistry B. 2020;8. doi: 10.1039/d0tb01101k.
  2. Kumar S., Berriochoa Z., Ambati B. K., Fu Y. Angiographic features of transgenic mice with increased expression of human serine protease HTRA1 in retinal pigment epithelium. Investigative Ophthalmology & Visual Science. 2014;55(6):3842–3850. doi: 10.1167/iovs.13-13111.
  3. Tian J., Smith G., Guo H., et al. Modular machine learning for Alzheimer's disease classification from retinal vasculature. Scientific Reports. 2021;11(1):238. doi: 10.1038/s41598-020-80312-2.
  4. Fosnacht A. M. From brain disease to brain health: primary prevention of Alzheimer's disease and related disorders in a health system using an electronic medical record-based approach. J. Prev. Alzheimer's Dis. 2017;4(3):157–164. doi: 10.14283/jpad.2017.3.
  5. Doustar J., Rentsendorj A., Torbati T., et al. Parallels between retinal and brain pathology and response to immunotherapy in old, late-stage Alzheimer's disease mouse models. Aging Cell. 2020;19. doi: 10.1111/acel.13246.
  6. Salobrar-García E., Rodrigues-Neves A. C., Ramírez A. I., et al. Microglial activation in the retina of a triple-transgenic Alzheimer's disease mouse model (3xTg-AD). International Journal of Molecular Sciences. 2020;21. doi: 10.3390/ijms21030816.
  7. Naruhiko S., Jada Lewis N., Lewis J. Amyloid precursor protein and tau transgenic models of Alzheimer's disease: insights from the past and directions for the future. Future Neurology. 2010;5(3):411–420. doi: 10.2217/fnl.10.10.
  8. Bernardes R., Silva G., Chiquita S., Serranho P., Ambrósio A. F. Retinal biomarkers of Alzheimer's disease: insights from transgenic mouse models. Proceedings of the International Conference on Image Analysis and Recognition (ICIAR); 2017; Canada.
  9. Song G., Steelman Z. A., Finkelstein S., et al. Multimodal coherent imaging of retinal biomarkers of Alzheimer's disease in a mouse model. Scientific Reports. 2020;10. doi: 10.1038/s41598-020-64827-2.
  10. Sharafi S. M., Sylvestre J. P., Chevrefils C., et al. Vascular retinal biomarkers improves the detection of the likely cerebral amyloid status from hyperspectral retinal images. Alzheimer's & Dementia: Translational Research & Clinical Interventions. 2019;5(1):610–617. doi: 10.1016/j.trci.2019.09.006.
