Fig. 4. Overview of select semantic and radiomic features and human vision.
a First-order features describe the distribution of voxel values in a given region without encoding spatial relationships; such features can be modeled with histograms. Second-order features, often called “textures”, describe the statistical interrelationships between voxels in space and require more complex models. Higher-order features are obtained by applying a filter or other transformation to extract statistical relationships that may be abstract and lack visual correlates. Note that hand-crafted radiomic features are typically computed over a region of interest, which can be segmented manually or in a data-driven manner as part of the machine learning pipeline. For features at all levels, if the imaging data were acquired prospectively, radiomic analysis may include access to the raw (pre-reconstruction) data, whereas retrospective datasets typically lack the raw image data and must be analyzed in reconstructed form. *Hand-crafted features are specified in code, whereas deep-learning AI, as part of its training process, autonomously learns which features to model. b Key concepts in human visual perception. Information derived from refs. 31–40,55–58,67,68. Brain, neuron, cone cell, and DNA illustrations courtesy of National Institutes of Health “BioArt” (https://bioart.niaid.nih.gov/); other drawings are original. Abbreviations: FD Fractal Dimension, GBOR Gabor Filters, GLCM Gray-Level Co-occurrence Matrix, GLDM Gray-Level Dependence Matrix, GLRLM Gray-Level Run Length Matrix, GLSZM Gray-Level Size Zone Matrix, HOG Histogram of Oriented Gradients, LBP Local Binary Patterns, LoG Laplacian of Gaussian, LPQ Local Phase Quantization, MF Minkowski Functional, NGTDM Neighborhood Gray-Tone Difference Matrix, WT Wavelet Transform.
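As a minimal illustrative sketch (not part of the figure or any specific radiomics package), the distinction between first-order and second-order features can be shown in NumPy: first-order statistics summarize the intensity histogram of a region of interest and ignore spatial arrangement, whereas a gray-level co-occurrence matrix (GLCM) feature such as contrast depends on how neighboring voxels relate in space. Function names and the single horizontal-neighbor offset are simplifying assumptions; real pipelines aggregate over multiple offsets and directions.

```python
import numpy as np

def first_order_features(roi):
    """First-order statistics: properties of the intensity distribution
    in a region of interest, with no spatial information."""
    vals = roi.ravel().astype(float)
    mean = vals.mean()
    std = vals.std()
    # Skewness measures asymmetry of the intensity histogram.
    skew = float(np.mean((vals - mean) ** 3) / std**3) if std > 0 else 0.0
    return {"mean": float(mean), "std": float(std), "skewness": skew}

def glcm_contrast(roi, levels=8):
    """Second-order (texture) feature: GLCM contrast for one offset
    (each voxel paired with its right-hand neighbor)."""
    # Quantize intensities to a small number of gray levels (0..levels-1).
    edges = np.linspace(roi.min(), roi.max(), levels + 1)[1:-1]
    q = np.digitize(roi, edges)
    # Count co-occurrences of gray levels (i, j) for horizontal neighbors.
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    # Contrast weights each pair by the squared gray-level difference.
    idx = np.arange(levels)
    return float(np.sum(glcm * (idx[:, None] - idx[None, :]) ** 2))
```

A smooth horizontal gradient and a noisy region can share the same histogram (identical first-order features) yet differ sharply in GLCM contrast, which is the point the figure's first-order/second-order distinction makes.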
