Journal of Biomedical Optics
2023 Apr 5;28(4):040901. doi: 10.1117/1.JBO.28.4.040901

Compact and ultracompact spectral imagers: technology and applications in biomedical imaging

Minh H Tran a, Baowei Fei a,b,c,*
PMCID: PMC10075274  PMID: 37035031

Abstract.

Significance

Spectral imaging, which includes hyperspectral and multispectral imaging, can provide images in numerous wavelength bands within and beyond the visible light spectrum. Emerging technologies that enable compact, portable spectral imaging cameras can facilitate new applications in biomedical imaging.

Aim

With this review paper, researchers will (1) understand the technological trends of upcoming spectral cameras, (2) understand the new applications that portable spectral imaging has unlocked, and (3) evaluate proper spectral imaging systems for their specific applications.

Approach

We performed a comprehensive literature review in three databases (Scopus, PubMed, and Web of Science). We included only fully realized systems with definable dimensions. To best accommodate many different definitions of “compact,” we included a table of dimensions and weights for systems that met our definition.

Results

There is a wide variety of contributions from industry, academic, and hobbyist spaces. A variety of new engineering approaches, such as Fabry–Perot interferometers, spectrally resolved detector array (mosaic array), microelectro-mechanical systems, 3D printing, light-emitting diodes, and smartphones, were used in the construction of compact spectral imaging cameras. In bioimaging applications, these compact devices were used for in vivo and ex vivo diagnosis and surgical settings.

Conclusions

Compact and ultracompact spectral imagers are the future of spectral imaging systems. Researchers in the bioimaging field are building systems that are low-cost, fast in acquisition, and mobile enough to be handheld.

Keywords: camera, compact, hyperspectral imaging, multispectral imaging, spectral imaging

1. Introduction and Motivation

Light interacts with objects through various means of scattering, absorption, reflection, and transmission. In the late 18th to early 19th century, studies of light emitted from chemical flames and celestial bodies demonstrated that each chemical compound has a distinct fingerprint in its interactions with electromagnetic radiation. Nowadays, using instruments called spectrometers or spectrographs, incoming radiation can be separated by wavelength, and the resulting spectrum can be matched to determine the types and amounts of chemicals present. Whereas spectrometers typically analyze a compound within a limited field of view, spectral imaging expands the field of view to include the spatial morphology of the subject. Spectral imaging is thus more powerful than spectrometry: it reveals not just the types and amounts of chemicals but also their spatial distribution.1 As such, spectral imaging is used in almost every application that requires imaging, such as satellite imaging,2 agriculture,3 food science,4 and art.5 In biomedical imaging, spectral imaging has advantages over regular color imaging. The combination of spectral and morphological information provides a better understanding of physiological processes than RGB imaging or spectroscopy alone can achieve.6 Both Li et al.6 and Lu and Fei7 produced seminal literature reviews that discuss the different processes spectral imaging can reveal and help measure, such as metabolic processes; retinal oxygen saturation; tumors on the surface of the skin, tongue, and mucosa; and ischemia in the intestine and the brain. Because spectral imaging mostly captures reflected and scattered light at nonionizing wavelengths, invasiveness and potential harm are minimal.

The motivation for compact and lightweight spectral camera systems came from remote sensing: smaller and lighter cameras meant more space for other instruments. As spectral imaging was adapted to other fields, compact spectral cameras proved useful because they enabled on-the-spot sample acquisition and analysis without cumbersome setup. Since the first generations of spectral cameras in the 1970s, innovations in compact spectrometry, manufacturing processes, material science, and computation have enabled compact and ultracompact spectral imaging systems.8 Innovations came from both industry and academia: while compact commercial devices were developed using proprietary solutions, many devices used within academia relied on low-cost, commercial-off-the-shelf (COTS) components. Since the last review of medical hyperspectral imaging from our research group,7 several new applications of spectral imaging in the medical and biological fields have become possible due to the development of compact and ultracompact spectral cameras. And yet, spectral imaging devices are still not widely used in the biomedical field.6,9 With this review paper, we hope that researchers can (1) understand the technological trends of upcoming spectral cameras, (2) understand the new applications that portable spectral imaging has unlocked, and (3) evaluate proper spectral imaging systems for their specific applications. Section 4 reviews the acquisition methods. Section 5 discusses components that enable the miniaturization of spectral cameras. In Sec. 6, we focus on a special subset of compact spectral cameras: those that are both compact and low-cost, built from off-the-shelf components using low-cost manufacturing processes. In Sec. 7, we provide specific applications of compact spectral cameras in biomedical research. Finally, in Sec. 8, we provide an extended discussion on the future of compact spectral cameras in terms of engineering and biomedical applications.

2. Scope and Methodology

In this paper, the term “spectral imager” refers to both hyperspectral imagers (HSIs) and multispectral imagers (MSIs). In the early years of spectral imaging research, the term “imaging spectrometer” was also common.10–15 In general, MSIs capture <20 bands of wavelengths, whereas HSIs can capture 20 to hundreds of bands.6 Some literature uses the term “ultraspectral imaging,” which refers to systems that collect hundreds to thousands of bands.16 Spectral imaging captures light reflected, scattered, and fluoresced from a sample, as in conventional imaging. We do not cover other imaging modalities that also rely on the spectral response of tissues, such as laser speckle contrast imaging (LSCI),17 Raman spectroscopy,18 and optical coherence tomography.19 Readers should not confuse spectral imaging with multispectral photoacoustic imaging, which uses the formation of sound waves following light absorption to image at different wavelengths.20 We only discuss cameras using digital sensors, as research in analog spectral cameras is almost nonexistent. Finally, the definition of “compact” as used in the literature varies by field. Table 1 displays dimensions and weights for systems that meet our definition of “compact”: without external lenses or cables, these systems weigh no more than 5 kg (11 lbs). We also define “ultracompact” cameras as systems that weigh <500 g (1.1 lbs). For comparison, commercial high-end digital single-lens reflex (DSLR) cameras typically weigh <2.5 kg (5.5 lbs), midrange commercial webcams weigh 80 to 100 g (3 to 4 oz), and modern handheld spectrometers usually weigh no more than 1 kg (2.2 lbs).8 Our findings are summarized in Table 1. In this paper, we use the notation h×w×λ to denote the pixel raster size of the hypercube: h and w refer to the height and width in the spatial dimensions, and λ refers to the number of bands in the spectral dimension.

Table 1.

Compilation of compact spectral imaging systems used in medical and biological applications. Cust., customized or commercial-off-the-shelf systems; Comm., commercialized systems that can be purchased; LED, light-emitting diode; PGP, prism-grating-prism; AOTF, acousto-optic tunable filter; IMS, image mapping spectrometer; LCTF, liquid crystal tunable filter; LVF, linear variable filter; FPI, Fabry–Perot interferometer; SRDA, spectrally resolved detector array; CTIS, computed tomographic imaging spectrometer; SFDI, spatial frequency-domain imaging; CCD, charge-coupled device; CMOS, complementary metal oxide semiconductor; CVD, cardiovascular disease; AMD, age-related macular degeneration; E, ex vivo; I, in vivo; H, human; A, animal; P, phantom. Weight refers to the weight of the camera and the lens only, not including other optical systems that may be used in the acquisition of biological samples.

  System specifications Biomedical application  
Year System Acquisition mode Dispersive device Range (nm) # Bands Sensor Weight (kg) Procedure Disease/process Organs I/E Subject Ref.
2001 Cust. Spectral Interference filters 400 to 635 4 CCD <2.0 Monitoring Basal cell carcinoma, Bowen’s diseases Skin I H 21
2016 Cust. Spectral Filter wheel 460 to 690 9 to 20 CMOS 0.13 Diagnosis Melanoma Skin I H 22
2017 Comm. Spatial FPI mosaic 600 to 1000 100 CMOS <0.5 Imaging Cancer Skin I A 23
2017 Comm. Spectral LED 414 to 995 8 CCD 0.5 Diagnosis Melanoma Skin I H 24
2018 Cust. Spectral LED 995 to 1613 6 InGaAs <1.0 Diagnosis Melanoma Skin I H 25
2018 Cust. Spectral LED 405 to 964 4 NA <1.0 Diagnosis Melanoma Skin I H 26
2019 Cust. Snapshot LVF 400 to 700 9 CMOS <0.5 Monitoring Skin cancer Skin I H 27
2020 Cust. Spectral LED 405 to 940 9 CMOS <1.0 Monitoring Skin cancer, erythema Skin I H 28
2017 Cust. Spectral LED 448 to 659 3 CMOS <0.5 Diagnosis Nevi, hemangiomas, seborrheic keratosis Skin I H 29
2011 Cust. Snapshot Mosaic filter 540 to 970 4 CMOS, CCD 0.1 Monitoring Skin ulcer wounds Skin I H 30
2017 Comm. Snapshot Holographic grating 600 to 1000 32 NA 0.36 Monitoring Skin ulcer wounds Skin I H 31
2012 Comm. Spectral AOTF 550 to 1000 NA CCD <1.0 Imaging Skin E A 32
2017 Comm. Spatial Grating 500 to 980 100 CMOS <1.5 Monitoring Wound healing Skin I H 33
2018 Comm. Spatial Transmission grating 500 to 1000 100 CMOS 0.45 Monitoring Wound healing Skin I H 34
2021 Comm. Spatial Transmission grating 500 to 1000 100 CMOS 0.45 Monitoring Burn wounds Skin I H 35
2021 Comm. Spatial Transmission grating 500 to 1000 100 CMOS 0.45 Monitoring Wound healing Skin I H 36
2018 Cust. Spectral LED 395 to 940 13 CMOS <2.0 Diagnosis Erythema Skin I H 37
2019 Cust. Spectral LED 453 to 663 8 CMOS <0.5 Diagnosis Seborrheic dermatitis, psoriasis Skin I H 38
2019 Comm. Snapshot SRDA 450 to 950 41 CCD <0.5 Diagnosis Necrosis Skin I H 39
2019 Cust. Snapshot SRDA 470 to 630 16 CMOS <0.5 Monitoring, diagnosis Melanoma, vascular occlusion Skin I H 40
2019 Comm. Spectral FPI interferometer 500 to 900 NA CMOS 0.99 Diagnosis, monitoring Vascular occlusion Skin I H 41
2020 Cust. Snapshot Wiener estimation 482 to 506 16 CMOS <0.5 Monitoring Melanoma, vascular occlusion, CVD Skin I H 42
2018 Comm. Snapshot Transmission grating 475 to 875 40 CMOS 0.36 Imaging   E H 43
2013 Comm. Spectral Filter wheel 475 to 850 5 CCD 0.68 Detection Veins I H 44
2012 Comm. Spectral AOTF 550 to 1000 NA CCD <1.0 Diagnosis Head and neck cancer Tongue I H 45
2013 Cust. Snapshot IMS 471 to 667 41 CCD <1.0 Diagnosis Head and neck cancer Oral cavity I H 46
2020 Comm. Spatial SRDA 470 to 900 150+ CMOS 0.58 Diagnosis Head and neck cancer Thyroid, tongue, esophagus E H 47
2016 Comm. Spatial Grating 405 to 750 72 CMOS NA Diagnosis Colorectal cancer Colon I, E H 48
2019 Cust. Spatial LED 660 to 950 9 CMOS NA Diagnosis Colorectal cancer Colon E H 49
2018 Cust. Spectral SFDI 690 to 950 9 CMOS <1.0 Diagnosis Ovarian cancer Ovaries E H, P 50
2016 Cust. Spectral LED NA 10 CMOS 0.6 Diagnosis Cervical cancer Cervix I H 51,52
2019 Comm. Snapshot SRDA 450 to 950 41 CCD <0.5 Diagnosis Breast cancer Breast tumor E H 53
2017 Comm. Snapshot Mosaic filter 460 to 630 16 CMOS 0.4 Diagnosis Diabetic retinopathy Retina I H 54
2017 Comm. Snapshot FPI mosaic 460 to 630 16 CMOS <0.5 Diagnosis Diabetic retinopathy Retina I H 55
2016 Cust. Spatial Grating 480 to 705 NA CCD <1.0 Diagnosis Alzheimer’s Retina I A 56
2019 Comm. Spatial PGP 400 to 1000 467 CCD 1.1 Diagnosis Alzheimer’s Retina I H 57
2011 Comm. Snapshot CTIS 420 to 720 76 CMOS <2.0 Diagnosis AMD Retina I H 58
2020 Cust. Spectral LED 405 to 700 9 CMOS <0.5 Diagnosis Chronic otitis media Ear E, I P, H 59
2019 Comm. Spatial Transmission grating 400 to 1000 1004 CCD 0.68 Imaging   E H 60
2011 Comm. Spectral LCTF 400 to 720 NA CCD 0.375 Surgery Tooth implant Tooth I H 61
2016 Comm. Spectral LCTF 400 to 1700 NA CCD, InGaAs 0.375 Surgery Parotidectomy, colectomy, cystectomy Facial nerve, colon, ureter. I H 62
2015 Comm. Spectral LCTF 400 to 720 NA CCD 0.375 Surgery Ischemia Small bowel I H 63
2016 Comm. Snapshot FPI mosaic 481 to 632 16 CMOS 0.4 Surgery Epilepsy Cortex I, E H, P 64
2016 Cust. Spectral Spectral wheel 470 to 700 8 CCD <2.0 Surgery Colorectal cancer Liver, gallbladder, colon, kidney E A 65
2019 Comm. Spatial PGP 400 to 1000 768 CMOS 1.8 Surgery Colorectal cancer Colon E H 66
Comm. Spatial PGP 900 to 1700 240 InGaAs 4.4
2010 Comm. Spatial PGP 400 to 1000 NA CCD 1.1 Surgery Ischemia Intestine I H 67
Comm. Spatial PGP 900 to 1700 NA InGaAs 1.5
2019 Comm. Spatial Transmission grating 500 to 1000 100 CMOS 0.45 Surgery Colorectal resection Colon E H 68
2018 Comm. Spatial Transmission grating 500 to 1000 100 CMOS 0.45 Surgery Anastomotic insufficiency Colon E H 69
2020 Cust. Spatial Transmission grating 500 to 1000 100 CMOS 0.26 Surgery Head and neck cancer Esophagus E P, H 70

Our literature search used the combination of the following search terms in three databases (Scopus, PubMed, and Web of Science): “compact” OR “miniature,” “hyperspectral” OR “multispectral,” and “camera” OR “imager.” We included only fully realized systems with definable dimensions and excluded developments in single components. There was a wide variety of contributions from industry, academia, and the hobbyist space.

3. Historical Progress

The earliest applications of spectral imaging systems were point scan cameras used for Earth remote sensing.71 A point scan camera called the Multispectral Scanner System (MSS) was used onboard the Earth Resources Technology Satellite (later called LANDSAT-1) in 1972.2 The system provided invaluable satellite images for the purpose of identifying, managing, and surveying geographical resources. To relay light, the system used a set of 24 fiber optic cables that transmitted light to the detectors. Because charge-coupled devices (CCDs) were not available at that time, the detectors were photomultiplier tubes: extremely sensitive vacuum tubes that generate a voltage upon exposure to radiation in the visible to near-infrared range. Because of the extra components, MSSs were extremely bulky by today’s standards. The four-band system weighed up to 48 kg, measured 40×59×89 cm, and consumed up to 42 W of power.72 Nevertheless, its success prompted NASA and the Jet Propulsion Laboratory to develop multispectral cameras for future LANDSAT missions. In 1975, NASA launched LANDSAT-2 and equipped it with an even larger (64 kg, 54×62×126 cm) five-band MSS with added infrared capability. In 1979, NASA developed the airborne imaging spectrometer (AIS), which included a 32×32-pixel mercury cadmium telluride imaging sensor coupled with a silicon CCD multiplexer. AIS had a compact design intended to be flown on aircraft, measuring 30×30×20 cm.2 A series of AIS instruments followed, the most successful being the airborne visible/infrared imaging spectrometer (AVIRIS).
While new satellite-based spectral systems were being developed, hyperspectral imagery was deemed unnecessary for most space-based applications, and the only hyperspectral system that enjoyed long-term usage was the 202-wavelength Hyperion camera onboard Earth Observing-1 (EO-1).2 At the same time, commercial hyperspectral cameras for land observation were developed with new imaging sensor technologies. An example was the compact airborne spectrographic imager (CASI), developed in 1989. CASI used a 512×288-pixel CCD to record up to 512 spatial pixels or 288 spectral bands in the 430 to 780 nm range.73

Up until 2000, progress in spectral imaging left much to be desired, especially compared with parallel progress in commercial cameras and other consumer electronics.74 Development of compact spectral imaging systems prospered after the 2000s, owing to developments in portable spectroscopy.8 These developments did not stem from any new optical architecture but rather from manufacturing methods that matured enough to create miniature imaging components. The number of transistors in an integrated circuit has doubled roughly every two years for the last 50 years, a trend colloquially known as Moore’s law. Smaller imaging sensors that followed Moore’s law, lithography processes, microelectro-mechanical systems (MEMS), microcontrollers, and 3D printers were some of the innovations that made new spectral cameras smaller and lighter. With advances in consumer unmanned aerial vehicles (UAVs, or drones), ultracompact spectral imaging systems were developed to be carried by UAVs.75 Modern UAVs have payload limits ranging from 125 g to 5.5 kg, limiting the maximum weight of compact spectral cameras.76 However, UAVs were not the only impetus for compact and ultracompact imaging systems. As other fields, such as biomedical imaging, industrial imaging, and environmental monitoring, moved from laboratory analysis to on-site imaging, the demand for compact spectral cameras grew. In the early phase of development, tradeoffs were often required between performance and size. However, new generations of ultracompact devices seem to overcome those limitations altogether, as demonstrated by a recent line scan imaging system that weighed only 400 g (0.9 lbs) yet was capable of capturing 100 spectral bands at a resolution of <5 nm.77 Figures 1 and 2 show a sample of spectral cameras over time to illustrate the progress in size.

Fig. 1.

Fig. 1

Timeline of the progress in size for spectral imaging. 1. MSS-1, 1972. 2. MSS-2, 1975. 3. CASI, 1989. 4. Digital airborne imaging spectrometer, 1993. 5. Compact high-resolution imaging spectrometer, 2000. 6. Headwall Hyperspec, 2009. 7. Resonon PikaL, 2012. 8. BaySpec OCI-1000, 2014. 9. Imec Snapscan, 2018. 10. Specim IQ, 2018. After the 2000s, spectral imaging systems became remarkably smaller due to manufacturing advances. All selected imaging systems required motion of the camera or the sensor to acquire the spectral cube.

Fig. 2.

Fig. 2

A selection of ultracompact spectral imagers. (a) Two snapshot imaging cameras that weighed <30  g (reproduced from Ref. 76). (b) A handheld snapshot camera that weighed <500  g (reproduced from Ref. 77). (c) A spatial scanning camera used in UAV applications (reproduced from Ref. 78). (d) A snapscan imaging camera being used in a laboratory setting (reproduced from Ref. 79).

4. Acquisition Overview

The goal of spectral imaging is to acquire a datacube, a 3D block of data with two spatial dimensions and one spectral dimension (Fig. 3). In a datacube, the unit of smallest resolution is called a voxel, the equivalent of a pixel in digital images. The spatial dimensions represent the field of view of interest, and the spectral dimension represents the different wavelengths. There are five methods of acquiring the datacube, classified by how much of the datacube is captured within one exposure: point scanning, line scanning, spatial–spectral scanning, spectral scanning, and snapshot imaging.80 Because spatial–spectral scanning relies on translation to image the entire datacube, we group spatial–spectral scanning as a subset of line scanning.81 Acquisition methods can also be classified by how they acquire the spectral component: through interference filters, monochromators, or interferometers.80 Interferometers and interference filters use the same optical mechanism: both consist of devices that superimpose light and produce interference patterns. However, interference filters typically use interference to block certain wavelengths of light, whereas interferometers use interference to generate signals that can be measured and processed downstream. Each method offers its own engineering strengths and weaknesses, so choosing an appropriate acquisition method for a specific application is important.
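The geometry of these acquisition modes can be illustrated with array slicing. A minimal sketch (the datacube shape and index values below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical datacube: h x w x lambda = 240 x 320 x 100 voxels
h, w, n_bands = 240, 320, 100
cube = np.zeros((h, w, n_bands))

# One exposure captures a different slice of the cube depending on the method:
point = cube[120, 160, :]   # point scan: full spectrum of a single pixel
line = cube[:, 160, :]      # line scan: full spectrum of one spatial line
band = cube[:, :, 42]       # spectral scan: the whole scene at one wavelength
snap = cube                 # snapshot: the entire datacube in one exposure

print(point.shape, line.shape, band.shape, snap.shape)
# (100,) (240, 100) (240, 320) (240, 320, 100)
```

Spatial–spectral scanning would instead sample a diagonal slice, pairing each spatial line with a different wavelength offset, and a scanning motion fills in the rest of the cube.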

Fig. 3.

Fig. 3

Comparison of different methods of datacube acquisitions. The shaded regions correspond to the section captured in one exposure. The arrows show the direction of scanning. (a) The hypercube. (b) Point scanning. (c) Line scanning. (d) Spectral scanning. (e) Spatial–spectral scanning. (f) Snapshot scanning.

4.1. Point Scanning and Line Scanning

Point scanning and line scanning, collectively called spatial scanning or imaging spectrography, acquire the entire spectrum section by section and use mechanical means to scan the entire scene. These methods are also known as “whiskbroom”/“spotlight” and “push-broom,” respectively, terminology that originated in satellite imaging. In a point scanning imager, an aperture only allows light from a small section to pass through a monochromator, which disperses the light onto a sensor array. The method of scanning point by point in whiskbroom imagers brings to mind confocal microscopy, in which a pinhole is also used to block light that is out of focus. Hyperspectral confocal microscopy combines both methods with little modification, producing 3D structures that can be analyzed by spectral values.82,83 In a line scanning imager, the aperture is a slit that allows a sliver of light to be dispersed onto a two-dimensional (2D) sensor plane (the direction of dispersion is perpendicular to the direction of the slit). Compared with point scanning, line scanning can capture the same field of view in less time, but the higher number of sensor elements means there are potentially more elements to calibrate and a higher chance of sensor artifacts.6 Line scanning remains the dominant method of acquisition in biomedical imaging7 and remote sensing.81

As mentioned, the use of spectral imaging for practical purposes began with the MSS in 1972. In this system, the point-scanning imager used a mirror and the motion of the spacecraft to achieve perpendicular and parallel scanning, respectively.84 Nowadays, in the field of remote sensing, mirrors combined with the motion of the aircraft are still used as the means of acquiring the full spatial component.71,85 By contrast, in histology, fluorescence, and confocal microscopy imaging, the subject being imaged is moved with the help of programmable platforms.60 Several line scanning cameras that used mirrors to image nearby objects have been proposed; however, these devices all required spatial calibration to avoid image distortions.86,87 In the works by Sigernes et al.86 and Gutiérrez-Gutiérrez et al.,87 the mirror is on a rotational axis parallel to the line scanning slit. When the mirror rotates, only a line section is reflected and focused onto the dispersive device.

4.2. Spectral Scanning

In spectral scanning (also called staring, framing, or band sequential), the datacube is captured spatially all at once but only at selected wavelengths. Wavelength selection is done using bandpass spectral filters, which can be interference filters, variable filters, or interferometers. Because no filter has an infinitely small bandwidth, the image captured at each wavelength should be considered a function of the filter’s spectral response, quantified by the following equation:

I(i) = ∫₀^∞ mi(λ) r(λ) dλ.

In this equation, I(i) is the intensity of the datacube captured by filter i, mi(λ) is the spectral response of the filter, and r(λ) is the aggregated spectrum that reaches the imager. Some early spectral scanning systems used filter wheels, in which interference filters could be switched in and out. As the name suggests, the filters were arranged in a circle; during capture, the entire wheel rotated and cycled through every waveband. The advantages of filter wheels included cost and simplicity; the disadvantages included speed, size, lack of customization, and a small number of bands. An alternative to mechanical filter wheels was electronic tunable filters, mainly acousto-optic tunable filters (AOTF),88–90 liquid crystal tunable filters (LCTF),62,63,91–93 and Fabry–Perot interferometers (FPI). Other mechanisms for manufacturing tunable filters exist, such as surface plasmon coupled tunable filters;94 however, the three types described earlier are still the most popular. LCTF uses birefringent crystals to bandpass light: birefringent crystals and polarizing filters are stacked in alternating layers, and modifying the crystals’ retardance with electrical inputs changes the polarization state of the output so that only selected wavelengths pass.91 AOTF uses a crystal that selectively bandpasses wavelengths based on radio frequency inputs; it functions similarly to a diffraction grating but passes only one constructive waveband.95 Compared with LCTF, AOTF has faster switching times and lower power demand.93 It also has no moving components, making it the preferred method for vibration-sensitive applications.93 Interferometers refer to a broad class of spectral imaging layouts that use different reflective-surface arrangements to produce interference, such as the Michelson interferometer, the Sagnac interferometer, the Mach–Zehnder interferometer, and the FPI.96
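The filter-response integral above can be approximated numerically once mi(λ) and r(λ) are sampled on a wavelength grid. A minimal sketch, assuming a hypothetical Gaussian filter and a flat incoming spectrum (both are placeholders, not measured curves):

```python
import numpy as np

# Wavelength grid in nm and a hypothetical Gaussian filter response mi(lambda)
lam = np.linspace(400.0, 700.0, 1201)               # 0.25 nm spacing
fwhm = 20.0                                         # assumed filter FWHM, nm
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # convert FWHM to std dev
m_i = np.exp(-0.5 * ((lam - 550.0) / sigma) ** 2)   # filter centered at 550 nm
r = np.ones_like(lam)                               # flat incoming spectrum r(lambda)

# I(i) = integral of mi(lambda) * r(lambda) d(lambda), as a Riemann sum
I_i = np.sum(m_i * r) * (lam[1] - lam[0])
# For a Gaussian filter on a flat spectrum this approaches sigma*sqrt(2*pi) ~ 21.3
```

The same sum, applied per filter, shows why narrower bandwidths reduce light throughput: shrinking the FWHM shrinks the integrated intensity I(i) proportionally.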

4.3. Spatial-Spectral Scanning

Spatial–spectral scanning (also called windowing) captures the datacube in both the spatial and spectral directions within a single exposure, effectively sampling a diagonal “slice” of the datacube. Even though the datacube reconstruction is harder to visualize in spatial–spectral scanning, this method offers advantages in acquisition speed and movement. Methods of spatial–spectral scanning often use dispersive elements that are location-dependent, such as linear variable filters (LVFs).97 With a simple grating element in the fore optics, a line scanning spectral camera can be converted into a spatial–spectral scanning camera.98 The grating element transmits light that passes through the aperture slit based on both spatial and spectral values. Pichette et al.79 developed a commercial device called the “Snapscan” imager that used location-dependent interference filters. In their system, a series of Fabry–Perot filters overlaid the imaging sensor. The bandpass values of the filters varied along the translation direction, so moving the sensor sampled both the spatial and spectral components. A Fourier transform spectrometer can also be used as a mechanism to achieve spatial–spectral scanning.13,99

4.4. Snapshot Scanning

Snapshot scanning cameras capture the entire spatial structure at multiple wavelengths in one exposure.100 The implementations of snapshot cameras are extremely diverse and can be classified into the following main categories: dispersive-based methods, coded-aperture methods, speckle-based methods, and spectrally resolved detector methods.101 Dispersive-based methods rely on dispersive elements (gratings or prisms) to split the incoming image into different wavelengths that are recorded either on a single sensor or on multiple sensors. An example of a system that uses multiple sensors to record the spectral images is the beam splitting camera.100 These types of cameras are often seen in television and movie production, although they only have three sensors for red, green, and blue images. An example of a system that records the entire datacube on a single sensor is the computed tomographic imaging spectrometer (CTIS). CTIS is popular in the hobbyist and academic space, as it is easy to construct with low-cost components.102–104 However, reconstruction of the datacube using CTIS requires inverse projection, which can be computationally expensive. Another problem is that CTIS records both the spatial and spectral data on a single sensor, which requires a tradeoff between spatial and spectral resolution. Coded-aperture methods use a patterned filter (mask/coded aperture) at the location of the aperture, which “encodes” the incoming wavelengths by lossy compression. The compressed data can record more light compared with CTIS, but this also means that some of the wavelengths will be lost.105 Speckle-based systems reconstruct the datacube by correlating the speckle data and the wavelengths. Spectrally resolved detector arrays (SRDA)39,53–55 use interference filters manufactured on top of an imaging sensor to capture snapshot images.
For example, color filter arrays, the most popular of which are Bayer filters, are used in consumer digital cameras to simulate human color vision.106 Snapshot cameras that use SRDA are fast, compact, lightweight, and require no additional movement mechanics. However, they downsample the spatial dimension, which can result in aliasing of the spatial data if the Nyquist limit is not obeyed. There is also a tradeoff between the number of filters and the spatial pixel resolution: the more distinct filters are used, the lower the per-band spatial resolution and light throughput.100 Readers interested in snapshot imaging implementations should consult the literature review by Hagen and Kudenov.100
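The spatial downsampling of a mosaic SRDA can be seen by recovering the per-band subimages from the raw frame with strided slicing. A minimal sketch for a hypothetical 4×4 mosaic (16 filters); a real system would also calibrate the filter responses and interpolate the subimages:

```python
import numpy as np

# Hypothetical raw frame from a 640x480 sensor with a repeating 4x4 filter tile
raw = np.arange(480 * 640, dtype=float).reshape(480, 640)
tile = 4  # mosaic period: 4x4 = 16 distinct filters

# Each filter position (i, j) in the tile yields one downsampled band image
bands = np.stack([raw[i::tile, j::tile]
                  for i in range(tile) for j in range(tile)], axis=-1)

print(bands.shape)  # (120, 160, 16): spatial resolution drops by the tile size
```

This makes the tradeoff concrete: going from 16 to 25 filters (a 5×5 tile) would shrink each band image further, and each band also receives a smaller fraction of the incoming light.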

4.5. Conversion from RGB to Spectral Images

There exists a separate line of research that generates spectral images without spectral cameras. Regular color cameras use three filters in the red, green, and blue (RGB) wavelengths to simulate human color vision. However, these filters are broadband filters with a large amount of overlap, and transformation algorithms can be used to extrapolate multispectral or hyperspectral data. Furthermore, hyperspectral data are often sparse, making recovery of spectral data more feasible.107 All conversion methods generate a mapping function that transforms three-wavelength data into multiwavelength data. The function is approximated by minimizing the differences between generated spectral data and expected spectral images. One of the first methods used for this was Wiener estimation.42,108,109 Arad and Ben-Shahar107 used a matching pursuit algorithm to construct a mapping from natural RGB to hyperspectral images. In recent years, machine learning algorithms, including neural networks, have been investigated to learn the mapping. Koundinya et al.110 proposed 2D and 3D convolutional neural network (CNN) systems that mapped RGB images to 31-waveband images in the 400 to 700 nm range. Alvarez-Gila et al.111 used generative adversarial networks (GANs) to produce spectral images from RGB. All these methods still face significant constraints. As noted by Signoroni et al.112 in a review of neural networks for spectral image analysis, several of these generation methods can only produce outputs in the visible light range. Furthermore, they require extensive training inputs of RGB images and corresponding spectral images, the latter of which themselves require a spectral camera to capture. Most input images used for training were outdoor images, which can make the mapping unsuitable for indoor or laboratory settings.
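Wiener estimation, the earliest of these methods, amounts to a regularized linear mapping fit on paired RGB and spectral samples. A minimal sketch with synthetic data (the sensitivities and spectra below are randomly generated placeholders, not measured curves, and the regularization constant is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 500 spectra with 31 bands (columns of S)
n_bands, n_samples = 31, 500
S = rng.random((n_bands, n_samples))     # ground-truth spectra
M = rng.random((3, n_bands))             # assumed broadband RGB sensitivities
C = M @ S                                # simulated RGB responses, 3 x 500

# Wiener-style estimator: W = S C^T (C C^T + eps I)^-1 maps RGB -> spectra
eps = 1e-6
W = S @ C.T @ np.linalg.inv(C @ C.T + eps * np.eye(3))

S_hat = W @ C                            # recovered spectra (a rank-3 estimate)
print(W.shape)  # (31, 3)
```

Because W has rank at most 3, the recovery is only an approximation; this is the same limitation the neural network methods above try to overcome with learned nonlinear mappings and spectral priors.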

4.6. Comparisons of Acquisition Methods

The rule of thumb has been that spatial scanning cameras can achieve high spatial and spectral resolutions at the cost of acquisition time, snapshot cameras are fast, and spectral cameras achieve high spatial resolution while trading between acquisition time and spectral resolution.6 Considering the state-of-the-art available, it is important to re-examine this convention. We start by comparing two key spectral variables: the spectral resolution (defined as the full-width half-maximum or FWHM value at center wavelength8) and the spectral range. In this paper, we use the term “bandwidth” interchangeably with FWHM. Some authors such as Hagen and Kudenov100 found the term “resolution” in digital spectral cameras to be confusing as it can also be used to describe the number of spectral samples. Point scan and line scan systems have reliable spectral resolution: many commercial systems have FWHM in the range of 2.5 to 5 nm and are capable of capturing more than 100 spectral bands at once.81 One line scan system for remote sensing reported FWHM as narrow as 1.85 nm.113 Spectral scanning systems that use filter wheels typically achieved 3 to 10 bands38,114116 and each individual filter had bandwidth ranging anywhere from 30 to 250 nm.117 On the other hand, tunable filters can achieve very fine spectral resolution. AOTF could achieve a bandwidth of 0.1 nm;118 however, commercially available AOTF filters often had a minimum FWHM in the 2 to 6 nm range.91,119 LCTF systems often had larger spectral resolution compared with AOTF, ranging from 4 to 30 nm.91 Interferometers typically achieve good spectral resolution: FPI that were driven by piezoelectric had spectral bandwidth in the range of 10 to 20 nm.120 The FWHM also affects the light throughput, as interference filters with smaller bandwidths have lower light throughput. Some filters’ specification uses the term “optical density” to quantify the amount of energy blocked. 
Applications with low signal levels, such as autofluorescence imaging, require an optical density of 6 or greater. However, a low signal also means a low signal-to-noise ratio (SNR), which degrades image quality.
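
Optical density relates to transmittance as OD = -log10(T); a short helper makes the scale concrete:

```python
import math

# Optical density (OD) quantifies how strongly a filter blocks light:
# OD = -log10(T), where T is the transmittance of the blocked light.
def od_to_transmittance(od: float) -> float:
    return 10.0 ** (-od)

def transmittance_to_od(t: float) -> float:
    return -math.log10(t)

# An OD-6 blocking filter passes about one part in a million
print(od_to_transmittance(6.0))
```

An OD-6 filter therefore transmits only about 0.0001% of the rejected light, which is why such filters are paired with sensitive detectors.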

In terms of functional range, the wavelength coverage of these devices depends more on the sensor type than on the acquisition method. The functional range of silicon-based sensors is constrained to 400 to 1000 nm because of silicon’s photoelectric properties. If researchers want to investigate wavelengths in the short-wave infrared (SWIR) and beyond (1000 to 12,000 nm), an alternative semiconductor material such as indium gallium arsenide (InGaAs, also named GaInAs) should be used. In terms of spectral wavelength selection, LCTF, AOTF, and FPIs provide an advantage over spatial scanning, snapshot scanning, and filter wheels: the former can be tuned to arbitrary wavelengths, whereas the latter offer only discrete wavelength selection. In many cases, specific selection of center wavelengths is not a major concern, so this advantage is not often exploited.

The spatial pixel resolution of spatial scanning and spectral scanning systems depends mostly on the sensor resolution and the binning (grouping of several pixels into one) used. In snapshot cameras, the optical architecture determines the maximum pixel resolution available. If the snapshot architecture captures the entire datacube representation on a single detector plane, as in the CTIS, IMS, and SRDA architectures, then the spatial resolution is often poor. Ford et al.12 used a 2048×2048 pixels detector to capture a datacube of dimension 203×203×55 using CTIS. A recent CTIS system captured up to 315 wavebands but had a pixel resolution of only 116×110 pixels.102 Many other commercial snapshot cameras can capture 100 or more wavebands but have spatial pixel resolutions of no more than 100×100.81 The per-line readout of point scanning, line scanning, and spatial–spectral scanning systems can be rapid: a recent series of line scanning cameras for industrial settings was capable of scanning 576 spatial lines/s or 2880 spatial–spectral elements/s.121 Because spatial and spatial–spectral scanning systems are used in many industrial and remote sensing environments, their overall acquisition speed also depends on the translation speed of the camera or the samples. The acquisition time of spectral scanning systems depends on the switching time of the filters. Filter wheels achieve switching times on the order of seconds, LCTF on the order of 50 to 500 ms, and AOTF on the order of 10 to 50 μs.93,100 As for FPI, the switching time ranges from 5 to 50 ms,122 although <3 ms switching times have been reported.123 The fact that snapshot cameras can capture the datacube in one exposure does not necessarily mean that they have a low acquisition time.
Snapshot systems using mosaic filters can be very fast: one system was capable of capturing a 256×256×32 pixels datacube at a rate of 340 images per second.121 Processing time has also long been part of the consideration. Unlike spatial scanning, spectral and snapshot scanning methods all require postprocessing to stitch together the spectrum.74 Some methods require extensive processing to generate the datacube, such as CTIS (Fourier slice theorem) or Fourier transform spectroscopy (inverse Fourier transform). However, with advances in computing power, the postacquisition processing time needed to generate the datacube is becoming similar across all platforms.100

Light throughput, the amount of light in the datacube that the detector can measure, affects the SNR. It has long been known that, theoretically, some spectroscopy acquisition methods have higher light throughput than others, which in turn increases the SNR.100 However, among spatial and spatial–spectral scanning systems, the theoretical differences in SNR are not significant.14 Hagen et al.124 argued for the snapshot advantage, the increased throughput that comes from the fact that snapshot cameras capture the entire datacube at once. However, the snapshot advantage is only available for a select number of snapshot architectures, such as CTIS, IMS, and CASSI. In practice, throughput also depends on the filter transmission rate and the quantum efficiency (QE) of the sensors. Finally, some imaging applications are more suitable for certain acquisition methods. Confocal microscopy, for example, can only be coupled with point scanning imaging because only a small section of light is imaged.100

5. Technical Aspects of Compact Spectral Cameras

Components of a spectral imager include the optical system (such as a lens, endoscope, or microscope), the spectral dispersion system (such as a monochromator or interferometer), the digital image detector, the control module, and the mechanical elements (gears and housing).6 Because the spectral and optical systems depend on how the datacube is acquired, we group them by acquisition method instead. For some spectral imaging systems, illumination is also a critical component.

5.1. Spatial Scanning

Miniature spatial scanning systems use two main classes of architectures: reflective grating and transmission grating/prism.74 In both, after the light passes through the aperture, it is collimated (made parallel), then dispersed, and the individual wavelengths are then refocused onto the array detector.15 In the Czerny–Turner reflective grating configuration, a curved mirror acts as the collimating mirror and a second curved mirror refocuses the diffracted light onto the imaging sensor. In the Offner configuration, three concentric elements (collimating mirror, reflective grating mirror, and focusing mirror) make up the optical components [Fig. 4(a)]. Even though the Offner configuration provides better spectrographic ability, manufacturing it was not possible until the 1990s due to the lack of precision lithography technology.74 Offner spectrometers are commonly used due to their low aberration.127 Warren et al.126 used a monolithic block of glass as the transmitting medium for the Offner relay, reducing the volume and weight of the imager to 0.54 kg [Fig. 4(b)].

Fig. 4.

Fig. 4

Offner spectrograph is an example of a reflective grating configuration. (a) The working mechanism of an Offner spectrograph camera (adapted from Ref. 125). (b) An example of a device that uses Offner spectrograph (reproduced from Ref. 126).

Many low-cost and compact systems used prisms128–130 or transmission gratings70,85,102,131–133 as the monochromator. The problem is that these systems are prone to artifacts due to misalignment of the optical components.85 Key artifacts include chromatic aberration (the lens not focusing all wavelengths properly), smile (bending of the spectral line), keystone (bending of the spatial data), and stray light (unwanted light from other sources).134 Some optical architectures produce fewer artifacts; for example, Offner spectrometers have less smile and keystone than Czerny–Turner and transmission grating architectures.10,135 High-quality optical components, such as achromatic lenses, also reduce aberration to a degree, but they can increase the manufacturing cost of the device. Laboratory calibration can also be performed to reduce smile and keystone.127 In commercial spatial scanning systems, smile ranges between −0.1 and 0.1 pixels, and keystone reaches a maximum of 3.5 pixels at 1000 nm.136 A grism (prism and grating) can reduce some chromatic aberration and is a common monochromator in some compact spectral imagers.134,137 The prism–grating–prism (PGP) configuration was first seen in the work of Aikio74 as a method of building ultracompact push broom imagers. In a PGP, two identical prisms sandwich a volume transmission grating [Fig. 5(a)]. Compared with a prism or transmission grating alone, a PGP disperses light linearly, has high throughput, and is extremely robust.74 The greatest advantage of PGP, however, is the ability to disperse light within little space, allowing the development of miniaturized spectral imaging systems. Multiple compact spectral imagers used PGP as their dispersive element121,137,139–141 [Fig. 5(b)]. Some commercial systems also use high-quality manufactured transmission holographic gratings, such as the one reported by Wu et al.,77 which demonstrated an ultracompact line scan imager using a volume phase grating.
Table 2 summarizes the different common methods of dispersion used in spatial scanning imaging cameras.

Fig. 5.

Fig. 5

PGP is an example of a transmission grating configuration, where no reflective device was used. (a) The working mechanism of a PGP imaging camera (adapted from Ref. 125). (b) An example of a device that uses PGP (reproduced from Ref. 138).

Table 2.

Optical comparison among common types of spatial scanning imagers.

Type Collimating method Dispersion method Focusing method
Czerny–Turner spectrometer Concave mirror Flat reflective grating Concave mirrors
Offner spectrometer Concave mirror Convex reflective grating Concave mirrors
Prism–grating–prism Optical lenses A transmission grating between two prisms Optical lenses
Holographic grating Optical lenses Volume phase holographic transmission grating Optical lenses
Grism Optical lenses Transmission grating followed by prism Optical lenses

Due to their nature of acquisition, many laboratory-based spatial scanning imagers require either the stage or the camera to move. This makes them unsuitable for bioengineering applications such as surgical guidance or in vivo imaging.129 To develop spectral imagers that are compact and usable for live imaging, new spatial scanning acquisition techniques have been devised to overcome the movement problem. These techniques use internal microelectronic devices such as digital micromirror devices (DMD)142 and piezoelectric motors70,79,143 to move the imaging sensors or optical components. Several commercial systems, such as those shown by Wu et al.77 and Behmann et al.,144 are fast enough to capture accurate spatial data entirely handheld.

5.2. Spectral Scanning

Of the compact spectral scanning systems that we surveyed, tunable filters were preferred over filter wheels due to their small size and narrow bandwidth. Some have nevertheless succeeded in miniaturizing filter wheel systems; for example, Kim et al.22 applied a filter wheel with nine wavebands on top of a smartphone camera. While electrically tunable filters can achieve arbitrary waveband selection, efforts to miniaturize them have been hindered by the lack of suitable compact configurations. Both AOTF and LCTF require large external power and a long optical pathway, which makes them unsuitable candidates for ultracompact systems.94 While the filters in AOTF and LCTF systems are themselves compact and lightweight (often <1 kg),88,89,118,145,146 it is the weight of the filter driver that adds to the overall weight of these devices. Ishida et al.147 were able to deploy an LCTF-based system on a UAV; however, the UAV was only able to fly for 10 min due to the high payload weight. On the other hand, interferometric imaging spectrometers are becoming increasingly compact and have found their way into many applications. We discuss two primary types of compact spectral cameras that use interferometers as their dispersive elements: Fourier-transform imaging spectrometers (FTIS) and FPIs (Figs. 6 and 7). All the previously discussed systems used filters to select bands from broadband light sources. However, the light sources themselves can serve as the mechanism for spectral scanning. In such systems, an arrangement of narrow-band light sources illuminates the subject at different wavelengths, and the reflected light measured by the electronic detector is analogous to the spectral response of the subject. Light-emitting diodes (LEDs) are a common way to achieve the variable lighting needed for multispectral or hyperspectral systems. Various LEDs are discussed in Sec. 5.6.1. Table 3 compares the common methods used to produce spectral scanning imagers.

Fig. 6.

Fig. 6

FTIS. (a) Working mechanism of an FTIS system (adapted from Ref. 148). (b) Image of a compact FTIS camera that uses a focal plane birefringent interferometer (FPBI in the figure) in front of the detector (reproduced from Ref. 149).

Fig. 7.

Fig. 7

FPI. (a) Working mechanism of an FPI system (adapted from Ref. 150). (b) A compact FPI chip that is driven by electrostatic actuation (reproduced from Ref. 151).

Table 3.

Comparisons of several types of spectral scanning cameras. AOTF, acousto-optic tunable filter; LCTF, liquid crystal tunable filter; FTIS, Fourier transform imaging spectrometer; FPI, Fabry–Perot interferometer; FWHM, full-width at half-maximum (measured in nm). Wavelengths refer to the operating range of wavelengths. VIS, visible wavelengths (400 to 700 nm); NIR, near-infrared wavelengths (700 to 2000 nm). Switching speed refers to time to change from one wavelength to another one; ms stands for milliseconds (1/1000 s).

Type Wavelengths operating range FWHM (nm) Number of bands Switching speed
Filter wheel VIS-NIR 30 to 250 <50 1  s
LED-based VIS 20 to 70 <100 500  ms
AOTF VIS-NIR 2 to 6 >1000 1  ms
LCTF VIS-NIR 5 to 30 >1000 100  ms
FTIS VIS-NIR 10 to 50 >1000 10  ms
FPI VIS-NIR 10 to 50 >1000 10  ms

5.2.1. Fourier transform imaging spectrometers

An interferometer splits the incoming light wave into two waves that are then superimposed onto each other. The superimposition has a slight delay, causing a wave pattern (interferogram) whose magnitude depends on the delay. In the late 19th century, Albert Michelson showed that an inverse Fourier transform can convert the interferogram into the actual spectrum of the incoming light.152 This is the basis of Fourier transform spectrometry [Fig. 6(a)]. When detector noise dominates other noise sources, Fourier transform spectrometry has a higher SNR than dispersive spectrometers and higher throughput than slit-based spectrometers.153 However, previous imaging spectrometers that used mechanical interferometers (also called Michelson interferometers) suffered from many drawbacks. Mainly, they required accurate mechanical movement, making them unsuitable for field deployment.154 In recent years, birefringent crystals (which have different refractive indices depending on light polarization and propagation direction) have been used to build compact interferometers for FTIS.96,149,155 The majority of birefringent crystal schemes use either Wollaston or Savart prisms.155 Both optical schemes use prisms to separate polarized light and collimate it to introduce a delay. Perri et al.155 introduced a birefringent interferometer called the translating-wedge-based identical pulses encoding system and commercialized a compact hyperspectral camera using it. Xu et al.149 used a birefringent interferometer at the focal plane to produce an ultracompact spectral camera [Fig. 6(b)].
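
The recovery step can be illustrated numerically: an interferogram recorded as a function of optical path difference is Fourier-transformed to reveal the source spectrum. The two-line source and the sampling below are made-up demonstration values:

```python
import numpy as np

# An interferogram sampled versus optical path difference (OPD) is
# Fourier-transformed to recover the spectrum. The two spectral lines
# and the OPD sampling below are made-up demonstration values.
n, d = 1024, 0.1e-6                       # samples and OPD step (m)
opd = np.arange(n) * d
lines = np.array([1.5e6, 2.0e6])          # source wavenumbers (1/m)
amps = np.array([1.0, 0.5])

# Interferogram of a discrete-line source: a sum of cosines in OPD
interferogram = sum(a * np.cos(2 * np.pi * k * opd)
                    for a, k in zip(amps, lines))

# The FFT of the interferogram peaks at the source wavenumbers
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber_axis = np.fft.rfftfreq(n, d=d)
peak = wavenumber_axis[np.argmax(spectrum)]
print(peak)                               # near 1.5e6, the stronger line
```

A real instrument adds apodization and phase correction, but the inverse transform at the core is the same.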

5.2.2. Fabry–Perot interferometers

In its simplest form, an FPI is an arrangement of two parallel or curved mirrors that passes a wavelength band determined by the separation distance between the mirrors. Suppose two highly reflective surfaces are separated by a distance d in a medium with refractive index n; a collimated beam arriving at normal incidence will be transmitted, by constructive interference, at the wavelengths satisfying 2dn=mλ for m=1,2,3,…,156 with all other wavelengths being almost entirely reflected by the Fabry–Perot filter [Fig. 7(a)]. An FPI can thus become a tunable filter by varying the distance between the two mirrors. However, FPI filters transmit wavelengths periodically, and the spacing between two adjacent transmitted wavelengths is called the free spectral range (FSR):

FSR = λ²/(2dn).

In this equation, the FSR and the mirror separation are inversely related. This means that to increase the range of the FPI, a small value of d must be selected. If the FPI operates within the infrared region, this value can be as low as several microns.157 Like FTIS, FPIs have a throughput advantage over dispersive spatial scanning cameras. However, the realization of FPI in spectrometry is very recent because their fabrication requires highly reflective surfaces to narrow the spectral bandwidth. Instead of a single reflective medium, alternating high and low refractive index materials are arranged to create highly reflective Bragg mirrors.158,159 The reflectivity of the materials determines the FWHM of the filter through the finesse:

F = π√R/(1 - R),

where F is the finesse and R is the reflectivity of the cavity mirrors; the FWHM is the FSR divided by the finesse. If the distance between the mirrors is fixed, the FPI is called an etalon and is more often used in LVFs and snapshot scanning cameras (see Secs. 5.3.2 and 5.3.5). There are many methods to vary the distance between the mirrors. In piezo-actuated methods, piezo devices produce strong physical displacements when voltage is applied. In capacitive or electrostatically actuated methods, the moving plate is tensioned by springs and moves in response to electrostatic force157 [Fig. 7(b)]. FPI can be manufactured through photolithography and assembled through either surface micromachining or bulk micromachining (see Sec. 5.4 for more discussion). This manufacturing process enables ultracompact FPI filters. From the equations, it is important to note that the constructive interference in an FPI passes not only the central wavelength but also light at other interference orders satisfying the same condition. Appropriate long-pass and short-pass optical filters should therefore be used.54
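
The relationships above can be checked with back-of-envelope numbers. In this sketch, the 1-μm gap, unity refractive index, and 0.95 mirror reflectivity are illustrative assumptions, not a specific device:

```python
import math

# Back-of-envelope FPI numbers from the relationships above; the gap,
# refractive index, and reflectivity values are illustrative assumptions.
def fpi_orders(d_nm, n=1.0, lo=400.0, hi=1000.0):
    """Wavelengths (nm) transmitted by a gap d_nm (2*d*n = m*lambda) in [lo, hi]."""
    m, out = 1, []
    while True:
        lam = 2 * d_nm * n / m
        if lam < lo:
            break
        if lam <= hi:
            out.append(lam)
        m += 1
    return out

def free_spectral_range(lam_nm, d_nm, n=1.0):
    """FSR = lambda^2 / (2*d*n): spacing between adjacent transmitted orders."""
    return lam_nm ** 2 / (2 * n * d_nm)

def finesse(R):
    """F = pi*sqrt(R)/(1-R); the FWHM is approximately FSR / F."""
    return math.pi * math.sqrt(R) / (1 - R)

d = 1000.0                                            # 1-um mirror gap (assumed)
print(fpi_orders(d))                                  # orders in 400 to 1000 nm
print(free_spectral_range(600.0, d) / finesse(0.95))  # approx. FWHM (nm)
```

With these numbers, the gap transmits four orders between 400 and 1000 nm, and the FWHM near 600 nm comes out to roughly 3 nm, which is why blocking filters are needed to isolate a single order.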

5.3. Snapshot Scanning

The mechanisms used to acquire a spectral cube in a snapshot manner are numerous. Here, we describe the common methods used in compact spectral imaging, which were CTIS, SRDA, compressive sensing, image mapping spectrometer (IMS), and LVF.

5.3.1. Computed tomographic imaging spectroscopy

The basis for CTIS was proposed in the early 1990s and was refined by Descour and Dereniak11 and Johnson et al.104 After passing through an objective lens, light enters the dispersive module of the CTIS, in order: an aperture, a collimator lens, and a grating/reflection dispersive device [Fig. 8(a)]. The aperture can be either a square or a slit, depending on whether the purpose is imaging or line spectrometry. After the dispersive device, a focusing lens focuses the light onto the staring sensor. What appears on the sensor is a series of projections of the hypercube, arranged with the zeroth order in the center and higher orders farther from the center. Dispersive devices are often transmissive, but reflective devices have been developed. Reconstruction from the projection slices is done using the Fourier slice theorem and is more accurate with a higher number of projections. However, with limited sensor size, a higher number of projections also means that the reconstructed hypercube will have lower spatial resolution. Due to advances in computing power, reconstruction-based systems such as CTIS can now be realized at lower costs. Habel et al.103 demonstrated a CTIS camera that uses a DSLR camera and low-cost components. Salazar-Vazquez and Mendez-Vazquez102 demonstrated an entirely open-source CTIS system with 3D-printed housing and off-the-shelf optics. Their imager had a significantly lower cost yet achieved a higher number of wavebands than previous CTIS cameras.
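
Reconstruction from multiplexed projections is often posed as an iterative inverse problem; expectation-maximization-style multiplicative updates are one family of methods used for this class of system. The toy sketch below uses a small random system matrix as a stand-in for a real CTIS projection model:

```python
import numpy as np

# Toy multiplicative (EM-style) reconstruction: recover a non-negative
# datacube f from detector measurements g = H f. H is a random stand-in
# for a real CTIS projection matrix, and the data are noiseless.
rng = np.random.default_rng(1)
n_voxels, n_pixels = 20, 60
H = rng.random((n_pixels, n_voxels))      # system (projection) matrix
f_true = rng.random(n_voxels)             # "true" flattened datacube
g = H @ f_true                            # simulated detector image

f = np.ones(n_voxels)                     # non-negative initial guess
for _ in range(2000):
    # Multiplicative update: f stays non-negative by construction
    f *= (H.T @ (g / (H @ f))) / H.sum(axis=0)

# Relative residual between the re-projected estimate and the measurement
print(np.linalg.norm(H @ f - g) / np.linalg.norm(g))
```

A real CTIS matrix is enormous and sparse, so practical solvers exploit its structure rather than storing it densely as here.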

Fig. 8.

Fig. 8

Optical layouts of some common snapshot imaging systems. (a) CTIS. (b) Coded-aperture imaging. (c) SRDA. (d) IMS (reproduced from Ref. 100).

5.3.2. Spectrally resolved detector array

While Bayer filters are common and can be manufactured at low cost for consumer electronics, the same cannot be said for the SRDA used in snapshot cameras [Fig. 8(b)]. This comes down to the fact that Bayer filters use organic pigments or dyes, which are cheap but have large bandwidths.106 To manufacture SRDA with bandwidths narrow enough for accurate scientific use, other types of interference filters must be used, including plasmonic filters, silicon nanowires, Fabry–Perot etalons, cavity-enhanced multispectral photodetectors, and multilayer quantum-well infrared photodetectors.100,106 Compared with architectures for spatial and spectral scanning cameras, SRDA are typically not flexible across applications. The number and values of the wavelengths are fixed, which means that either the application must be specific or the SRDA must be custom-made, both of which limit research efficiency. Nevertheless, SRDA systems have found many applications in bioimaging; we found systems that used SRDA in fluorescence microscopy,23 fluorescence endoscope imaging,160,161 skin imaging,40 and fundus imaging.54 Recovering the full spectral image from the acquired mosaiced image is not a simple task because the mosaic image is a sparse representation of the captured data. Multiple authors have proposed generic algorithms to tackle this problem. Miao et al.162 proposed a binary tree algorithm to reconstruct the final spectral image. Wu et al.163 used sparse encoding to estimate reconstruction candidates and then used a heuristic method to search for the optimal reconstruction. Sawyer et al.164 compiled different established reconstruction algorithms and produced a Python package for open-source distribution.
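
A minimal sketch of the recovery problem, assuming a repeating 2×2 filter cell with four bands (commercial mosaics are often 4×4 or larger) and simple nearest-neighbor interpolation rather than the cited algorithms:

```python
import numpy as np

# Recover per-band images from a mosaic (SRDA) frame: each pixel holds one
# band from a repeating 2x2 filter cell, and the missing pixels of each
# band are filled by nearest-neighbor interpolation. The 2x2, 4-band cell
# is an assumption for this sketch.
def demosaic_nn(raw, cell=2):
    h, w = raw.shape                      # assumes h, w divisible by cell
    bands = np.zeros((cell * cell, h, w), dtype=raw.dtype)
    for i in range(cell):
        for j in range(cell):
            sub = raw[i::cell, j::cell]   # sparse samples of one band
            # Nearest-neighbor upsampling back to full resolution
            up = np.repeat(np.repeat(sub, cell, 0), cell, 1)
            bands[i * cell + j] = up[:h, :w]
    return bands

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 mosaic frame
cube = demosaic_nn(raw)
print(cube.shape)                         # 4 bands at full spatial size
```

The cited binary-tree and sparse-coding methods replace the nearest-neighbor step with edge-aware or learned interpolation to reduce the spatial artifacts this simple version produces.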

There are two methods of manufacturing SRDA: directly depositing and etching the filter layer on top of the sensor (monolithic integration) or producing the filter layer separately and then mounting it on top of the sensor (hybrid integration).165 On the monolithic integration end, a group of researchers in Belgium advanced many aspects of SRDA manufacturing using FPI.76,79,117,121,166 Depositing etalons of different cavity heights directly on top of the sensor has the same effect as depositing interference filters of different bandpass values. As in the engineering of tunable FPI, Bragg mirrors should be used in place of a single reflective material such as silver or aluminum.117 The materials vary, but TiO2 and SiO2 are commonly used as the high and low refractive index materials.167 For a more in-depth article on the deposition and etch process, see Ref. 168. From this work, the Belgian group developed many ultracompact commercial snapshot, spatial, and spatial–spectral scanning cameras that use different interference filter patterns. For snapshot cameras, the filters are grouped into 4×4 square cells repeated across the entire sensor.54,55,76 For spatial and spatial–spectral scanning cameras, the filters change bandpass values (and cavity heights) linearly across the sensor, resulting in a “staircase” pattern.121,169 Most recently, SRDA using etalons have been monolithically integrated on top of InGaAs sensors, making these types of spectral cameras functional in the SWIR range as well.170,171 Elsewhere, other researchers are finding ways to bring the manufacturing cost down. Yu et al.172 proposed a batch wafer manufacturing method that uses silver and aluminum oxide (Al2O3) as the dielectric materials.

5.3.3. Compressive sensing

Coded-aperture imaging cameras project the datacube onto the 2D imaging sensor through a coded mask [Fig. 8(c)]. In theory, it is not possible to recover the datacube from the coded-aperture image because many different datacubes can produce the same image on the sensor. However, by applying constraints to the datacube, such as sparsity and low variation, reconstructing a unique datacube becomes possible. Žídek et al.173 demonstrated that a compact spectral camera using compressive sensing can be constructed from off-the-shelf components and several custom optical lenses.
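
The sparsity-constrained recovery can be sketched with iterative soft-thresholding (ISTA), a standard compressive-sensing solver; the random sensing matrix and sparse signal below are stand-ins for a coded aperture and a sparse spectral scene:

```python
import numpy as np

# Recover a sparse signal x from underdetermined coded measurements
# y = A x using ISTA (iterative soft-thresholding). A and x_true are
# random stand-ins for a coded-aperture system and a sparse scene.
rng = np.random.default_rng(2)
n, m, k = 100, 40, 3                      # signal size, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2    # gradient step size (1/L)
lam = 0.01                                # sparsity penalty weight
x = np.zeros(n)
for _ in range(3000):
    z = x - step * A.T @ (A @ x - y)      # gradient step on ||y - Ax||^2
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage

print(np.linalg.norm(x - x_true))         # small: sparsity enables recovery
```

With only 40 measurements of a 100-element signal, recovery succeeds because only 3 elements are nonzero, which is the same principle that makes coded-aperture spectral imaging invertible.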

5.3.4. Image mapping spectrometer

The IMS (also called an image slicing spectrometer) provides a one-to-one mapping of the datacube voxels onto the detector’s pixels. The captured image is optically “sliced” by thin strips of mirrors, and a dispersive element then disperses the spectral content of each slice onto the sensor. The mirrors are arranged such that, after dispersion, the spatial–spectral dispersion of the sliced image fills the sensor [Fig. 8(d)]. IMS originated in astronomy and has been used in fluorescence microscopy174 and retinal imaging.175 In recent years, several compact IMS systems have emerged. Bedard et al. built a compact IMS system with high spectral–spatial resolution (350×355×41 pixels) and a good acquisition rate (7.2 frames/s). Pawlowski et al.176 used a lenslet array as the optical slicing component in their ultracompact snapshot system. The optical element measured only 27×27×8 mm, and the resolution was 210×279×54 pixels.

5.3.5. Linear variable filter

An LVF is a monochromator whose dispersive property varies with spatial location. It is typically used in spatial–spectral imagers (see Sec. 3.3 for the “Snapscan” concept). Ding et al.27 adapted the concept to snapshot imaging by incorporating lenslets (small lenses that project images onto a small part of the sensor). The LVF is created using FPI of varying cavity height. The lenslet array splits the image into subimages, each of which falls onto a different part of the LVF. Snapshot imaging is achieved by combining the spectrally filtered subimages into the datacube. Conceptually, this is similar to SRDA and suffers from similar drawbacks.

5.4. Mechanical Components

Mechanical systems refer to the nonoptical systems that drive the optical components and provide the mechanical housing. Here, we discuss MEMS, which are commonly used in compact spectral cameras. MEMS refers to small systems that are manufactured using semiconductor fabrication techniques. These devices were hypothesized as theoretically possible as early as the 1960s177 and now encompass a large range of devices, including RF switches, cantilevers, piezoelectric actuators, comb-drive actuators, and resonators.178 MEMS are electrically reliable, low on heat dissipation, and can be scaled up to large-scale, low-cost manufacturing processes.8 Prior to spectral imagers, MEMS had been used in portable spectrometers.179–181 The single most important process in the manufacturing of MEMS is photolithography, the process of etching complex nanopatterns on a photosensitive polymer using light.182 Photolithography enables MEMS to be produced in high volume and with consistent quality. To integrate multiple MEMS components, two main methods are used: surface micromachining (depositing the desired system on top of a sacrificial layer that is later washed away) and bulk micromachining (directly shaping the substrate without a sacrificial layer). In hyperspectral and multispectral imaging systems, both surface micromachining and bulk micromachining have been used to produce MEMS.157 In FPIs, MEMS serve as the actuator that drives the distance between the two reflective surfaces, using either piezo-actuators or electrostatic actuators.183 According to Trops et al.,183 piezo-actuated FPIs had larger optical apertures and higher SNR, whereas electrostatically actuated FPIs were mass-producible but had smaller optical apertures and lower SNR. In terms of spectral range, piezo-actuated MEMS FPIs have a wider tuning range than electrostatically actuated MEMS FPIs.
Rissanen et al.184 and Näsilä et al.159 used surface micromachining to produce MEMS FPI spectral cameras that work with mobile phone cameras. Näsilä et al.123 used the same technique to produce an ultracompact (<40 g) spectral camera. Another type of MEMS is the digital micromirror device (DMD), an array of microscopic mirrors on a chip that can be individually activated and rotated. A common use of DMD is in push-broom imaging, where the DMD replaces the slit translation mechanism: the DMD selects a narrow section of the image and reflects it onto the grating. This mechanism is seen in works by Arablouei et al.142 and Dong et al.185,186 A comb-drive MEMS was used by Wang et al.187 to rotate a mirror within their handheld spectral imaging camera. There are notable limitations to the range of movement in MEMS. At small scales, mechanical strain and stress behavior differs greatly from that of macroscale counterparts. The pull-in phenomenon occurs when the electrostatic forces between MEMS elements exceed the mechanical restoring forces, which “pulls” the MEMS components into each other and can cause a breakdown.178 In spectral imagers, this is most relevant to MEMS-enabled FPI imagers, where the air gap between the mirrors cannot be made smaller than two-thirds of the initial unactuated gap without triggering pull-in.159,183,188
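
The pull-in limit can be made concrete with the standard parallel-plate result V_pi = sqrt(8·k·d0³/(27·ε0·A)), at which the stable electrostatic travel ends at one-third of the initial gap. All device parameters below are assumed, illustrative values:

```python
import math

# Standard parallel-plate pull-in result: a spring-suspended plate is
# stable only until the gap has closed by one-third of its initial value,
# collapsing at V_pi = sqrt(8*k*d0^3 / (27*eps0*A)). All device values
# below are assumed, illustrative parameters, not a specific imager.
EPS0 = 8.854e-12                          # vacuum permittivity (F/m)

def pull_in_voltage(k, d0, area):
    """Pull-in voltage of a spring-suspended parallel-plate actuator."""
    return math.sqrt(8 * k * d0 ** 3 / (27 * EPS0 * area))

k = 10.0                                  # suspension stiffness (N/m), assumed
d0 = 2e-6                                 # unactuated air gap (m), assumed
area = (500e-6) ** 2                      # 500-um-square plate, assumed

v_pi = pull_in_voltage(k, d0, area)
print(round(v_pi, 2))                     # a few volts for these values
```

This one-third travel limit is why electrostatically actuated FPI designs quote usable gap tuning ranges of only about a third of the unactuated gap.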

For compact devices intended for outdoor use, mechanical housing is as important as every other component. Crocombe8 discussed the importance of rigid and durable housing in portable spectrometers used in manufacturing. Even though devices in a clinical setting are not subject to harsh environmental factors in the same way as devices in industrial or earth science fields, mechanical housing is still an important factor in biomedical imaging devices. Many housing systems, if used in noncommercial devices, are likely to be highly customized using 3D printing. 3D printing has many advantages over traditional manufacturing, such as the ability to go from design to print within a short amount of time, the low cost of plastic, the lack of screws and adhesives, and the fact that, when printing different designs, the printer configuration remains largely the same.85,189 The rise of 3D printing in optical instrumentation is due not just to the wide availability of 3D printers but also to the open-source movement.190 For a review of 3D printing technologies and additive manufacturing, see Ngo et al.191 3D printing can be used for many components of spectral imaging. Ghassemi et al.192 and Cavalcanti et al.59 used 3D printing to develop biological phantom models for hyperspectral imaging. Ortega et al.60 created custom 3D-printed gears to mechanically move the sample in a push broom microscopic imager. However, the most common use of 3D printing in spectral imagers is for the housing of optical and dispersive elements.85,102,131,159,187,193 When researchers and hobbyists have access to 3D printers, design and manufacturing become an iterative process thanks to the speed and low cost that 3D printing brings. Design is often done with the help of computer-aided design (CAD) software.
The most common 3D printing method used is fused deposition modeling (FDM), which uses a heated nozzle to deposit semiliquid filament in a layer-by-layer manner. Depending on the printer chosen, FDM can achieve high printing resolution and is capable of fitting optical lenses without much calibration.131,189 However, FDM parts can have weak mechanical properties and poor aesthetic appearance compared with more advanced additive manufacturing methods, such as stereolithography or powder bed fusion.191 The most common material used is polylactic acid, commonly abbreviated as PLA.191 PLA is notable for its low melting point and good biocompatibility, which makes it possible for researchers to use PLA to build biomedical components in their spectral cameras.51,59 However, PLA has some structural downsides. PLA is known to shrink during printing; Sigernes et al.85 suggested making the design 1% to 2% larger than intended so that the optical components fit. Where optical alignment is critical, 3D-printed housing can produce optical misalignment and imaging artifacts. Despite shrinkage being a common problem in polymer-based printing, few studies discuss solutions to counter it.194 Beyond accounting for the shrinkage in the initial design, Pearre et al.195 suggested bounding the printed components with solid structures. Alternatively, inkjet technology using plastic powder and cyanoacrylate adhesive was used by Wang et al.187 to print the housing for a handheld spectral camera.

5.5. Electronics Components

Electronics are the driving components of all spectral imaging systems. “Electronics systems” refers to systems that provide illumination, capture images, control hardware, and transfer data. Systems that are expected to perform their work remotely, such as UAV-based systems, also require a means to store both power and data. Here, we discuss illumination with a special focus on LEDs, sensor technologies, microcontrollers, and the expected power consumption of the system.

5.5.1. Power consumption

While it is still the norm for many laboratory-based imaging systems to have no battery and to instead draw power from the grid,47,54 many UAV-based systems require batteries to deliver reliable performance in a reasonable amount of time. Even though battery life is not typically specified for many commercial systems, UAV flight times of 12 to 90 min76 give a good idea of how long spectral cameras should operate. A more commonly listed specification for remote battery-powered spectral cameras is power consumption, which ranges from 5 to 10 W,134,137 comparable to the power consumption of UAVs (10 to 20 W) but still greater than that of many consumer products such as smartphones (<500 mW).
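As a rough sizing exercise (not from the cited references), the battery capacity needed to sustain a camera payload for a given flight time follows directly from energy = power × time; the safety margin below is an illustrative assumption.

```python
def battery_capacity_wh(camera_power_w, flight_time_min, margin=1.2):
    """Minimum battery energy (Wh) to power a camera for a flight.

    margin: safety factor covering conversion losses and reserve
    capacity (the 1.2 value is an assumption for illustration).
    """
    return camera_power_w * (flight_time_min / 60.0) * margin

# A 10 W spectral camera on a 30 min flight with a 20% margin:
print(round(battery_capacity_wh(10, 30), 2))  # 6.0 Wh
```

Note that this budgets only the camera; the UAV's own 10 to 20 W propulsion draw dominates the total battery sizing.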

5.5.2. Sensor technologies

Most digital imaging sensors used in compact spectral imaging fall into two broad categories: charge-coupled devices (CCDs) and active pixel sensors, also called complementary metal oxide semiconductor (CMOS) sensors. CCDs use linked MOS capacitors to transfer electric charges from the pixels toward the shift register, where they are amplified. CMOS sensors use MOSFET switches to access and amplify each pixel individually.196 The size and cost of CMOS sensors benefit greatly from advances in semiconductor fabrication, which has been observed to double the number of transistors every 2 years.197 However, CMOS sensors have higher dark current compared with CCDs.198 The majority of compact spectral CCD/CMOS sensors use silicon as the semiconductor, which has an operational range between 550 and 900 nm. Longer wavelengths in the SWIR (900 to 1700 nm) range can penetrate deeper into biological tissues and reveal more underlying features; however, capturing them requires alternative sensor materials, such as InGaAs.25,158 Alternative architectures for CCD and CMOS sensors used in spectral imaging include intensified CCD21,199 and electron multiplying CCD.200
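The long-wavelength limit of a detector material follows from its band gap via the textbook relation λ_cutoff ≈ hc/E_g ≈ 1240 nm·eV / E_g. A quick check (band-gap values are standard textbook figures, not from this review) shows why SWIR imaging requires InGaAs rather than silicon; note that this cutoff is an upper bound, and practical operational ranges, such as the 550 to 900 nm figure above, are narrower.

```python
def cutoff_wavelength_nm(bandgap_ev):
    """Longest detectable wavelength (nm) for a semiconductor band gap (eV)."""
    HC_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm
    return HC_EV_NM / bandgap_ev

# Approximate room-temperature band gaps (textbook values):
print(round(cutoff_wavelength_nm(1.12)))  # silicon: ~1107 nm
print(round(cutoff_wavelength_nm(0.75)))  # lattice-matched InGaAs: ~1653 nm
```

The ~1653 nm InGaAs cutoff matches the 900 to 1700 nm SWIR range quoted above, whereas silicon response falls off well before the SWIR band.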

5.5.3. Microcontrollers

Many spectral imaging systems use microcontrollers to provide system control. In recent years, open-source systems have made spectral imagers lower-cost and more customizable. These microcontrollers are part of the open-design movement, which aims for free collaboration and sharing of schematics and software. Common open-source systems include microcontroller boards aimed at specific tasks (such as the Arduino Uno) and microcomputers capable of full-fledged control tasks (such as the Raspberry Pi). Programming these devices is much easier compared with previous generations of programmable circuit boards because these newer devices use USB connections to transfer instructions. Furthermore, these devices have a large support community, making them the preferred option for off-the-shelf spectral cameras and for prototyping new designs.

Open-source microcontrollers in spectral imaging devices perform three main tasks: (1) drive optical components, (2) control illumination, and (3) act as a device to send and receive signals. Nevala and Baden193 used an Arduino Uno microcontroller in a spatial scanning camera to move mirrors on a predefined path. The microcontroller also drove the spectrometer via a transistor–transistor logic gate to capture data. Ortega et al.60 used a similar microcontroller to drive a stepper motor, which translates the microscope stage so a line scan imager can acquire images. The system of Näsilä et al.159 used an MEMS actuator driven by an AC actuation signal; a microcontroller board used the I2C interface to drive the actuator. Some spectral imaging systems used narrowband LEDs to provide illumination, from which the camera captured the reflected light and constructed the datacube in the same manner as a spectral imager does. However, to make acquisition fast, synchronization between the LEDs and image capture was necessary. Ohsaki et al.201 constructed an LED flickerless system controlled by a Raspberry Pi microcomputer. Di Cecilia et al.202,203 used a microcomputer to drive a pulse current that controlled the LEDs and was synchronized with the shutter: once the camera shutter was pressed, the microcomputer turned off the LEDs. Typically, the spectral imaging system was directly connected to the workstation through a USB connection, or the data were stored in memory for later retrieval. Näsilä et al.159 prototyped an FPI-based spectral imaging system whose main controller is a Raspberry Pi microcomputer. The microcomputer sent signals to the FPI driver, received images from the camera module, and sent images through Wi-Fi to a workstation for further analysis. A similar setup was employed by Salazar-Vazquez and Mendez-Vazquez102 in their CTIS-based spectral camera.
The microcomputer employed was also a Raspberry Pi, which sent commands to the camera module and sent images through Wi-Fi to the workstation.

5.6. Illumination

For most biomedical imaging applications, imaging is performed indoors using artificial illumination, which poses various challenges for acquisition. It is important to differentiate between luminance and radiance. Radiance refers to the quantity of radiant energy per unit emitted solid angle per unit receiving surface area and applies across all wavelengths, whereas luminance is a human vision-centric measurement: the radiance weighted by the response curve of the human eye. For spectral imaging devices, radiance is the more appropriate measurement of light sources, whereas for regular cameras and human activities, luminance is more appropriate. Unlike outdoor illumination, which can exceed 100,000 lux in luminance, indoor systems using incandescent light provide only around 10,000 to 20,000 lux. Spectral imaging requires more illumination than comparable RGB or monochrome cameras because spectral cameras need to capture the energy associated with a narrow range of wavelengths. Furthermore, many spectral imaging systems have additional filters that reduce the incoming light. Some systems require bandpass, lowpass, or highpass filters to block out unnecessary wavelengths.54 Some systems use beam splitters either to achieve snapshot imaging or to capture both live images and spectral images.200 Additional illumination is often needed in spectral imaging acquisition, and the additional components can increase the footprint of the system. The addition of light sources should be balanced against the application: if the intensity of the light is too high, tissue damage can be irreversible. The most vulnerable organ is the eye. The permissible exposure limit for the human retina depends on both the wavelength and the exposure time. Rees and Dobre204 calculated the maximum exposure power to the eye at 0 deg to be 180 μW at 5 s and 120 μW at 30 s.
For thermal light sources, Yan et al.205 evaluated the maximum exposure for a direct angle to be 1.8×10−3×t0.25 J/cm2, with time t in seconds and t<3×104 s. These values apply only to humans; for animals with retinal diameters different from those of humans, the maximum permissible exposure must be adjusted, and there are no definitive guidelines on retinal light exposure limits for animals. For other organs, the limits are more forgiving and often orders of magnitude larger than many illumination needs. Surgical lighting, for example, has a suggested luminance ranging from 40,000 to 160,000 lux,206 which is often enough for acquisitions in the visible light range.
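The radiance-to-luminance relationship described above is L_v = 683 ∫ L(λ) V(λ) dλ, where 683 lm/W is the peak photopic luminous efficacy and V(λ) is the photopic luminosity function peaking at 555 nm. A minimal numerical sketch follows; the Gaussian stand-in for V(λ) (width 75 nm) is an assumption for illustration, not the tabulated CIE curve.

```python
import math

def luminance_from_radiance(wavelengths_nm, spectral_radiance, dl_nm):
    """Approximate luminance (cd/m^2) from spectral radiance samples
    in W/(m^2 sr nm) on a uniform wavelength grid with spacing dl_nm."""
    KM = 683.0  # lm/W, peak photopic luminous efficacy
    total = 0.0
    for lam, rad in zip(wavelengths_nm, spectral_radiance):
        # Gaussian approximation of the photopic curve V(lambda),
        # peaking at 555 nm (illustrative, not CIE tabulated data).
        v = math.exp(-((lam - 555.0) / 75.0) ** 2 / 2.0)
        total += rad * v * dl_nm
    return KM * total

# Flat spectral radiance of 0.01 W/(m^2 sr nm) across 400 to 700 nm:
grid = list(range(400, 701, 10))
lv = luminance_from_radiance(grid, [0.01] * len(grid), 10.0)
print(lv > 0)  # True
```

This also illustrates why luminance misleads for spectral cameras: a strong NIR source contributes almost nothing to L_v because V(λ) is near zero there, yet it is highly relevant radiance for an NIR acquisition band.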

There are different geometries for arranging the light source, the subject, and the imaging camera. Amigo et al.207 detailed several laboratory setups: frontal, lateral, contrast-transmission, diffuse, co-axial, and dark field. In each case, illumination should be as uniform as possible. Sawyer et al.208 detailed three types of nonuniformity: spatial uniformity, which refers to differences in the illumination of incident light across the object; angular uniformity, which refers to differences between incident and shadowed areas; and spectral uniformity, which refers to spatial uniformity across all wavelengths. They compared three different light setups for wide-field imaging: an LED ring, a fiber halogen ring, and a diffusion dome. They found that all systems achieved similar spatial and spectral uniformity, but the diffuse scattering dome achieved the highest angular uniformity.

For the illumination type, we cover the three commonly used types: halogen incandescent lamps, gas-discharge lamps, and LEDs. Halogen light operates in the ultraviolet, visible, and NIR regions. It offers a continuous spectrum suitable for acquisition with many bands. However, halogen light has a low color temperature (3200 to 5000 K) and can appear yellow at low illumination.206 Halogen lamps and LEDs are the most common light sources in microscopes, so many spectral imaging applications that used microscopy also used halogen or LED as the default illumination. Gas-discharge lamps, which include xenon lamps, mercury lamps, and mercury–argon lamps, often have higher color temperatures. Xenon lamps have color temperatures ranging from 4000 to 6000 K, which makes them closer to outdoor lighting. Gas-discharge lamps often have energy spikes in the NIR regions, which can affect acquisitions; due to these spikes, gas-discharge lamps are not used in retinal illumination and retinal surgeries. Both gas-discharge lamps and incandescent lamps emit a large amount of heat. If this is a concern, a fiber optic guide is needed to direct the light far away from the light source. Tunable lasers are also seen in spectral imaging devices,209–211 although not in compact systems due to the size and power consumption of the laser components.

5.6.1. Light-emitting diode

The advantages of LEDs in illumination scanning systems are numerous: they are low-cost and fast and have low power consumption, low heat dissipation, and long lifetimes.206 A common way to use LEDs for spectral imaging is to illuminate the object with different LEDs, each emitting light within a specific waveband. Alignment of the LEDs is important for achieving uniform illumination. Figure 9 shows different methods of arranging LEDs in a compact spectral imaging system. A common layout is the ring layout, where all LEDs are arranged in a circle.22,24,25,38 Li et al.212 proposed a different setup where the LEDs were arranged in a mosaic fashion but warned that this arrangement can lead to nonuniform lighting for different wavelengths. Bolton et al.37 arranged their 16-LED setup into a 4×4 rectangular array; light uniformity was ensured by choosing LEDs with very wide illumination angles. If the LEDs are arranged in a circle, it is better to have multiple LEDs of each type placed opposite one another. Delpueyo et al.24 and Rey-Barroso et al.25 proposed a system using 32 LEDs, where four of each type of LED were arranged 90 deg apart [Fig. 9(a)]. For the acquisition camera, Shrestha and Hardeberg213 noted that a monochrome camera might perform better than commercial RGB cameras because RGB cameras need to perform demosaicing operations, which alter the actual spectral readings. To control the LEDs, a separate power source and controller were needed to synchronize the LEDs with the acquisition camera. LEDs do not require a high-power source; a configuration built by Kim et al.22,38 only needed a 3.7 V battery to drive a multi-LED system [Fig. 9(b)]. If the LEDs emit different wavelengths, the activation voltage can differ for each LED.37 For Bolton et al.,37 different resistor values were required to produce different activation voltages from the same input voltage [Fig. 9(c)]. The biggest downside to LEDs is the limited choice of wavelengths.
The spectral imaging systems described can acquire only as many wavelengths as there are different LEDs available. For many researchers, this may not be a problem: some applications only focus on specific excitation and absorption wavelengths and only need three to four different types of LEDs, so well-chosen LEDs are very beneficial. Another potential engineering problem comes from the quality of the LEDs. If the LEDs are bought from commercial vendors, their bandwidths can range anywhere from 20 to 70 nm. If this is a problem, interference filters can be applied to achieve a narrower bandwidth.214
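The per-LED resistor sizing mentioned above is plain Ohm's law: R = (V_supply − V_f) / I_f, where V_f is the LED forward voltage and I_f its rated current. The sketch below uses illustrative component values, not the actual values from Bolton et al.

```python
def series_resistor_ohms(supply_v, forward_v, forward_current_a):
    """Series resistor needed to run an LED at its rated current."""
    if forward_v >= supply_v:
        raise ValueError("supply voltage must exceed LED forward voltage")
    return (supply_v - forward_v) / forward_current_a

# Two LEDs of different wavelengths on the same 5 V rail at 20 mA
# (forward voltages are typical datasheet-style values, assumed here):
print(round(series_resistor_ohms(5.0, 2.0, 0.020), 1))  # red-ish LED: 150.0 ohms
print(round(series_resistor_ohms(5.0, 3.2, 0.020), 1))  # blue-ish LED: 90.0 ohms
```

This is why a single input voltage needs a different resistor per LED type: shorter-wavelength LEDs generally have higher forward voltages, so less voltage must be dropped across the resistor for the same current.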

Fig. 9.

Fig. 9

Different uses of LEDs in a compact imaging system. (a) A system that used 32 LEDs arranged in a circle, with four of each type spaced 90 deg apart (reproduced from Ref. 24). (b) A system that used a single white LED and a rotating spectral filter (reproduced from Ref. 22). (c) A system that arranged LEDs into a four-by-four rectangular plane (reproduced from Ref. 37).

6. Low-Cost and Compact Spectral Cameras

A large variety of compact cameras came not from commercial manufacturers but from academics and hobbyists who wanted to build custom imagers that are more accessible and more customizable. Many low-cost systems needed to strike a balance between size, acquisition speed, spatial resolution, and spectral resolution. However, with recent advances in custom components, additive manufacturing, smartphones, and compact spectrometers, custom low-cost devices have improved in quality and compactness.

6.1. Commercial Off-the-Shelf Components

COTS, or commercial off-the-shelf, refers to components of a spectral camera that can be bought and assembled into a functional unit. Assembling COTS components requires the researchers to have knowledge of optical system building, electronics, control software, and hardware housing. The total price of all components added together is often much lower than the unit price of one commercial spectral camera of similar capability. Furthermore, COTS components allow researchers a high degree of customization. As mentioned in the previous section, the components of a spectral imager include the optical system, the spectral dispersion system, the digital image detector, the control module/electronics, and the mechanical elements. It is more common for researchers to buy each of these components individually and assemble them because of the wide availability of low-cost options to choose from (Fig. 10). Alternatively, some researchers opted for portable spectrometers and digital cameras as the acquisition module and built the rest of their systems around them.

Fig. 10.

Fig. 10

Spectral devices enabled using commercial off-the-shelf components. (a) A spatial scanning camera built using customized optical components and 3D printed housing (reproduced from Ref. 85). (b) A spatial scanning camera that uses portable spectrometers as the spectral acquisition device (reproduced from Ref. 193). (c) A smartphone-powered spectral imaging system (reproduced from Ref. 215).

6.2. Optical Setup and Calibration

In the past, optical systems for research needed to be built on a rigid optical table and required a large footprint. However, it is now possible to build optical systems with COTS components that are lightweight and have small spatial dimensions. Sigernes et al.85 [Fig. 10(a)] and Fortuna and Johansen216 built systems that were carried by UAVs; their systems were spatial acquisition systems using transmission gratings. Kim et al.22 built a customized optical system to complement smartphone image acquisition. The system used a planoconcave lens, a reflective flat mirror, linear polarizers, and bandpass filters. The linear polarizers and the bandpass filters filtered the LED light sources of the smartphone, whereas the flat mirror and the planoconcave lens made the illumination broader and more uniform. Jeon et al.217 introduced a new type of diffractive optical element fabricated using photolithography and reactive-ion etching techniques. They exploited the fact that Fresnel diffraction depends on wavelength to produce a diffractive element with spectrally varying point spread functions. 3D-printed optical elements, such as reflective mirrors, focusing lenses, and prisms, although they already exist,194 have not yet been seen in any spectral imaging system. Customized mechanical housings are required to house these optical components, and for that, 3D printing is the preferred option.

Even though COTS optical components are vastly cheaper than their commercial counterparts, a lack of quality control can harm acquisition quality. Smile, keystone, and chromatic aberration are all optical problems these customized systems face. We discussed smile and keystone in Sec. 5.1 in relation to spatial scanning systems. Chromatic aberration is a property of optical systems in which the foci of different wavelengths are not aligned: the point-spread function of the image depends on the wavelength, which makes focusing difficult. For spectral scanning systems, one method of dealing with chromatic aberration is to change the focus for each wavelength; however, this method cannot be replicated with other types of spectral acquisition. A more robust engineering solution is to use achromatic lenses, which come at a higher cost than regular optical components. Mirror-based grating systems, such as the Offner imaging spectrometer discussed earlier, also reduce aberration. If low-cost optical systems are expected to perform accurately, extensive calibration processes are necessary. Sigernes et al.85 discussed the steps to perform chromatic and sensitivity calibration in their customized camera. Riihiaho et al.218 performed smile correction by calculating the matrix that transforms curved spectral lines into straight lines for each spectrum. Henriksen et al.219 proposed an algorithm that can perform real-time correction of smile and keystone based on matrix operations of calibrated results.
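Conceptually, smile correction amounts to shifting each spatial row of a frame so that a curved spectral calibration line becomes straight. The sketch below is a simplified integer-shift version of that idea, not the actual matrix-based algorithm of Riihiaho et al.

```python
def correct_smile(frame, shifts):
    """Straighten spectral lines by shifting each row by a calibrated offset.

    frame: 2D list of pixel values (rows = spatial positions,
           columns = spectral pixels).
    shifts: per-row integer offsets, e.g., measured from the apparent
            column of a calibration lamp emission line in each row.
    """
    corrected = []
    for row, s in zip(frame, shifts):
        if s >= 0:
            corrected.append(row[s:] + [0] * s)      # shift left, zero-pad
        else:
            corrected.append([0] * (-s) + row[:s])   # shift right, zero-pad
    return corrected

# A 3-row frame where a spectral line (value 1) bows across columns 2, 3, 2:
frame = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
print(correct_smile(frame, [0, 1, 0]))  # line aligned in column 2 in every row
```

A full correction would use subpixel interpolation rather than integer shifts, which is where the calibrated transformation matrix of the cited work comes in.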

6.3. Portable Spectrometers

Compact and portable spectrometers have been developed and used for commercial, health, and scientific purposes.8 Many custom imagers in fact incorporated one or multiple commercially available spectrometers into their designs. By incorporating spectrometers, researchers did not need to worry about the optical architecture and could focus more on the scanning methods. Spectrometers also have a much finer spectral resolution compared with line scanning spectral cameras. However, because the field of view of a spectrometer is typically small, the sampling scheme needs to be dense, with oversampling; otherwise, the spatial resolution will be very low. Devices that used compact spectrometers were usually point-scanning or line scanning imagers. If only one spectrometer was used, the device acted like a point-scanning camera, and the researchers devised an appropriate scanning path. Nevala and Baden193 equipped a commercial spectrometer with an effective spectral range of 350 to 950 nm with two mirrors controlled by Arduino microcontrollers [Fig. 10(b)]. Because the scanner had a circular window, they proposed circular scanning paths based on Fermat’s spiral. In the system proposed by Stuart et al.,220 light entered a focusing lens before being redirected by two movable mirrors into a spectrometer. Commercial spectrometers were used in confocal microscopy by Frank et al.,221 who connected the confocal pinhole with a spectrograph using a fiber optic cable. If multiple spectrometers were used, the device acted as a line scanning imager; Uto et al.222 used a series of eight spectrometers for a line scan imager.
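A Fermat spiral scan path of the kind used by Nevala and Baden can be generated with r = c√n and a constant angular increment; the golden-angle construction below is a generic sketch that evenly covers a circular window, not their published scan code.

```python
import math

GOLDEN_ANGLE = math.pi * (3.0 - math.sqrt(5.0))  # ~2.39996 rad

def fermat_spiral(n_points, scale=1.0):
    """Sample points on a Fermat spiral: r = scale*sqrt(n), theta = n*golden angle.

    The sqrt radius growth gives roughly uniform areal coverage of a
    disk, which suits point scanning through a circular aperture.
    """
    pts = []
    for n in range(n_points):
        r = scale * math.sqrt(n)
        theta = n * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

path = fermat_spiral(200)
# Every sample falls inside a disk of radius scale*sqrt(n_points):
print(max(math.hypot(x, y) for x, y in path) < math.sqrt(200))  # True
```

Each (x, y) pair would then be converted to mirror deflection angles and visited in sequence while the spectrometer records one spectrum per point.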

6.4. Smartphones

Personal diagnostics with smartphones is gaining attention in healthcare and bioengineering because smartphones are readily available, portable, power efficient, and capable of high-quality imaging. This trend is reflected in spectroscopy and spectral imaging alike. Crocombe8 outlined four main usages of smartphones in spectrometry and spectral imaging: (1) to receive data from the spectrometer as a separate module; (2) to process data; (3) to send commands to the optical module; and (4) to serve as the optical module itself. The fourth usage is only applicable in the visible and near-infrared regions, as the silicon detectors used in CMOS and CCD sensors only operate within the 400 to 1000 nm wavelengths. Spectroscopic solutions have been developed for the smartphone for the purposes of food quality control,223 lab-on-a-chip diagnosis,224,225 and fluorescence spectroscopy.226 Similarly, spectral imagers have been developed with smartphones in mind for biomedical, remote sensing, and field laboratory purposes. Kim et al.38 developed a system that used a smartphone as the controlling center for their MSI system; the smartphone sent signals to a microcontroller unit that drove LEDs and received photos from a CMOS camera. Näsilä et al.159 and Rissanen et al.184 developed FPI-based spectral imagers with a smartphone as the centerpiece, both as a controlling device that sent and received signals and as the optical camera itself. Smartphone cameras are extremely powerful and fast, which makes them ideal candidates for spectral imaging modules. If the devices are used in personal healthcare settings, custom parts need to be developed for smartphone spectral imaging to fit the specific needs. Stuart et al.215 built a spectrograph with the smartphone camera as the acquisition device [Fig. 10(c)]. Bolton et al.51 and Cavalcanti et al.59 used 3D printing to create a custom smartphone-based spectral colposcope and otoscope, respectively.
Their systems used LEDs driven by microcontrollers to provide narrowband illumination. These designs also used the smartphone’s ability to transfer signals over Bluetooth or Wi-Fi to construct an internet of things (IoT) system for spectral image acquisition and analysis. In some applications, the smartphone can also be used as a geo-tagger, providing GPS data along with images to register images together.227

6.5. Open-Source Design

Open-source design refers to the creation of software and hardware with the intention of freely sharing those designs for others to use, study, modify, and even commercialize without concerns of copyright or patent infringement. This permission is granted through many different forms of open-source licenses. Whereas the nature of software makes open-source software easily distributed, open-source hardware still requires tangible components to manufacture and assemble. Ideally, open-source hardware designers would use low-cost components, 3D-printed components, or other open-source hardware components and run free open-source software. Some spectral imaging researchers published systems that were open source with the goal of easily sharing research progress. As an example of the open-source model, Riihiaho et al.218 built a 3D-printed spectral imaging camera based on the designs of Sigernes et al.85 Some further modifications were made to reduce smile and aberration, and these were publicly shared along with the 3D CAD files. They also built software that performed aberration correction and shared it publicly. Nevala and Baden193 published the entire hardware and software design of their system, along with raw spectral data captured by the camera. Low-cost spectral imaging was driven not just by open-source hardware but also by open-source software. Many researchers published hyperspectral processing software as free and open source. Berisha et al.228 published a framework for processing large-scale biomedical hyperspectral images at a faster speed using GPUs. The entire software package and all its algorithms were published under a license that allows inclusion in both free and commercialized systems.

7. Applications of Compact Spectral Cameras in Biomedical Imaging

Many bioimaging researchers use compact and ultracompact cameras in both laboratory and clinical settings. Systems have been built for light-field microscopes,32,43,47,60 confocal imaging microscopes,229,230 fundus cameras,54,55 endoscopes,46,48,56,140,200,231 laparoscopes,63,65,66,70 colposcopes,51,52 and otoscopes.59 Spectral imaging for biomedical purposes has its own specifications that influence the choice of systems. Most commonly, images are captured in the visible-near-infrared range (VIS-NIR) from 400 to 1500 nm. Imaging in the visible range often leverages the contrast between different types of dyes and stains, whereas imaging in the infrared range often leverages pure biochemical signatures of biotissues.232 VIS and NIR camera sensors need to be separate, with the former being silicon-based and the latter being InGaAs-based. The spatial resolution should be sufficient to differentiate key features. In head and neck cancer (HNC) diagnosis, for example, the positive margin (margin of cancer plus normal tissue) for cancer identification is around 5 to 10 mm.233 The spatial resolution can be measured with an optical target, such as the USAF target. If the spatial resolution of the camera is insufficient, magnification devices can be used. Acquisition speed can be slow for ex vivo applications but must be fast for in vivo applications. In a spectral colonoscopy system developed by Kumashiro et al.,48 the acquisition time could not exceed 5 s; otherwise, the image quality became unusable. A fast acquisition time also means that the illumination must be higher to accommodate it. In surgical or clinical settings, systems must be sterilized or isolated from the environment. Drapes, resin, and sealants can be used in these situations.
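A USAF 1951 target reading can be converted to resolution with the standard relation lp/mm = 2^(group + (element − 1)/6). The helper below applies this standard formula; it is generic, not tied to any cited system.

```python
def usaf_resolution_lp_per_mm(group, element):
    """Line pairs per mm resolved at a USAF 1951 group/element pair."""
    return 2.0 ** (group + (element - 1) / 6.0)

def usaf_line_width_um(group, element):
    """Width of one line (micrometers) at that group/element.

    One line pair spans 1 / (lp/mm) millimeters, so a single line is
    half of that.
    """
    return 1000.0 / (2.0 * usaf_resolution_lp_per_mm(group, element))

# Group 2, element 1 corresponds to 4 lp/mm, i.e., 125 um lines:
print(usaf_resolution_lp_per_mm(2, 1))   # 4.0
print(round(usaf_line_width_um(2, 1)))   # 125
```

For the 5 to 10 mm positive margins cited above, even modest USAF performance is ample; the target matters more for microscopy and small-feature applications.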

7.1. Diagnosis and Monitoring of Diseases

The primary diagnostic targets for spectral imaging researchers were the skin, the oral cavity, and the retina, which are the most accessible organs for in vivo diagnosis. Using endoscopic systems, the luminal organs of the lower abdominal regions could also be imaged with spectral imagers. However, in vivo research in this field was scarce because in vivo imaging of the lower abdominal regions requires systems with fast acquisition times and low illumination. For ex vivo applications, most spectral imaging research was done on pathology slides or tissues. Ex vivo tissues were imaged either in a tabletop setting using reflectance data or through a microscope, which can yield reflectance or transmission data.

7.1.1. Skin cancer

Both melanoma and nonmelanoma skin cancer (NMSC), the latter of which includes Bowen’s disease, Kaposi’s sarcoma, basal cell carcinoma (BCC), and squamous cell carcinoma, are on the rise.234 While NMSC is more common, melanoma has a higher mortality rate.235 Early diagnosis and removal of both melanoma and NMSC are necessary. However, many forms of skin cancer appear similar to benign neoplasms.236 Dermatoscopy can be used to differentiate benign from malignant skin lesions; however, dermatoscopy requires clinician input. Spectral imaging has been used to improve automatic classification with high sensitivity.236 In the early 2000s, Hewett et al.21,237 developed a portable imaging station to determine the tumor margins of NMSC lesions. They used 5-aminolevulinic acid (5-ALA) to induce the fluorescing molecule protoporphyrin IX (ppIX). After application of 5-ALA to the skin, fluorescence imaging of NMSC lesions was taken at the 400, 540, 600, and 635 nm wavelengths. Their results showed a clear outline of the lesion at the 635 nm wavelength, which corresponds to maximum ppIX fluorescence. In the same time period, Elbaum et al.236 developed a handheld spectral imaging system to automatically classify melanoma and melanocytic nevi (pigmented nevi, moles). From a set of 10 spectral images obtained at 430 to 950 nm wavelengths, 822 candidate parameters were extracted in the wavelet and spatial domains. The features were intended to represent morphological and textural elements of the lesions. By training an expert system on a dataset of 63 melanoma images and 183 melanocytic nevi images, they achieved 100% sensitivity at 85% specificity. A similar handheld system [Fig. 11(a)] was developed by Delpueyo et al.24 to classify melanoma and BCC from pigmented nevi [Fig. 11(b)]. First-order statistics were used to extract morphological features. Their system achieved 91.3% sensitivity at 54.5% specificity for both melanoma and BCC.
Rey-Barroso et al.25 continued the study with a second handheld camera that can image in the NIR region (995 to 1613 nm). Longer wavelengths penetrate deeper through the skin and reveal more features pertinent to skin cancer diagnosis. With the same set of features, they produced a sensitivity of 85.7% at 76.9% specificity, which was an improvement in specificity over the results obtained by Delpueyo et al. without much loss in sensitivity.

Fig. 11.

Fig. 11

(a) A compact and handheld spectral imaging system that used LEDs to provide spectral scanning. The system weighed 500 g and could capture images in eight wavelengths. (b) Using the system described in (a), different spectral signatures could be seen for nevi, melanoma, and basal cell carcinoma (reproduced from Ref. 24).

Many melanoma diagnostic tools that use spectral imaging exist on the market; however, they are costly.238 Several researchers developed smartphone solutions for the identification of malignant melanoma that were low-cost and accessible. Kim et al.22 built a smartphone spectral imaging system that photographed in 10 wavebands from 440 to 690 nm. They also created a software platform that identified lesion margins and graded lesion severity from spectral images. Ding et al.27 built a similar snapshot system that used a smartphone camera. They photographed nevus lesions and identified elevated optical density of the nevus region at the 550 to 640 nm wavelengths; these wavelengths correspond to the peak absorption wavelengths of melanin and oxygenated hemoglobin. Uthoff et al.28 developed a smartphone spectral imaging system that mapped oxygen concentration, melanin concentration, and erythema measurements onto images of squamous cell carcinoma. However, these methods were only able to visualize the outline of skin lesions. To show that these smartphone-based methods can classify between malignant and benign melanoma lesions, further clinical studies are necessary.

7.1.2. Head and neck cancers

HNC refers to cancers that originate from the nasal and oral cavities, nasopharynx, oropharynx, hypopharynx, larynx, and esophagus. Up to 90% of cases of HNC are squamous cell carcinoma,239 so the majority of studies reviewed focus on head and neck squamous cell carcinoma (HNSCC). Liu et al.45 used an AOTF camera to perform tongue tumor pixel segmentation. For classification, they used the sparse representation method; their system achieved 96.5% pixel accuracy on a dataset of 65 tumor images. Bedard et al.46 built a snapshot imaging system for imaging both autofluorescence and reflectance data. They used the system to photograph images of the oral cavity in healthy individuals and in patients with oral cancer. Using spectral unmixing, they were able to (1) highlight vasculature in the lip region and (2) determine boundaries of tumors in the oral cavity. To image further into the oral cavity, spectral imaging systems used endoscopes. Often, in in vivo diagnosis, perfusion is a feature of interest because higher perfusion can indicate neoplasm. Köhler et al.70 created a spectral laparoscopic system for imaging the esophagus. Using customized metrics, they were able to visualize the hemoglobin index of a resected esophagus from a patient with Barrett’s syndrome. Our research lab contributed new research on the identification of HNSCC ex vivo with the use of a compact spectral camera (Fig. 12). Ma et al.47 imaged histologic slides resected from the laryngeal and hypopharyngeal regions of patients with HNSCC [Fig. 12(a)]. They proposed two different methods of classification: (1) a support-vector machine (SVM) method that uses spectra of segmented nuclei as input data and (2) a convolutional neural network (CNN) method that uses small image patches as input data [Fig. 12(b)]. They found that the CNN classifier had better performance compared with the SVM classifier.
However, they also found that classifiers trained on spectral images did not outperform classifiers trained on RGB images.

Fig. 12.


(a) A compact hyperspectral Snapscan camera on top of a microscopic system (reproduced from Ref. 47). (b) Using the system described, a series of hyperspectral digital pathology images were acquired. The slides were from patients diagnosed with squamous head and neck carcinoma. A machine learning system was used to produce a probability heat map of cancer occurrence (reproduced from Ref. 240).

7.1.3. Lower abdominal cancer

Diagnosis of cancerous tumors in the lower abdominal region often incorporates endoscopic tools. Kumashiro et al.48 used a commercial compact spectral camera for both in vivo and ex vivo identification of colorectal cancer tumors. For ex vivo observation, the camera was connected to a stereoscopic microscope; for in vivo observation, it was connected to a colonoscope. They found that in both the ex vivo and in vivo analyses, the absorption rate of healthy mucosa at the 525 nm wavelength was significantly lower than that of adenocarcinoma. For the in vivo data, tumor classification and real-time tumor mapping were also attempted, achieving a sensitivity of 75.0%. Erfanzadeh et al.50 and Zeng et al.49 developed a handheld spectral imaging system for classification purposes. Erfanzadeh et al. imaged resected ovaries and found that the system can potentially distinguish malignant tumors from benign ovarian cysts. Zeng et al. used the system to image resected colorectal tumors. Mink et al.52 recognized the need for low-cost tools for cervical cancer diagnosis and developed a smartphone-based spectral colposcope. The system showed promise in augmenting biopsy and regular colposcopy. Baltussen et al.66 used two different spectral cameras, one operating in the visible light range (400 to 1000 nm) and one operating in the NIR range (900 to 1700 nm). Four types of tissue were classified: fat, healthy colorectal wall, colorectal tumor, and mucosa. They found that the near-infrared camera slightly outperformed the visible light camera in terms of classification results. Sun et al.241 produced a dataset of 880 hyperspectral images of cholangiocarcinoma (cancer of the bile duct) using an AOTF-based system. They used two types of CNNs (Inception-V3 and ResNet-50) to classify cancerous tissue from normal tissue and achieved a 2% increase in accuracy when using hyperspectral data instead of RGB data.

7.1.4. Other types of cancers

In many forms of solid-tumor cancer, surgery is often the necessary treatment, and it is important that surgical resection remove all malignant tissue. Spectral imaging of ex vivo tissue could be a method for identifying or improving the cancer/normal margin. Van Manen et al.53 used a snapshot camera to image excised breast tumors. They found that, on average, tumor tissue had significantly higher fluorescence intensity than healthy tissue across the 450 to 950 nm wavelengths, except at the 619, 629, 897, and 934 nm wavelengths. Using the hierarchical stochastic neighbor embedding algorithm, they reduced the number of wavelengths used as features and improved segmentation accuracy. Ortega et al.242 combined a compact spatial scanning imager with an upright microscope and a custom scanning stage. They used the system to generate a database of hyperspectral brain tumor histology. For classification between tumor and normal regions, three supervised classification methods were used: a linear neural network, an SVM, and a random forest classifier. They found that the neural network classifier had the best overall accuracy of 78.2% while achieving a sensitivity of 75.44% at 77.03% specificity.

7.1.5. Healing from burns, scars, and wounds

External factors such as age, lifestyle, smoking, and diet influence the healing process. Homeostatic imbalances such as ischemia (low perfusion), hypoxia, edema, and necrotic tissue also directly affect healing. To gauge the healing process of burns, scars, and wounds, constant monitoring of perfusion, oxygenation rates, and tissue formation must be performed along with visual analysis.243 Methods used for wound monitoring include angiography with indocyanine green dye,244–246 LSCI,247–249 optical coherence tomography,250,251 laser Doppler flowmetry,252 and high-resolution ultrasound imaging.253,254 While no single method offers distinct advantages over the others, reflectance spectral imaging offers compact hardware and low-cost, noninvasive imaging.

Marotz et al.255 and Holmer et al.256 used a compact push-broom camera operating in the VIS-NIR region to assess skin transplantation wounds. Reflectance data in the visible region were used to calculate the relative hemoglobin concentration and oxygenation rate in the superficial dermis layer [Figs. 13(b) and 13(c)]. Near-infrared reflectance data, on the other hand, revealed deeper circulation in the subcutaneous layers and were used to estimate deep perfusion [Fig. 13(e)]. These data were mapped over the wound to assess the presence of ischemia [Fig. 13(f)]. Kulcke et al.34 later used the same commercial spectral system to image wound healing over a period of 2 weeks and showed that perfusion and tissue oxygenation rates returned to normal levels after 2 weeks. Rutkowski et al.33 used a compact spectral camera to monitor wound healing during treatment with cold atmospheric plasma. They showed improved angiogenesis in both the dermis and subcutaneous tissue layers following cold atmospheric plasma treatment in vivo.
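Oxygenation maps such as these are commonly derived by fitting the measured absorbance against the known extinction spectra of oxy- and deoxyhemoglobin (a modified Beer–Lambert approach). The sketch below illustrates only the least-squares step, with made-up extinction values rather than tabulated coefficients; real systems also account for scattering and optical path length:

```python
import numpy as np

# Illustrative extinction coefficients for HbO2 and Hb at four VIS
# wavebands -- placeholder values, not from a published table.
eps = np.array([
    [1.2, 3.1],   # band 1: [eps_HbO2, eps_Hb]
    [2.4, 2.4],   # an isosbestic-like band (equal absorption)
    [3.3, 1.1],
    [1.8, 0.9],
])

# Simulate absorbance for a pixel with a "true" mixture: 70% HbO2, 30% Hb.
c_true = np.array([0.7, 0.3])
absorbance = eps @ c_true

# Modified Beer-Lambert fit: recover concentrations by least squares,
# then StO2 = c_HbO2 / (c_HbO2 + c_Hb).
c_fit, *_ = np.linalg.lstsq(eps, absorbance, rcond=None)
sto2 = c_fit[0] / c_fit.sum()
```

Applied per pixel over a datacube, the same fit yields the StO2 maps shown in Fig. 13.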

Fig. 13.


Estimation of physiological values from a wound photograph. (a) RGB image, (b) and (c) relative and segmented tissue oxygenation mapping (StO2 in the figure), (d) reconstructed RGB image from hyperspectral data, (e) near-infrared-based perfusion data (NIR in the figure), and (f) relative tissue hemoglobin index (THI in the figure) (reproduced from Ref. 257).

7.1.6. Pressure sore and vascular occlusion

Pressure sores, also called pressure ulcers or bedsores, are injuries in which the skin is damaged by continuous pressure. They are common among bed-ridden patients and often develop around areas with bony protrusions, such as the heels, tailbone, hips, and ankles.258 To prevent pressure sores, monitoring of vascular occlusion is important.259 Van Manen et al.39 combined snapshot spectral imaging with LSCI to image the upper arm during and after occlusion. During occlusion, perfusion in the epidermis decreased, which was detected with LSCI. From spectral imaging, the oxygenation rate was determined. They found that the oxygenation rate measured with spectral imaging correlates with the perfusion measured with LSCI. Their findings showed that spectral imaging can be helpful in monitoring epidermal blood flow. Klaessens et al.260 developed a compact spectral scanning system using an LCTF to image a constricted hand in the 420 to 730 nm wavelength range. During occlusion, they showed that the concentrations of deoxygenated and oxygenated hemoglobin increased and decreased, respectively. After the occlusion was released, the absorption and oxygenation levels overshot before stabilizing to regular levels after a short period of time. He and Wang42 used smartphones to record hyperspectral images of fingers under pressure and showed similar results. Chang et al.31 used a compact commercial snapshot camera to measure oxygen saturation and monitor bed sores in enrolled patients. A condition similar to the pressure ulcer is the foot ulcer, an ulceration that occurs below the ankle. Foot ulcers are common among people with diabetes and can be chronic in nature.261 They can also be monitored through oxy- and deoxysaturation for both healing and prevention.262–264 Yang et al.265 developed a compact push-broom imager that analyzes oxygen saturation to predict the healing quality of foot ulcers.
Yudovsky et al.266 developed a custom spectral imaging system using LEDs that illuminated at 15 different wavebands between 450 and 700 nm to analyze oxy- and deoxysaturation. The researchers used the system to predict foot ulcers before they form. Lee et al.267 performed a pilot study using two commercial spectral cameras, one in the hemoglobin absorption wavelengths (542 to 578 nm) and one in the near-infrared spectrum (760 to 830 nm).

7.1.7. Retinal diseases

Fundus imaging poses special engineering challenges for compact imagers. Because the eye is a small organ with limited reflection spectra, fundus imaging requires both illumination and magnification for accurate assessment.268 Many different biomarkers and diseases can be detected from the spectral response of the eyes. In clinical settings, most fundus cameras are bulky table-top systems, but many compact handheld and smartphone-based systems have been developed; for reports on these compact systems, consult the reviews by Panwar et al.269 and Wintergerst et al.270 In humans, the macula is covered by the macular pigments lutein, zeaxanthin, and meso-zeaxanthin. These pigments are known to contribute to visual acuity.271 The appearance of this pigmented region can serve as a biomarker for visual or neurological diseases and is often quantified by the macular pigment optical density (MPOD). Diabetic retinopathy is a complication of diabetes that affects the vessels of the retina and can cause blindness in some cases. The oxygen saturation of eye vessels can be used to infer the progression of retinopathy in vivo.272 Early explorations in using compact spectral cameras for oxygenation monitoring were conducted by Johnson et al.104 through a custom-made CTIS. The system was designed to capture 50 spectral bands within the 450 to 700 nm range. Using the snapshot camera, they were able to demonstrate age-related differences in retinal vessel oxygen saturation between two healthy volunteers 30 years apart in age. A CTIS system was later used by Fawzi et al.58 to estimate MPOD in place of other methods, such as autofluorescence imaging and Raman spectroscopy. Li et al.55 used a compact SRDA camera with a microscope to image the rat retina. The authors demonstrated that the oxygenation rate can be successfully extracted from the snapshot camera. Kaluzny et al.54 continued the study in human subjects.
By connecting the camera to a table-top fundus imaging system, they were able to image the human retina as spectral datacubes. Using a best-fit model, they estimated the oxygen saturation rate in the retinal arteries and veins. Through repeated imaging of the same eye, they achieved a mean standard deviation of 1.4%, showcasing the high repeatability of the system. They also estimated MPOD, demonstrating that the snapshot system can extract multiple physiological measures from a single measurement.

7.1.8. Diseases of the central nervous system

An exciting recent application of spectral imaging in bioengineering is the monitoring and diagnosis of neurological diseases through retinal imaging. The retina contains neurons that are directly linked to the brain, so biomarkers for many neurological diseases, such as Alzheimer’s disease (AD), Parkinson’s disease, and multiple sclerosis, can be seen through retinal imaging.273 Amyloid beta (Aβ) peptide has been identified in the brains of people with AD and is a known biomarker. Similarly, autopsies of patients with AD show high concentrations of Aβ in the retina.274 Through spectral imaging of transgenic Alzheimer’s mice using a compact endoscope, More et al.56 showed that in the wavelengths from 450 to 700 nm, there was a marked difference in the optical spectra between wild-type and transgenic mice (Fig. 14). They also showed that changes in the optical spectra strongly correlated with the accumulation of Aβ in the retina and the progression of AD over time.

Fig. 14.


(a) A compact spectral imaging system used to image the retina of mice. (b) Using the system described, the retina of wild-type mice (WT) and transgenic mice for Alzheimer’s disease (APP1/PS1) were imaged. The result showed a significant difference in the spectral signature between the two (reproduced from Ref. 56).

7.2. Surgical Guidance

Lu et al.7 identified four key benefits of using spectral imaging in surgical guidance: (1) visualization of microsurgery features, (2) hyperspectral tumor segmentation during resection, (3) monitoring of tissue oxygenation rate, and (4) visualization of large organs. Compact spectral imagers benefit surgical guidance greatly because they free up limited space in the operating room. Because spectral imaging is good at identifying blood oxygenation status, many researchers have used it in the operating room to monitor blood flow. Anastomotic insufficiency is a break or leak in a surgical suture and is among the most serious complications in colorectal surgery. Jansen-Wilken et al.69 used a spectral camera to detect the anastomosis site during small bowel surgery. Many other researchers explored similar surgical complications, most commonly ischemia.63,67,275 Akbari et al.67 used two cameras operating in the 400 to 1000 nm and 900 to 1700 nm wavelength ranges for bowel surgery. They found that the highest contrast between normal and ischemic regions in the intestine was seen in the 765 to 830 nm wavelength range, and they used an SVM to evaluate ischemia progression over time.

7.2.1. Spectral endoscopy

Many researchers have investigated the use of spectral imaging in combination with surgical visualization tools. Endoscopic tools, which aid minimally invasive surgery, are often combined with compact spectral cameras. Kumashiro et al.48 attached a mobile spectral camera to a colonoscope and directly observed colon lesions in vivo during biopsy. However, because the camera had a long acquisition time, scanning time was limited to 5 s and scanning resolution to 200×200  pixels. Nevertheless, they found significant differences in absorption between normal mucosa and adenocarcinoma regions at the 525 nm wavelength. Laparoscopy is a minimally invasive operation performed through small incisions with the aid of cameras. Clancy et al.63 built a custom laparoscope and measured the bowel oxygenation rate during clamping in vivo. While their system was built for minimally invasive surgeries, imaging was performed during open surgery. Zhang et al.65 used a similar system to identify different types of tissue. They identified an improvement in classification accuracy using multispectral images over RGB images. However, their system was tested on ex vivo tissue and not during live surgery. Some researchers built laparoscopic systems with dual channels such that both live video and spectral images can be captured simultaneously, commonly with the use of beam splitters. Köhler et al.70 developed a laparoscopic camera for aiding esophagus surgery that captured both monochromatic video and spectral images. To validate their system, they compared it with a commercial spectral camera developed for surgical settings. The specimen used was an ex vivo human esophagus with adenocarcinoma. They found that their own compact laparoscopic system showed carcinoma classification results consistent with those of the commercial device.
A similar dual-camera endoscopic system that showed both spectral images and real-time video was developed by Yoon et al.231 They demonstrated the system on an ex vivo pig esophagus and used linear unmixing to estimate the concentration of a staining solution (see Fig. 15). They stained the esophagus with methylene blue (MB) solution and used the measured absorbance to calculate the MB concentration throughout the tissue. Yoon et al.276 later improved the system and used it for clinical testing on 10 patients who underwent colonoscopy. They used spectral angle mapping to extract features and k-nearest neighbors to classify between normal mucosa and polyps.
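Spectral angle mapping, the feature extractor used in the colonoscopy study, scores each pixel by the angle between its spectrum and a reference spectrum; because the angle ignores vector magnitude, it is largely insensitive to uniform illumination changes. A minimal sketch with hypothetical five-band spectra:

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between two spectra; 0 means identical shape.
    Scaling either spectrum by a constant does not change the angle."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference)
    )
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical reference spectrum for normal mucosa (arbitrary units).
reference = np.array([0.30, 0.45, 0.60, 0.55, 0.40])

# A brighter pixel with the same spectral shape -> angle near 0.
same_shape = 1.8 * reference
# A pixel with a different spectral shape -> larger angle.
different = np.array([0.60, 0.50, 0.35, 0.30, 0.25])

a_same = spectral_angle(same_shape, reference)
a_diff = spectral_angle(different, reference)
```

The per-pixel angles (or angles against several references) then serve as compact features for a downstream classifier such as k-nearest neighbors.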

Fig. 15.


Images taken from an ex vivo pig esophagus by a compact spectral endoscope. (a)–(c) The esophagus taken before staining. (d)–(f) The esophagus after MB staining. (g) and (h) The estimation map of MB concentration using two different linear unmixing methods. (i) The spectral absorbance plot for the shaded areas in (h). In (c) and (f), the color corresponds to the color segment in the corresponding image to the left (reproduced from Ref. 231).

7.2.2. Spectral surgical microscope

A surgical microscope, or operating microscope, is an optical microscopy system designed to aid surgical procedures. They are useful to the point of necessity in microsurgery.206 The use of microscopy in surgical theaters has been around since the late 17th century. Today, surgical microscopes are used in many types of surgeries, ranging from micro-operations, such as dentistry, neurosurgery, and ENT surgery, to macro-operations, such as spine surgery, tumor resection, and plastic and reconstructive surgery. Modern surgical microscopes are engineering marvels, combining high-power optics, precise maneuverability, and good stability. They also have many digital components, which allow combination with many imaging modules, such as fluorescence imaging, optical coherence tomography, laser speckle imaging, and spectral imaging. Spectral imaging is of special interest to many researchers because its noninvasive and noncontact nature means it can provide visual aids with minimal complications. Furthermore, surgical microscopes often provide light sources sufficient for both the surgical operation and image acquisition in the form of xenon, halogen, or LED light. This reduces the need for the additional illumination seen in other acquisition setups.

Many setups for spectral imaging in surgical microscopes used a monochrome imaging camera, a broadband light source, and variable filters or a filter wheel. Van Brakel et al.61 used an LCTF as the filter in their setup. They used the system to create high-resolution images of a dental implant; more specifically, they used the spectral data with a model of absorbance based on mucosa thickness to estimate the thickness and height of the soft tissue surrounding the implant. Postprocessing was required to align the spectral images due to motion blur. Nevertheless, the system was able to estimate soft tissue thicknesses consistent with previous literature. Both Roblyer et al.277 and Martin et al.278 used filter wheels in their acquisition setups. For illumination, Roblyer et al. replaced the original light sources of the surgical microscope with a mercury lamp. While Martin et al. used a monochromatic camera, Roblyer et al. used an RGB camera and performed processing on the raw images. Pichette et al.64 used an ultracompact camera with a surgical microscope. The system used a 4×4 SRDA to detect 16 spectral bands in the range of 481 to 632 nm. With the system, they segmented blood vessels, assessed hemoglobin concentration, and detected potential vasomotion and epileptic spike responses.
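A 4×4 SRDA places a 16-element filter mosaic directly on the sensor, so each 4×4 tile of pixels samples all 16 bands at one spatial location. Extracting the band planes from a raw frame is then a strided slicing operation, as in this sketch (the raw frame here is synthetic, and the tile layout is a generic assumption rather than any vendor's actual pattern):

```python
import numpy as np

# Hypothetical raw frame from a 4x4 spectrally resolved detector array
# (SRDA): every 4x4 tile of pixels holds one sample of each of 16 bands.
sensor_h, sensor_w, tile = 512, 512, 4
rng = np.random.default_rng(1)
raw = rng.integers(0, 4096, size=(sensor_h, sensor_w), dtype=np.uint16)

# Extract the 16 band planes by strided slicing: the band at mosaic
# position (i, j) lives at rows i, i+4, ... and columns j, j+4, ...
bands = np.stack(
    [raw[i::tile, j::tile] for i in range(tile) for j in range(tile)],
    axis=-1,
)

# The snapshot tradeoff in one step: spatial resolution drops by the
# tile factor in each axis in exchange for 16 simultaneous bands.
```

Here a 512×512 sensor yields a 128×128×16 datacube per exposure, which is why SRDA cameras are fast but spatially coarser than scanning systems.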

8. Discussion and Future Directions

Spectral imaging, which includes both multispectral and hyperspectral imaging, acquires images in many wavelengths within and beyond the visible light range. The vast amount of data acquired both spatially and spectrally offers benefits in understanding the biochemical compositions of tissues and their locations. The main drive for designing compact and lightweight spectral cameras came from remote sensing, where small cameras are necessary to fit onto UAVs. However, many other imaging-intensive fields, such as biomedical imaging, have benefited tremendously from this progress in the development of compact systems. We reviewed the technological progress made in the engineering and manufacturing of compact spectral cameras and found that current compact systems are vastly superior to many of their bulky counterparts of only 20 years ago. While the engineering principles have existed since the mid-20th century, it was manufacturing progress that drove the miniaturization of many previously cumbersome systems. For example, the physics behind FPI systems has been known since the 17th century. However, manufacturing them in large quantities and small sizes required lithography, which has only been possible since the 1990s. As such, we expect that future progress in producing more compact cameras will come from new manufacturing techniques. We also expect future researchers to focus on the engineering and design of compact snapshot cameras. Compared with spatial and spectral scanning cameras, snapshot cameras have fewer mechanical components, which makes them prime candidates for miniaturization. The variety of methods available for capturing snapshot images means that researchers can pursue different paths toward miniaturization as well. As of this writing, some of the smallest existing spectral cameras (<30  g) are all snapshot cameras using SRDA technology.53,54,76,161,279

8.1. Hardware Limitations and Potential Solutions

Currently, there is still a large tradeoff among spatial scanning, spectral scanning, and snapshot imaging cameras. The tradeoff has three main components: spatial (raster) resolution, spectral (band) resolution, and acquisition time. Spatial and spectral scanning systems are optimized for spatial and spectral resolution, whereas snapshot systems are optimized for acquisition time. For a “dream” spectral imaging system that achieves large spatial and spectral resolution within a short amount of time, several engineering barriers must be overcome. First, what should the acquisition mechanism be? A snapshot system needs to record the entire datacube onto the image sensor in one exposure; the sensor for such a system would be much larger than any counterpart RGB imaging sensor, which by itself can increase the size of the camera. In a spatial or spectral scanning system, the mechanism for translating the sensor or moving the filter must be fast enough to scan through the entire field in a reasonable time. Second, how will data recording work? Due to the enormous amount of data spectral cameras capture, most systems, even commercial ones, send raw data to and receive commands from another computer. Behmann et al.144 described a commercial system that performed live calibrations without user input. New generations of spectral camera systems are heading toward on-the-spot image processing within the system hardware, which can make analysis much more seamless.
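The three-way tradeoff can be made concrete with a back-of-the-envelope pixel and bandwidth budget. With illustrative (not system-specific) numbers, a snapshot camera matching scanning systems in both resolutions would need a sensor and data link far beyond typical compact hardware:

```python
# Back-of-the-envelope datacube budget with illustrative numbers.
spatial_h, spatial_w = 1024, 1024   # desired raster resolution
n_bands = 100                       # desired number of spectral bands
bytes_per_sample = 2                # 12-16 bit ADC stored as 16 bit

# A snapshot system must fit the whole cube in one sensor exposure:
sensor_pixels_needed = spatial_h * spatial_w * n_bands   # ~105 Mpixel

# Cube size per frame, and the data rate at modest video speed:
cube_bytes = sensor_pixels_needed * bytes_per_sample     # ~210 MB/cube
rate_mb_s = cube_bytes * 10 / 1e6                        # 10 cubes/s, MB/s
```

Roughly 105 megapixels per exposure and 2 GB/s of sustained readout illustrate why current designs sacrifice one of the three axes rather than all-out miniaturizing a full-resolution system.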

We briefly touched upon the power issues of existing cameras. For now, power consumption is not a pressing concern because many laboratory-based systems draw power from the grid. However, in a future where biomedical spectral imaging systems are used in low-resource settings, remote power sources and low power consumption must be a research priority. Another potential problem with compact spectral cameras is system cooling. Thermal radiation emitted by the instrument itself is negligible in the visible-NIR range but can affect imagers working in the long-wave infrared (LWIR, 8000 to 14,000 nm) range.280 To minimize this effect, LWIR spectral cameras need a dedicated cooling apparatus, which increases the size and weight of the system.

8.2. Quest for Lower Costs

A barrier to the popularization of spectral imaging research is the high cost of commercialized systems. As of now, a commercial spectral imaging camera can cost tens of thousands of dollars, orders of magnitude more than its commercialized RGB counterpart. This is due to the small market size and the complex manufacturing process: manufacturing accurate spectral imaging systems requires precise spectral components and elaborate calibration processes, similar to the manufacturing of other precision instruments. To counter this, many research facilities and hobbyists have produced customized spectral cameras using commercial off-the-shelf (COTS) components. We reviewed the use of open-source hardware and software, 3D printing, smartphones, low-cost spectroscopy, and off-the-shelf optical systems. One common thread that enables all of them is the rise of personal simulation and modeling software: ray-tracing, optics simulation, and 3D modeling software make customization possible. Still, a wide gap exists between the quality of COTS systems and that of commercialized systems. Many commercialized methods use high-precision manufacturing standards; for example, SRDAs, MEMS, and Offner spectrometers use lithographic manufacturing, which is limited to research labs and clean room operations.121,166 If systems are built using low-cost components, extensive optical calibration is needed, typically using a secondary spectral camera or a known light source. Improper calibration severely distorts experimental outcomes, especially in applications that require high spectral and spatial precision.

Progress in making more low-cost customized spectral cameras will come from engineering with off-the-shelf components and from the open-source movement. Researchers should share new system designs through open-access journals and invite collaboration and improvements. While open-source design is becoming increasingly accepted, it faces significant challenges in medical research. Many new designs have not undergone the strict regulatory review necessary for medical devices. They also had lackluster business models, which discouraged further development. Copyright laws that cover open-source designs differ between hardware and software: software is a “created work” that is often legally protected under copyright, whereas hardware is an “invention” that is protected under patent. Many companies that manufacture open-source hardware still protect their products under a trademark, which acts as a form of quality assurance.281 3D printing and the open-source movement have intertwined roots; much of the software used for 3D design and modeling is free and has large exchange forums on the internet. The majority of individuals involved in the 3D printing community are also involved in open-source projects, which shows a mindset of collaboration.282 We summarized the use of 3D printing for manufacturing customized parts and housings for compact spectral imagers. However, the biggest issue with 3D-printed housing is mechanical durability. PLA, the most commonly used material for 3D printing, is cheap and accurate but has low heat and outdoor tolerance. It also has the potential to shrink after cooling. Future researchers interested in developing 3D-printed housings for hyperspectral cameras should study different materials and their durability; materials such as acrylonitrile butadiene styrene, carbon fiber filaments, and metal-filled filaments can serve as alternatives to PLA.

We believe that smartphones will be an important component of future compact low-cost spectral imagers. Smartphones are already engineered to be extremely compact, with good processing power and low power consumption. They are also relatively low-cost and widely available, and smartphone cameras have made considerable progress in the last decades. Researchers should use smartphones not just as a camera substitute but also as a control unit. For this, open-source phone operating systems should be preferred because they allow control software to be written and shared freely. Smartphones are also connected to the communication network, which means that they can be used in a larger IoT system. We expect more smartphone-based systems to be used in low-resource settings or in coordinated clinical trials. A potential problem is that smartphones come in a variety of configurations and have short-term technical support, so long-term development and collaboration can be difficult.

8.3. Future of Spectral Cameras in Bioimaging

In the second part of the paper, we reviewed the use of compact spectral imagers in the biomedical imaging field. In many cases, building biomedical spectral imaging systems resembled assembling “plug-and-play” components. We believe that compact imaging systems will become much more common in clinical settings; their smaller size means that they occupy less space, leaving more room for other instruments. We predict that compact spectral cameras in biomedical imaging will have two types of applications. First, they will replace existing bulky spectral imaging systems. Surgical and diagnostic devices will be the main beneficiaries of this change. In surgery, smaller cameras can be fitted on top of endoscopes, surgical microscopes, and operating robots without much interference with the surgical process. Spectral imaging has already enabled the visualization of key features during surgery, but it has not been widely adopted because of bulky system sizes.7 Compact spectral cameras can allow for widespread adoption of spectral imaging technologies in the operating room, and we believe that snapshot cameras will be the dominant system in these situations because of their rapid acquisition time. Compact spectral systems will also see greater use in the in vivo diagnosis of skin and retinal diseases. The strength of spectral devices comes from the fact that they are noninvasive and fast, which makes sensitive organs such as the retina and skin prime candidates for diagnosis. These organs also carry well-known spectrally sensitive information, most notably the blood oxygenation rate. Lighter weights will enable some systems to go from tabletop to handheld, making them much more convenient for clinical settings.

Second, compact spectral cameras will be used to research new physiological processes. If we want to see more biomedical imaging research using compact cameras, we must understand more about the spectral signatures of physiological processes. Knowledge of hemoglobin spectra and skin physiology has already helped researchers construct elaborate models to diagnose conditions such as melanoma, burns, wounds, ulcers, diabetic foot, and erythema. Large pathology datasets have been used to construct classification algorithms for digital staining, cancerous cell diagnosis, and cellular segmentation.283 We expect similar progress to be made in the retina, as it is currently linked to many complex diseases of the central nervous system.284 Recently, new understanding of how amyloid beta affects the scattering profile led to the development of hyperspectral imaging for Alzheimer’s diagnosis.56,57,274 We advocate for the creation and sharing of hyperspectral imaging databases. Currently, most spectral imaging databases are satellite images used for remote sensing. Only a handful of hyperspectral databases serve biomedical purposes,285,286 and they cover specific diseases. Creating new databases is difficult: acquisition requires a hyperspectral camera system and many patients or specimens, and the data require a large hosting space, which can reach hundreds of gigabytes. However, the scientific contribution of such databases will be invaluable if they advance our understanding of human physiology.

We also predict the use of compact spectral imaging alongside other imaging modalities. Spectral imaging is not always the superior imaging modality; many applications are limited by the penetration depth of light through skin tissue. Light in the VIS-NIR region has a penetration depth ranging from 0.48 mm at 550 nm to 3.57 mm at 850 nm. For comparison, photoacoustic multispectral imaging can achieve a penetration depth of up to 5 cm with a handheld system.20 To circumvent this shortcoming, imaging in the SWIR wavelengths has been proposed to provide greater penetration depths.88 By combining compact spectral imagers with other imaging modalities, multimodal systems have been constructed to provide point-of-care analysis to patients.108,225 Several imaging modalities pair well with hyperspectral imaging systems. Optical coherence tomography (OCT) captures images at micrometer resolution to a depth of several millimeters; combining OCT with spectral imaging provides both surface chemical information and depth information.287,288 LSCI provides dynamic blood flow information, which pairs well with spectral imaging’s ability to resolve blood oxygenation content.289 Raman spectroscopy provides detailed chemical information at a small spatial scale, which complements spectral imaging.290 Compact devices exist for many of these modalities, which means that multimodal systems can remain compact and convenient.

9. Conclusion

From the large and bulky systems once mounted on satellites and aircraft for remote sensing, spectral cameras have evolved into compact and portable systems. Both the technology and the applications of spectral imaging are still developing. New manufacturing methods and advances in computational speed mean that the spectral cameras of the future can be high-quality, fast, and compact without sacrificing low cost. As spectral cameras become more compact and affordable, more researchers will be able to use them to advance biomedical research.

Acknowledgments

This research was supported in part by the U.S. National Institutes of Health (NIH) Grants (R01CA156775, R01CA204254, R01HL140325, R01CA154475, and R21CA231911), by the Cancer Prevention and Research Institute of Texas (CPRIT) Grant RP190588, and by the Eugene McDermott Fellowship 202009 at the University of Texas at Dallas. The authors thank Kelden Pruitt and Ling Ma for contributing feedback to the manuscript.

Biographies

Minh H. Tran is a PhD candidate in bioengineering at the University of Texas at Dallas. His research is in artificial intelligence, machine learning, and disease diagnosis.

Baowei Fei is the Cecil H. and Ida Green Chair in Systems Biology Science, professor of bioengineering at the University of Texas at Dallas, and professor of radiology at UT Southwestern Medical Center. He is the director of the Quantitative BioImaging Laboratory. He is a fellow of the International Society for Optics and Photonics and a fellow of the American Institute for Medical and Biological Engineering.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Contributor Information

Minh H. Tran, Email: minh.tran2@utdallas.edu.

Baowei Fei, Email: bfei@utdallas.edu.

References

  • 1.Qin J., “Chapter 5 – Hyperspectral imaging instruments,” in Hyperspectral Imaging for Food Quality Analysis and Control, Sun D.-W., Ed., pp. 129–172, Academic Press, San Diego: (2010). [Google Scholar]
  • 2.Goetz A. F., “Three decades of hyperspectral remote sensing of the Earth: a personal view,” Remote Sens. Environ. 113, S5–S16 (2009). 10.1016/j.rse.2007.12.014 [DOI] [Google Scholar]
  • 3.Dale L. M., et al. , “Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review,” Appl. Spectrosc. Rev. 48(2), 142–159 (2013). 10.1080/05704928.2012.705800 [DOI] [Google Scholar]
  • 4.Feng Y.-Z., Sun D.-W., “Application of hyperspectral imaging in food safety inspection and control: a review,” Crit. Rev. Food Sci. Nutr. 52(11), 1039–1058 (2012). 10.1080/10408398.2011.651542 [DOI] [PubMed] [Google Scholar]
  • 5.Polak A., et al. , “Hyperspectral imaging combined with data classification techniques as an aid for artwork authentication,” J. Cult. Heritage 26, 1–11 (2017). 10.1016/j.culher.2017.01.013 [DOI] [Google Scholar]
  • 6.Li Q., et al. , “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013). 10.1117/1.JBO.18.10.100901 [DOI] [PubMed] [Google Scholar]
  • 7.Lu G., Fei B., “Medical hyperspectral imaging: a review,” J. Biomed. Opt. 19(1), 010901 (2014). 10.1117/1.JBO.19.1.010901 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Crocombe R. A., “Portable spectroscopy,” Appl. Spectrosc. 72(12), 1701–1751 (2018). 10.1177/0003702818809719 [DOI] [PubMed] [Google Scholar]
  • 9.Leavesley S. J., et al. , “A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods,” J. Biophotonics 11(1), e201600227 (2018). 10.1002/jbio.201600227 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Chrisp M. P., “Convex diffraction grating imaging spectrometer,” Google Patents (1999).
  • 11.Descour M. R., Dereniak E. L., “Nonscanning no-moving-parts imaging spectrometer,” Proc. SPIE 2480, 48–64 (1995). 10.1117/12.210908 [DOI] [Google Scholar]
  • 12.Ford B. K., Descour M. R., Lynch R. M., “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express 9(9), 444–453 (2001). 10.1364/OE.9.000444 [DOI] [PubMed] [Google Scholar]
  • 13.Horton R. F., “Optical design for a high-etendue imaging Fourier-transform spectrometer,” Proc. SPIE 2819, 300–315 (1996). 10.1117/12.258077 [DOI] [Google Scholar]
  • 14.Sellar R. G., Boreman G. D., “Comparison of relative signal-to-noise ratios of different classes of imaging spectrometer,” Appl. Opt. 44(9), 1614–1624 (2005). 10.1364/AO.44.001614 [DOI] [PubMed] [Google Scholar]
  • 15.Willoughby C. T., Folkman M. A., Figueroa M. A., “Application of hyperspectral-imaging spectrometer systems to industrial inspection,” Proc. SPIE 2599, (1996). 10.1117/12.230385 [DOI] [Google Scholar]
  • 16.Kung C.-C., Lee M.-H., Hsieh C.-L., “Development of an ultraspectral imaging system by using a concave monochromator,” J. Chin. Inst. Eng. 35(3), 329–342 (2012). 10.1080/02533839.2012.655535 [DOI] [Google Scholar]
  • 17.Briers D., et al. , “Laser speckle contrast imaging: theoretical and practical limitations,” J. Biomed. Opt. 18(6), 066018 (2013). 10.1117/1.JBO.18.6.066018 [DOI] [PubMed] [Google Scholar]
  • 18.Abramczyk H., Brozek-Pluska B., “Raman imaging in biochemical and biomedical applications. Diagnosis and treatment of breast cancer,” Chem. Rev. 113(8), 5766–5781 (2013). 10.1021/cr300147r [DOI] [PubMed] [Google Scholar]
  • 19.Marschall S., et al. , “Optical coherence tomography—current technology and applications in clinical and biomedical research,” Anal. Bioanal.Chem. 400(9), 2699–2720 (2011). 10.1007/s00216-011-5008-1 [DOI] [PubMed] [Google Scholar]
  • 20.Taruttis A., Ntziachristos V., “Advances in real-time multispectral optoacoustic imaging and its applications,” Nat. Photonics 9(4), 219–227 (2015). 10.1038/nphoton.2015.29 [DOI] [Google Scholar]
  • 21.Hewett J., et al. , “The application of a compact multispectral imaging system with integrated excitation source to in vivo monitoring of fluorescence during topical photodynamic therapy of superficial skin cancers,” Photochem. Photobiol. 73(3), 278–282 (2001). 10.1562/0031-8655(2001)0730278TAOACM2.0.CO2 [DOI] [PubMed] [Google Scholar]
  • 22.Kim S., et al. , “Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis,” Biomed. Opt. Express 7(12), 5294–5307 (2016). 10.1364/BOE.7.005294 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Luthman A. S., et al. , “Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array,” J. Biophotonics 10(6-7), 840–853 (2017). 10.1002/jbio.201600304 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Delpueyo X., et al. , “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt. 22(6), 065006 (2017). 10.1117/1.JBO.22.6.065006 [DOI] [PubMed] [Google Scholar]
  • 25.Rey-Barroso L., et al. , “Visible and extended near-infrared multispectral imaging for skin cancer diagnosis,” Sensors 18(5), 1441 (2018). 10.3390/s18051441 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Lihacova I., et al. , “A method for skin malformation classification by combining multispectral and skin autofluorescence imaging,” Proc. SPIE 10685, 1068535 (2018). 10.1117/12.2306203 [DOI] [Google Scholar]
  • 27.Ding H., et al. , “Smartphone based multispectral imager and its potential for point-of-care testing,” Analyst 144(14), 4380–4385 (2019). 10.1039/C9AN00853E [DOI] [PubMed] [Google Scholar]
  • 28.Uthoff R. D., et al. , “Point-of-care, multispectral, smartphone-based dermascopes for dermal lesion screening and erythema monitoring,” J. Biomed. Opt. 25(6), 066004 (2020). 10.1117/1.JBO.25.6.066004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Spigulis J., et al. , “Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination,” J. Biomed. Opt. 22(9), 091508 (2017). 10.1117/1.JBO.22.9.091508 [DOI] [PubMed] [Google Scholar]
  • 30.Qi H., et al. , “A hand-held mosaicked multispectral imaging device for early stage pressure ulcer detection,” J. Med. Syst. 35(5), 895–904 (2011). 10.1007/s10916-010-9508-x [DOI] [PubMed] [Google Scholar]
  • 31.Chang M.-C., et al. , “Multimodal sensor system for pressure ulcer wound assessment and care,” IEEE Trans. Ind. Inf. 14(3), 1186–1196 (2017). 10.1109/TII.2017.2782213 [DOI] [Google Scholar]
  • 32.Li Q., et al. , “Microsopic hyperspectral imaging system used in rat skin analysis,” in 2012 5th Int. Conf. BioMed. Eng. and Inf., pp. 76–79 (2012). 10.1109/BMEI.2012.6512880 [DOI] [Google Scholar]
  • 33.Rutkowski R., et al. , “Hyperspectral imaging for in vivo monitoring of cold atmospheric plasma effects on microcirculation in treatment of head and neck cancer and wound healing,” Clin. Plasma Med. 7, 52–57 (2017). 10.1016/j.cpme.2017.09.002 [DOI] [Google Scholar]
  • 34.Kulcke A., et al. , “A compact hyperspectral camera for measurement of perfusion parameters in medicine,” Biomed. Eng./Biomed. Tech. 63(5), 519–527 (2018). 10.1515/bmt-2017-0145 [DOI] [PubMed] [Google Scholar]
  • 35.Marotz J., et al. , “3D-perfusion analysis of burn wounds using hyperspectral imaging,” Burns 47(1), 157–170 (2021). 10.1016/j.burns.2020.06.001 [DOI] [PubMed] [Google Scholar]
  • 36.Thiem D., et al. , “Hyperspectral analysis for perioperative perfusion monitoring: a clinical feasibility study on free and pedicled flaps,” Clin. Oral Investig. 25(3), 933–945 (2021). 10.1007/s00784-020-03382-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Bolton F. J., et al. , “Portable, low-cost multispectral imaging system: design, development, validation, and utilization,” J. Biomed. Opt. 23(12), 121612 (2018). 10.1117/1.JBO.23.12.121612 [DOI] [PubMed] [Google Scholar]
  • 38.Kim S., et al. , “Smartphone-based multispectral imaging and machine-learning based analysis for discrimination between seborrheic dermatitis and psoriasis on the scalp,” Biomed. Opt. Express 10(2), 879–891 (2019). 10.1364/BOE.10.000879 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Van Manen L., et al. , “Feasibility of a snapshot hyperspectral imaging for detection of local skin oxygenation,” Proc. SPIE 10873, 108730Q (2019). 10.1117/12.2507840 [DOI] [Google Scholar]
  • 40.He Q., Wang R. K., “Analysis of skin morphological features and real-time monitoring using snapshot hyperspectral imaging,” Biomed. Opt. Express 10(11), 5625–5638 (2019). 10.1364/BOE.10.005625 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Zherebtsov E., et al. , “Hyperspectral imaging of human skin aided by artificial neural networks,” Biomed. Opt. Express 10(7), 3545–3559 (2019). 10.1364/BOE.10.003545 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.He Q., Wang R., “Hyperspectral imaging enabled by an unmodified smartphone for analyzing skin morphological features and monitoring hemodynamics,” Biomed. Opt. Express 11(2), 895–910 (2020). 10.1364/BOE.378470 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Chandler L., Chandler A., Periasamy A., “Novel snapshot hyperspectral imager for fluorescence imaging,” Proc. SPIE 10498, 1049837 (2018). 10.1117/12.2300933 [DOI] [Google Scholar]
  • 44.Wang F., Behrooz A., Morris M., “High-contrast subcutaneous vein detection and localization using multispectral imaging,” J. Biomed. Opt. 18(5), 050504 (2013). 10.1117/1.JBO.18.5.050504 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Liu Z., Wang H., Li Q., “Tongue tumor detection in medical hyperspectral images,” Sensors 12(1), 162–174 (2012). 10.3390/s120100162 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Bedard N., et al. , “Multimodal snapshot spectral imaging for oral cancer diagnostics: a pilot study,” Biomed. Opt. Express 4(6), 938–949 (2013). 10.1364/BOE.4.000938 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Ma L., et al. , “Hyperspectral microscopic imaging for automatic detection of head and neck squamous cell carcinoma using histologic image and machine learning,” Proc. SPIE 11320, 113200W (2020). 10.1117/12.2549369 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Kumashiro R., et al. , “Integrated endoscopic system based on optical imaging and hyperspectral data analysis for colorectal cancer detection,” Anticancer Res. 36(8), 3925–3932 (2016). [PubMed] [Google Scholar]
  • 49.Zeng Y., et al. , “A multi spectral hand-held spatial frequency domain imaging system for imaging human colorectal cancer,” Proc. SPIE 10874, 108740T (2019). 10.1117/12.2510320 [DOI] [Google Scholar]
  • 50.Erfanzadeh M., et al. , “Low-cost compact multispectral spatial frequency domain imaging prototype for tissue characterization,” Biomed. Opt. Express 9(11), 5503–5510 (2018). 10.1364/BOE.9.005503 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Bolton F. J., et al. , “Development and bench testing of a multi-spectral imaging technology built on a smartphone platform,” Proc. SPIE 9699, 969907 (2016). 10.1117/12.2218694 [DOI] [Google Scholar]
  • 52.Mink J. W., et al. , “Initial clinical testing of a multi-spectral imaging system built on a smartphone platform,” Proc. SPIE 9699, 96990R (2016). 10.1117/12.2218693 [DOI] [Google Scholar]
  • 53.Van Manen L., et al. , “Snapshot hyperspectral imaging for detection of breast tumors in resected specimens,” Proc. SPIE 10856, 108560I (2019). 10.1117/12.2507835 [DOI] [Google Scholar]
  • 54.Kaluzny J., et al. , “Bayer filter snapshot hyperspectral fundus camera for human retinal imaging,” Curr. Eye Res. 42(4), 629–635 (2017). 10.1080/02713683.2016.1221976 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Li H., et al. , “Snapshot hyperspectral retinal imaging using compact spectral resolving detector array,” J. Biophotonics 10(6-7), 830–839 (2017). 10.1002/jbio.201600053 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.More S. S., Beach J. M., Vince R., “Early detection of amyloidopathy in Alzheimer’s mice by hyperspectral endoscopy,” Invest. Ophthalmol. Vis. Sci. 57(7), 3231–3238 (2016). 10.1167/iovs.15-17406 [DOI] [PubMed] [Google Scholar]
  • 57.More S. S., et al. , “In vivo assessment of retinal biomarkers by hyperspectral imaging: early detection of Alzheimer’s disease,” ACS Chem. Neurosci. 10(11), 4492–4501 (2019). 10.1021/acschemneuro.9b00331 [DOI] [PubMed] [Google Scholar]
  • 58.Fawzi A. A., et al. , “Recovery of macular pigment spectrum in vivo using hyperspectral image analysis,” J. Biomed. Opt. 16(10), 106008 (2011). 10.1117/1.3640813 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Cavalcanti T. C., et al. , “Smartphone-based spectral imaging otoscope: system development and preliminary study for evaluation of its potential as a mobile diagnostic tool,” J. Biophotonics 13(6), e2452 (2020). 10.1002/jbio.201960213 [DOI] [PubMed] [Google Scholar]
  • 60.Ortega S., et al. , “Hyperspectral push-broom microscope development and characterization,” IEEE Access 7, 122473–122491 (2019). 10.1109/ACCESS.2019.2937729 [DOI] [Google Scholar]
  • 61.van Brakel R., et al. , “The effect of zirconia and titanium implant abutments on light reflection of the supporting soft tissues,” Clin. Oral Implants Res. 22(10), 1172–1178 (2011). 10.1111/j.1600-0501.2010.02082.x [DOI] [PubMed] [Google Scholar]
  • 62.Nouri D., Lucas Y., Treuillet S., “Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods,” Int. J. Comput. Assist. Radiol. Surg. 11(12), 2185–2197 (2016). 10.1007/s11548-016-1449-5 [DOI] [PubMed] [Google Scholar]
  • 63.Clancy N. T., et al. , “Intraoperative measurement of bowel oxygen saturation using a multispectral imaging laparoscope,” Biomed. Opt. Express 6(10), 4179–4190 (2015). 10.1364/BOE.6.004179 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Pichette J., et al. , “Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging,” Neurophotonics 3(4), 045003 (2016). 10.1117/1.NPh.3.4.045003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Zhang Y., et al. , “Tissue classification for laparoscopic image understanding based on multispectral texture analysis,” Proc. SPIE 9786, 978619 (2016). 10.1117/12.2216090 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Baltussen E. J., et al. , “Hyperspectral imaging for tissue classification, a way toward smart laparoscopic colorectal surgery,” J. Biomed. Opt. 24(1), 016002 (2019). 10.1117/1.JBO.24.1.016002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Akbari H., et al. , “Detection and analysis of the intestinal ischemia using visible and invisible hyperspectral imaging,” IEEE Trans. Biomed. Eng. 57(8), 2011–2017 (2010). 10.1109/TBME.2010.2049110 [DOI] [PubMed] [Google Scholar]
  • 68.Jansen-Winkeln B., et al. , “Determination of the transection margin during colorectal resection with hyperspectral imaging (HSI),” Int. J. Colorectal Dis. 34(4), 731–739 (2019). 10.1007/s00384-019-03250-0 [DOI] [PubMed] [Google Scholar]
  • 69.Jansen-Winkeln B., et al. , “Hyperspektral-imaging bei gastrointestinalen anastomosen,” Der. Chirurg. 89(9), 717–725 (2018). 10.1007/s00104-018-0633-2 [DOI] [PubMed] [Google Scholar]
  • 70.Köhler H., et al. , “Laparoscopic system for simultaneous high-resolution video and rapid hyperspectral imaging in the visible and near-infrared spectral range,” J. Biomed. Opt. 25(8), 086004 (2020). 10.1117/1.JBO.25.8.086004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Goetz A. F., et al. , “Imaging spectrometry for earth remote sensing,” Science 228(4704), 1147–1153 (1985). 10.1126/science.228.4704.1147 [DOI] [PubMed] [Google Scholar]
  • 72.Lansing J. C., Jr., Cline R. W., “The four-and five-band multispectral scanners for Landsat,” Opt. Eng. 14(4), 144312 (1975). 10.1117/12.7971838 [DOI] [Google Scholar]
  • 73.Babey S. K., Anger C. D., “Compact airborne spectrographic imager (CASI): a progress review,” Proc. SPIE 1937, 152–163 (1993). 10.1117/12.157052 [DOI] [Google Scholar]
  • 74.Aikio M., “Hyperspectral prism-grating-prism imaging spectrograph,” Dissertation, Univ. of Oulu, Finland (2001).
  • 75.Zhong Y., et al. , “Mini-UAV-borne hyperspectral remote sensing: from observation and processing to applications,” IEEE Geosci. Remote Sens. Mag. 6(4), 46–62 (2018). 10.1109/MGRS.2018.2867592 [DOI] [Google Scholar]
  • 76.Geelen B., et al. , “A tiny VIS-NIR snapshot multispectral camera,” Proc. SPIE 9374, 937414 (2015). 10.1117/12.2077583 [DOI] [Google Scholar]
  • 77.Wu H., et al. , “Miniaturized handheld hyperspectral imager,” Proc. SPIE 9101, 91010W (2014). 10.1117/12.2049243 [DOI] [Google Scholar]
  • 78.Sandino J., et al. , “Aerial mapping of forests affected by pathogens using UAVs, hyperspectral sensors, and artificial intelligence,” Sensors 18(4), 944 (2018). 10.3390/s18040944 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Pichette J., Charle W., Lambrechts A., “Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan,” Proc. SPIE 10110, 1011014 (2017). 10.1117/12.2253614 [DOI] [Google Scholar]
  • 80.Prieto-Blanco X., et al. , “Optical configurations for imaging spectrometers,” in Computational Intelligence for Remote Sensing, Graña M., Duro R. J., Eds., pp. 1–25, Springer, Berlin, Heidelberg: (2008). [Google Scholar]
  • 81.Adão T., et al. , “Hyperspectral imaging: a review on UAV-based sensors, data processing and applications for agriculture and forestry,” Remote Sens. 9(11), 1110 (2017). 10.3390/rs9111110 [DOI] [Google Scholar]
  • 82.Elliott A. D., et al. , “Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer,” J. Cell Sci. 125(20), 4833–4840 (2012). 10.1242/jcs.108258 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Vermaas W. F., et al. , “In vivo hyperspectral confocal fluorescence imaging to determine pigment localization and distribution in cyanobacterial cells,” Proc. Natl. Acad. Sci. U. S. A. 105(10), 4050–4055 (2008). 10.1073/pnas.0708090105 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Mika A. M., “Three decades of Landsat instruments,” Photogramm. Eng. Remote Sens. 63(7), 839–852 (1997). [Google Scholar]
  • 85.Sigernes F., et al. , “Do it yourself hyperspectral imager for handheld to airborne operations,” Opt. Express 26(5), 6021–6035 (2018). 10.1364/OE.26.006021 [DOI] [PubMed] [Google Scholar]
  • 86.Sigernes F., et al. , “Multipurpose spectral imager,” Appl. Opt. 39(18), 3143–3153 (2000). 10.1364/AO.39.003143 [DOI] [PubMed] [Google Scholar]
  • 87.Gutiérrez-Gutiérrez J. A., et al. , “Custom scanning hyperspectral imaging system for biomedical applications: modeling, benchmarking, and specifications,” Sensors 19(7), 1692 (2019). 10.3390/s19071692 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Batshev V., et al. , “Polarizer-free AOTF-based SWIR hyperspectral imaging for biomedical applications,” Sensors 20(16), 4439 (2020). 10.3390/s20164439 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Polschikova O., et al. , “AOTF-based optical system of a microscope module for multispectral imaging techniques,” Proc. SPIE 10592, 105920H (2017). 10.1117/12.2297614 [DOI] [Google Scholar]
  • 90.Li Q., et al. , “AOTF based hyperspectral tongue imaging system and its applications in computer-aided tongue disease diagnosis,” in 2010 3rd Int. Conf. Biomed. Eng. and Inf., Vol. 4, pp. 1424–1427 (2010). [Google Scholar]
  • 91.Abdlaty R., et al. , “Hyperspectral imaging: comparison of acousto-optic and liquid crystal tunable filters,” Proc. SPIE 10573, 105732P (2018). 10.1117/12.2282532 [DOI] [Google Scholar]
  • 92.Balas C., “A novel optical imaging method for the early detection, quantitative grading, and mapping of cancerous and precancerous lesions of cervix,” IEEE Trans. Biomed. Eng. 48(1), 96–104 (2001). 10.1109/10.900259 [DOI] [PubMed] [Google Scholar]
  • 93.Gupta N., “Development of staring hyperspectral imagers,” in IEEE Appl. Imagery Pattern Recognit. Workshop (AIPR) (2011). 10.1109/AIPR.2011.6176379 [DOI] [Google Scholar]
  • 94.Turner J. F., II, Zalavadia A. H., “A novel surface plasmon coupled tunable wavelength filter for hyperspectral imaging,” Proc. SPIE 10376, 103760A (2017). 10.1117/12.2274671 [DOI] [Google Scholar]
  • 95.Gat N., “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000). 10.1117/12.381686 [DOI] [Google Scholar]
  • 96.Bai C., et al. , “Compact birefringent interferometer for Fourier transform hyperspectral imaging,” Opt. Express 26(2), 1703–1725 (2018). 10.1364/OE.26.001703 [DOI] [PubMed] [Google Scholar]
  • 97.Delauré B., et al. , “The geospectral camera: a compact and geometrically precise hyperspectral and high spatial resolution imager,” Int. Arch. Photogramm. Remote Sens. and Spatial Inf. Sci. XL-1/W1, 69–74 (2013). 10.5194/isprsarchives-XL-1-W1-69-2013 [DOI] [Google Scholar]
  • 98.Couce B., et al. , “A windowing/pushbroom hyperspectral imager,” in Int. Conf. Knowl.-Based and Intell. Inf. and Eng. Syst., Gabrys B., Howlett R. J., Jain L. C., Eds., pp. 300–306 (2006). [Google Scholar]
  • 99.Köhler M. H., et al. , “Hyperspectral imager for the mid-infrared spectral range using a single-mirror interferometer and a windowing method,” OSA Contin. 2(11), 3212–3222 (2019). 10.1364/OSAC.2.003212 [DOI] [Google Scholar]
  • 100.Hagen N. A., Kudenov M. W., “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). 10.1117/1.OE.52.9.090901 [DOI] [Google Scholar]
  • 101.Monakhova K., et al. , “Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array,” Optica 7(10), 1298–1307 (2020). 10.1364/OPTICA.397214 [DOI] [Google Scholar]
  • 102.Salazar-Vazquez J., Mendez-Vazquez A., “A plug-and-play hyperspectral imaging sensor using low-cost equipment,” HardwareX 7, e00087 (2020). 10.1016/j.ohx.2019.e00087 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Habel R., Kudenov M., Wimmer M., “Practical spectral photography,” Comput. Graphics Forum 31(2pt2), 449–458 (2012). 10.1111/j.1467-8659.2012.03024.x [DOI] [Google Scholar]
  • 104.Johnson W. R., et al. , “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt. 12(1), 014036 (2007). 10.1117/1.2434950 [DOI] [PubMed] [Google Scholar]
  • 105.Arce G. R., et al. , “Compressive coded aperture spectral imaging: an introduction,” IEEE Signal Process Mag. 31(1), 105–115 (2013). 10.1109/MSP.2013.2278763 [DOI] [Google Scholar]
  • 106.Luthman A. S., “Spectral imaging systems and sensor characterisations,” in Spectrally Resolved Detector Arrays for Multiplexed Biomedical Fluorescence Imaging, Luthman A. S., Ed., pp. 9–50, Springer International Publishing, Cham: (2018). [Google Scholar]
  • 107.Arad B., Ben-Shahar O., “Sparse recovery of hyperspectral signal from natural RGB images,” in Comput. Vision, Leibe B., et al., Eds., pp. 19–34 (2016). [Google Scholar]
  • 108.Song J. H., Kim C., Yoo Y., “Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications,” IEEE J. Biomed. Health Inf. 19(2), 773–778 (2014). 10.1109/JBHI.2014.2313145 [DOI] [PubMed] [Google Scholar]
  • 109.Yao H.-Y., et al. , “Hyperspectral ophthalmoscope images for the diagnosis of diabetic retinopathy stage,” J. Clin. Med. 9(6), 1613 (2020). 10.3390/jcm9061613 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Koundinya S., et al. , “2D-3D CNN based architectures for spectral reconstruction from RGB images,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit. Workshops (2018). 10.1109/CVPRW.2018.00129 [DOI] [Google Scholar]
  • 111.Alvarez-Gila A., Van De Weijer J., Garrote E., “Adversarial networks for spatial context-aware spectral image reconstruction from RGB,” in Proc. IEEE Int. Conf. Comput. Vision Workshops (2017). 10.1109/ICCVW.2017.64 [DOI] [Google Scholar]
  • 112.Signoroni A., et al. , “Deep learning meets hyperspectral image analysis: a multidisciplinary review,” J. Imaging 5(5), 52 (2019). 10.3390/jimaging5050052 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 113.Stuart M. B., McGonigle A. J., Willmott J. R., “Hyperspectral imaging in environmental monitoring: a review of recent developments and technological advances in compact field deployable systems,” Sensors 19(14), 3071 (2019). 10.3390/s19143071 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Renkoski T. E., Utzinger U., Hatch K. D., “Wide-field spectral imaging of human ovary autofluorescence and oncologic diagnosis via previously collected probe data,” J. Biomed. Opt. 17(3), 036003 (2012). 10.1117/1.JBO.17.3.036003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115.Afromowitz M. A., et al. , “Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth,” IEEE Trans. Biomed. Eng. 35(10), 842–850 (1988). 10.1109/10.7291 [DOI] [PubMed] [Google Scholar]
  • 116.Salsone S., et al. , “Histological validation of near-infrared reflectance multispectral imaging technique for caries detection and quantification,” J. Biomed. Opt. 17(7), 076009 (2012). 10.1117/1.JBO.17.7.076009 [DOI] [PubMed] [Google Scholar]
  • 117.Tack N., et al. , “A compact, high-speed, and low-cost hyperspectral imager,” Proc. SPIE 8266, 82660Q (2012). 10.1117/12.908172 [DOI] [Google Scholar]
  • 118.Machikhin A., Pozhar V., Batshev V., “Double-AOTF-based aberration-free spectral imaging endoscopic system for biomedical applications,” J. Innov. Opt. Health Sci. 8(03), 1541009 (2015). 10.1142/S1793545815410096 [DOI] [Google Scholar]
  • 119.Katrašnik J., et al. , “Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging systems,” Chemometr. Intell. Lab. Syst. 101(1), 23–29 (2010). 10.1016/j.chemolab.2009.11.012 [DOI] [Google Scholar]
  • 120.Guo B., et al. , “Wide-band large-aperture Ag surface-micro-machined MEMS Fabry–Perot interferometers (AgMFPIs) for miniaturized hyperspectral imaging,” Proc. SPIE 10545, 105450U (2018). 10.1117/12.2286438 [DOI] [Google Scholar]
  • 121.Gonzalez P., et al. , “A novel CMOS-compatible, monolithically integrated line-scan hyperspectral imager covering the VIS-NIR range,” Proc. SPIE 9855, 98550N (2016). 10.1117/12.2230726 [DOI] [Google Scholar]
  • 122.Poger S., Angelopoulou E., “Multispectral sensors in computer vision,” Stevens Institute of Technology Technical Report CS Report., 3 (2001).
  • 123.Näsilä A., et al. , “Cubic-inch MOEMS spectral imager,” Proc. SPIE 10931, 109310F (2019). 10.1117/12.2508420 [DOI] [Google Scholar]
  • 124.Hagen N. A., et al. , “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng. 51(11), 111702 (2012). 10.1117/1.OE.51.11.111702 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Qin J., et al. , “Line-scan hyperspectral imaging techniques for food safety and quality applications,” Appl. Sci. 7(2), 125 (2017). 10.3390/app7020125 [DOI] [Google Scholar]
  • 126.Warren C. P., et al. , “Miniaturized visible near-infrared hyperspectral imager for remote-sensing applications,” Opt. Eng. 51(11), 111720 (2012). 10.1117/1.OE.51.11.111720 [DOI] [Google Scholar]
  • 127.Zhou J., et al. , “Design and laboratory calibration of the compact pushbroom hyperspectral imaging system,” Proc. SPIE 7506, 75062M (2009). 10.1117/12.838159 [DOI] [Google Scholar]
  • 128.Du H., et al. , “A prism-based system for multispectral video acquisition,” in IEEE 12th Int. Conf. Comput. Vision (2009). 10.1109/ICCV.2009.5459162 [DOI] [Google Scholar]
  • 129.Abdo M., Badilita V., Korvink J., “Spatial scanning hyperspectral imaging combining a rotating slit with a Dove prism,” Opt. Express 27(15), 20290–20304 (2019). 10.1364/OE.27.020290 [DOI] [PubMed] [Google Scholar]
  • 130.Baek S.-H., et al. , “Compact single-shot hyperspectral imaging using a prism,” ACM Trans. Graphics 36(6), 1–12 (2017). 10.1145/3130800.3130896 [DOI] [Google Scholar]
  • 131. Saager R. B., et al., “Portable (handheld) clinical device for quantitative spectroscopy of skin, utilizing spatial frequency domain reflectance techniques,” Rev. Sci. Instrum. 88(9), 094302 (2017). 10.1063/1.5001075
  • 132. Abdo M., et al., “Dual-mode pushbroom hyperspectral imaging using active system components and feed-forward compensation,” Rev. Sci. Instrum. 89(8), 083113 (2018). 10.1063/1.5025896
  • 133. Pisani M., Zucco M., “Simple and cheap hyperspectral imaging for astronomy (and more),” Proc. SPIE 10677, 1067706 (2018). 10.1117/12.2309835
  • 134. Hyvärinen T., et al., “Compact high-resolution VIS/NIR hyperspectral sensor,” Proc. SPIE 8032, 80320W (2011). 10.1117/12.887003
  • 135. Ziph-Schatzberg L., et al., “Compact, high performance hyperspectral systems design and applications,” Proc. SPIE 9482, 94820W (2015). 10.1117/12.2177564
  • 136. Bakker W., van der Werff H., van der Meer F., “Determining smile and keystone of lab hyperspectral line cameras,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens. (WHISPERS), pp. 1–5 (2019).
  • 137. Yang L., et al., “Design of miniaturization hyperspectral imager based on CMOS sensor,” Proc. SPIE 11151, 1115127 (2019). 10.1117/12.2533027
  • 138. Bajic M., et al., “Airborne sampling of the reflectivity by the hyperspectral line scanner in a visible and near infrared wavelengths,” in Proc. 24th Symp. Eur. Assoc. of Remote Sens. Lab., Dubrovnik, Croatia, pp. 25–27 (2004).
  • 139. Liu X., et al., “Fast hyperspectral imager driven by a low-cost and compact galvo-mirror,” Optik 224, 165716 (2020). 10.1016/j.ijleo.2020.165716
  • 140. Cai F., et al., “Compact dual-channel (hyperspectral and video) endoscopy,” Front. Phys. 8, 110 (2020). 10.3389/fphy.2020.00110
  • 141. Xu Z., Jiang Y., He S., “Multi-mode microscopic hyperspectral imager for the sensing of biological samples,” Appl. Sci. 10(14), 4876 (2020). 10.3390/app10144876
  • 142. Arablouei R., et al., “Fast and robust pushbroom hyperspectral imaging via DMD-based scanning,” Proc. SPIE 9948, 99480A (2016). 10.1117/12.2239107
  • 143. Abdo M., et al., “Automatic correction of diffraction pattern shift in a pushbroom hyperspectral imager with a piezoelectric internal line-scanning unit,” Proc. SPIE 10110, 1011004 (2017). 10.1117/12.2248467
  • 144. Behmann J., et al., “Specim IQ: evaluation of a new, miniaturized handheld hyperspectral camera and its application for plant phenotyping and disease detection,” Sensors 18(2), 441 (2018). 10.3390/s18020441
  • 145. Pozhar V., et al., “Hyperspectral monitoring AOTF-based apparatus,” J. Phys.: Conf. Ser. 1368, 022046 (2019). 10.1088/1742-6596/1368/2/022046
  • 146. Gaponov M., et al., “Acousto-optical imaging spectrometer for unmanned aerial vehicles,” Proc. SPIE 10466, 104661V (2017). 10.1117/12.2288303
  • 147. Ishida T., et al., “A novel approach for vegetation classification using UAV-based hyperspectral imaging,” Comput. Electron. Agric. 144, 80–85 (2018). 10.1016/j.compag.2017.11.027
  • 148. Saptari V., Fourier Transform Spectroscopy Instrumentation Engineering, SPIE Press, Bellingham, Washington (2003).
  • 149. Xu Y., et al., “Ultra-compact Fourier transform imaging spectrometer using a focal plane birefringent interferometer,” Opt. Lett. 43(17), 4081–4084 (2018). 10.1364/OL.43.004081
  • 150. Dutton H. J., Understanding Optical Communications, Vol. 1, Prentice Hall PTR, New Jersey (1998).
  • 151. Rissanen A., et al., “VTT’s Fabry-Perot interferometer technologies for hyperspectral imaging and mobile sensing applications,” Proc. SPIE 10116, 101160I (2017). 10.1117/12.2255950
  • 152. Connes P., “Early history of Fourier transform spectroscopy,” Infrared Phys. 24(2–3), 69–93 (1984). 10.1016/0020-0891(84)90052-6
  • 153. Persky M., “A review of spaceborne infrared Fourier transform spectrometers for remote sensing,” Rev. Sci. Instrum. 66(10), 4763–4797 (1995). 10.1063/1.1146154
  • 154. Harvey A. R., Fletcher-Holmes D. W., “Birefringent Fourier-transform imaging spectrometer,” Opt. Express 12(22), 5368–5374 (2004). 10.1364/OPEX.12.005368
  • 155. Perri A., et al., “Hyperspectral imaging with a TWINS birefringent interferometer,” Opt. Express 27(11), 15956–15967 (2019). 10.1364/OE.27.015956
  • 156. Vaughan M., The Fabry-Perot Interferometer: History, Theory, Practice and Applications, 1st ed., Taylor and Francis, London (2017).
  • 157. Ebermann M., et al., “Tunable MEMS Fabry-Pérot filters for infrared microspectrometers: a review,” Proc. SPIE 9760, 97600H (2016). 10.1117/12.2209288
  • 158. Gupta N., Ashe P. R., Tan S., “Miniature snapshot multispectral imager,” Opt. Eng. 50(3), 033203 (2011). 10.1117/1.3552665
  • 159. Näsilä A., et al., “Hand-held MEMS hyperspectral imager for VNIR mobile applications,” Proc. SPIE 10545, 105450R (2018). 10.1117/12.2286472
  • 160. Luthman A. S., et al., “Bimodal reflectance and fluorescence multispectral endoscopy based on spectrally resolving detector arrays,” J. Biomed. Opt. 24(3), 031009 (2018). 10.1117/1.JBO.24.3.031009
  • 161. Blair S., et al., “A 27-band snapshot hyperspectral imaging system for label-free tumor detection during image-guided surgery,” in Label-free Biomed. Imaging and Sens. (LBIS) (2019).
  • 162. Miao L., et al., “Binary tree-based generic demosaicking algorithm for multispectral filter arrays,” IEEE Trans. Image Process. 15(11), 3550–3558 (2006). 10.1109/TIP.2006.877476
  • 163. Wu R., et al., “Optimized multi-spectral filter arrays for spectral reconstruction,” Sensors 19(13), 2905 (2019). 10.3390/s19132905
  • 164. Sawyer T. W., et al., “Opti-MSFA: a toolbox for generalized design and optimization of multispectral filter arrays,” Opt. Express 30(5), 7591–7611 (2022). 10.1364/OE.446767
  • 165. Tisserand S., “Custom Bayer filter multispectral imaging: emerging integrated technology,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens. (WHISPERS), pp. 1–4 (2019).
  • 166. Geelen B., Tack N., Lambrechts A., “A snapshot multispectral imager with integrated tiled filters and optical duplication,” Proc. SPIE 8613, 861314 (2013). 10.1117/12.2004072
  • 167. Meng Q., et al., “Study on the optical property of the micro Fabry-Perot cavity tunable filter,” Proc. SPIE 7516, 75160T (2009). 10.1117/12.840675
  • 168. Wang S.-W., et al., “16 × 1 integrated filter array in the MIR region prepared by using a combinatorial etching technique,” Appl. Phys. B 82(4), 637–641 (2006). 10.1007/s00340-005-2102-0
  • 169. Hahn R., et al., “Detailed characterization of a hyperspectral snapshot imager for full-field chromatic confocal microscopy,” Proc. SPIE 11352, 113520Y (2020). 10.1117/12.2556797
  • 170. Dougherty J., Jennings T., Snikkers M., “Compact camera technologies for real-time false-color imaging in the SWIR band,” Proc. SPIE 8899, 889907 (2013). 10.1117/12.2032737
  • 171. Gonzalez P., et al., “An extremely compact and high-speed line-scan hyperspectral imager covering the SWIR range,” Proc. SPIE 10656, 106560L (2018). 10.1117/12.2304918
  • 172. Yu X., et al., “Batch fabrication and compact integration of customized multispectral filter arrays towards snapshot imaging,” Opt. Express 29(19), 30655–30665 (2021). 10.1364/OE.439390
  • 173. Žídek K., et al., “Compact and robust hyperspectral camera based on compressed sensing,” Proc. SPIE 10151, 101510N (2016). 10.1117/12.2250268
  • 174. Gao L., Kester R. T., Tkaczyk T. S., “Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17(15), 12293–12308 (2009). 10.1364/OE.17.012293
  • 175. Gao L., Smith R. T., Tkaczyk T. S., “Snapshot hyperspectral retinal camera with the image mapping spectrometer (IMS),” Biomed. Opt. Express 3(1), 48–54 (2012). 10.1364/BOE.3.000048
  • 176. Pawlowski M. E., et al., “High speed image mapping spectrometer for biomedical applications,” in Opt. in the Life Sci. Congr., p. BoW4A.2 (2017).
  • 177. Feynman R. P., “There’s plenty of room at the bottom: an invitation to enter a new field of physics,” Resonance 16(9), 890–905 (2011). 10.1007/s12045-011-0109-x
  • 178. Nayfeh A. H., Younis M. I., Abdel-Rahman E. M., “Dynamic pull-in phenomenon in MEMS resonators,” Nonlinear Dyn. 48(1–2), 153–163 (2007). 10.1007/s11071-006-9079-z
  • 179. Kraft M., Kenda A., Schenk H., “Hand-held high-speed spectrometers based on micro-electro-mechanical components,” in Proc. Symp. Photonics Technol. for 7th Framework Prog., Wroclaw (2006).
  • 180. Kraft M., et al., “MEMS-based compact FT-spectrometers: a platform for spectroscopic mid-infrared sensors,” in IEEE Sensors (2008). 10.1109/ICSENS.2008.4716400
  • 181. Jodor G., et al., “A MEMS-based Fourier transform spectrometer,” Fourier Transf. Spectrosc. 84, FMD11 (2003). 10.1364/FTS.2003.FMD11
  • 182. Judy J. W., “Microelectromechanical systems (MEMS): fabrication, design and applications,” Smart Mater. Struct. 10(6), 1115 (2001). 10.1088/0964-1726/10/6/301
  • 183. Trops R., et al., “Miniature MOEMS hyperspectral imager with versatile analysis tools,” Proc. SPIE 10931, 109310W (2019). 10.1117/12.2506366
  • 184. Rissanen A., et al., “MEMS FPI-based smartphone hyperspectral imager,” Proc. SPIE 9855, 985507 (2016). 10.1117/12.2229575
  • 185. Dong X., et al., “DMD-based hyperspectral microscopy with flexible multiline parallel scanning,” Microsyst. Nanoeng. 7(1), 68 (2021). 10.1038/s41378-021-00299-2
  • 186. Dong X., et al., “DMD-based hyperspectral imaging system with tunable spatial and spectral resolution,” Opt. Express 27(12), 16995–17006 (2019). 10.1364/OE.27.016995
  • 187. Wang Y., et al., “MEMS scanner based handheld fluorescence hyperspectral imaging system,” Sens. Actuators, A 188, 450–455 (2012). 10.1016/j.sna.2011.12.009
  • 188. Levin P., et al., “A wafer level packaged fully integrated hyperspectral Fabry-Perot filter with extended optical range,” in IEEE 32nd Int. Conf. Micro Electro Mech. Syst. (MEMS) (2019). 10.1109/MEMSYS.2019.8870831
  • 189. Long K. D., et al., “Multimode smartphone biosensing: the transmission, reflection, and intensity spectral (TRI)-analyzer,” Lab Chip 17(19), 3246–3257 (2017). 10.1039/C7LC00633K
  • 190. Zhang C., et al., “Open-source 3D-printable optics equipment,” PLoS One 8(3), e59840 (2013). 10.1371/journal.pone.0059840
  • 191. Ngo T. D., et al., “Additive manufacturing (3D printing): a review of materials, methods, applications and challenges,” Compos. Part B 143, 172–196 (2018). 10.1016/j.compositesb.2018.02.012
  • 192. Ghassemi P., et al., “Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging,” J. Biomed. Opt. 20(12), 121312 (2015). 10.1117/1.JBO.20.12.121312
  • 193. Nevala N., Baden T., “A low-cost hyperspectral scanner for natural imaging and the study of animal colour vision above and under water,” Sci. Rep. 9(1), 10799 (2019). 10.1038/s41598-019-47220-6
  • 194. Blachowicz T., Ehrmann G., Ehrmann A., “Optical elements from 3D printed polymers,” e-Polymers 21(1), 549–565 (2021). 10.1515/epoly-2021-0061
  • 195. Pearre B. W., et al., “Fast micron-scale 3D printing with a resonant-scanning two-photon microscope,” Addit. Manuf. 30, 100887 (2019). 10.1016/j.addma.2019.100887
  • 196. Magnan P., “Detection of visible photons in CCD and CMOS: a comparative view,” Nucl. Instrum. Methods Phys. Res., Sect. A 504(1–3), 199–212 (2003). 10.1016/S0168-9002(03)00792-7
  • 197. Bohr M. T., Young I. A., “CMOS scaling trends and beyond,” IEEE Micro 37(6), 20–29 (2017). 10.1109/MM.2017.4241347
  • 198. Janesick J., Andrews J. T., Elliott T., “Fundamental performance differences between CMOS and CCD imagers: Part 1,” Proc. SPIE 9591, 959102 (2006). 10.1117/12.2189941
  • 199. Sun Y., et al., “Endoscopic fluorescence lifetime imaging for in vivo intraoperative diagnosis of oral carcinoma,” Microsc. Microanal. 19(4), 791 (2013). 10.1017/S1431927613001530
  • 200. Lim H.-T., Murukeshan V. M., “A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications,” Sci. Rep. 6, 24044 (2016). 10.1038/srep24044
  • 201. Ohsaki M., et al., “Hyperspectral imaging using flickerless active LED illumination,” Proc. SPIE 10338, 103380Z (2017). 10.1117/12.2266765
  • 202. Di Cecilia L., Marazzi F., Rovati L., “A hyperspectral imaging system for the evaluation of the human iris spectral reflectance,” Proc. SPIE 10045, 100451S (2017). 10.1117/12.2252184
  • 203. Di Cecilia L., Marazzi F., Rovati L., “Hyperspectral imaging of the human iris,” Proc. SPIE 10412, 104120R (2017). 10.1117/12.2286173
  • 204. Rees S., Dobre G., “Maximum permissible exposure of the retina in the human eye in optical coherence tomography systems using a confocal scanning laser ophthalmoscopy platform,” Proc. SPIE 8925, 89250N (2014). 10.1117/12.2044819
  • 205. Yan B., et al., “Maintaining ocular safety with light exposure, focusing on devices for optogenetic stimulation,” Vis. Res. 121, 57–71 (2016). 10.1016/j.visres.2016.01.006
  • 206. Ma L., Fei B., “Comprehensive review of surgical microscopes: technology development and medical applications,” J. Biomed. Opt. 26(1), 010901 (2021). 10.1117/1.JBO.26.1.010901
  • 207. Amigo J. M., Grassi S., “Chapter 1.2 – Configuration of hyperspectral and multispectral imaging systems,” in Data Handling in Science and Technology, Amigo J. M., Ed., pp. 17–34, Elsevier (2019).
  • 208. Sawyer T. W., Luthman A. S., Bohndiek S. E., “Evaluation of illumination system uniformity for wide-field biomedical hyperspectral imaging,” J. Opt. 19(4), 045301 (2017). 10.1088/2040-8986/aa6176
  • 209. Patel S. R., et al., “A prototype hyperspectral system with a tunable laser source for retinal vessel imaging,” Invest. Ophthalmol. Vis. Sci. 54(8), 5163–5168 (2013). 10.1167/iovs.13-12124
  • 210. Martin M. E., et al., “Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection,” Ann. Biomed. Eng. 34(6), 1061–1068 (2006). 10.1007/s10439-006-9121-9
  • 211. Podlipnov V. V., et al., “Experimental determination of soil moisture on hyperspectral images,” Comput. Opt. 42(5), 877–884 (2018). 10.18287/2412-6179-2017-42-5-877-884
  • 212. Li H.-N., et al., “Multi-spectral imaging using LED illuminations,” in 2012 5th Int. Congr. Image and Signal Process., pp. 538–542 (2012).
  • 213. Shrestha R., Hardeberg J. Y., “How are LED illumination based multispectral imaging systems influenced by different factors?” in Int. Conf. Image and Signal Process., Elmoataz A., et al., Eds., pp. 61–71 (2014).
  • 214. Everdell N., et al., “Multispectral imaging of the ocular fundus using light emitting diode illumination,” Rev. Sci. Instrum. 81(9), 093706 (2010). 10.1063/1.3478001
  • 215. Stuart M. B., et al., “Low-cost hyperspectral imaging with a smartphone,” J. Imaging 7(8), 136 (2021). 10.3390/jimaging7080136
  • 216. Fortuna J., Johansen T. A., “A lightweight payload for hyperspectral remote sensing using small UAVs,” in 2018 Workshop Hyperspectral Image and Signal Process.: Evol. in Remote Sens. (WHISPERS), pp. 1–5 (2018).
  • 217. Jeon D. S., et al., “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graphics 38, 117 (2019). 10.1145/3306346.3322946
  • 218. Riihiaho K. A., Eskelinen M. A., Pölönen I., “A do-it-yourself hyperspectral imager brought to practice with open-source Python,” Sensors 21(4), 1072 (2021). 10.3390/s21041072
  • 219. Henriksen M. B., et al., “Real-time corrections for a low-cost hyperspectral instrument,” in 2019 10th Workshop Hyperspectral Imaging and Signal Process.: Evol. in Remote Sens. (WHISPERS), pp. 1–5 (2019).
  • 220. Stuart M. B., et al., “Low-cost hyperspectral imaging system: design and testing for laboratory,” Sensors 20(11), 3293 (2020). 10.3390/s20113293
  • 221. Frank J. H., et al., “A white light confocal microscope for spectrally resolved multidimensional imaging,” J. Microsc. 227(3), 203–215 (2007). 10.1111/j.1365-2818.2007.01803.x
  • 222. Uto K., et al., “Development of a low-cost hyperspectral whiskbroom imager using an optical fiber bundle, a swing mirror, and compact spectrometers,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 9(9), 3909–3925 (2016). 10.1109/JSTARS.2016.2592987
  • 223. Rateni G., Dario P., Cavallo F., “Smartphone-based food diagnostic technologies: a review,” Sensors 17(6), 1453 (2017). 10.3390/s17061453
  • 224. Erickson D., et al., “Smartphone technology can be transformative to the deployment of lab-on-chip diagnostics,” Lab Chip 14(17), 3159–3164 (2014). 10.1039/C4LC00142G
  • 225. Laksanasopin T., et al., “A smartphone dongle for diagnosis of infectious diseases at the point of care,” Sci. Transl. Med. 7(273), 273re1 (2015). 10.1126/scitranslmed.aaa0056
  • 226. McCracken K. E., et al., “Smartphone-based fluorescence detection of bisphenol A from water samples,” RSC Adv. 7(15), 9237–9243 (2017). 10.1039/C6RA27726H
  • 227. Wang L., et al., “LeafSpec: an accurate and portable hyperspectral corn leaf imager,” Comput. Electron. Agric. 169, 105209 (2020). 10.1016/j.compag.2019.105209
  • 228. Berisha S., et al., “SIproc: an open-source biomedical data processing platform for large hyperspectral images,” Analyst 142(8), 1350–1357 (2017). 10.1039/C6AN02082H
  • 229. Rattanavarin S., et al., “Handheld multispectral confocal microscope for cervical cancer diagnosis,” in 2012 Int. Conf. Opt. MEMS and Nanophotonics, pp. 41–42 (2012).
  • 230. Jung I. W., et al., “2-D MEMS scanner for handheld multispectral dual-axis confocal microscopes,” J. Microelectromech. Syst. 27(4), 605–612 (2018). 10.1109/JMEMS.2018.2834549
  • 231. Yoon J., et al., “A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract,” Nat. Commun. 10, 1902 (2019). 10.1038/s41467-019-09484-4
  • 232. Pilling M., Gardner P., “Fundamental developments in infrared spectroscopic imaging for biomedical applications,” Chem. Soc. Rev. 45(7), 1935–1957 (2016). 10.1039/C5CS00846H
  • 233. Tasche K. K., et al., “Definition of ‘close margin’ in oral cancer surgery and association of margin distance with local recurrence rate,” JAMA Otolaryngol.–Head Neck Surg. 143(12), 1166–1172 (2017). 10.1001/jamaoto.2017.0548
  • 234. Apalla Z., et al., “Skin cancer: epidemiology, disease burden, pathophysiology, diagnosis, and therapeutic approaches,” Dermatol. Ther. (Heidelb.) 7(1), 5–19 (2017). 10.1007/s13555-016-0165-y
  • 235. Bray F., et al., “Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: Cancer J. Clin. 68(6), 394–424 (2018). 10.3322/caac.21492
  • 236. Elbaum M., et al., “Automatic differentiation of melanoma from melanocytic nevi with multispectral digital dermoscopy: a feasibility study,” J. Am. Acad. Dermatol. 44(2), 207–218 (2001). 10.1067/mjd.2001.110395
  • 237. Hewett J., et al., “Fluorescence detection of superficial skin cancers,” J. Mod. Opt. 47(11), 2021–2027 (2000). 10.1080/09500340008232454
  • 238. Song E., et al., “Paired comparison of the sensitivity and specificity of multispectral digital skin lesion analysis and reflectance confocal microscopy in the detection of melanoma in vivo: a cross-sectional study,” J. Am. Acad. Dermatol. 75(6), 1187–1192.e2 (2016). 10.1016/j.jaad.2016.07.022
  • 239. Marur S., Forastiere A. A., “Head and neck cancer: changing epidemiology, diagnosis, and treatment,” Mayo Clinic Proc. 83(4), 489–501 (2008). 10.4065/83.4.489
  • 240. Ma L., et al., “Hyperspectral microscopic imaging for the detection of head and neck squamous cell carcinoma on histologic slides,” Proc. SPIE 11603, 116030P (2021). 10.1117/12.2581970
  • 241. Sun L., et al., “Diagnosis of cholangiocarcinoma from microscopic hyperspectral pathological dataset by deep convolution neural networks,” Methods 202, 22–30 (2022). 10.1016/j.ymeth.2021.04.005
  • 242. Ortega S., et al., “Detecting brain tumor in pathological slides using hyperspectral imaging,” Biomed. Opt. Express 9(2), 818–831 (2018). 10.1364/BOE.9.000818
  • 243. Sowa M. G., et al., “Review of near-infrared methods for wound assessment,” J. Biomed. Opt. 21(9), 091304 (2016). 10.1117/1.JBO.21.9.091304
  • 244. Patel K. M., et al., “Use of intraoperative indocyanin-green angiography to minimize wound healing complications in abdominal wall reconstruction,” J. Plast. Surg. Hand Surg. 47(6), 476–480 (2013). 10.3109/2000656X.2013.787085
  • 245. Holm C., et al., “Intraoperative evaluation of skin-flap viability using laser-induced fluorescence of indocyanine green,” Br. J. Plast. Surg. 55(8), 635–644 (2002). 10.1054/bjps.2002.3969
  • 246. Alander J. T., et al., “A review of indocyanine green fluorescent imaging in surgery,” Int. J. Biomed. Imaging 2012, 940585 (2012). 10.1155/2012/940585
  • 247. Lindahl F., Tesselaar E., Sjöberg F., “Assessing paediatric scald injuries using laser speckle contrast imaging,” Burns 39(4), 662–666 (2013). 10.1016/j.burns.2012.09.018
  • 248. Rege A., et al., “In vivo laser speckle imaging reveals microvascular remodeling and hemodynamic changes during wound healing angiogenesis,” Angiogenesis 15(1), 87–98 (2012). 10.1007/s10456-011-9245-x
  • 249. Heeman W., et al., “Clinical applications of laser speckle contrast imaging: a review,” J. Biomed. Opt. 24(8), 080901 (2019). 10.1117/1.JBO.24.8.080901
  • 250. Deegan A. J., et al., “Optical coherence tomography angiography monitors human cutaneous wound healing over time,” Quant. Imaging Med. Surg. 8(2), 135 (2018). 10.21037/qims.2018.02.07
  • 251. Cobb M. J., et al., “Noninvasive assessment of cutaneous wound healing using ultrahigh-resolution optical coherence tomography,” J. Biomed. Opt. 11(6), 064002 (2006). 10.1117/1.2388152
  • 252. Atiles L., et al., “Laser Doppler flowmetry in burn wounds,” J. Burn Care Rehabil. 16(4), 388–393 (1995). 10.1097/00004630-199507000-00003
  • 253. Dyson M., et al., “Wound healing assessment using 20 MHz ultrasound and photography,” Skin Res. Technol. 9(2), 116–121 (2003). 10.1034/j.1600-0846.2003.00020.x
  • 254. Gnyawali S. C., et al., “High-resolution harmonics ultrasound imaging for non-invasive characterization of wound healing in a pre-clinical swine model,” PLoS One 10(3), e0122327 (2015). 10.1371/journal.pone.0122327
  • 255. Marotz J., et al., “Extended perfusion parameter estimation from hyperspectral imaging data for bedside diagnostic in medicine,” Molecules 24(22), 4164 (2019). 10.3390/molecules24224164
  • 256. Holmer A., et al., “Hyperspectral imaging in perfusion and wound diagnostics: methods and algorithms for the determination of tissue parameters,” Biomed. Eng./Biomed. Tech. 63(5), 547–556 (2018). 10.1515/bmt-2017-0155
  • 257. Holmer A., et al., “Oxygenation and perfusion monitoring with a hyperspectral camera system for chemical based tissue analysis of skin and organs,” Physiol. Meas. 37(11), 2064 (2016). 10.1088/0967-3334/37/11/2064
  • 258. Evans J. M., et al., “Pressure ulcers: prevention and management,” Mayo Clinic Proc. 70(8), 789–799 (1995). 10.4065/70.8.789
  • 259. Klaessens J. H., et al., “Non-contact tissue perfusion and oxygenation imaging using a LED based multispectral and a thermal imaging system, first results of clinical intervention studies,” Proc. SPIE 8572, 857207 (2013). 10.1117/12.2003807
  • 260. Klaessens J. H., et al., “Non-invasive skin oxygenation imaging using a multi-spectral camera system: effectiveness of various concentration algorithms applied on human skin,” Proc. SPIE 7174, 717408 (2009). 10.1117/12.808707
  • 261. Lazarus G. S., et al., “Definitions and guidelines for assessment of wounds and evaluation of healing,” Wound Repair Regener. 2(3), 165–170 (1994). 10.1046/j.1524-475X.1994.20305.x
  • 262. Nouvong A., et al., “Evaluation of diabetic foot ulcer healing with hyperspectral imaging of oxyhemoglobin and deoxyhemoglobin,” Diabetes Care 32(11), 2056–2061 (2009). 10.2337/dc08-2246
  • 263. Jeffcoate W., et al., “Use of HSI to measure oxygen saturation in the lower limb and its correlation with healing of foot ulcers in diabetes,” Diabetic Med. 32(6), 798–802 (2015). 10.1111/dme.12778
  • 264. Khaodhiar L., et al., “The use of medical hyperspectral technology to evaluate microcirculatory changes in diabetic foot ulcers and to predict clinical outcomes,” Diabetes Care 30(4), 903–910 (2007). 10.2337/dc06-2209
  • 265. Yang Q., et al., “Investigation of the performance of hyperspectral imaging by principal component analysis in the prediction of healing of diabetic foot ulcers,” J. Imaging 4(12), 144 (2018). 10.3390/jimaging4120144
  • 266. Yudovsky D., Nouvong A., Pilon L., “Hyperspectral imaging in diabetic foot wound care,” J. Diabetes Sci. Technol. 4(5), 1099–1113 (2010). 10.1177/193229681000400508
  • 267. Lee C. J., et al., “Quantitative results of perfusion utilising hyperspectral imaging on non-diabetics and diabetics: a pilot study,” Int. Wound J. 17, 1809–1816 (2020). 10.1111/iwj.13469
  • 268. DeHoog E., Schwiegerling J., “Fundus camera systems: a comparative analysis,” Appl. Opt. 48(2), 221–228 (2009). 10.1364/AO.48.000221
  • 269. Panwar N., et al., “Fundus photography in the 21st century: a review of recent technological advances and their implications for worldwide healthcare,” Telemed. e-Health 22(3), 198–208 (2016). 10.1089/tmj.2015.0068
  • 270. Wintergerst M. W., et al., “Smartphone-based fundus imaging: where are we now?” Asia-Pac. J. Ophthalmol. 9(4), 308–314 (2020). 10.1097/APO.0000000000000303
  • 271. Loskutova E., et al., “Macular pigment and its contribution to vision,” Nutrients 5(6), 1962–1969 (2013). 10.3390/nu5061962
  • 272. Guduru A., et al., “Oxygen saturation of retinal vessels in all stages of diabetic retinopathy and correlation to ultra-wide field fluorescein angiography,” Invest. Ophthalmol. Vis. Sci. 57(13), 5278–5284 (2016). 10.1167/iovs.16-20190
  • 273. Yap T. E., et al., “Retinal correlates of neurological disorders,” Ther. Adv. Chronic Dis. 10, 2040622319882205 (2019). 10.1177/2040622319882205
  • 274. Hadoux X., et al., “Non-invasive in vivo hyperspectral imaging of the retina for potential biomarker use in Alzheimer’s disease,” Nat. Commun. 10, 4227 (2019). 10.1038/s41467-019-12242-1
  • 275. Akbari H., et al., “Hyperspectral imaging and diagnosis of intestinal ischemia,” in 2008 30th Annu. Int. Conf. IEEE Eng. in Med. and Biol. Soc., pp. 1238–1241 (2008).
  • 276. Yoon J., et al., “First experience in clinical application of hyperspectral endoscopy for evaluation of colonic polyps,” J. Biophotonics 14(9), e202100078 (2021). 10.1002/jbio.202100078
  • 277. Roblyer D. M., et al., “Multispectral optical imaging device for in vivo detection of oral neoplasia,” J. Biomed. Opt. 13(2), 024019 (2008). 10.1117/1.2904658
  • 278. Martin R., Thies B., Gerstner A. O., “Hyperspectral hybrid method classification for detecting altered mucosa of the human larynx,” Int. J. Health Geogr. 11(1), 21 (2012). 10.1186/1476-072X-11-21
  • 279. Imec, “Imec and XIMEA launch industrial grade hyperspectral camera solution addressing the high-quality standards for machine vision applications,” 2020, https://www.imec-int.com/en/press/imec-and-ximea-launch-industrial-grade-hyperspectral-camera-solution-addressing-high-quality (accessed 03/01/2022).
  • 280. Holma H., et al., “Advances in hyperspectral LWIR pushbroom imagers,” Proc. SPIE 8032, 80320X (2011). 10.1117/12.884078
  • 281. Niezen G., Eslambolchilar P., Thimbleby H., “Open-source hardware for medical devices,” BMJ Innov. 2(2), 78–83 (2016). 10.1136/bmjinnov-2015-000080
  • 282. Moilanen J., Vadén T., “3D printing community and emerging practices of peer production,” First Monday 18(8) (2013). 10.5210/fm.v18i8.4271
  • 283. Ortega S., et al., “Hyperspectral and multispectral imaging in digital and computational pathology: a systematic review,” Biomed. Opt. Express 11(6), 3195–3233 (2020). 10.1364/BOE.386338
  • 284. London A., Benhar I., Schwartz M., “The retina as a window to the brain: from eye research to CNS disorders,” Nat. Rev. Neurol. 9(1), 44–53 (2013). 10.1038/nrneurol.2012.227
  • 285. Fabelo H., et al., “In vivo hyperspectral human brain image database for brain cancer detection,” IEEE Access 7, 39098–39116 (2019). 10.1109/ACCESS.2019.2904788
  • 286. Leon R., et al., “Non-invasive skin cancer diagnosis using hyperspectral imaging for in situ clinical support,” J. Clin. Med. 9(6), 1662 (2020). 10.3390/jcm9061662
  • 287. Dontu S., et al., “Combined spectral-domain optical coherence tomography and hyperspectral imaging applied for tissue analysis: preliminary results,” Appl. Surf. Sci. 417, 119–123 (2017). 10.1016/j.apsusc.2017.03.175
  • 288. Guay-Lord R., et al., “Combined optical coherence tomography and hyperspectral imaging using a double-clad fiber coupler,” J. Biomed. Opt. 21(11), 116008 (2016). 10.1117/1.JBO.21.11.116008
  • 289. Lee S., et al., “Multimodal imaging of laser speckle contrast imaging combined with mosaic filter-based hyperspectral imaging for precise surgical guidance,” IEEE Trans. Biomed. Eng. 69(1), 443–452 (2021). 10.1109/TBME.2021.3097122
  • 290. Maybury I. J., et al., “Comparing the effectiveness of hyperspectral imaging and Raman spectroscopy: a case study on Armenian manuscripts,” Heritage Sci. 6(1), 1–15 (2018). 10.1186/s40494-018-0206-1
