The Review of Scientific Instruments. 2009 Aug 5;80(8):081101. doi: 10.1063/1.3184828

Invited Review Article: Imaging techniques for harmonic and multiphoton absorption fluorescence microscopy

Ramón Carriles 1, Dawn N Schafer 2, Kraig E Sheetz 2, Jeffrey J Field 2, Richard Cisek 3, Virginijus Barzda 3, Anne W Sylvester 4, Jeffrey A Squier 2
PMCID: PMC2736611  PMID: 19725639

Abstract

We review the current state of multiphoton microscopy. In particular, the requirements and limitations associated with high-speed multiphoton imaging are considered. A description of the different scanning technologies such as line scan, multifoci approaches, multidepth microscopy, and novel detection techniques is given. The main nonlinear optical contrast mechanisms employed in microscopy are reviewed, namely, multiphoton excitation fluorescence, second harmonic generation, and third harmonic generation. Techniques for optimizing these nonlinear mechanisms through a careful measurement of the spatial and temporal characteristics of the focal volume are discussed, and a brief summary of photobleaching effects is provided. Finally, we consider three new applications of multiphoton microscopy: nonlinear imaging in microfluidics as applied to chemical analysis, and the use of two-photon absorption and self-phase modulation as contrast mechanisms for imaging problems in the medical sciences.

INTRODUCTION

Second and third harmonic generations (SHG and THG, respectively) and two-photon excitation fluorescence (TPEF) are currently the most widely used contrast mechanisms in nonlinear optical microscopy. The nonlinear contrast is based on second- and third-order nonlinear light-matter interactions that are induced at the focus of a high numerical aperture (NA) microscope objective. Since these nonlinear optical effects are proportional to the second or third power of the fundamental light intensity, essentially only the light at the focal plane of the optic efficiently drives the nonlinearity. This effectively eliminates out-of-focus contributions and results in the optical sectioning inherent to nonlinear imaging techniques. Therefore, it is a straightforward task to generate a sharp, two-dimensional (2D) image when nonlinear optical signals are utilized. The excitation beam is simply raster scanned across the focal plane, and the signal intensity (from the desired optical nonlinearity or nonlinearities) is measured as a function of this beam position. Extrapolation to three-dimensional (3D) images requires only one further step: recording a series of these images as a function of depth, either by scanning the specimen through the focal plane (stepping axially) or by scanning the focal plane through the specimen.
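The optical sectioning argument above can be sketched numerically. For a focused Gaussian beam, the on-axis intensity scales as 1/w(z)2 while the illuminated area scales as w(z)2, so the signal integrated over a whole transverse plane scales as w(z)^(2-2n) for an n-photon process. The following sketch (beam parameters are assumed, purely for illustration) shows that a one-photon signal is depth independent while a two-photon signal collapses away from focus:

```python
import numpy as np

def plane_signal(z_um, z_rayleigh_um, n_photons):
    """Relative signal from an entire transverse plane at defocus z.

    Uses w(z)^2/w0^2 = 1 + (z/zR)^2 for a Gaussian beam; plane-integrated
    n-photon signal scales as (w^2)^(1 - n).
    """
    w_sq = 1.0 + (z_um / z_rayleigh_um) ** 2
    return w_sq ** (1 - n_photons)

z = 10.0   # microns from focus (assumed)
zR = 1.0   # Rayleigh range, microns (assumed)

one_photon = plane_signal(z, zR, 1)   # constant: every plane contributes equally
two_photon = plane_signal(z, zR, 2)   # falls off as 1/(1 + (z/zR)^2)

print(one_photon)  # 1.0 -> no sectioning
print(two_photon)  # ~0.01 -> out-of-focus plane contributes ~1% of focal plane
```

This is why raster scanning a nonlinear signal directly yields a sharp 2D section without a confocal pinhole.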

These nonlinear light-matter interactions can be described by a polarization P, induced by an intense optical electric field E,1

P = χ(1)E + χ(2)EE + χ(3)EEE + ⋯, (1)

where χ(1) is the linear susceptibility tensor representing effects such as linear absorption and refraction, χ(2) is the second-order nonlinear optical susceptibility, and χ(3) is the third-order nonlinear susceptibility. SHG is a second-order process, whereas TPEF and THG are both third-order processes. Notably, Eq. 1 shows that the same excitation source can induce several nonlinear effects simultaneously.

In the microscope, nonlinear signals are induced in a femtoliter focal volume. This reduces the sampled molecular ensemble as compared to macroscopic measurements, where the nonlinear responses are observed over a large volume. Microscopy measurements can reveal unique molecular features of microscopic objects that are otherwise obscured by the ensemble-averaged measurements. For example, SHG is symmetry forbidden in homogeneous suspensions even for noncentrosymmetric structures;1 however, if the laser focal volume is smaller than the investigated object, spatially confined excitation at an interface between two media provides centrosymmetry breaking that reveals the second-order nonlinearity of the molecules.2, 3 Therefore, molecular aggregates and cellular structures that form interfaces can be readily investigated with SHG microscopy.

Nonlinear optical effects are characterized by new components of the radiated E-field generated from the acceleration of charges in the media as the nonlinear polarization [second, third, and higher terms in Eq. 1] is driven by the incident electric field. The generation of harmonics is a parametric process described by a real susceptibility. It differs significantly from nonparametric processes, such as multiphoton absorption, which have a complex susceptibility associated with them.1 For parametric processes, the initial and final quantum-mechanical states are the same, as illustrated in Fig. 1 panels (b)–(d). In the time between these states, the population can momentarily reside in a “virtual” level (represented by dashed horizontal lines in Fig. 1), which is a superposition of one or more photon radiation fields and an eigenstate of the molecule. Since parametric processes conserve photon energy, no energy is deposited into the system. In nonparametric processes, the initial and final states are different as represented in Fig. 1a, so there is a net population transfer from one real level to another. Nonparametric interactions lead to photon absorption that may induce effects such as bleaching and thermal damage; however, near-resonant parametric processes are resonantly enhanced, and at these wavelengths the absorption of the laser radiation also increases. A tradeoff between the laser intensity and the choice of radiation wavelength is needed in order to obtain the strongest nonlinear signals while minimizing bleaching and/or photodamage.

Figure 1.

Figure 1

Schematic representation of nonlinear processes: (a) TPEF, (b) SHG, (c) THG, and (d) CARS. Wiggly lines represent incoming and radiated photons, dashed lines represent virtual states, and dashed arrows nonradiative relaxation processes.

It is important to know the nonlinear absorption spectrum of the sample in order to make a good choice of optimal wavelength for excitation. For biological samples with absorption in the visible spectral range, the excitation wavelength has to be tuned away from the linear absorption bands of the sample. Usually, infrared (IR) excitation wavelengths are employed for this purpose but they also provide deeper penetration of the light into biological tissue, reaching up to a few hundred microns in highly scattering specimens.4 The noninvasiveness of nonlinear microscopy has been demonstrated by imaging fruit fly embryos during development using harmonic generation5 and by imaging periodically contracting cardiomyocytes6 and myocytes from fruit fly larva.7

Although TPEF is the most frequently used contrast mechanism in nonlinear microscopy, the first nonlinear microscope constructed was based on SHG.8, 9 The applications of nonlinear microscopy multiplied after the initial demonstration of a TPEF microscope and its application for biological imaging by Denk et al.10 The introduction of stable solid state titanium:sapphire (Ti:sapphire) femtosecond lasers further facilitated the development of nonlinear microscopy. THG microscopy was introduced by Barad et al.11 in 1997. Since then, the use of several other nonlinear contrast mechanisms such as sum frequency generation,12 the optical Kerr effect,13 and coherent anti-Stokes Raman scattering (CARS)14, 15, 16 has been implemented [see Fig. 1d]. Also, different nonlinear contrast mechanisms have been combined into a single multicontrast instrument. For example, simultaneous detection of TPEF, SHG, and THG signals has been realized for chloroplasts4, 17 and cardiomyocytes.18 These simultaneously obtained images can be correlated and compared, giving rich information about the structural architecture and molecular distribution within the sample.19, 20

Nonlinear microscopy is a rapidly growing area of research. With the exception of TPEF, which is firmly established as a valuable approach for biomedical imaging, the other nonlinear contrast mechanisms are still undergoing major technical development. Harmonic generation microscopy applications are partly limited by the lack of commercial instrumentation offered by the major microscope manufacturers. Most of the work on nonlinear microscopy has been published in the engineering literature; however, biological investigations are starting to emerge, especially with SHG detected in the excitation (epi) direction, which can be readily detected with modifications to commercially available TPEF microscopes.

In Sec. 2 we introduce and discuss various aspects of the instrumentation involved in nonlinear microscopy. First, we introduce the general layout for a multicontrast microscope. Then, we discuss the measurement of ultrashort laser pulses at the focus of a high NA optic, an essential tool for characterizing and optimizing the instrument performance. The section concludes with a review of the different scanning mechanisms and techniques in use for high-speed imaging. The second major section of this work introduces and discusses the applications of TPEF, SHG, and THG, as well as multicontrast microscopy. Finally, some recent applications of TPEF for chemical analysis and developments involving new nonlinear contrast mechanisms in microscopy are discussed.

INSTRUMENTATION

Nonlinear microscopes share many common features with confocal laser scanning microscopes. In fact, many research groups have implemented multiphoton excitation fluorescence by coupling femtosecond or picosecond lasers into a confocal scanning microscope21, 22 and using a nondescanned port for efficient detection of the nonlinear signal (see discussion by Zipfel et al.23). The functionality of multiphoton microscopes can be enhanced by implementing detection schemes that allow multicontrast detection in the transmission and/or excitation direction. The first part of this section describes a typical layout for such an instrument, followed by an introduction to techniques that allow the characterization of the excitation laser pulses at the focal plane of a high NA objective. Such characterization is critical not only for optimal nonlinear imaging but also, as explained later, for reducing the effects of photobleaching in TPEF. Finally, this section concludes with a discussion of factors that limit high-speed imaging and, in particular, we offer a review of scanning techniques.

Multicontrast microscope architecture

Traditionally, three parallel detection channels have been used in confocal and TPEF microscopes, where the emission signal is divided into different spectral ranges by dichroic mirrors and optical filters or by separating the signal with a dispersive optical element. Similarly, spectral separation can be applied for detecting second and third harmonic signals along with fluorescence. Backward scattered and backward coherently generated harmonics are usually much weaker than the forward generated signals; therefore, forward detection of harmonics can be accomplished with lower excitation intensities. On the other hand, detection in transmission requires a more extensive modification of the scanning microscope setup. The easiest way of building a transmission mode harmonic generation microscope is by using a high NA condenser, available on some commercial microscope models, followed by the detector.21, 24 Three-channel simultaneous backward and forward detection of harmonics requires extensive modifications of commercial microscopes25 or construction of a whole setup from scratch.19 The instrumentation of nonlinear microscopes has been extensively reviewed by different authors.17, 25, 26

Figure 2 presents the optical outline of a multicontrast nonlinear microscope capable of TPEF, SHG, and THG detection. The laser is coupled to the microscope via two mirrors (not shown). Inside the microscope, the beam is expanded with a telescope (lenses L1 and L2) to fill the clearance aperture of the scanning mirrors. The same telescope also spatially filters the beam with an appropriate pinhole located at the focus between the two lenses. The expanded beam is coupled to two galvanometric scanning mirrors that can raster scan the beam in lateral directions. A second telescope, consisting of an achromatic lens (L3) and a tube lens (L4), is used to expand the beam to match the entrance aperture of the excitation objective (EO). The tube lens is designed to correct for aberrations of the objective; thus it is important to pair the tube lens with an objective from the same manufacturer. After the second telescope, the collimated beam is transmitted through the dichroic mirror (DM1) and is coupled into the EO. Almost all nonlinear microscopes are constructed using commercially available refractive objectives; however, since most objectives are designed for the visible spectral range, care must be taken to choose objectives that have been designed to work in the infrared region.27 For achieving optimal resolution with an objective, a high uniformity of the excitation beam intensity across the entrance aperture of the optic is required. Overfilling the entrance aperture often helps to achieve good uniformity and meet the specified NA of the objective. It is recommended to test the alignment of the microscope on a regular basis by recording the lateral and axial point spread functions (PSFs).28
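As a rough illustration of the relay telescope design above, the sketch below picks the L3/L4 magnification needed to (over)fill the objective back aperture. All beam sizes and focal lengths are assumed values, not taken from the text:

```python
# Minimal sketch of sizing the L3/L4 relay telescope (all numbers assumed).
beam_diam_at_scanner_mm = 3.0   # beam size on the galvo mirrors (assumed)
back_aperture_diam_mm = 9.0     # objective entrance pupil (assumed)
f_scan_lens_mm = 50.0           # L3 focal length (assumed)

# Two-lens telescope magnification M = f_L4 / f_L3 must map the scanner
# beam onto (at least) the back aperture.
M = back_aperture_diam_mm / beam_diam_at_scanner_mm
f_tube_lens_mm = M * f_scan_lens_mm
separation_mm = f_scan_lens_mm + f_tube_lens_mm   # 4f-style lens spacing

print(M)               # 3.0
print(f_tube_lens_mm)  # 150.0
```

Note that the same telescope demagnifies the scan angle by 1/M, which trades field of view for beam diameter at the pupil.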

Figure 2.

Figure 2

Schematic of a multicontrast microscope (not to scale). L: lenses; PH: pinholes; DM: dichroic mirrors; EO: excitation objective; F: optical filter; D: detector; CO: collection objective; DAQ: data acquisition card.

The generated nonlinear signals can be collected with the same microscope objective EO, separated by the dichroic mirror (DM1), which is specifically chosen for the given fundamental and fluorescence or harmonic emission wavelengths and focused with a lens (L5) through the filter (F1) onto the detector (D1) (see Fig. 2). Interference or band pass filters are used in front of the detector for filtering scattered fundamental light and spurious signals outside the desired bandwidth. The detectors might consist of single element devices, such as photomultiplier tubes (PMT) or avalanche photodiodes, or charge-coupled device (CCD) cameras—in this last case, L5 is placed so that it forms the image of interest onto the CCD. When single element detectors are used, pixel and line clock signals must be used to synchronize the data acquisition with the scanner system; for CCD cameras, it is only necessary to synchronize the shutter with the start and end of frame.

It is also possible to detect the signals in the forward direction using a high NA collection objective (CO). The CO must have an NA similar to or higher than that of the EO in order to achieve optimal collection of the signals. Attention must be given to the transmission curve of the CO, especially for THG detection when lasers with excitation wavelengths less than 1000 nm are used. After the CO, the collimated beam is passed to the detectors. Either one detector with appropriate filters or several detectors recording different signals separated by dichroic mirrors can be used. Alternatively, the nonlinear optical response can be coupled to a spectrometer, and a whole spectrum can be recorded at each pixel.29 In this case, the harmonic and fluorescence images are constructed by obtaining the pixel intensities from a selected spectral range. In Fig. 2, after the CO, the collimated beam is passed through a dichroic mirror (DM2) for separation of the SHG and THG signals. The second harmonic is focused onto the detector (D2) by lens (L6) and filtered by the interference filter (F2). Similarly, THG is reflected from dichroic mirror (DM2) and focused onto detector (D3) with lens (L7), after filtering with interference filter (F3).

For single element detection, signal integration, lock-in, or photon counting methods can be employed for recording the nonlinear signal. Most microscope manufacturers use a signal integration approach; however, nonlinear responses that originate from biological samples typically have very low intensities. In most cases less than one photon per excitation pulse is detected. In this situation, photon counting detection becomes the method of choice. Photon counting can, however, saturate at high excitation intensities. This must be kept in mind, but it rarely presents a problem in practice because the excitation power can be reduced or higher scanning rates can be implemented.

For the microscope presented in Fig. 2, a three-channel counter card is used to record the signal from all three detectors simultaneously. The three recorded images can be directly compared and a statistical analysis can be performed on a pixel by pixel basis.19 A simultaneous detection scheme eliminates the problem of artifacts from signal bleaching or movement of the sample during imaging.

To obtain 3D images, optical sectioning can be performed by translating the sample, moving the EO, or changing the divergence of the collimated beam at the entrance of the EO, which shifts the focus to different depths.30 If transmission detection is implemented, the focal volumes of the excitation and COs have to overlap; therefore, axial scanning by translating the sample along the optical axis with a piezoelectric stage (see Fig. 2, piezo) usually gives the best results.

The laser source for a nonlinear microscope has to be carefully chosen for suitability with the particular sample and microscope setup. Parameters such as power output, repetition rate, energy per pulse, and emission wavelength have a direct impact on imaging. An average power of around 10 mW at the sample is typically required for harmonic imaging; however, more power is needed for weak harmonic generation sources as well as for bleaching measurements. The typical transmission of a laser scanning microscope is usually between 15% and 40%. The repetition rate of the laser is another important factor to consider. For constant average power, an increase in repetition rate must be balanced against the loss in per-pulse energy. More importantly, in TPEF measurements, fast repetition rates may not allow long-lived molecular excited states to relax; these states can build up, leading to annihilation and reduced fluorescence lifetimes31 or to increased intersystem crossing, which can lead to photobleaching. Furthermore, if time correlated single photon counting (TCSPC) measurements are used, the repetition period should be several times longer than the measured lifetime. Extended cavity oscillators can be used to increase the signal yield while addressing all these problems.32
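The TCSPC rule of thumb above — a repetition period several times longer than the measured lifetime — is easy to check for a candidate laser. In this sketch the safety factor and fluorescence lifetime are assumed, illustrative values:

```python
def max_rep_rate_hz(lifetime_s, safety_factor=5.0):
    """Highest repetition rate whose period is safety_factor x lifetime.

    safety_factor is an assumed rule-of-thumb margin, not a quoted spec.
    """
    return 1.0 / (safety_factor * lifetime_s)

lifetime = 3e-9      # 3 ns fluorescence lifetime (assumed)
f_ti_sapph = 76e6    # standard Ti:sapphire oscillator repetition rate, Hz

limit = max_rep_rate_hz(lifetime)   # ~67 MHz
print(limit < f_ti_sapph)           # True: 76 MHz is marginal for this lifetime
```

For longer-lived states the limit drops further, which is one motivation for the extended cavity oscillators mentioned above.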

Since many endogenous fluorophores (autofluorophores) absorb light in the visible region and tissue penetration is greater for longer wavelengths, IR lasers are a good choice for nonlinear microscopy. Some examples of autofluorophores are nicotinamide adenine dinucleotide (NADH) in muscle cells19 and chlorophyll in plants.29 For THG excited by laser emission wavelengths under 1000 nm, the microscope coverslips, the CO, the detector focusing lens, and the interference filters must all transmit UV light. Also, the water absorption lines must be avoided; hence lasers in the 800–1300 nm spectral window are desired. The most commonly used femtosecond lasers are Ti:sapphire oscillators, which typically offer pulse durations of around 100 fs and repetition rates of 76–100 MHz. Several publications have focused on the benefits of femtosecond Yb:KGd(WO4)2 and Cr:forsterite lasers emitting at 1030 and 1250 nm, respectively.4, 33 Both lasers provide good penetration into tissue and decreased sample damage for certain biological samples. Due to their high stability, pulsed Yb-doped fiber lasers emitting around 1030 nm are becoming very attractive and might become the most popular laser sources for nonlinear microscopy in the future.

Characterization of pulses at the focus of a high NA optic

It is critical to efficient nonlinear imaging, especially in the presence of photobleaching (see section about photobleaching in two-photon fluorescence), to create a focus that is transform limited in space and time. However, the pulse needs to be transform limited at the plane of the sample and not at the input to the microscope; thus, it is important to characterize the spatiotemporal focal volume at the full NA of the excitation optic. In general, this means that collinear methods must be used to interrogate the focal volume. A basic system for measuring the pulsewidth at focus is shown in Fig. 3. Essentially, the excitation laser is first tuned to the wavelength and pulsewidth that will be used for imaging. The laser output is directed through a Michelson interferometer. This creates two beams that are fully collinear. One arm of the interferometer needs to be equipped with a rapid scanning stage—this allows the pulses from the two arms to be rapidly sheared against one another as a function of time. The output of the interferometer then goes through any dispersion compensation optics, through the scan optics, and into the microscope. This configuration makes it possible to characterize the focus exactly as it will be used for imaging and to optimize the pulsewidth in real time.34 Lateral shearing interferometers have also been used to characterize the pulsewidth at focus.35, 36 This is a compact interferometer design that can be inserted into the beam path much as you would a filter.35

Figure 3.

Figure 3

System schematic for pulse measurement at the focus of a high NA optic. By adding a simple Michelson interferometer at the input of the beam path to the microscope system, interferometric autocorrelation traces of the pulse can be made at the focus of the objective in the microscope. A second-order intensity autocorrelation trace is shown, prior to being optimized—the wings of the trace show uncompensated dispersion as a result of the scan and microscope optics.

Independent of the interferometer design, an optical nonlinearity must be introduced at the focal plane of the microscope—this acts as the “gate” and enables collinear, nonlinear intensity correlation measurements. The nonlinear medium can be something as simple as a solution of the fluorophore that will be used to tag the specimen. In this case the signal is produced by TPEF and an interferometric second-order intensity autocorrelation measurement results (see Fig. 3). The fluorophore can be prepared in a simple cell mounted between coverslips—particularly important when using objectives that are designed to image through a standard coverslip glass type and thickness—in this way important aberrations, such as spherical aberration, are properly compensated and the measurement is performed in a manner commensurate with the actual imaging conditions.

Amat-Roldán et al.37 pointed out a particularly useful specimen for performing pulse characterization measurements in microscopes—corn starch. The specimen is entirely innocuous and readily available from your local grocer! Starch granules mounted in water between coverslips produce a strong SHG signal with excitation wavelengths ranging from 700 to 1300 nm—essentially covering the entire tuning range of Ti:sapphire as well as Yb-based and Cr:forsterite ultrafast lasers. Normally SHG signals are detected in transmission; however, it has been our experience that there is sufficient signal from the starch granules that the measurements can, in fact, be made in the epi direction. By using a SHG signal as opposed to TPEF, more sophisticated pulse characterization is possible. For example, Amat-Roldán et al.37 demonstrated that by spectrally resolving the autocorrelation signal, the phase of the excitation pulse can be extracted. This can be extremely useful in those instances where pulse shape impacts the imaging. TPEF autocorrelation measurements are useful for “tweaking” up the system—but they do not provide an unambiguous determination of the pulse shape, and all phase information is lost.

Another approach that works over an extensive wavelength range and is straightforward to implement is the use of GaAsP (Refs. 38, 39) or ZnSe (Ref. 40) photodiodes. When properly mounted in the specimen plane of the EO, they can be used to characterize the pulsewidth through a photocurrent produced by two-photon absorption (TPA) within the photodiode. The pulsewidth can be measured through an autocorrelation measurement, and the spatial focal quality can be determined by scanning the photodiode along the excitation axis. Notably, LaFratta et al.38 have used GaAsP diodes to perform noninterferometric cross-correlation measurements between two lasers—important for any multibeam imaging applications.

THG from a glass coverslip can also be an effective method for pulse characterization resulting in third-order interferometric autocorrelation traces.41, 42, 43 Fringe-free, background-free autocorrelation measurements can be made by using two beams that are circularly polarized, but in opposite directions—left and right. The background-free methods enable spatial characterization of the beam not only along the excitation direction but also along the lateral direction as well by performing spatial autocorrelation measurements of the beam—in this case shearing the two beams across one another in space as opposed to time. In background-free mode, if the temporal autocorrelation trace is spectrally resolved, the phase information of the pulse can also be retrieved.44

Finally, it should be noted that entirely linear, interferometric methods can be used to characterize the focus.40, 45, 46 Jasapara et al. used spectrally resolved interferometry to characterize how 10 fs pulses were distorted when focused by a 20× and a 100× 0.85 NA objective. Amir et al. demonstrated how this method can be used in a single-shot configuration to extract the spatial and temporal characteristics of an ultrashort pulse at the focus of a high NA objective.

High-speed multiphoton imaging

The ultimate goal in microscopy is an instrument capable of acquiring 3D images in real time with submicron spatial resolution. Typically real time is understood as at least 30 frames/s (video rate); nonetheless, for many dynamical processes this speed is insufficient. Therefore, the meaning of “real time” depends on the rate of the particular process that one is ultimately interested in studying. Independent of our definition of real time, the imaging speed is ultimately limited by the excitation and emission cross sections of the sample for the particular nonlinear process involved. In principle one could increase the excitation power to increase the photon yield; however, damage to the sample imposes a hard limit on this approach.

While the signal production rate depends on the nature of the sample, the choice of excitation source also has an impact on the emitted signal level and, ultimately, on the achievable frame rate, as illustrated here. Assuming a typical Ti:sapphire oscillator with a pulse duration of 100 fs, a repetition rate of 76 MHz, and an average power of 1 W, focused to a spot 5 μm in diameter, these numbers translate to a peak intensity of 670 GW/cm2—well above the damage threshold of most specimens. In other words, even with modest focusing conditions there is sufficient per-pulse intensity to efficiently excite an optical nonlinearity. As a reference point, if we assume no signal limitations from the sample or collection system and use our typical laser source, the maximum limit for image production would be 76×106 pixels/s (corresponding to one excitation pulse per pixel and one photon per excitation pulse). However, if we require an image contrast of 8 bits, at least 255 pulses per pixel are needed. Thus, for a 256 by 256 pixel image the maximum frame rate would be approximately 4.6 images/s. Clearly, methods exist for achieving higher frame rates—these numbers simply provide an initial guide to the scaling limitations encountered when designing a high-speed imaging system.
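The back-of-the-envelope numbers above can be reproduced directly from the stated laser parameters:

```python
import math

# Typical Ti:sapphire oscillator parameters quoted in the text.
avg_power_w = 1.0
rep_rate_hz = 76e6
pulse_dur_s = 100e-15
spot_diam_cm = 5e-4            # 5 um focal spot

pulse_energy_j = avg_power_w / rep_rate_hz
peak_power_w = pulse_energy_j / pulse_dur_s
area_cm2 = math.pi * (spot_diam_cm / 2) ** 2
peak_intensity = peak_power_w / area_cm2    # W/cm^2

# Frame-rate ceiling: one photon per pulse, 255 pulses/pixel for 8 bits.
pixels = 256 * 256
pulses_per_pixel = 255
frame_rate = rep_rate_hz / (pixels * pulses_per_pixel)

print(peak_intensity / 1e9)   # ~670 GW/cm^2
print(frame_rate)             # ~4.6 frames/s
```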

In order to estimate a reasonable frame rate, we assumed that only a single photon is produced per excitation pulse. In practice we have found this to be an entirely reasonable assumption and, in fact, this number is optimistic for many samples! To further illustrate this point, consider the following. The number of absorbed photon pairs per excitation pulse can be estimated by the expression10

n ≈ (P²α/τf²)(πNA²/hcλ)², (2)

where P is the average laser power, α is the molecular TPA cross section, τ is the pulse duration, f is the laser repetition rate, NA is the numerical aperture of the objective, and λ is the excitation wavelength. The reported TPA cross sections for a wide array of intracellular absorbers lie between 10−50 and 10−48 cm4 s photon−1, i.e., between 1 and 100 GM (GM, Göppert-Mayer, is the unit for the TPA cross section; 1 GM=10−50 cm4 s photon−1).47, 48, 49, 50 Assuming α=10−49 cm4 s photon−1 (10 GM), 5 mW of light at a central wavelength of 800 nm at the sample, a pulse duration of 100 fs, and a repetition rate of 76 MHz, focused with a 1.2 NA objective, we would then expect approximately 2.6×106 photon pairs absorbed per second per absorber. This means that only one pair of photons is absorbed per fluorophore for every 29 laser pulses!
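Evaluating the rate expression above with the quoted values (in SI units, with the cross section taken as 1e-49 cm4 s/photon) recovers both numbers in the text:

```python
import math

h = 6.626e-34           # Planck constant, J s
c = 3.0e8               # speed of light, m/s
P = 5e-3                # average power at the sample, W
alpha = 1e-49 * 1e-8    # TPA cross section: cm^4 s -> m^4 s
tau = 100e-15           # pulse duration, s
f = 76e6                # repetition rate, Hz
NA = 1.2
lam = 800e-9            # excitation wavelength, m

# Absorbed photon pairs per pulse per absorber (two-photon rate estimate).
n_per_pulse = (P**2 * alpha) / (tau * f**2) * (math.pi * NA**2 / (h * c * lam)) ** 2
pairs_per_second = n_per_pulse * f
pulses_per_pair = 1.0 / n_per_pulse

print(pairs_per_second)   # ~2.6e6 photon pairs/s per absorber
print(pulses_per_pair)    # ~29 laser pulses per absorbed pair
```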

In addition to the photons produced per excitation pulse, two other practical factors limit the speed of image acquisition: the scan speed and the detection system. Many systems use a low noise CCD camera for detection because this eliminates the need for synchronization with the scanning device. On the other hand, the use of imaging devices such as CCD cameras is ineffective for imaging deep into scattering specimens—the scattered light creates an undesirable background. Ultimately the frame rate of the CCD can be a limiting factor. Alternatively, the use of fast detectors, such as PMTs, enables fast acquisition rates even with scattering media, at the cost of a more complex synchronization scheme. We now proceed to discuss different scanning and detection mechanisms and their tradeoffs with respect to high-speed imaging.

Scanning systems

In this subsection we review the different scanning strategies that have been implemented in nonlinear microscopy, including multifocal approaches. We also present some scanning mechanisms that might become attractive in the future, as their technologies mature. In a typical microscope, a telescope (L3 and L4 in Fig. 2) is used to image the deflected beam coming out of the scanning device onto the back aperture pupil of the EO. This ensures that the angular displacement of the scanners is translated into a tilt of the beam in the back aperture without any beam displacement at this location. In turn, this means that the focal volume will raster a 2D area as the scanners move. The choice of scanner will have an impact not only on the image acquisition speed but also on the field of view, beam quality, aberrations, dispersion characteristics, and total power throughput of the instrument.

Acousto-optic scanning

The working principle of these devices is acousto-optic (AO) diffraction based on the elasto-optical effect.51, 52 When an acoustic wave travels in an optical medium, it induces a periodic modulation of the index of refraction of the material, effectively creating a diffraction grating responsible for the deflection. The relative amplitude of the beam diffracted into different orders is given by the Raman–Nath equations.53, 54 This set of equations describes different regimes for the diffraction process, determined by the ratio of the interaction length L within the crystal to the characteristic length L0=Λ2/λ, where Λ is the wavelength of the acoustic wave and λ is the optical wavelength inside the material. The limiting cases L≪L0 and L>L0 admit analytic solutions. The first limit is called the Raman–Nath regime, in which the AO diffraction results in a large number of diffracted orders; the maximum intensity diffracted into the first order is limited to 34% of the total incident laser power. The second limiting case is called the Bragg regime. In this case the diffraction appears predominantly in the first order with a theoretical maximum diffraction efficiency of 100%. For this reason, most AO devices are designed to work in the Bragg regime. In general, the total deflection angle ΔΘ with respect to the direction of propagation of the zero-order optical beam, in the small angle approximation, is linearly proportional to the applied acoustic frequency bandwidth Δf, namely,

ΔΘ=λΔf∕V, (3)

where V is the velocity of sound in the medium.
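As a rough numerical illustration of these relations, the characteristic length L0 and Eq. (3) can be evaluated in a few lines; the material parameters below (acoustic velocity, drive frequency, wavelength) are assumed representative values, not taken from a specific device in this review.

```python
# Illustrative evaluation of the AO design relations above. All numbers are
# assumptions for a generic deflector, not values quoted in the text.

def characteristic_length(Lambda, lam):
    """L0 = Lambda**2 / lam, separating Raman-Nath (L << L0) from Bragg (L >> L0)."""
    return Lambda**2 / lam

def deflection_angle(lam, delta_f, v_sound):
    """Eq. (3): total deflection Delta_Theta = lam * delta_f / V (small angles)."""
    return lam * delta_f / v_sound

lam = 800e-9        # optical wavelength inside the material, m (assumed)
v_sound = 4.2e3     # longitudinal acoustic velocity, m/s (typical crystal value)
f_acoustic = 100e6  # acoustic center frequency, Hz (assumed)

Lambda = v_sound / f_acoustic            # acoustic wavelength, m
L0 = characteristic_length(Lambda, lam)  # interaction lengths >> L0 are Bragg-like
print(f"L0 = {L0*1e3:.2f} mm")
print(f"deflection over a 50 MHz bandwidth: "
      f"{deflection_angle(lam, 50e6, v_sound)*1e3:.2f} mrad")
```

For these values L0 is on the millimeter scale, so a crystal with a centimeter-scale interaction length operates well into the Bragg regime, as the text recommends.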

There are two types of AO devices used for beam deflection: AO deflectors (AODs) and AO modulators (AOMs). Both devices are very similar and based on the same working principle. An AOD relies mainly on the variation of the acoustic frequency in order to change the angle of deflection and thus steer the optical beam. AOMs are used, as their name implies, to modulate the amplitude or frequency of the diffracted beam. In order to satisfy momentum conservation in the operation of AO devices over their entire bandwidth, the acoustic beam needs to be narrow. In addition, for AOMs the optical beam must have a divergence approximately equal to that of the acoustic beam, in order to achieve the desired intensity modulation.

For practical applications of AODs there are two important parameters to consider: speed and resolution. The resolution (N) is defined as the total angular deflection ΔΘ divided by the angular divergence of the diffracted beam (δθ). If the divergence of the optical beam is much smaller than the divergence of the sonic beam, then the divergence of the diffracted beam is equal to that of the incident beam. This translates into δθ being inversely proportional to the diameter D of the incident optical beam. Thus it follows that

N=ΔΘ∕δθ=(λΔf∕V)∕(λ∕D)=τΔf, (4)

where τ=D∕V is the transit time of the acoustic wave across the optical beam and is therefore a measure of the random-access time of the device. This relationship clearly shows that there is a tradeoff between resolution and speed, and that for design purposes it is desirable to maximize the frequency bandwidth applied to the AOD.
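Equation (4) is easily checked numerically; the beam diameter, acoustic velocity, and bandwidth below are illustrative values only.

```python
# Minimal numerical check of Eq. (4): the number of resolvable spots of an AOD
# equals the acoustic transit time across the beam times the drive bandwidth.
# Input values are illustrative, not taken from a specific device.

def resolvable_spots(beam_diameter, v_sound, bandwidth):
    tau = beam_diameter / v_sound   # acoustic transit (random-access) time, s
    return tau * bandwidth          # N = tau * delta_f

D = 5e-3          # beam diameter, m (assumed)
v_sound = 4.2e3   # acoustic velocity, m/s (assumed)
bw = 50e6         # drive bandwidth, Hz (assumed)

N = resolvable_spots(D, v_sound, bw)
print(f"N = {N:.0f} resolvable spots, access time = {D/v_sound*1e6:.2f} us")
```

Doubling the beam diameter doubles N but also doubles the access time, which is exactly the resolution-versus-speed tradeoff noted in the text.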

Under the right conditions, AOMs can also be used as beam scanners.55 Their main advantage over AODs is that they feature shorter access times as a result of the requirement that the optical beam be focused, which decreases the value of D. On the other hand, they also have decreased resolution compared to AODs due to the fact that the achievable scan range is smaller. Bullen et al.56 have reported on a microscope that uses both AOMs (Isomet 1205C) and AODs (Isomet LS55V) interchangeably. For their system, they report the following operational parameters: (a) for the AOMs: a 7 μm spot size at the sample (using a 100×, Fluar, 1.3 NA, Zeiss objective), 15 resolvable spots (as given by the Rayleigh criterion), a scan angle of 56 mrad, a diffraction efficiency of 40%–80% (depending on scan angle), and a maximum scan rate of 200 000 points∕s; (b) for the AODs: a 2 μm spot size at the sample (using the same objective), 65 resolvable spots, a scan angle of 80 mrad, a diffraction efficiency of 60% (independent of scan angle), and a maximum scan rate of 100 000 points∕s. These numbers nicely illustrate the tradeoffs involved in choosing between the devices.

One of the main attractions of AO scanners over mechanical scanners is the possibility of direct random access to different points within the sample. If specific points of interest are identified within the sample, it is possible to use AO devices to address them individually and sequentially at high speeds, as reported by Rózsa et al.57 In this reference, the authors report that their system is, in principle, capable of acquiring data from as many as 100 different 3D points within the sample at kilohertz repetition rates. They demonstrated the system for only ten data points and measured throughputs of 80% with access times of 1–3 μs per data point.

A major challenge to the use of AO devices is that they are made of highly dispersive materials. Propagation of ultrashort pulses through such media radically alters the pulse duration and is thus detrimental to the efficiency of the generated signal. Several authors have shown that it is possible to compensate for these spatiotemporal distortions and have achieved a large field of view by using a pair of large aperture (13 mm) AODs in combination with an AOM.58, 59, 60, 61

Galvanometric and resonant scanning

This is probably the most widespread scanning technology used in microscopy, owing to its flexibility, throughput, and price. On the other hand, this approach is significantly slower than AO solutions, but for acquisition speeds of a few frames per second or slower it is sufficient. Galvanometric scanners consist of a mirror attached to a shaft that can rotate through a given angular range. Since the technology uses a mirror to redirect the beam, there are no problems due to dispersion, and losses are minimal. Obviously, in order to scan larger beams, bigger, and thus bulkier, mirrors are needed; this has a direct impact on the achievable scanning speed due to inertial effects. A galvanometric scanner can operate over a broad frequency range: from zero to a top frequency limited to slightly less than the scanner's mechanical resonant frequency. It is also possible to use a galvanometric scanner to position the beam anywhere within the allowed linear scan region.

The principle of operation of a galvanometer is similar to that of a motor.51 The magnetic field produced by an arrangement of permanent magnets is augmented or diminished by the field from a variable-current electromagnet. The change in the field forces a magnet or iron armature to rotate. As noted above, a given scanner is bandwidth limited; therefore, it is not possible to track an arbitrary drive waveform exactly. Typically a sawtooth is used as the driving waveform, but the scanner is incapable of reaching the “instantaneous” fly-back time required by such a waveform. It is important to keep these bandwidth limitations in mind when scanning and to confine the angular range for data acquisition to the region where the scan response is linear with respect to the applied current. If this limitation is not observed, the dwell time for a pixel in the middle of the angular range will not be the same as that of a pixel on the edge, possibly resulting in image artifacts.62

A second form of scan technology, closely related to galvanometric scanning, is resonant scanning.63, 64 The main difference, as compared with galvanometers, is that the only moving part in the resonant scanner is a single-turn coil, which dramatically lowers the damping in the scanning system and allows it to vibrate at very high frequencies, close to its mechanical resonance. Since these scanners are designed to work close to the resonant frequency, their angular displacement can be quite large. For example, there are 10 kHz resonant scanners capable of 60° scan angles. These scanners do not offer random access to points within their scanning range. They produce a sinusoidal scanning pattern at frequencies as high as 24 kHz.65 In order to take advantage of the full sinusoidal scan, data must be acquired in both the forward and backward directions. Also, due to the variation in data acquisition rate across a sinusoidal oscillation, practical implementations typically acquire data only during the “linear” part of the sine wave.66 In principle one can compensate for the different data rates at different parts of the sinusoidal wave by adjusting the pixel acquisition time. It is possible to follow the position of the scanner and use this information to generate a lookup table of pixel dwell times versus position; however, this requires stringent temporal synchronization that complicates the data acquisition.
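The dwell-time lookup table mentioned above can be sketched as follows. For a scanner position x(t)=A sin(2πft), equal-width pixels are crossed at speed |dx∕dt|, so edge pixels must dwell longer than center pixels; the scan frequency, amplitude, and pixel count below are assumptions, not parameters of a specific instrument.

```python
import numpy as np

# Illustrative dwell-time lookup table for a resonant scanner following
# x(t) = A*sin(2*pi*f*t). Equal-width pixels in x get dwell times
# proportional to 1/|dx/dt|. All parameters are assumed values.

f = 8e3          # resonant frequency, Hz (assumed)
A = 1.0          # normalized scan amplitude
n_pixels = 512

# restrict to the central 80% of the sweep, the "linear" part used in practice
edges = np.linspace(-0.8 * A, 0.8 * A, n_pixels + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
speed = 2 * np.pi * f * np.sqrt(A**2 - centers**2)  # |dx/dt| at each pixel
dwell = np.diff(edges) / speed                      # time spent in each pixel, s

print(f"center dwell {dwell[n_pixels // 2]*1e9:.1f} ns, "
      f"edge dwell {dwell[0]*1e9:.1f} ns")
```

The table confirms the point made in the text: even over the central 80% of the sinusoid, the edge pixels are crossed roughly 1.7 times more slowly than the center pixel, so uncorrected acquisition would produce brightness artifacts.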

Polygonal mirror scanning

Polygonal mirrors have been used to obtain very high frame rates with two-photon and CARS microscopy.67, 68 In this scheme, the geometrical center of a multifaceted (polygonal) mirror is attached to the shaft of a fast rotating (thousands of revolutions per minute) motor. As the motor rotates, the facets of the mirror deflect an incoming beam, thus creating a scanned linear pattern. The required data rate is the main factor to consider in deciding the rotation speed needed for a particular application. Several factors play a role in the choice of the number of facets as well as the polygon size. These factors include the incoming beam diameter (D), its angle of incidence (α), the desired duty cycle (η, ratio of active scan time to total time), the number of points per scan (N), and the angular deviation needed (Θ, in degrees). The last three factors are usually determined by the particular application. Any irregularities in the mirror facets or between them will result in errors in the scanning pattern. Also, it is important to realize that transitions between facets lead to dead times between scans, since the beam is clipped and scattered in these regions.

As an example, we provide expressions that relate the parameters enumerated above to the characteristics of the polygonal mirror; for details, the reader is referred to the literature.51, 69 As defined above, the duty cycle is the ratio of the active scan time to the total time. An equivalent definition relating beam diameter to facet width (W) is η=1−Dm∕W, where Dm=Dt∕cos α is the effective beam diameter projected onto the facet and t is a factor that provides a safety range to avoid clipping the beam with the edge of each facet as the mirror is rotated. Notice that small values of α lead to a smaller footprint of the beam and thus to a smaller mirror facet, which reduces the cost of the system. In general, the larger the required duty cycle (40%–80% duty cycles are common), the more expensive the device; this is due to the fact that large values of η require wider mirrors (reflected in larger values of W). The required number of facets (n) can be found from the expression n=720η∕Θ. Once the width of each mirror facet and their number are known, one can calculate the outer polygon diameter using

Douter=W∕[(1−η)sin(π∕n)]. (5)

These expressions make clear the tradeoffs involved in selecting a mirror. The cost will increase with increasing mirror size and with speed. For low velocity polygonal mirrors, ball bearings are used in order to keep the price reasonable. However, at high rotation speeds or large payloads some problems might arise with precision bearings, such as lubrication, vibration control, particle generation, and even shipment methods. Fast rotating mirrors (>4000 rpm) require aerodynamic air bearings, which are very expensive. If, on top of this, the required mirror diameter is large, then several sets of bearings might be needed in order to correctly balance the mirror.
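The design expressions of this subsection can be collected into a short sizing calculation. The input values below (beam diameter, incidence angle, safety factor t, duty cycle, and scan angle) are illustrative assumptions, and the final step simply evaluates Eq. (5) as given in the text.

```python
import math

# Sketch of a polygonal-mirror sizing calculation from the relations above.
# All input values are assumptions chosen for illustration.

D = 5e-3                   # incident beam diameter, m (assumed)
alpha = math.radians(30)   # angle of incidence on the facet (assumed)
t = 1.2                    # clipping safety factor (assumed)
eta = 0.6                  # desired duty cycle
theta = 40.0               # required scan angle, degrees

Dm = D * t / math.cos(alpha)      # beam footprint projected onto the facet
W = Dm / (1 - eta)                # facet width, from eta = 1 - Dm/W
n = round(720 * eta / theta)      # number of facets, n = 720*eta/theta
D_outer = W / ((1 - eta) * math.sin(math.pi / n))   # Eq. (5)

print(f"footprint {Dm*1e3:.1f} mm, facet {W*1e3:.1f} mm, "
      f"{n} facets, outer diameter {D_outer*1e3:.0f} mm")
```

Raising the duty cycle from 0.6 toward 0.8 in this sketch inflates both W and Douter rapidly, which is the cost tradeoff the text describes.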

Multifocal approaches

While it is possible to employ faster scanning devices to achieve higher image acquisition rates in multiphoton microscopy, this approach has fundamental technological limitations. A possible alternative is to parallelize the image acquisition process using more than one focal volume at a time,70 thus reducing the acquisition time significantly for existing scanning technologies. Typical maximum peak intensities to avoid undesirable effects such as continuum generation, self-focusing, or damage to biological samples are roughly of the order of 200 GW∕cm2.71, 72 For the typical Ti:sapphire system, as specified earlier, this peak intensity is achieved with only 12 mW of power when focused down to a diameter of 1 μm. Assuming a microscope with a total throughput of 40% means that only approximately 3% of the available average power can be used without damaging the sample. This is an enormous waste of expensive laser photons!
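The power budget in this paragraph can be reproduced with a back-of-the-envelope calculation. The 200 GW∕cm² ceiling and the 40% throughput are from the text; the 100 fs pulse duration, 80 MHz repetition rate, and 1 W average output are assumed values for a typical Ti:sapphire oscillator.

```python
import math

# Back-of-the-envelope version of the power budget described above.
# Pulse duration, repetition rate, and laser power are assumed values.

I_peak = 200e9           # W/cm^2, peak-intensity ceiling (from the text)
spot_diameter = 1e-4     # cm, i.e., a 1 um focal spot (from the text)
tau = 100e-15            # s, pulse duration (assumed)
rep_rate = 80e6          # Hz, repetition rate (assumed)

area = math.pi * (spot_diameter / 2) ** 2   # focal spot area, cm^2
P_peak = I_peak * area                      # peak power at the ceiling, W
P_avg = P_peak * tau * rep_rate             # corresponding average power, W
print(f"average power at the sample: {P_avg*1e3:.1f} mW")   # ~12-13 mW

throughput, laser_power = 0.40, 1.0   # microscope throughput; laser output, W
fraction = (P_avg / throughput) / laser_power
print(f"usable fraction of laser output: {fraction:.1%}")   # ~3%
```

Under these assumptions the calculation lands on roughly 12–13 mW at the sample and a usable fraction of about 3% of a 1 W oscillator, matching the figures quoted in the text.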

One possible approach that takes advantage of the available laser power is to use widefield illumination. This technique was implemented for TPEF microscopy by underfilling the back aperture of the objective with the excitation beam. This reduces the effective NA of the system and therefore creates a plane of illumination instead of just a point.73, 74 The detection is done with the same EO but at full NA, thus recovering diffraction-limited resolution. In this modality, the signal must be imaged onto a camera. A frame rate of 30 Hz was reported using this method. However, by reducing the effective NA of the excitation beam, the sectioning capability is sacrificed, which results in increased background and decreased contrast. Nonetheless, this technique is very easy to implement and requires no scanning. The image acquisition speed in this instance is essentially limited by the signal production rate of the sample and∕or the frame rate of the CCD camera.

One of the first schemes proposed to use the excitation laser more effectively in multiphoton microscopy involved using a line focus instead of point excitation. The line focus is created with cylindrical optics and is scanned in a perpendicular direction (relative to the line) with a galvanometric scanner in order to form a 2D image. Since only one scanner is involved in this approach, extremely high frame rates are in principle possible. This scheme was first demonstrated for TPEF by Brakenhoff et al.75 in 1996 and has more recently been applied to THG by Oron et al.76

Notably, when a line focus is used, the sectioning capability of the nonlinear signal is strongly degraded due to the fact that the beam is focused in only one dimension. For example, Brakenhoff reported that with a 1.3 NA objective a point focus would have an axial sectioning of ∼1 μm, but using line excitation with the same objective degrades the axial sectioning to ∼5 μm.75, 77 Oron et al.78 recently proposed and demonstrated a novel scheme to overcome this limitation. Their technique relies on angular dispersion of the input beam, which is more effective at eliminating out-of-focus contributions along the long axis of the excitation beam.79 Using this method for TPEF line-focus microscopy, an axial resolution of 1.5 μm has been reported.80 The frame rate for this report was 10 Hz but is easily scalable using faster detectors.

A different approach to parallelizing the image acquisition in nonlinear microscopy is to use the excess laser power to generate an array of focal points. This idea was first proposed and demonstrated in 1998.77, 81 In the first reference, the authors used a rotating 5×5 microlens array to produce frame rates as high as 225 frames∕s, essentially limited by the readout time of their camera. The authors of the second report proposed to use a static microlens array and a pair of galvanometric scanners driven in a Lissajous pattern to avoid edge effects. They also studied the relation between axial sectioning capability and foci separation at the sample; they found that in order to avoid degrading the sectioning ability, a foci separation at the sample of approximately 7.3 μm (using a 1.3 NA objective) is required. The degradation is due to interference between the different focal points when they are too close together, and can be avoided by introducing a delay of a few picoseconds between foci. For example, Egner et al.82 produced delays by placing thin glass slides of varying thickness next to the lenslet array. They also concluded that for an aberration-free imaging system, interfoci distances smaller than approximately seven wavelengths lead to sectioning degradation. Several other groups have developed the microlens technique, refining different aspects and extending it beyond TPEF.71, 83, 84, 85 Properly implemented, multifocal multiphoton microscopy (MMM) makes a much more efficient use of laser power and can reduce the image acquisition time from ∼1 s to 10–50 ms.71

An elegant method of creating multiple foci while simultaneously delaying consecutive beams is to employ beamsplitters.73, 86, 87 Indeed, this implementation has found its way into commercial systems (TriM scope, LaVision BioTec, Goettingen, Germany).88 A temporal delay of a few picoseconds is introduced between adjacent foci by the different optical path lengths that the beams have to travel within the beamsplitter system. A nice advantage of the beamsplitter approach is that the relative spacing between focal points can be smoothly varied. If the spacing between the foci in a 2D array is small (or they are made to overlap), it is, in fact, possible to use the array to produce an image without scanning. For example, Fricke et al.86 reported producing an 8×8 array of foci, with a 10 ps time delay between foci, and an interfoci separation of 0.5 μm, to generate an image on a CCD camera without scanning.
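The picosecond interfoci delays in these beamsplitter schemes correspond to millimeter-scale optical path differences, which is what makes them easy to build into a cascade of beamsplitters. A quick conversion (values illustrative; the 10 ps figure is the delay quoted from Fricke et al.) shows the scale involved.

```python
# Conversion between beamsplitter path-length differences and interfoci
# delays. The 10 ps value echoes the delay quoted in the text; other
# numbers are illustrative.

C = 2.998e8  # speed of light in air, m/s

def delay_ps(path_difference_mm):
    """Temporal delay (ps) produced by an extra optical path (mm)."""
    return path_difference_mm * 1e-3 / C * 1e12

def path_mm(delay):
    """Extra optical path (mm) needed for a given delay (ps)."""
    return delay * 1e-12 * C * 1e3

print(f"3 mm extra path -> {delay_ps(3.0):.1f} ps delay")
print(f"10 ps delay -> {path_mm(10.0):.1f} mm extra path")
```

A few picoseconds of delay thus requires only about a millimeter of extra glass or air path per beam, consistent with delays being set by the geometry of the beamsplitter stack.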

More recently, Sacconi and co-workers reported using a high efficiency diffractive optical element (DOE) to create multiple foci.89, 90, 91 Used in combination with two galvanometric scanners, they obtained TPEF images with a field of view of 100×100 μm and a resolution of 512×512 pixels in approximately 100 ms.92 Their DOE generates a 4×4 foci array with 25 μm separation between foci (using a 60×, 1.4 NA objective) with 75% diffraction efficiency. The use of a DOE permits a very uniform (∼1%) intensity distribution at the focal plane; this is in sharp contrast with microlens arrays, which might have intensity fluctuations as high as 50% from their central part to their edges. Jureller et al.93 used a DOE to produce a 10×10 hexagonal array of foci which was scanned using two galvanometric mirrors driven by white noise. This stochastic scanning strategy is designed to efficiently fill the image space, and the authors estimate that they can achieve image rates of ∼100 frames∕s.

All the schemes presented so far for MMM operate in an imaging modality, and thus they are not well suited for imaging deep into scattering samples. Recently some solutions to this limitation have been explored. Two of these solutions are based on a compromise between a detector with larger individual pixels and the number of foci that can be excited. For example, Kim et al.94 proposed using a novel type of detector, namely, a multianode PMT (MAPMT). It is based on the same operating principle as a PMT except that it has 64 independent active areas arranged in an 8×8 matrix. This detector can be coupled to an MMM with an 8×8 array of foci. The main modification to a traditional MMM imaging setup is that, in order to have a one-to-one correspondence between foci and active areas on the MAPMT, it is necessary to descan the signal. The authors report 320×320 pixel images taken at 19 frames∕s. The second proposed solution involves the use of a new readout design for a CCD camera. It is called a segmented CCD and consists of a CCD chip that has been divided into 16 segments for readout purposes.72 In MMM the main frame rate limitation comes from the readout time of the CCD. In this new design, each segment of the CCD is read by an independent amplifier, thus increasing the overall frame rate. The maximum frame rate achievable with this hardware is 1448 Hz; a single-segment CCD chip of comparable size would be limited to 160 Hz. In their paper, the authors demonstrate the principle using an array of 36 foci scanned onto the sample by two resonant scanners. They report a frame rate of 640 Hz; this constitutes the highest frame rate yet achieved for MMM.

By extending the time delay between foci from picoseconds to nanoseconds, MMM becomes possible using single element detectors such as PMTs, in a nonimaging modality. This has the advantage that multifocal techniques can be extended for use with highly scattering specimens. In addition, this approach has the unique capability of simultaneously imaging two or more separate focal planes within the sample.30, 95 In this case, the signal from the detector must be electronically demultiplexed in order to assign a spatial position to each signal photon and render an image. Using this technique, Sheetz et al.96 have been able to image up to six different focal planes simultaneously.
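The electronic demultiplexing step described above can be illustrated with a toy simulation: if each focus fires at a fixed delay after the laser trigger, a photon's arrival time modulo the laser period identifies the focus (and hence focal plane) that produced it. The laser period, number of foci, and jitter magnitude below are assumptions for illustration only.

```python
import numpy as np

# Toy demultiplexing for nanosecond-delayed multifocal excitation: each
# focus is assigned a time slot within the laser period, and photons are
# routed back to their focus by arrival time. Parameters are assumptions.

laser_period_ns = 12.5   # 80 MHz excitation laser (assumed)
n_foci = 4               # number of time-multiplexed foci (assumed)
slot = laser_period_ns / n_foci   # time window per focus, ns

rng = np.random.default_rng(0)
true_focus = rng.integers(0, n_foci, size=1000)   # which focus emitted
# arrival = the focus's trigger-relative delay + small detector jitter
arrival = true_focus * slot + rng.uniform(0.0, 0.2, size=1000)

# demultiplex: fold into one laser period, then map to a slot index
assigned = (arrival % laser_period_ns // slot).astype(int)
print(f"correctly assigned photons: {(assigned == true_focus).mean():.0%}")
```

With the assumed sub-nanosecond jitter, every photon falls inside its focus's slot; in a real instrument the slot width must exceed the combined detector response and timing jitter, which is why nanosecond rather than picosecond delays are needed for single-element detection.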

Other approaches

In this subsection we briefly touch on other alternative scanning technologies. These techniques are either still under development or have not yet been applied to scanning microscopy. It is uncertain whether they will provide robust alternatives to the better established methods. These scanning techniques include liquid crystal modulators, microelectromechanical systems (MEMSs), piezoelectric actuators, and electro-optic modulators. A liquid crystal modulator consists of a thin layer of liquid crystals sandwiched between two transparent electrodes and placed between crossed polarizers. Applying a bias to the electrodes changes the orientation of the molecules; this allows modulation of the transmitted light. It is possible to achieve phase-only or amplitude-only modulation using elliptical polarization states.97 Using this technology it is possible to steer a laser beam to achieve scanning patterns.98, 99 Some advantages of this approach are that it provides access to custom scanning patterns, and that it can also be used to implement axial scanning by modulating the phase of the optical beam. This last capability has been demonstrated;98 the authors report a 200 μm axial range with 3.5 μm resolution using a 20× objective and a 30 μm range with 1 μm resolution for a 100× objective.

A second technology that holds potential for high-speed scanning is MEMS. In general, MEMS devices consist of the integration of mechanical elements, sensors, electronics, and actuators at very small dimensions. In particular, the elements relevant for scanning are micromirror arrays. Each consists of a small mirror suspended by torsion bars and driven by the magnetic field produced by a surrounding coil.100, 101 These devices are essentially a very small-scale version of galvanometric mirrors. A commercial microscope by Olympus has incorporated this technology with a single-element MEMS measuring 4.2×3 mm that is used to perform horizontal scans. The resonant frequency of this device ranges from 3.9 to 4.1 kHz, and it is capable of angular deviations from 2.1° to 16°.102 A 2D MEMS scanner has been incorporated into a confocal hand-held microscope capable of 4 images∕s with a field of view of 400×260 μm by Ra et al.103

Another alternative to galvanometric scanners is piezoelectrically driven actuators. Several companies offer mirrors with resonant frequencies of several kilohertz.104, 105 These actuators could be implemented in current scanning systems, although their small deviation angles (a few milliradians) are a limitation for broader use. Finally, electro-optic modulators can also be employed for beam steering.51 This technology is based on creating a gradient in the index of refraction across a crystal by means of a standing wave. This gradient produces a retardation that increases transversely across the beam profile, thus deviating the beam. Although this is similar to an AOM, the technologies have fundamental differences in that an electro-optic modulator (EOM) is terminated reflectively to induce the standing wave and the wavelength of the standing wave is longer than the beam diameter. Light is deflected by θ=cLV∕w², where L and w are the crystal length and diameter, V is the applied voltage, and c is a constant that depends on the material properties. Although the achievable deflection is small, an EOM offers the advantage of deviating the full beam and not only one diffraction order. Also, EOMs provide better pointing stability than an AOM and can achieve scan rates of 100 kHz. Although they have not been applied to scanning microscopy, they have found application in laser tweezers due to their increased throughput.106

CONTRAST MECHANISMS

In this second part, we present a brief introduction to each of the three main contrast mechanisms used in nonlinear microscopy. Each section includes a review of the applications that have been demonstrated, especially in biological systems. The paper concludes with a review of a novel application of TPEF for chemical analysis in microfluidics and two new contrast mechanisms that hold promise for the biological and medical sciences: TPA and self-phase modulation.

TPEF

Since its first demonstration nearly two decades ago,10 TPEF microscopy has revolutionized the field of scanning laser microscopy and provided a tremendous tool for biological imaging. The basic principle of TPEF is shown schematically in Fig. 1a. Two photons from the laser source are absorbed nearly simultaneously by the fluorophore molecule. After some nonradiative decay, a fluorescent photon is emitted and can be collected to generate an image. As with all contrast mechanisms based on fluorescence, TPEF suffers from photobleaching; this phenomenon is discussed in more detail later. Additionally, the use of very high intensities for excitation can lead to higher-order (>2) photon interactions in the focal volume, excitation saturation,23 increased photobleaching,107 and photodamage.108, 109

Equation 2 shows that the number of TPEF signal photons depends linearly on the TPA cross section (α), which is a quantitative measure of the probability that a particular molecule will absorb two photons simultaneously. The more commonly reported quantity is actually the two-photon action cross section, which is the product of α with the fluorescence quantum yield.49, 50, 110, 111 Two-photon action cross sections at peak absorption wavelengths range from 10⁻⁴ GM for NADH49, 112 to about 50 000 GM for cadmium selenide–zinc sulfide quantum dots.111 NADH exhibits very weak autofluorescence,113, 114, 115, 116 yet it is often present in high enough concentrations to provide an important window into cellular metabolism.117 Comparing relative amounts of reduced NADH in vivo has been used to noninvasively monitor changes in metabolism and provide a potential indicator of carcinogenesis.118

For those cases where endogenous fluorescence is absent, one can use fluorophores that are engineered to have two important application-specific properties: they have electronic transitions that absorb at commonly available laser source wavelengths, and they have an affinity for particular molecules and will attach to the molecules of interest within the sample. Cell labeling can be done in vivo by injecting synthetic dyes directly into the vasculature or by introducing fluorescent proteins, such as green fluorescent protein and its spectral variants, via molecular genetics. As an example, Fig. 4 shows two TPEF images obtained from a maize leaf that is labeled with yellow fluorescent protein (YFP) and taken at two different excitation wavelengths: 800 and 1040 nm. In panel (a) essentially only the guard cells (arrow) in the epidermis of the maize leaf are visible; they exhibit a strong endogenous autofluorescence at the 800 nm excitation wavelength. Panel (b) shows the same image area, but with an excitation wavelength of 1040 nm. The 1040 nm light effectively excites the protein tagged with YFP (RAB2A::YFP). The image clearly shows protein localization in the cell cytoplasm (arrow) and around the nuclei in cells (arrowheads). (The localization of RAB2A::YFP shown in (b) is expected for the protein, which is involved in vesicle trafficking in maize cells.119, 120) Additionally, "caged" compounds, such as caged Ca2+, which are activated upon irradiation, provide useful probes for calcium-sensitive cellular processes and are predominant reporters of neuronal activity.121, 122, 123 A very promising advancement in exogenous labeling is the development of quantum dots.
These semiconductor nanocrystals have very high TPA cross sections and provide access to intracellular processes and long-term in vivo observations of cell trafficking.111, 124 It is worth noting that endogenous fluorophores can be used in conjunction with the various labeling options to provide multicolor labeling of various tissue elements and subcellular domains within a single biological sample.125

Figure 4.


Autofluorescence and expression of YFP-labeled protein in maize leaves. Panel (a): Guard cells (arrow) in the epidermis of a maize leaf show cell wall autofluorescence at an excitation wavelength of 800 nm. Panel (b): A plant expressing a protein tagged with YFP (RAB2A::YFP) shows protein localization in the cell cytoplasm (arrow) and around the nuclei in cells (arrowheads) at an excitation wavelength of 1040 nm. The localization of RAB2A::YFP shown in (b) is expected for the protein, which is involved in vesicle trafficking in maize cells.

Creative engineering of bright fluorescent labels that span a wide range of excitation wavelengths126 continues to advance TPEF microscopy as an in vivo and in vitro biological imaging tool. Combined with advances in ultrafast lasers, fluorescence imaging now enables examination of live organ tissue at depths of up to 1 mm.125, 127 Studies have been done to investigate cerebral blood flow,128, 129 neuronal activity,130, 131 and spine morphology,132, 133, 134 yielding key insights into the functionality and possible presence of disorders in the brain. Long-term, high-resolution TPEF imaging of the neocortex in living animals has enabled researchers to study the growth and progression of implanted tumors135, 136 and other pathological conditions such as Alzheimer's disease.137, 138, 139 Visualizing in three dimensions the blood flow response to induced aneurysms enables a deeper understanding of how the brain functions during a stroke.129 Similar work has been done to study the mammalian kidney,140, 141 heart,117 and skin.115

TPEF imaging technologies have evolved from optics laboratory research platforms to compact, mechanically flexible microscopes for in vivo imaging in a clinical setting. High throughput fiber optic imaging configurations combined with miniature gradient-index (GRIN) lenses have led to the design of miniaturized two-photon microscopes122, 142 and microendoscopes.143, 144, 145 These portable, lightweight microscopes show tremendous promise for biomedical research and noninvasive imaging, potentially eliminating the need for multiple biopsies when identifying and tracking cancerous cells.

The development of commercially available, turnkey, tunable femtosecond lasers that can span the 800–1300 nm wavelength range has elevated TPEF microscopy to a powerful tool for examining cellular and subcellular functions within living tissue. Advancements in the engineering of high action cross section fluorophores will likely further improve the ability to image deeply into scattering tissue while minimizing photobleaching and photodamage. In the next subsection we describe how the fluorescence lifetime can be used as an important image contrast mechanism. This is followed by a discussion of the basic mechanisms of photobleaching and some strategies to reduce its effects in TPEF imaging and fluorescence lifetime imaging.

Fluorescence lifetime imaging

Images recorded by TPEF show a fluorescence intensity distribution map of the sample where the fluorescence intensity is proportional to the concentration of the fluorophores; however, the intensity also depends on the fluorescence lifetime. The fluorescence lifetime can be used as another contrast mechanism for imaging; this is exploited in fluorescence lifetime imaging microscopy (FLIM).146 This technique can be implemented with time-domain or frequency-domain detection. For frequency-domain detection, continuous-wave (cw) lasers modulated at a high frequency are employed, while time-domain FLIM is based on pulsed ultrafast lasers. Since nonlinear excitation inherently requires a femtosecond or picosecond laser, time-domain FLIM is easily implemented. FLIM can be accomplished in wide field or in a scanning configuration. In wide-field microscopy a gated CCD camera is synchronized with the laser pulses.147, 148 High frequency camera gating is usually accomplished by coupling an image intensifier to a CCD camera. In time-domain measurements, a sequence of time-gated images is recorded at different delays with respect to the excitation pulse. The lifetime at each pixel is obtained by fitting exponentials to the reconstructed fluorescence decays.148 Recently, spatially resolved FLIM measurements in a wide-field microscope were performed using time-correlated single photon counting (TCSPC) detection with a multichannel plate photomultiplier combined with a quadrant detector.149, 150 This method can give higher time resolution and lower background noise by taking advantage of TCSPC detection. Wide-field FLIM has parallel detection over many thousands of pixels, which enables fast measurements of spatially resolved fluorescence decays at a high temporal resolution.
However, it is applicable only to fluorescence mapping from surfaces such as intact leaves,151 or to investigations of thin, weakly scattering samples such as a single layer of cells,152, 153 because out-of-focus fluorescence contributes strongly to the image. Axial resolution in wide-field fluorescence, as in regular bright field microscopy, is limited, especially for thicker specimens. The axial resolution can be dramatically improved with tightly focused femtosecond pulses in a nonlinear excitation laser scanning microscope, where a single-channel detector is used instead of the 2D array detector of wide-field microscopy. Fluorescence lifetimes can then be recorded for each pixel with TCSPC.154 TCSPC has inherent background signal rejection and is therefore highly suited for low excitation intensity applications. In TCSPC detection, the arrival time at the detector of each signal photon is measured, and a histogram of the arrival times is constructed, representing the fluorescence decay. The decay is fitted with a multiexponential function to obtain the fluorescence lifetimes, and a fluorescence lifetime image can be constructed from the lifetime values obtained at each pixel. Fluorescence lifetimes can then be used as a contrast mechanism for imaging, revealing differences in fluorescence quenching or the distribution of several fluorophores within the image.154
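As a concrete illustration of the time-domain analysis described above, the sketch below builds a TCSPC arrival-time histogram and extracts a single-exponential lifetime with a log-linear least-squares fit. This is a simplification for illustration only: the function name and parameters are hypothetical, and practical FLIM analysis typically uses weighted multiexponential fitting of the measured decay.

```python
import numpy as np

def fit_lifetime(arrival_times_ns, n_bins=256, t_max=25.0, min_counts=50):
    """Estimate a single-exponential lifetime from TCSPC photon arrival
    times: histogram the arrivals, then fit log(counts) vs. time.
    (Hypothetical helper; real analyses fit multiexponential decays.)"""
    counts, edges = np.histogram(arrival_times_ns, bins=n_bins,
                                 range=(0.0, t_max))
    centers = 0.5 * (edges[:-1] + edges[1:])
    well_filled = counts > min_counts        # avoid noisy log() of sparse bins
    slope, _ = np.polyfit(centers[well_filled],
                          np.log(counts[well_filled]), 1)
    return -1.0 / slope                      # I(t) = I0 exp(-t/tau) => tau = -1/slope

# Synthetic decay: 200 000 photons drawn from a tau = 2.5 ns exponential
rng = np.random.default_rng(0)
tau_est = fit_lifetime(rng.exponential(2.5, size=200_000))
```

With a few hundred thousand photons per pixel this simple fit recovers the lifetime to within a few percent; in practice the photon budget per pixel is far smaller, which is why background rejection matters.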

Fluorescence lifetime imaging can be combined with spectral resolution, recording the fluorescence decay information at several wavelengths. This multidimensional detection technique can be implemented in spectrally resolved fluorescence lifetime microscopy (SLIM). In SLIM, a monochromator is placed after a confocal pinhole and fluorescence is dispersed onto a linear detector array. For each detected photon, the linear detector array records the arrival time, and also the channel number. Therefore, temporal and spectral information are recorded for each pixel.154 The spectrotemporal information provides better differentiation of fluorophores in mixtures. FLIM and SLIM are very promising techniques that have recently emerged.154

Photobleaching of fluorescent markers in two-photon excitation microscopy

Photobleaching of fluorescent molecules is a well known, although not fully understood, phenomenon in fluorescence microscopy. Also known simply as bleaching or fading, photobleaching reduces the ability of a fluorophore to re-emit absorbed energy in the form of a fluorescent photon. While this property can be useful for some experimental measurements,155, 156 photobleaching is, in general, an undesirable effect in fluorescence microscopy. The most obvious adverse effect of photobleaching is a decrease in the viewing time of fluorescent samples owing to a reduction in the flux of signal photons as a function of time. For example, biological time scales are often much longer than the time scale over which appreciable bleaching occurs, limiting the window in which a biological process can be viewed. Furthermore, the spatial resolution of an image is closely related to the signal-to-noise ratio;28 because of the Poisson statistics of photon detection, measuring more photons results in a more accurate determination of position within the sample. Thus, while photobleaching can be advantageous in certain experiments, it is undesirable in fluorescence microscopy.

Two-photon excitation microscopy (TPEM) eliminates some of the problems associated with bleaching in confocal microscopy simply by the nature of TPA.10 Since TPA only occurs efficiently in the focus of the laser, bleaching is limited to that same volume. This is in contrast to confocal microscopy, in which the sample fluoresces over the entire path of the focused laser due to the much larger linear absorption cross section, leading to bleaching in regions of the sample that are not imaged to the detector. Thus TPEM avoids out-of-focus photobleaching and allows the sample to be optically sectioned without bleaching occurring prior to image collection. Nonetheless, it has been demonstrated that photobleaching in TPEM occurs more quickly than in confocal microscopy,107 again serving to limit the viewing time and leading to degradation of image quality over successive images.

There has been considerable interest in studying the underlying mechanism of photobleaching in the hopes of reducing, and ideally eliminating, its presence in TPEM. While a solution for eliminating photobleaching entirely has thus far gone undiscovered, much has been learned about the processes that contribute to bleaching, thereby allowing for numerous advances to be made in reducing rates of bleaching and increasing fluorescence yield in typical TPEM setups. The majority of these advances have been made by manipulation of the femtosecond laser pulse trains used to activate TPA. In order to understand how these various techniques work to minimize bleaching, it is useful to first examine the photokinetics of fluorescent molecules from a theoretical standpoint.

A fluorescent molecule can be represented schematically by a Jablonski diagram, as shown in Fig. 5, which shows the electronic (bold lines) and vibrational (light lines) energy levels of the fluorophore. Here S and T denote singlet and triplet states, respectively, while the subscript integers refer to the level of the excited state. The Jablonski diagram shows the pathways that an excited molecule may take for relaxation. Some of those pathways, such as fluorescence and phosphorescence, are radiative and result in the emission of a photon. Other processes such as vibrational relaxation are nonradiative. Using a simplified version of the Jablonski diagram (in which the vibrational energy levels are ignored), it is a straightforward exercise to develop a set of differential equations that describe the population dynamics of each level of the molecule based on rates of competing population transfer mechanisms.157, 158, 159, 160 If it is assumed that no photobleaching occurs, the population densities within the molecule soon reach the steady state. By including a loss of population from excited energy levels within the molecule, each occurring with some rate constant, it is possible to account for photobleaching in this model.157
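The simplified rate-equation picture described above can be written down directly. The sketch below integrates a three-level model (S0, S1, T1) with an irreversible bleaching channel out of the triplet state; all rate constants are illustrative values chosen only to reproduce the qualitative behavior (fast singlet decay, slow triplet decay), not literature data for any specific fluorophore.

```python
# Rates (s^-1) are illustrative, not literature values.
k_exc = 1e6    # effective two-photon excitation rate, S0 -> S1
k_f   = 2.5e8  # S1 -> S0 decay (fluorescence + internal conversion)
k_isc = 1e6    # S1 -> T1 intersystem crossing
k_t   = 1e3    # T1 -> S0 decay (phosphorescence + nonradiative)
k_b   = 1e2    # irreversible bleaching out of T1

def evolve(t_end, dt=1e-9):
    """Forward-Euler integration of the S0/S1/T1 rate equations with a
    bleaching loss channel; returns the final populations."""
    s0, s1, t1, lost = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d_s0 = -k_exc * s0 + k_f * s1 + k_t * t1
        d_s1 = k_exc * s0 - (k_f + k_isc) * s1
        d_t1 = k_isc * s1 - (k_t + k_b) * t1
        d_lost = k_b * t1
        s0 += d_s0 * dt
        s1 += d_s1 * dt
        t1 += d_t1 * dt
        lost += d_lost * dt
    return s0, s1, t1, lost

# After 0.1 ms of continuous excitation, T1 holds far more population
# than S1 because the triplet decays orders of magnitude more slowly.
s0, s1, t1, lost = evolve(1e-4)
```

Because the slow triplet decay bottlenecks the return to the ground state, population accumulates in T1 at the expense of the fluorescing S1 state; this is exactly the accumulation that the T-Rex/D-Rex schemes discussed below are designed to avoid.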

Figure 5.

Schematic representation of the electronic (bold lines) and vibrational (light lines) energy levels within a fluorescent molecule. Nonradiative transitions such as vibrational relaxation are shown with wavy lines, while radiative transitions such as TPA and fluorescence are represented with straight arrows. There can also be a nonradiative internal transition (not shown) of excited molecules down to the ground state.

Photobleaching has been found to result from several different energy transfer pathways inside fluorescent molecules. For example, it is well known from various studies that the first excited singlet state of a fluorescent molecule has a lifetime on the order of a few nanoseconds, while the lifetime of the first triplet state typically extends from a few microseconds to milliseconds. This difference in decay rates means that shortly after excitation begins, excited molecules will begin to accumulate in the triplet state. Because it is the radiative relaxation from the S1 state to the ground state S0 that is responsible for the fluorescence signal, a decrease in the population of S1 clearly leads to a decrease in fluorescence yield. One solution to this problem is reported by Donnert et al.,161 in which they reduced the repetition rate of the excitation source to allow the excited triplet state to decay between successive pulses. Dubbed T-Rex (triplet-state relaxation) or D-Rex (dark-state relaxation), this scheme allows those molecules that exist in excited states other than S1 to relax back to the ground state before the next excitation pulse. By permitting these molecules to relax between successive excitations, the accumulation of molecules in undesirable excited states, especially T1, is significantly reduced, and the bleaching rates as well as fluorescence yield are dramatically improved. In another study by Ji et al.,162 the repetition rate of the excitation source was increased by splitting each pulse in the excitation train into 128 pulses of equal energy. Similar gains in fluorescence signal and reductions in bleaching rates were found with this method.

There are, however, multiple processes that govern the photobleaching rates of fluorescent molecules. For example, it has also been shown that bleaching rates are closely related to chemical reactions between the dye and its environment, as well as of the dye with itself.159, 160 In this case, it is typically oxidation of the fluorophore, through reactions involving a triplet excited state of the molecule, that chemically alters the molecules so that they can no longer fluoresce. This has been known for some time, and several techniques to reduce the interaction of excited states with oxygen have been introduced, all with some measure of success in reducing photobleaching.159, 160, 163 These techniques have not, however, succeeded in turning off photobleaching altogether.

Other studies have examined the effect of the shape of the temporal intensity envelope on photobleaching in TPEM. It has long been known that it is necessary to compensate for the dispersion induced by the microscope elements in order to maintain the highest possible two-photon efficiency.164 In recent years, several studies have been reported in which high-order phase correction was employed to examine not only the improvement in image contrast but also the rates of photobleaching.141 In another study, an adaptive learning algorithm was used in an attempt to find the optimal pulse shape for minimizing photobleaching rates.165 While it was reported that the rate of photobleaching was indeed decreased by a factor of 4, the two-photon fluorescence signal was significantly decreased compared to imaging with nearly transform limited pulses. Conversely, it has been shown that the optimal pulse for imaging under photobleaching conditions is indeed transform limited.141 Despite the increased bleaching rates found with these pulses, the total number of fluorescence photons emitted is dramatically improved for typical imaging time scales.

Although photobleaching in TPEM is an obstacle to many biological assays, especially in vivo, many schemes have been developed to slow bleaching rates and increase the total fluorescence photon flux as a function of time. The rapid growth of techniques to slow bleaching in the past several years makes it apparent that bleaching has become a limiting factor in many applications of fluorescence microscopy. While many schemes exist to minimize the effects of photobleaching in TPEM, in some cases it is possible to eliminate the need for fluorescence detection, and therefore photobleaching, by imaging with harmonic generation or absorption.

SHG microscopy

SHG microscopy has been gaining popularity because it is label-free, biologically compatible, and noninvasive. This imaging modality can probe molecular organization on the micro- as well as the nanoscale. Figure 1b represents the process schematically. Since the second-order polarization depends on the square of the electric field, reversing the electric field does not reverse the direction of the induced polarization; hence, the second-order susceptibility tensor vanishes in centrosymmetric media.1 This SHG cancellation occurs whenever emitters are aligned in opposing directions within the focal volume of the laser; such a situation occurs in isotropic media and media with cubic symmetry. Nonlinear emission dipoles aligned in an antiparallel arrangement produce SHG exactly out of phase, and the signals cancel through destructive interference. For example, noncentrosymmetric glucose molecules arranged in a noncentrosymmetric crystalline structure produce intense SHG; however, when dissolved in water, the glucose molecules become randomly dispersed and no SHG is produced. Selective SHG cancellation due to central symmetry has recently been observed in several biologically relevant systems, including lipid vesicles infused with styryl dye,166 anisotropic bands in muscle cells,7 and plant starch granules.17 In biological materials where SHG emitters are well organized in noncentrosymmetric microcrystalline structures, the SHG from different emitters adds coherently, resulting in very intense SHG. Examples of such structures include the aforementioned starch granules and other polysaccharides,4, 167 collagen,168, 169 striated muscle,19, 168 and chloroplasts.4 As previously mentioned, SHG from some biological samples, for example, starch granules, is strong enough to allow pulse characterization measurements not only in the forward37 but also in the backward direction.170

Backward propagated SHG arises from backscattered forward propagated SHG and from direct backward emission, which is possible from surfaces3 as well as from scatterers smaller than the wavelength of light.171 SHG has been detected in the epidirection from live human muscle tissue,172 small cellulose fibrils,173 the outer rim of starch granules,174 enamel,175 and in live animal muscle imaging.172 Backward propagated SHG promises to be an indispensable contrast mechanism for microendoscope investigations in the near future.

Intense second harmonic can be generated at an interface or boundary between two optically different materials, since the central symmetry is broken there.176 Microscopic SHG imaging can be achieved from a monolayer of molecules at an interface or from molecules asymmetrically arranged in lipid membranes.166 This provides a sensitive tool for studying molecules adsorbed on a surface.3 Even in centrosymmetric media, an interface can generate intense SHG due to the high electric field gradient.177 This effect has recently been shown to occur in semiconductor nanowires.178 In practice, the surface SHG might be difficult to distinguish from the bulk SHG.179 For reviews of surface SHG techniques, the reader is referred to the literature.180, 181 The importance of crystalline order and centrosymmetric organization of molecules for SHG is demonstrated in the following two examples: the starch granule and the anisotropic bands of striated muscle cells.

Starch granule imaging

A starch granule is a centrosymmetric biocrystalline structure that extends radially outward from the hilum, the nucleus of the grain. Figure 6 shows images of a potato starch granule recorded with SHG microscopy using a Ti:sapphire laser. Images were obtained using linearly polarized light oriented horizontally (a) and vertically (b), as well as circularly polarized light [panels (d) and (e)]. Panel (c) shows a polarization anisotropy image calculated pixel by pixel from images (a) and (b) by the formula A=(I−Ip)∕(I+Ip), where I and Ip are the pixel intensities for linearly polarized light oriented horizontally and vertically, respectively. Circularly polarized light highlights the entire starch granule except for the central part, where the centrosymmetric structure of the hilum is located, as shown schematically in Fig. 6f. The hilum also remains dark in the SHG images recorded with linearly polarized light [Figs. 6a, 6b]. Linearly polarized excitation reveals two lobes with SHG generated parallel to the polarization of the laser beam. The highlighted SHG structure can be understood by noting that the focal volume of the microscope objective is much smaller than the granule. The volume is raster scanned in the focal plane to construct the image pixel by pixel; therefore, the SHG intensity in each pixel depends on the net orientation of the nonlinear dipoles and the polarization of the laser beam. The images constructed from individual pixels reveal that the SHG intensity follows a cos² θ dependence, where θ is the angle between the linear polarization of the laser and the net orientation of the nonlinear dipoles. The lobelike structure reveals the radial arrangement within the granule, with a centrosymmetrically arranged hilum in the center of the grain [see Fig. 6f]. The polarization anisotropy image, Fig. 6c, is very similar to the so-called Maltese cross image of the starch granule observed in a linear polarization microscope.182 The anisotropy value along the vertical and horizontal axes in this image is nearly 1, indicating good radial alignment of the nanocrystallites in the starch granule. The anisotropy image also shows where the nanocrystallites bend away from the radial orientation throughout the granule, and that the periphery of the granule is particularly well aligned.
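The anisotropy map itself is a simple pixelwise computation. A minimal numpy sketch (the function name is hypothetical) that implements A=(I−Ip)∕(I+Ip) and sets dark pixels to zero to avoid dividing by zero:

```python
import numpy as np

def anisotropy_image(i_par, i_perp):
    """Pixel-by-pixel polarization anisotropy A = (I - Ip)/(I + Ip).
    Pixels where both input images are dark are set to 0 (0/0 guard)."""
    i_par = np.asarray(i_par, dtype=float)
    i_perp = np.asarray(i_perp, dtype=float)
    total = i_par + i_perp
    return np.divide(i_par - i_perp, total,
                     out=np.zeros_like(total), where=total > 0)

# Two toy 2x2 "SHG frames" standing in for the horizontal/vertical images
a = anisotropy_image([[4.0, 1.0], [0.0, 2.0]],
                     [[0.0, 1.0], [0.0, 2.0]])
```

Pixels dominated by one polarization approach A = ±1 (good dipole alignment along that axis), while pixels with equal response in both polarizations give A = 0.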

Figure 6.

SHG imaging of a potato starch granule. Different polarizations of the fundamental laser radiation were used, as indicated in the bottom left corner of the panels: linearly polarized light oriented horizontally (a) and vertically (b), as well as circularly polarized light [(d) and (e)]. The polarization anisotropy image shown in (c) was calculated from images (a) and (b). Image (e) presents a starch granule pretreated in 68 °C water for 20 s and shows a reduction in SHG intensity due to the heat treatment. The scale bar is 3 μm in (a) and 20 μm in (e). A schematic model of a starch granule is shown in (f).

Muscle imaging

Similarly to the starch granule, the biocrystalline structure of the anisotropic bands (A-bands) of sarcomeres in muscle cells generates strong SHG. The characteristic striated structure of myocytes from Drosophila melanogaster larva muscle can be seen in Fig. 7a. Myofilaments in each anisotropic band are arranged in a hexagonal crystalline structure.183 Each myofilament consists of myosin molecules arranged antiparallel at the center of the filament (M-line) and extends over several microns in two opposite directions along the myofibril. The M-line manifests itself as a dark band, resulting in the double-banded appearance of the anisotropic band in Fig. 7a. When myocytes are stretched, the period of the striated structure increases, mainly due to the elongation of the dark regions, the so-called isotropic bands (I-bands) [Fig. 7b]. Unexpectedly, the double bands evolve into single bands and the M-line disappears in most of the sarcomeres. This is clearly seen in the anisotropic band profiles of the stretched and unstretched myocyte in Fig. 7c. The centrosymmetric arrangement that leads to cancellation of the SHG radiation is distorted during stretching: some dipoles separate from each other and lose their counterparts, or the separation of the dipoles changes the phase differences of the emitted SHG, leading to constructive interference and an increase in the SHG intensity. This feature can be used to correlate the change in sarcomere size with SHG intensity during myocyte contraction.7

Figure 7.

A comparison of SHG imaged structures of unstretched (a) and stretched (b) muscle cells from Drosophila melanogaster larva. The arrows point to the same sarcomere, which appears in its double-banded form in the nonstretched myocyte (a) and in its single-banded form in the stretched myocyte (b). A line profile of the indicated (arrow) sarcomeres is plotted in (c), showing that unstretched sarcomeres have a double-peaked A-band and stretched sarcomeres a single-peaked A-band. The anisotropic muscle regions, indicated by A, produce SHG, while the isotropic muscle regions, indicated by I, show no SHG. The position of the M-line is indicated by M. The scale bar is 8 μm.

In both examples given above, the starch hilum and the M-lines of the myocytes appear as dark regions inside the bright SHG areas of the semicrystalline biological structure. Similar SHG cancellation effects can be observed in noncentrosymmetric membranes doped with SHG emitters and adhered to one another184 or in chloroplasts due to the antiparallel stacking of photosynthetic thylakoid membranes in the grana.17

THG microscopy

THG is dipole allowed in inhomogeneous as well as homogeneous media.1 Although THG is induced in bulk media, certain restrictions on the generation are imposed when focusing a laser beam with a high NA microscope objective.185 Already in 1969, during the early stage of nonlinear optics investigations on harmonic generation, Ward and New186 showed that THG vanishes under tight focusing in gases. These results can be understood in terms of the phase anomaly or Gouy phase—a π phase shift experienced when going through a focus in normally dispersive materials. This sign reversal results in the destructive interference of THG signals originating from opposite sides of the focal point, thus canceling any signal in the far field.186 The solution of the paraxial wave equation for the amplitude of third harmonic (A) can be written as follows:1

A_{3\omega}(z) = \frac{i\,6\pi\omega}{nc}\,\chi^{(3)} A_\omega^{3}\, J_{3\omega}, \qquad J_{3\omega}(\Delta k, z_0, z) = \int_{z_0}^{z} \frac{e^{i\Delta k z'}}{(1 + 2iz'/b)^{2}}\, dz'. \qquad (6)

This equation shows that the third harmonic depends on the third power of the amplitude of the fundamental electric field Aω, on the third-order susceptibility χ(3), and on the phase-matching integral J. The integral can be calculated numerically for any arbitrary structural configuration, with z0 being the z value at the entrance to the nonlinear medium. In Eq. 6, b represents the confocal parameter and Δk is the phase mismatch, expressed as Δk = 3kω − k3ω. For focused Gaussian beams inside a homogeneous nonlinear medium, the phase-matching integral J can be expressed analytically as

J_{3\omega}(\Delta k, z_0, z) = \begin{cases} 0, & \Delta k \le 0 \\ \tfrac{1}{2}\pi b^{2}\,\Delta k\, e^{-b\Delta k/2}, & \Delta k > 0. \end{cases} \qquad (7)

This equation shows that the integral vanishes for normally dispersive materials and even for perfect phase matching (Δk = 0). Thus no THG is observed in the far field when a beam is focused into a material with normal dispersion.
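The behavior of Eq. 7 can be checked by direct numerical integration of the phase-matching integral of Eq. 6. The sketch below works in arbitrary units (b = 1) and approximates the tight-focusing limit z0 → −∞, z → +∞ with a large finite window; the grid size and window are choices of this illustration, not values from the text.

```python
import numpy as np

def J3(delta_k, b=1.0, L=500.0, n=200_001):
    """Trapezoidal evaluation of J = int e^{i dk z} / (1 + 2iz/b)^2 dz
    over (-L, L), approximating the tight-focusing limit."""
    z = np.linspace(-L, L, n)
    f = np.exp(1j * delta_k * z) / (1.0 + 2j * z / b) ** 2
    dz = z[1] - z[0]
    return dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# The Gouy phase shift kills far-field THG for dk <= 0; for dk > 0 the
# analytic result of Eq. (7) is (pi b^2 dk / 2) exp(-b dk / 2).
j_neg = J3(-2.0)   # expected ~ 0
j_pos = J3(+2.0)   # expected ~ pi * exp(-1)
```

The numerical result reproduces both branches of Eq. 7: the integral is negligible for negative phase mismatch and matches the analytic value for positive Δk.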

However, THG can be observed if the focal symmetry is broken, for example, by the presence of an interface between two materials with different refractive indices or third-order susceptibilities. Although THG appears at interfaces,187 it is a volume effect, since THG is radiated from the bulk of the media on both sides of the interface.188 Several recent reviews of THG microscopy are available in the literature.17, 25, 26 We now proceed to examine two examples of THG imaging: polystyrene beads and yeast. The first example illustrates some of the features and artifacts that are present in nonlinear imaging.

Polystyrene bead imaging

The sensitivity of THG to heterogeneities in the medium forms the basis of THG microscopy.11 To demonstrate that THG depends on the size of the object and on the orientation of its interfaces with respect to the laser beam propagation, several different sized polystyrene beads were imaged in 3D. The image stacks were rendered in 3D and are presented in the axial view in Fig. 8, where the laser enters from the bottom and the signal is collected above the bead. This is in contrast with all other images in this paper, which are presented in the lateral view, where the laser propagates into the page. The axial PSF of a 1030 nm laser focused by a 1.3 NA oil immersion objective is about 1 μm. Therefore, the two beads shown in panels (a) and (b) of Fig. 8, with diameters of 10 and 3 μm, respectively, are larger than the PSF, while beads equal to or smaller than the PSF are shown in panels (c) and (d) for 1 and 0.1 μm beads, respectively.

Figure 8.

Axial views of polystyrene beads of different sizes imaged with THG. The bead sizes are 10, 3, 1, and 0.1 μm, for panels (a) through (d), respectively. The laser propagation direction is indicated by the arrow. The scale bar is 5 μm.

It is important to understand typical images produced with THG and to relate them to the actual structure of the imaged object. Figure 8 shows that beads larger than the PSF, panels (a) and (b), emit THG only from the top and bottom interfaces, where the laser focal volume experiences a change in index of refraction between the surrounding matrix and the polystyrene. The bottom interface, at the input side of the laser, appears more compact than the top; this is due to the distortion of the beam induced by propagation through the structure, which results in a larger PSF. Note that no signal originates from the middle of the large beads, where the laser focal volume experiences a homogeneous polystyrene medium. There is also no THG radiated from the sides of the bead, because the interface there is parallel to the laser propagation direction, so the breaking of the transverse beam symmetry of the focal volume is almost negligible. These characteristics of the radiated THG for objects larger than the PSF could lead to misinterpretation of acquired images, producing the impression that the image consists of two disjoint objects when in reality it is a single object.

For polystyrene beads smaller than or comparable to the axial PSF, panels (c) and (d) of Fig. 8, the THG reveals a continuous volume comparable in size to the PSF. Interestingly, even for the 1 μm bead [Fig. 8c], the bottom side appears narrower, with a better defined border than the top of the bead, indicating some beam distortion even over such a short propagation distance. For the 0.1 μm bead, Fig. 8d, a very weak signal is observed; it is at least one order of magnitude weaker than for the larger beads. Since THG is generated in the bulk, the reduced volume of the bead explains this effect. It is also noteworthy that the image of the 0.1 μm bead, panel (d), does not appear significantly smaller than the image of the 1 μm bead, panel (c); this is simply due to the inability of the instrument to resolve objects below the diffraction limited resolution.

THG imaging selection rules are the same regardless of particle size. Thus, subwavelength spatial heterogeneities can serve to enhance the THG in certain systems, e.g., a multilayer structure.187, 189 THG intensity also depends on the difference between the refractive indices of the structure and the surrounding medium, as well as on the hyperpolarizabilities of the molecules and their ordering in the structure. THG from different structural arrangements has been modeled by numerical integration of the phase-matching integral, Eq. 6.190, 191

Yeast imaging

An example of THG imaging of a biological sample is presented for baker’s yeast, Saccharomyces cerevisiae. Figure 9b shows the intense THG image produced by this sample, while Fig. 9a shows the same sample imaged with TPEF. With an excitation wavelength of 1030 nm, there is almost no fluorescence produced by the yeast cells. This points to very low linear and multiphoton absorption at this wavelength, allowing the use of high laser powers to generate strong harmonic signals. In contrast, imaging yeast at the 800 nm wavelength of a Ti:sapphire laser produces photodamage at low intensities due to strong TPA. The THG image reveals the typical morphology of the cell. The cell wall emits strong THG, most probably due to the microfibril structure of glucan and chitin.192 The inner part of the cell contains several organelles, of size comparable to the PSF, that are very strongly highlighted by the THG. Similarly to the small bead examples in Figs. 8c, 8d, subdiffraction sized cellular organelles show up as ∼1 μm solid bodies within the cell. Isolation of the organelles and staining with a mitochondrial dye, tetramethylrhodamine methyl ester (TMRM), revealed that the observed organelles are predominantly mitochondria, although lipid bodies may give a strong THG signal as well.193, 194 Mitochondria are multilamellar membranous bodies reaching up to 500 nm in length in baker’s yeast.195 The multilamellar arrangement of the organelle enhances the third harmonic. The image in Fig. 9b also reveals other characteristic subcellular structures of yeast: the nucleus and vacuole can be seen as two larger dark areas, 2–3 μm in diameter, with a weak intensity rim. The absence of signal inside one of these bodies, compared to the weak background signal in the rest of the inner cell volume occupied by the cytoplasm, indicates a higher homogeneity of the medium inside the body.

Figure 9.

Baker’s yeast imaged with TPEF (a) and THG (b). The THG image clearly shows the cell wall and some internal organelles, predominantly mitochondria. The scale bar is 3 μm.

Besides baker’s yeast, THG has been used for visualization of biological membranes,185 cell walls,185 and multilayered structures such as grana of chloroplasts,4, 17, 185 aggregates of LHCII,17 and cristae in mitochondria.19 THG has also been observed in rhizoids from green algae,196 erythrocytes,197 cultured neurons and yeast cells,198 human glial cells,199 muscle cells,19, 183 Drosophila embryos,200 sea urchin larval spicules,201 hamster oral mucosa,202 lipid bodies,193 and tooth enamel.175

Multicontrast nonlinear microscopy

Since laser excitation can simultaneously generate several nonlinear optical responses, different contrast mechanisms can be used to record parallel images of the same structure with the instrumentation previously discussed.4, 19, 203, 204 Multicontrast microscopy is particularly beneficial when different nonlinear responses reveal different functional structures of the same biological object. For example, a multicontrast SHG and TPEF microscope was used to image labeled lipid vesicles,166 labeled neuroblastoma cells,205 muscle and tubulin structures,168 and labeled neurons.206 Simultaneous THG and TPEF detection was used for imaging human glial cells,199 while THG and SHG microscopies were used to monitor mitosis in a live zebrafish embryo.5 All three contrast mechanisms were implemented to image mitochondria in cardiomyocytes,19 chloroplasts,4, 203, 204 and photosynthetic pigment-protein complexes.203

Parallel images can be directly compared on a pixel by pixel basis. Although SHG, THG, and TPEF images originate from the same structure, their contrast mechanisms are fundamentally different. Comparing images obtained with coherent and incoherent contrast mechanisms can be very challenging because homogeneous structures cannot be visualized with SHG or THG but may be visible in fluorescence. Additional differences appear for signal generation at structural interfaces, where the optical properties change between two media. For THG and surface SHG, the signal maximum appears at the central position of the interface, whereas for an interface between bulk-fluorescing and nonfluorescing structures, only half of the onset intensity is reached at the interface position; the maximum TPEF signal intensity is observed when the full focal volume is immersed in the fluorescing medium. Therefore, image comparisons must always be made with caution.

It is always beneficial to deconvolve microscopy images with the PSF of the particular contrast mechanism. PSFs are different for each nonlinear response and unique to the optical setup, so a separate PSF should be recorded for each contrast mechanism and each microscope objective. Nondeconvolved images appear blurred due to the finite size of the focal volume. If deconvolution is not performed, comparison may lead to artifacts; for example, two neighboring structures revealed by different contrast mechanisms may appear to overlap because of this blurring.

Structural cross-correlation image analysis

Images obtained with different nonlinear contrast mechanisms can be directly compared using the method of structural cross-correlation image analysis (SCIA), initially developed for multicontrast imaging of myocytes.19 The method relies on a pixel by pixel comparison of simultaneously acquired images. The algorithm can also be applied to images recorded sequentially, as long as the scan conditions for all the images are identical. The SCIA procedure starts with a standardization of the images, applying lower and upper pixel intensity thresholds to remove (set to zero) low signal noise as well as artificially occurring high signal spikes, or “glitches,” in the image. Next, the images A(x,y) and B(x,y) are normalized to the maximum intensity and compared to each other pixel by pixel, where x and y denote the pixel coordinates in the image. The cross-correlated image, I(x,y), is calculated as follows:

I(x,y)=A(x,y)B(x,y). (8)

In addition to the correlated image, the algorithm produces two uncorrelated images, A∩¬B, and B∩¬A, where ∩ is the logical intersection and ¬ is the logical not. The uncorrelated images are constructed as follows:

(A∩¬B)(x,y) = { A(x,y), if I(x,y) = 0; 0, if I(x,y) ≠ 0 }, (9)
(B∩¬A)(x,y) = { B(x,y), if I(x,y) = 0; 0, if I(x,y) ≠ 0 }.

The three images are mutually exclusive and can be represented by three distinct colors within a single image. The algorithm extends naturally to the simultaneous correlation of three images, which results in seven mutually exclusive images: one triple correlation, A∩B∩C; three partial double correlations, A∩B∩¬C, A∩C∩¬B, and B∩C∩¬A; and three uncorrelated images, A∩¬B∩¬C, B∩¬A∩¬C, and C∩¬A∩¬B. The seven correlation images resulting from three-channel SCIA are mutually exclusive, so for each pixel only one of the seven channels has nonzero intensity. Each of the seven images can therefore be assigned a different color and combined into one image. The colors do not overlap, and the combined colocalized image reveals the multicontrast information from all three channels in a single image.
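A minimal numpy sketch of the two-channel SCIA procedure described by Eqs. (8) and (9); the threshold values are assumed placeholders, not those of the original implementation:

```python
import numpy as np

def scia(a, b, lo=10.0, hi=4000.0):
    """Two-channel structural cross-correlation image analysis (SCIA).

    Thresholds `lo` and `hi` (raw-intensity units, assumed placeholders)
    zero out low-signal noise and high-intensity "glitches" before the
    images are normalized to their maxima. Returns the correlated image
    I = A*B (Eq. 8) and the two mutually exclusive uncorrelated images
    A-and-not-B and B-and-not-A (Eq. 9).
    """
    def standardize(img):
        img = img.astype(float).copy()
        img[(img < lo) | (img > hi)] = 0.0   # noise floor and glitch removal
        m = img.max()
        return img / m if m > 0 else img

    a, b = standardize(a), standardize(b)
    corr = a * b                            # Eq. (8): pixel-by-pixel product
    a_not_b = np.where(corr == 0, a, 0.0)   # Eq. (9): A where B has no signal
    b_not_a = np.where(corr == 0, b, 0.0)   # Eq. (9): B where A has no signal
    return corr, a_not_b, b_not_a
```

The three outputs are mutually exclusive at every pixel, so each can be assigned its own color and summed into a single colocalization image; the three-channel version follows the same pattern with seven output images.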

SCIA is advantageous over a standard overlay even in the simpler two-channel case. In an overlay, when two pixel intensities are nonzero, the resulting color is not unique and depends on the colors and intensities of the two pixels, often yielding a confusing picture. The SCIA algorithm avoids this problem because each of the mutually exclusive correlation images is assigned a unique color. For three-channel correlations, an overlay is typically not even attempted because of the confusion of colors. The 2D correlated structures can be assembled into 3D structures if the same SCIA settings are used for each 2D slice. Three-channel SCIA has been implemented for multicontrast microscopy of chloroplasts.17 What follows are two examples of SCIA images: two-channel SCIA of mitochondria in cardiomyocytes and three-channel SCIA of chloroplasts.

Imaging of mitochondria in cardiomyocytes

Figure 10 shows an example of multicontrast imaging that was used to provide evidence that THG is effectively generated from mitochondria in muscle cells.19 Only a small portion of the cardiomyocyte is shown. Muscle cells were stained with TMRM, a mitochondria-specific dye,207 and imaged with TPEF and THG as shown in Figs. 10a, 10b, respectively. TMRM is a cationic dye that permeates lipid membranes and accumulates in compartments with a highly negative potential according to the Nernst equilibrium. The TPEF image, Fig. 10a, shows spherical structures organized in rows running from the upper right to the lower left corner of the image. This structural pattern is characteristic of the spatial arrangement of mitochondria, which are situated in rows along the myofibrils of a cardiomyocyte.208 The THG image, Fig. 10b, shows a pattern very similar to the TPEF image. The correlated image, Fig. 10c, shows that most of the THG overlaps with the TPEF, confirming that the THG signal originates mostly from the mitochondria. The THG generated by the mitochondria is rather intense, probably because of the multilamellar structure of mitochondria: the cristae, the densely folded inner membrane, together with the outer membrane. As mentioned before, multilayer structures can enhance the THG signal if the third harmonic generated at each interface interferes constructively with that from the others.187

Figure 10.

2D projections of 3D rendered multicontrast microscopy images of TMRM-labeled mitochondria in myocytes. TPEF from TMRM is shown in (a), THG in (b), two-channel correlation in (c), uncorrelated TPEF in (d), and uncorrelated THG in (e). The scale bar is 5 μm.

The two uncorrelated images, pure TPEF and pure THG, are presented in Figs. 10d, 10e, respectively. Only a small part of the structure appears in the uncorrelated images. The uncorrelated TPEF signal might be due to autofluorescence and to the accumulation of TMRM not only in mitochondria but also in the sarcoplasmic reticulum, which is not highlighted by THG. In addition, TPEF has a larger PSF than THG, resulting in the appearance of hollow, shell-like structures in the uncorrelated TPEF image, Fig. 10d. The uncorrelated THG, Fig. 10e, is mostly located at the periphery of the myocyte; it originates from membranous cellular structures other than the labeled mitochondria. Since the strongest THG is generated when an interface is positioned perpendicular to the beam propagation, it is not surprising that additional structures are highlighted by THG but not labeled by TMRM.

Imaging of chloroplasts

Multicontrast imaging of individual chloroplasts and photosynthetic subcellular organelles of plant cells can be performed in situ and in vivo.4, 17, 185 Chloroplasts contain grana: photosynthetic membranes organized in a multilamellar fashion, somewhat similar to the cristae of mitochondria discussed above. Since harmonics propagate mostly in the forward direction, thin leaf slices or isolated chloroplasts are usually chosen for multicontrast microscopy. Upon illumination with a Yb:KGW laser emitting at 1030 nm, carotenoid molecules undergo two-photon excitation.209 Carotenoids sit in close proximity to chlorophylls in the photosynthetic pigment-protein complexes, so efficient excitation energy transfer from carotenoids to chlorophylls takes place; the chlorophylls subsequently fluoresce in the 650–720 nm spectral range. Figures 11a, 11b, 11c show isolated chloroplasts immobilized in polyacrylamide gel and imaged simultaneously with TPEF emitted by chlorophylls, SHG, and THG, respectively. The SCIA correlation image is presented in Fig. 11d. The chlorophyll fluorescence in the TPEF channel can be used to identify the grana and stroma regions, where the pigment-protein complexes reside inside the chloroplast. Although the fluorescence is easily observed at low excitation powers, the higher illumination powers required to generate harmonics simultaneously induce fluorescence quenching in the photosynthetic membranes.210 The fluorescence intensity distribution is more homogeneous than that of the other signals but varies from one chloroplast to another due to fluorescence quenching, as illustrated by Fig. 11a.

Figure 11.

Multicontrast microscopy of freshly isolated, face-aligned chloroplasts from pea (Pisum sativum) leaves. Chlorophyll fluorescence in the range of 650–720 nm is shown in (a), SHG in (b), THG in (c), and the combined correlated image in (d). Uncorrelated SHG is green, uncorrelated THG is blue, uncorrelated TPEF is red, and correlated TPEF and THG is cyan. The scale bar is 3 μm.

The SHG image, Fig. 11b, shows a two-lobed structure in the middle of one of the chloroplasts. The signal originates from a starch granule. The lower chloroplast does not contain starch and shows very little SHG. Strong SHG has previously been observed from isolated pigment-protein complexes (LHCII), the most abundant pigment-protein complexes in chloroplasts.178, 203 Each LHCII is embedded with the same orientation in the grana and stroma membranes of the chloroplast; however, the double stroma membranes and the stacked thylakoid membranes in the grana are staggered, with every second membrane flipped. This organization creates a centrosymmetric arrangement of the pigment-protein complexes, resulting in a strong reduction of the SHG.

Intense THG can be generated from chloroplasts in plant cells due to the high hyperpolarizability of chlorophyll and, especially, carotenoid molecules. The photosynthetic membranes span the chloroplast, generating THG throughout its volume. In addition, the concentration of pigment-protein complexes is higher in the grana region, and the multilayer structure of the grana enhances the THG compared to the corresponding monolayer interfaces. The grana are on the order of a few hundred nanometers in diameter; therefore, THG reveals the grana regions as structures comparable in size to the PSF inside the chloroplast. The grana arranged in a ringlike fashion at the periphery of the chloroplasts can be seen in Fig. 11c. The THG intensity of the grana depends on whether the thylakoid membranes are oriented perpendicular or parallel to the beam propagation axis, as well as on the number of membrane layers within the focal volume.17

The correlation image, Fig. 11d, reveals the starch granule as uncorrelated SHG in green. As shown in the previous section, starch granules do not produce bulk THG or TPEF at this wavelength. The correlation image also reveals regions related to the grana, colored predominantly purple in the lower chloroplast and blue in the upper chloroplast. The fluorescence of the upper chloroplast is highly quenched; therefore, the correlation lights up its grana in blue as uncorrelated THG regions. The purple color represents grana that have relatively high fluorescence and THG signals. The red regions in the middle of the chloroplasts show that uncorrelated TPEF originates mostly from the stroma membranes. Imaging with multicontrast microscopy at different depths and different polarization orientations can reveal many details of the 3D organization of individual chloroplasts.

New applications

In this section we offer examples of two new applications in the field of nonlinear microscopy. The first addresses the use of TPEF, in combination with microfluidic devices, to perform chemical analysis, while the second introduces new contrast mechanisms, two-photon absorption (TPA) and self-phase modulation (SPM), for imaging problems in the medical sciences.

Multiphoton imaging in microfluidics

An emerging application of multiphoton microscopy is the measurement of dynamic molecular and cellular properties in miniaturized analysis systems, often called microfluidics. Molecular kinetics studies in microfluidics have revealed properties such as activation energies,211 the presence of rate-limiting steps,212 and the structures of transition states.213 On the cellular level, microfluidics have been used to classify and sort cells by size and fluorescence.214, 215 Microfluidic channel dimensions are typically tens to hundreds of micrometers. In microfluidics there is no turbulence, and mixing is diffusion limited. The driving force and mixing geometry can select the local fluid velocity and species concentration so as to isolate the effect of a given stimulus on a substrate. Detection in microfluidics requires the focused, sensitive optical probes that multiphoton microscopy provides. In addition, the tightly focused excitation reduces the intensity of light scattered from the narrow channel walls and increases the accuracy with which the effects of the local velocity and concentration can be identified.
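The laminar, diffusion-limited character of microchannel flow follows from simple dimensionless-number estimates; the channel size, velocity, and diffusivity below are assumed, typical-order values, not figures from any cited experiment:

```python
# Representative microfluidic conditions (assumed illustrative values)
rho = 1000.0   # water density, kg/m^3
mu = 1.0e-3    # water dynamic viscosity, Pa*s
L = 100e-6     # channel dimension, m (100 um)
v = 1.0e-3     # flow velocity, m/s (1 mm/s)
D = 1.0e-9     # small-molecule diffusivity, m^2/s

Re = rho * v * L / mu   # inertial vs viscous forces
Pe = v * L / D          # advective vs diffusive transport

print(f"Re = {Re:.3g}")  # ~0.1: orders of magnitude below the turbulence threshold
print(f"Pe = {Pe:.3g}")  # ~100: species are carried along faster than they diffuse
```

A Reynolds number far below ~2000 means the flow is strictly laminar, and a Péclet number well above 1 means cross-stream mixing happens only by diffusion, which is why mixing junctions and channel geometry control the local concentration so precisely.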

In situ quantitative imaging of product formation inside a microfluidic device was demonstrated using CARS microscopy by Schafer et al.216 Spectra collected during the proton transfer reaction between acetic acid and pyrrolidine were decomposed into contributions from reactants and product with submicrometer spatial resolution, ∼mM sensitivity, and millisecond time resolution. Label-free detection of the size distribution of mouse adipocyte cells, also using CARS microscopy, was demonstrated by Wang et al.217 Detection was provided by a laser line scan oriented perpendicular to the flow; the length of each cell was calculated from the scan rate of the laser and the local fluid velocity.
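The length calculation in the line-scan approach reduces to simple kinematics: between successive scan lines the cell advances by v/f, so a cell intersected by n lines is roughly n·v/f long. A sketch with assumed illustrative numbers (not values from Ref. 217):

```python
# Assumed illustrative values for a perpendicular line-scan measurement
v_fluid = 2.0e-3     # local fluid velocity, m/s
scan_rate = 1000.0   # line scans per second, Hz
n_lines = 25         # number of scan lines that intersect the cell

step = v_fluid / scan_rate            # distance the cell travels per scan line, m
cell_length = n_lines * step
print(f"estimated cell length: {cell_length * 1e6:.0f} um")  # 50 um
```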

Zugel et al. employed TPEF to detect the natural deep-UV fluorescence of proteins and aromatic compounds in a common microfluidic chip that is not UV transparent.218 They determined the migration rates of the constituents in the reaction of the substrate L-leucine beta-naphthylamine with the enzyme leucine aminopeptidase by microfluidic electrophoresis. The sensitivity of their system in a continuous-flow arrangement was 30 nM for the enzyme solution. Excitation was provided by a cavity-dumped dye laser producing 10 ps pulses at 580 nm with 120 mW average power. In another microfluidic electrophoresis experiment, Schulze et al. detected the separation of a mixture of aromatic compounds by TPEF in a borofloat glass microfluidic.219 They used a Ti:sapphire laser producing 320 mW of average power and ∼100 fs pulses and reached a detection sensitivity of 10–50 μg ml−1 at 420 nm.

The fluid velocity and mixing behavior within the microfluidic, important factors in understanding molecular events that exhibit shear, time, or concentration dependence, can also be quantified by multiphoton microscopy. Dittrich et al. determined the local fluid velocity by two-photon spectroscopy.220 Two foci were displaced laterally within the channel, and the magnitude of the fluid velocity component parallel to the vector between the foci was calculated by fluorescence cross-correlation. Mixing behavior can also be evaluated by TPEF, as demonstrated by Schafer and co-workers.221 A quenching reaction was resolved at a mixing junction, and the local concentration of the quenching species, potassium iodide, was calculated using the Stern–Volmer model.
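The Stern–Volmer model relates the ratio of unquenched to quenched fluorescence to the quencher concentration, F0/F = 1 + K_SV[Q], so a local concentration map can be backed out from a TPEF intensity map; the K_SV value below is an assumed placeholder, not the calibration from Ref. 221:

```python
import numpy as np

K_SV = 10.0  # Stern-Volmer constant, 1/M (assumed placeholder)

def quencher_concentration(F0, F, K_sv=K_SV):
    """Invert the Stern-Volmer relation F0/F = 1 + K_sv*[Q] for [Q] in molar.

    F0 is the unquenched fluorescence, F the measured (quenched) value;
    both may be scalars or pixel arrays.
    """
    return (np.asarray(F0) / np.asarray(F) - 1.0) / K_sv

# A pixel where quenching halves the fluorescence:
print(quencher_concentration(1.0, 0.5))  # 0.1 (molar)
```

Applied pixel by pixel to an image pair taken with and without the quencher, this yields the spatial concentration distribution across the mixing junction.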

These experiments demonstrate the utility of multiphoton microscopy both for dynamic molecular and cellular analyses and for tracking the local concentration and fluid velocity. The multiphoton probe is well suited for the sensitive, confined detection required in microfluidics, and there is much potential for real-time analysis in this carefully controlled environment.

TPA and self-phase modulation

Although TPEF, SHG, and THG provide important and powerful tools for the study of biological systems, new contrast mechanisms continue to be explored. In particular, two nonlinear interactions have recently found applicability in microscopy: two-photon absorption (TPA) and self-phase modulation (SPM). In contrast with TPEF, which is restricted to molecules exhibiting fluorescent transitions from the excited state, TPA can be observed in all molecules. SPM results from the variation of the refractive index of a medium with the incident light intensity, which in turn modulates the phase of the light. This modulation can depend on factors such as the environment and the presence of localized structures. Both effects occur at the excitation wavelength and are very weak compared to linear processes such as scattering or absorption, which makes their signatures very hard to detect. One current implementation uses pulse shaping to force these processes to produce a signal at a wavelength other than that of the exciting beam:222, 223 a hole is carved in the middle of the input pulse spectrum. Both TPA and SPM tend to refill this spectral hole, and since there are no other signals in this particular spectral region, they can be detected in the spectral domain free of background contributions. Using the SPM technique, Yurtsever et al. identified the oxygenation level of hemoglobin,224 while Fischer et al.225 applied it to study neuronal activity.
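The spectral-hole idea can be illustrated numerically: carve a hole in a pulse spectrum, apply a purely intensity-dependent phase (SPM) in the time domain, and observe energy reappearing inside the hole, where detection is background-free. All pulse parameters below are arbitrary illustrative choices, not values from Refs. 222 and 223:

```python
import numpy as np

N = 4096
t = np.linspace(-10.0, 10.0, N)                  # time, arbitrary units
dt = t[1] - t[0]
freq = np.fft.fftshift(np.fft.fftfreq(N, d=dt))

# Gaussian pulse spectrum with a narrow hole carved out of its center
spectrum = np.exp(-freq**2 / 2.0).astype(complex)
hole = np.abs(freq) < 0.2
spectrum[hole] = 0.0

# Shaped pulse in the time domain
field = np.fft.ifft(np.fft.ifftshift(spectrum))

# Self-phase modulation: phase shift proportional to instantaneous intensity
phi_max = 2.0                                    # peak nonlinear phase, rad (assumed)
intensity = np.abs(field)**2
field_spm = field * np.exp(1j * phi_max * intensity / intensity.max())

# After SPM, spectral energy reappears inside the initially empty hole
spectrum_out = np.fft.fftshift(np.fft.fft(field_spm))
refilled = np.sum(np.abs(spectrum_out[hole])**2)
print(f"energy refilled into the spectral hole: {refilled:.3g}")
```

In an actual instrument the light in the hole region would be isolated spectrally, so the weak nonlinear signal is not buried under the strong excitation spectrum.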

PERSPECTIVES

Multiphoton microscopy provides a flexible and powerful tool for the study of biological systems both in situ and in vitro. TPEF has been widely used and is now considered a well-established imaging tool. Harmonic imaging microscopy is a relatively new technique that allows visualization of biological structures without staining; although still in its development stage, it has already found many applications, as exemplified above. The demonstration of epidetected harmonic signals opens new possibilities for endoscopic harmonic imaging in biomedical research and diagnostics. The simultaneous combination of second and third harmonics yields structural details that are otherwise unattainable, such as sensitivity to multilayered structures or to changes in the nonlinear properties of the materials. These different contrast mechanisms can be implemented in a single instrument, providing rich information about the structure of the object of interest. In particular, the nonlinear signals can yield interesting orientation-dependent information about the organization of structures below the diffraction limit. Multimodal imaging and guided endoscopic microsurgery are examples of future directions within the biomedical sciences where harmonic microscopy will certainly play an important role.

It is certain that the future will bring more biological applications of harmonic generation microscopy, especially in live cellular imaging. New applications of existing contrast mechanisms will also continue to be explored, not only in microfluidics but in other fields as well. The field of nonlinear microscopy will continue to grow through the development of new imaging contrast mechanisms, such as TPA and SPM, and their implementation in multicontrast microscopes. The development of novel laser sources and new fluorophores, together with a better understanding of photobleaching, will also shape the field. Nonlinear microscopy can furthermore be combined with other imaging modalities, such as ultrasound or magnetic resonance imaging, to provide complementary information for biological and medical studies. The future of the field, and especially the expansion of its applicability, has much to offer to the advancement of science.

ACKNOWLEDGMENTS

R.C., D.S., K.S., J.F., and J.S. wish to acknowledge support from the National Institute of Biomedical Imaging and Bioengineering under Grant No. BRPEB003832.

R. Cisek and V. Barzda acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, and the Ontario Innovation Trust. They would also like to thank Catherine Greenhalgh and Nicole Prent for help with the manuscript preparation.

A.W.S. acknowledges support from the National Science Foundation through Grant No. DBI-0501862.

References

  1. Boyd R. W., Nonlinear Optics (Academic, Boston, 1992). [Google Scholar]
  2. Kriech M. A. and Conboy J. C., J. Am. Chem. Soc. 127, 2834 (2005). 10.1021/ja0430649 [DOI] [PubMed] [Google Scholar]
  3. Shen Y. R., Annu. Rev. Phys. Chem. 40, 327 (1989). 10.1146/annurev.pc.40.100189.001551 [DOI] [Google Scholar]
  4. Chu S. W., Chen I. H., Liu T. M., Chen P. C., Sun C. K., and Lin B. L., Opt. Lett. 26, 1909 (2001). 10.1364/OL.26.001909 [DOI] [PubMed] [Google Scholar]
  5. Chu S. W., Chen S. Y., Tsai T. H., Liu T. M., Lin C. Y., Tsai H. J., and Sun C. K., Opt. Express 11, 3093 (2003). [DOI] [PubMed] [Google Scholar]
  6. Greenhalgh C., Stewart B., Cisek R., Prent N., Major A., and Barzda V., Proc. SPIE 6343, 634301 (2006). 10.1117/12.706306 [DOI] [Google Scholar]
  7. Prent N., Green C., Greenhalgh C., Cisek R., Major A., Stewart B., and Barzda V., J. Biomed. Opt. 13, 041318 (2008). 10.1117/1.2950316 [DOI] [PubMed] [Google Scholar]
  8. Hellwarth R. and Christensen P., Opt. Commun. 12, 318 (1974). 10.1016/0030-4018(74)90024-8 [DOI] [Google Scholar]
  9. Sheppard C. J. R., Gannaway J. N., Kompfner R., and Walsh D., IEEE J. Quantum Electron. 13, D100 (1977). [Google Scholar]
  10. Denk W., Strickler J. H., and Webb W. W., Science 248, 73 (1990). 10.1126/science.2321027 [DOI] [PubMed] [Google Scholar]
  11. Barad Y., Eisenberg H., Horowitz M., and Silberberg Y., Appl. Phys. Lett. 70, 922 (1997). 10.1063/1.118442 [DOI] [Google Scholar]
  12. Florsheimer M., Phys. Status Solidi A 173, 15 (1999). [DOI] [Google Scholar]
  13. Potma E. O., de Boeij W. P., and Wiersma D. A., Biophys. J. 80, 3019 (2001). 10.1016/S0006-3495(01)76267-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Duncan M. D., Reintjes J., and Manuccia T. J., Opt. Lett. 7, 350 (1982). 10.1364/OL.7.000350 [DOI] [PubMed] [Google Scholar]
  15. Muller M., Squier J., De Lange C. A., and Brakenhoff G. J., J. Microsc. 197, 150 (2000). 10.1046/j.1365-2818.2000.00648.x [DOI] [PubMed] [Google Scholar]
  16. Zumbusch A., Holtom G. R., and Xie X. S., Phys. Rev. Lett. 82, 4142 (1999). 10.1103/PhysRevLett.82.4142 [DOI] [Google Scholar]
  17. Barzda V., in Biophysical Techniques in Photosynthesis, edited by Aartsma T. J. and Matysik J. (Springer, Dordrecht, 2008), Vol. 2, p. 35. [Google Scholar]
  18. Greenhalgh C., Prent N., Green C., Cisek R., Major A., Stewart B., and Barzda V., Appl. Opt. 46, 1852 (2007). 10.1364/AO.46.001852 [DOI] [PubMed] [Google Scholar]
  19. Barzda V., Greenhalgh C., Aus der Au J., Elmore S., Van Beek J. H. G. M., and Squier J., Opt. Express 13, 8263 (2005). 10.1364/OPEX.13.008263 [DOI] [PubMed] [Google Scholar]
  20. Greenhalgh C., Cisek R., Prent N., Major A., Aus der Au J., Squier J., and Barzda V., Proc. SPIE 5969, 59692F1 (2005). [Google Scholar]
  21. Cox G., Moreno N., and Feijo J., J. Biomed. Opt. 10, 024013 (2005). 10.1117/1.1896005 [DOI] [PubMed] [Google Scholar]
  22. Plotnikov S. V., Millard A. C., Campagnola P. J., and Mohler W. A., Biophys. J. 90, 693 (2006). 10.1529/biophysj.105.071555 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Zipfel W. R., Williams R. M., and Webb W. W., Nat. Biotechnol. 21, 1369 (2003). 10.1038/nbt899 [DOI] [PubMed] [Google Scholar]
  24. Millard A. C., Campagnola P. J., Mohler W., Lewis A., and Loew L. M., in Biophotonics, Part B, Methods in Enzymology Vol. 361 (Academic, New York, 2003), pp. 47–69. [DOI] [PubMed] [Google Scholar]
  25. Sun C. K., Adv. Biochem. Eng./Biotechnol. 95, 17 (2005). [DOI] [PubMed] [Google Scholar]
  26. Squier J. A. and Muller M., Rev. Sci. Instrum. 72, 2855 (2001). 10.1063/1.1379598 [DOI] [Google Scholar]
  27. Squier J. A., Müller M., and Barzda V., in Ultrafast Optics, edited by Trebino R. and Squier J. A., 2008, www.physics.gatech.edu/gcuo/ultratext.html. [Google Scholar]
  28. Müller M., Introduction to Confocal Fluorescence Microscopy, 2nd ed. (SPIE Press, Washington, 2006). [Google Scholar]
  29. Chu S. W., Chen I. H., Liu T. M., Sun C. K., Lee S. P., Lin B. L., Cheng P. C., Kuo M. X., Lin D. J., and Liu H. L., J. Microsc. 208, 190 (2002). 10.1046/j.1365-2818.2002.01081.x [DOI] [PubMed] [Google Scholar]
  30. Amir W., Carriles R., Hoover E. E., Planchon T. A., Durfee C. G., and Squier J. A., Opt. Lett. 32, 1731 (2007). 10.1364/OL.32.001731 [DOI] [PubMed] [Google Scholar]
  31. Barzda V., Gulbinas V., Kananavicius R., Cervinskas V., van Amerongen H., van Grondelle R., and Valkunas L., Biophys. J. 80, 2409 (2001). 10.1016/S0006-3495(01)76210-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Major A., Barzda V., Piunno P. A. E., Musikhin S., and Krull U. J., Opt. Express 14, 5285 (2006). 10.1364/OE.14.005285 [DOI] [PubMed] [Google Scholar]
  33. Major A., Cisek R., and Barzda V., Opt. Express 14, 12163 (2006). 10.1364/OE.14.012163 [DOI] [PubMed] [Google Scholar]
  34. Müller M., Squier J., and Brakenhoff G. J., Opt. Lett. 20, 1038 (1995). 10.1364/OL.20.001038 [DOI] [PubMed] [Google Scholar]
  35. Quercioli F., Ghirelli A., Tiribilli B., and Vassalli M., Microsc. Res. Tech. 63, 27 (2004). 10.1002/jemt.10420 [DOI] [PubMed] [Google Scholar]
  36. Quercioli F., Tiribilli B., and Vassalli M., Opt. Express 12, 4303 (2004). 10.1364/OPEX.12.004303 [DOI] [PubMed] [Google Scholar]
  37. Amat-Roldan I., Cormack I. G., Loza-Alvarez P., and Artigas D., Opt. Lett. 29, 2282 (2004). 10.1364/OL.29.002282 [DOI] [PubMed] [Google Scholar]
  38. LaFratta C. N., Linjie L., and Fourkas J. T., Opt. Express 14, 11215 (2006). 10.1364/OE.14.111215 [DOI] [PubMed] [Google Scholar]
  39. Millard A. C., Fittinghoff D. N., Squier J. A., Müller M., and Gaeta A. L., J. Microsc. 193, 179 (1999). 10.1046/j.1365-2818.1999.00480.x [DOI] [Google Scholar]
  40. Jasapara J. and Rudolph W., Opt. Lett. 24, 777 (1999). 10.1364/OL.24.000777 [DOI] [PubMed] [Google Scholar]
  41. Fittinghoff D., Aus der Au J., and Squier J., Opt. Commun. 247, 405 (2005). 10.1016/j.optcom.2004.11.062 [DOI] [Google Scholar]
  42. Meshulach D., Barad Y., and Silberberg Y., J. Opt. Soc. Am. B 14, 2122 (1997). 10.1364/JOSAB.14.002122 [DOI] [Google Scholar]
  43. Squier J. A., Fittinghoff D. N., Barty C. P. J., Wilson K. R., Müller M., and Brakenhoff G. J., Opt. Commun. 147, 153 (1998). 10.1016/S0030-4018(97)00584-1 [DOI] [Google Scholar]
  44. Chadwick R., Spahr E., Squier J., Durfee C., Walker B. C., and Fittinghoff D., Opt. Lett. 31, 3366 (2006). 10.1364/OL.31.003366 [DOI] [PubMed] [Google Scholar]
  45. Amir W., Planchon T. A., Durfee C., Squier J., Gabolde P., Trebino R., and Müller M., Opt. Lett. 31, 2927 (2006). 10.1364/OL.31.002927 [DOI] [PubMed] [Google Scholar]
  46. Radzewicz C., La Grone M. J., and Krasinski J. S., Opt. Commun. 126, 185 (1996). 10.1016/0030-4018(96)00060-0 [DOI] [Google Scholar]
  47. König K., J. Microsc. 200, 83 (2000). 10.1046/j.1365-2818.2000.00738.x [DOI] [PubMed] [Google Scholar]
  48. Xu C., Guild J., Webb W. W., and Denk W., Opt. Lett. 20, 2372 (1995). 10.1364/OL.20.002372 [DOI] [PubMed] [Google Scholar]
  49. Xu C. and Webb W. W., J. Opt. Soc. Am. B 13, 481 (1996). 10.1364/JOSAB.13.000481 [DOI] [Google Scholar]
  50. Xu C., Williams R. M., Zipfel W., and Webb W. W., Bioimaging 4, 198 (1996). [DOI] [Google Scholar]
  51. Bass M., Handbook of Optics (McGraw-Hill, New York, 1995), Vol. 2. [Google Scholar]
  52. Chang I. C., IEEE Trans. Sonics Ultrason. SU-23, 2 (1976). [Google Scholar]
  53. Milton G., Ireland C. L. M., and Ley J. M., Electo-optic and Acoustic-optic Scanning and Deflection, Optical Engineering Vol. 3 (Marcel Dekker, New York, and Basel, 1983). [Google Scholar]
  54. Raman C. V. and Nath N. S. N., Proc. Ind. Acad. Sci., Sect. A 3, 459 (1936). [Google Scholar]
  55. Lechleiter J. D., Lin D. -T., and Sieneart I., Biophys. J. 83, 2292 (2002). 10.1016/S0006-3495(02)73989-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Bullen A., Patel S. S., and Saggau P., Biophys. J. 73, 477 (1997). 10.1016/S0006-3495(97)78086-X [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Rózsa B., Katona G., Vizi E. S., Várallyay Z., Sághy A., Valenta L., Maák P., Fekete J., Bányász A., and Szipöcs R., Appl. Opt. 46, 1860 (2007). 10.1364/AO.46.001860 [DOI] [PubMed] [Google Scholar]
  58. Iyer V., Losavio B. E., and Saggau P., J. Biomed. Opt. 8, 460 (2003). 10.1117/1.1580827 [DOI] [PubMed] [Google Scholar]
  59. Kremer Y., Léger J. -F., Lapole R., Honnorat N., Candela Y., Dieudonné S., and Bourdieu L., Opt. Express 16, 10066 (2008). 10.1364/OE.16.010066 [DOI] [PubMed] [Google Scholar]
  60. Zeng S., Bi K., Xue S., Liu Y., Lv X., and Luo Q., Rev. Sci. Instrum. 78, 015103 (2007). 10.1063/1.2409868 [DOI] [PubMed] [Google Scholar]
  61. Zeng S., Lv X., Zhan C., Chen W. R., Xiong W., Jacques S. L., and Luo Q., Opt. Lett. 31, 1091 (2006). 10.1364/OL.31.001091 [DOI] [PubMed] [Google Scholar]
  62. Blais F., Opt. Eng. (Bellingham) 27, 104 (1988). [Google Scholar]
  63. Fan G. Y., Fujisaki H., Miyakawi A., Tsai R. Y., and Ellisman M. H., Biophys. J. 76, 2412 (1999). 10.1016/S0006-3495(99)77396-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Nguyen O. T., Callamaras N., Hsich C., and Parker I., Cell Calcium 30, 383 (2001). 10.1054/ceca.2001.0246 [DOI] [PubMed] [Google Scholar]
  65. Kurth S., Kaufmann C., Hahn R., Mehner J., Doetzel W., and Gessner T., Proc. SPIE 5721, 23 (2005). 10.1117/12.590847 [DOI] [Google Scholar]
  66. Tweed D. G., Opt. Eng. (Bellingham) 24, 1018 (1985). [Google Scholar]
  67. Evans C. L., Potma E. O., Puoris’haag M., Côté D., Lin C. P., and Xie X. S., Proc. Natl. Acad. Sci. U.S.A. 102, 16807 (2005). 10.1073/pnas.0508282102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Kim K. H., Buehler C., and So P. T. C., Appl. Opt. 38, 6004 (1999). 10.1364/AO.38.006004 [DOI] [PubMed] [Google Scholar]
  69. http://www.lincolnlaser.com/pdf_appNotes/225.pdf.
  70. Straub M. and Hell S. W., Bioimaging 6, 177 (1998). [DOI] [Google Scholar]
  71. Andresen V., Egner A., and Hell S. W., Opt. Lett. 26, 75 (2001). 10.1364/OL.26.000075 [DOI] [PubMed] [Google Scholar]
  72. Bahlmann K., So P. T. C., Kirber M., Reich R., Kosicki B., McGonagle W., and Bellve K., Opt. Express 15, 10992 (2007). 10.1364/OE.15.010991 [DOI] [PubMed] [Google Scholar]
  73. Fittinghoff D. N., Wiseman P. W., and Squier J. A., Opt. Express 7, 273 (2000). [DOI] [PubMed] [Google Scholar]
  74. Sonnleitner M., Schutz G. J., and Schmidt T., Chem. Phys. Lett. 300, 221 (1999). 10.1016/S0009-2614(98)01330-X [DOI] [Google Scholar]
  75. Brakenhoff G. J., Squier J., Norris T., Bilton A. C., Wade M. H., and Athey B., J. Microsc. 181, 253 (1996). 10.1046/j.1365-2818.1996.97379.x [DOI] [PubMed] [Google Scholar]
  76. Oron D. and Silberberg Y., J. Opt. Soc. Am. B 21, 1964 (2004). [Google Scholar]
  77. Buist A. H., Müller M., Squier J. A., and Brakenhoff G. J., J. Microsc. 192, 217 (1998). 10.1046/j.1365-2818.1998.00431.x [DOI] [Google Scholar]
  78. Oron D., Tal E., and Silberberg Y., Opt. Express 13, 1468 (2005). 10.1364/OPEX.13.001468 [DOI] [PubMed] [Google Scholar]
  79. Oron D. and Silberberg Y., J. Opt. Soc. Am. B 22, 2660 (2005). 10.1364/JOSAB.22.002660 [DOI] [Google Scholar]
  80. Tal E., Oron D., and Silberberg Y., Opt. Lett. 30, 1686 (2005). 10.1364/OL.30.001686 [DOI] [PubMed] [Google Scholar]
  81. Bewersdorf J., Pick R., and Hell S. W., Opt. Lett. 23, 655 (1998). 10.1364/OL.23.000655 [DOI] [PubMed] [Google Scholar]
  82. Egner A. and Hell S. W., J. Opt. Soc. Am. A Opt. Image Sci. Vis. 17, 1192 (2000). 10.1364/JOSAA.17.001192 [DOI] [PubMed] [Google Scholar]
  83. Fujita K., Nakamura O., Kaneko T., Kaxata S., Oyamada M., and Takamatsu T., J. Microsc. 194, 528 (1999). 10.1046/j.1365-2818.1999.00493.x [DOI] [PubMed] [Google Scholar]
  84. Hell S. W. and Andresen V., J. Microsc. 202, 457 (2001). 10.1046/j.1365-2818.2001.00918.x [DOI] [PubMed] [Google Scholar]