Compressive light-field microscopy for 3D neural activity recording

Open Access

Abstract

Understanding the mechanisms of perception, cognition, and behavior requires instruments that are capable of recording and controlling the electrical activity of many neurons simultaneously and at high speeds. All-optical approaches are particularly promising since they are minimally invasive and potentially scalable to experiments interrogating thousands or millions of neurons. Conventional light-field microscopy provides a single-shot 3D fluorescence capture method with good light efficiency and fast speed, but suffers from low spatial resolution and significant image degradation due to scattering in deep layers of brain tissue. Here, we propose a new compressive light-field microscopy method to address both problems, offering a path toward measurement of individual neuron activity across large volumes of tissue. The technique relies on spatial and temporal sparsity of fluorescence signals, allowing one to identify and localize each neuron in a 3D volume, with scattering and aberration effects naturally included and without ever reconstructing a volume image. Experimental results on live zebrafish track the activity of an estimated 800+ neural structures at 100 Hz sampling rate.

© 2016 Optical Society of America

1. INTRODUCTION

Brain tissue is a dense network of neurons that exchange information by means of electrical signals called action potentials. Understanding the mechanisms by which the brain processes information requires the ability to detect action potentials from many individual neurons simultaneously across large volumes of tissue. Engineered calcium-sensitive proteins [1] and voltage-sensitive dyes [2] enable optical detection of action potentials without disturbing the neuron’s physiology. However, in deep layers of brain tissue, optical aberrations [3] and scattering are generally too strong to resolve individual neurons with conventional fluorescence microscopy, so more advanced methods are required. The most popular of these, two-photon microscopy [4], uses a nonlinear effect to restrict fluorescence excitation to a small spot or plane [5] which scans or hops through the volume point by point [6,7]. Light-sheet microscopy [8] achieves faster 3D acquisition by scanning in only one dimension, and confocal light-sheet microscopy [9] gives improved performance in strongly scattering tissue at the expense of photon efficiency and speed. Another variant implements light-sheet imaging with a single objective, giving practical benefits at the expense of spatial resolution [10]. All of these methods involve scanning, so frame rates are limited for large-volume imaging.

Light-field imaging [11–13] captures full-volume 3D information in a single shot. A light-field measurement includes both the position $(x, y)$ and angle of incidence $(\theta_x, \theta_y)$ of light rays reaching the sensor. In contrast, a traditional 2D sensor captures only the position of the rays. With 4D light-field information, it is possible to later adjust focus, change perspective, or retrieve 3D images in post-processing. Conveniently, any microscope can be converted into a light-field imager by a simple and inexpensive hardware modification: a microlens array placed in front of the sensor. Traditional light-field imaging makes a ray-optics assumption that breaks down for microscopy, but this can be corrected with wave-optics models [14,15]. The main advantages of light-field microscopy for 3D imaging are its fast capture speed (limited only by the camera's frame rate) and its photon efficiency, since all the photons that reach the image plane are captured. Unfortunately, these benefits come at the cost of a severe loss of spatial resolution, since the limited number of pixels on the sensor must be spread across four dimensions instead of two. Various attempts have been made to improve resolution through deconvolution [16] or additional measurements [17–19].

Light-field microscopy has already provided promising results for functional brain imaging [20] with 3D volume image reconstructions to quantify the fluorescence levels of individual neurons. However, the number of pixels on the sensor limits the number of voxels that can be reconstructed with fidelity, and thus the number of neurons that can be monitored. Here, we incorporate sparsity-based algorithms that enable large volumes to be captured with high spatial resolution, provided that only a sparse set of neurons are active at once. We skip the step of explicitly reconstructing a 3D image and instead attempt to simply distinguish and localize each neural structure in 3D.

The prime advantage of our method over previous light-field microscopy work is that the data collection requirements scale not with the number of voxels to be reconstructed, but rather with the number of active neurons at a particular time. Hence, it may be possible in the future to use a conventional sensor for recording the activity of thousands or millions of neurons in real time. Since brain activity is not always sparse, we add a processing step that implements an independent component analysis on the raw video data to separate out temporally correlated neural activity. This results in spatially sparse components that satisfy our model, even for densely active neural experiments.

A major impediment for neural activity tracking is optical scattering. Digitally undoing the effects of multiple scattering in 3D is an ill-posed nonlinear problem that is difficult or impossible to solve [21]. Conventional light-field microscopy ignores scattering effects when reconstructing volume images, so deeper sources blur. Recently, we showed that phase space (e.g., light field) measurements can be robust to scattering when an appropriate wave-optical multislice forward model is used together with compressive methods for sparse (in 3D) samples [19]. Here, we extend these ideas to the problem of localizing neurons and quantifying 3D fluorescence in brain tissue.

We further exploit the fact that functional imaging need not reconstruct a visual rendering of the 3D shape of neurons, but only needs to distinguish and localize them. Our algorithm estimates the light-field signature for each neuron and maps its 3D location without ever producing a traditional image. In reality, because different structures of one neuron may be spatially distributed, our method may distinguish any active structures (e.g., neurons, axons, dendrites, astrocytes, glial cells, membranes, synaptic terminals). Aberrations and scattering effects are incorporated into the light-field signatures, and so do not degrade discrimination ability, though they may impact 3D localization accuracy. Video data can then be decomposed directly, skipping the error-prone 3D image reconstruction step and directly reaching the final goal: a quantitative measurement of fluorescence in each individual neural structure. The result is a task-based approach that is well-suited to 3D in vivo functional brain monitoring. We demonstrate our method experimentally for zebrafish neural activity tracking with 800+ neural structures at 100 fps.

2. METHODS

A. Experimental Setup

The experimental setup (shown in Fig. 1) is a fluorescence microscope that has been modified by introducing a microlens array at the imaging plane, with the sensor placed at the back focal plane of the array. By the principles of light-field imaging, both the position $(x, y)$ and direction of propagation $(\theta_x, \theta_y)$ of light rays can be reconstructed from the 2D intensity captured by the sensor. This is because each microlens' sub-image corresponds to the local pupil plane (angular distribution of rays). Capturing a 4D light field, $I(x, y, \theta_x, \theta_y)$, using a 2D sensor necessitates a tradeoff between spatial and angular sampling. Our microlens array design is controlled by two parameters. The pitch (microlens diameter) controls spatial resolution in the $(x, y)$ plane; here, we use 150 μm pitch, which corresponds to $d = 4\,\mu\mathrm{m}$ at the sample. The focal length of each microlens ($f = 5.2\,\mathrm{mm}$) controls the range of angles measured based on numerical aperture ($\mathrm{NA_{ML}} = 0.014$), which is chosen to match the microscope output port for the case of a 40× water immersion objective ($\mathrm{NA} = 0.5$). Angular range and sampling determine the axial resolution of the reconstruction.

Fig. 1. Experimental setup and post-processing steps for samples tagged with engineered fluorescent proteins to track brain activity. A fluorescence microscope is fitted with a microlens array for light-field data acquisition. A training video of sparse frames is acquired or computed by ICA. (Step 1) Training: light-field measurements are processed to separate and identify individual calcium sources (neural structures) by their 3D position. The extracted “light-field signature” represents the measurement (including scattering and aberrations) that would be made if only the corresponding neural structure were active. (Step 2) Subsequent data frames are decomposed as a linear positive combination of the light-field signatures in the dictionary. The coefficients of this decomposition represent a quantitative measure of calcium-induced fluorescence in each identified neuron.

We call the 2D intensity measurement at the sensor plane, $I(\mathbf{u},t)$, a light-field measurement since it maps to the 4D light-field information for each time, $t$: $I(\mathbf{u},t) = I(x, y, \theta_x, \theta_y, t)$. The sampling of the 4D light field on a 2D plane of pixels is given by $u_x = N_p\left(\lfloor x/p \rfloor + \theta_x/(2\,\mathrm{NA})\right)$ and $u_y = N_p\left(\lfloor y/p \rfloor + \theta_y/(2\,\mathrm{NA})\right)$, where $\mathbf{u} = (u_x, u_y)$ are the lateral coordinates at the sensor, $p$ is the pitch of the microlens array (square lattice), and $\lfloor\cdot\rfloor$ is the floor function. We use a 4f relay with 1.7× magnification to record the signal $I(\mathbf{u},t)$ on the sensor with a square of edge length $N_p = 40$ pixels under each microlens. This achieves good angular (and hence, axial) sampling by sacrificing lateral resolution, which will be improved in post-processing. The resulting field of view (at the sample) is a 200 μm square, with $N_l = 50$ microlenses in each direction, for a total of $N_p^2 N_l^2 = 4\times10^6$ pixels on the sCMOS sensor (Andor Zyla 4.2). Each acquired frame contains full-volume fluorescence data, with temporal resolution equal to the camera's frame rate, $1/\delta t = 100\,\mathrm{fps}$.
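For concreteness, the sampling relation above can be written in a few lines of code. The following is a minimal sketch in Python, using the parameter values quoted above; the floor placement follows our reconstruction of the formula, the function name is illustrative, and the half-pixel centering of real lenslet sub-images is ignored.

```python
import numpy as np

# Sketch of the light-field sampling relation: a ray at sample-plane position
# (x, y) with angle (theta_x, theta_y) lands on sensor pixel (u_x, u_y).
# Parameter values follow the text; lengths are micrometers at the sample plane.
N_p = 40    # pixels under each microlens (per axis)
N_l = 50    # microlenses per axis
p = 4.0     # microlens pitch referred to the sample (150 um / ~40x magnification)
NA = 0.5    # objective numerical aperture

def lightfield_pixel(x, y, theta_x, theta_y):
    """Map sample-plane ray coordinates to (approximate) sensor pixel indices."""
    u_x = N_p * (np.floor(x / p) + theta_x / (2 * NA))
    u_y = N_p * (np.floor(y / p) + theta_y / (2 * NA))
    return int(np.rint(u_x)), int(np.rint(u_y))

# Example: a ray 100 um into the field of view, arriving at a 0.25 rad angle.
print(lightfield_pixel(100.0, 100.0, 0.25, 0.0))   # -> (1010, 1000)
```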

For functional brain activity monitoring, calcium ions entering the cell through specialized channels change the conformational state of the genetically encoded calcium indicator, GCaMP6, so that the fluorescence of each neuron correlates with its action potential firing rate [1]. The timescale of calcium diffusion is faster than the response time of GCaMP6. The light-field measurement at a given time, $I(\mathbf{u},t)$, can then be written as a linear superposition of the individual contributions of the $N$ neurons. We decompose the measurement into a set of independent spatial components that change over time:

$$I(\mathbf{u},t) = \sum_{j=1}^{N} I_j(\mathbf{u})\, a_j(t), \tag{1}$$
where $a_j(t)$ represents our ultimate goal: the time-dependent magnitude of fluorescence in the $j$th neuron. We call $I_j(\mathbf{u})$ the corresponding light-field signature: the measurement that would result if only the $j$th neural structure were active. Each neuron has a unique light-field signature due to its unique location in 3D space. Conveniently, the signature naturally encodes any shape variations and the effects of aberrations and scattering. We assume here that the light-field signatures do not change with time, i.e., every time the neuron fires, the light passes through the same path.
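To make the decomposition concrete, here is a minimal numerical sketch of Eq. (1) with toy dimensions (the real system has $4\times10^6$ pixels and, in our experiments, 802 signatures); the random arrays merely stand in for a measured dictionary.

```python
import numpy as np

# Toy sketch of Eq. (1): a frame as a non-negative combination of signatures.
rng = np.random.default_rng(0)
n_pixels, n_neurons = 10_000, 50     # toy sizes (real: 4e6 pixels, 802 neurons)
signatures = rng.random((n_pixels, n_neurons))  # columns play the role of I_j(u)
signatures /= signatures.sum(axis=0)            # normalize each to unit sum
a_t = rng.random(n_neurons)                     # fluorescence magnitudes a_j(t)
frame = signatures @ a_t                        # I(u, t) = sum_j I_j(u) a_j(t)
```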

Our goal is to build up a dictionary of light-field signatures for each neuron, so that subsequent data frames can be decomposed into their constituent neural signatures. Of course, it is not feasible to sequentially activate each neuron and directly measure the dictionary; hence, we must calibrate the system using only uncontrolled data. We do this using a training video of light-field measurements in which all of the neurons of interest activate at least once during the video’s capture time. This may be done either before the experiment of interest or as part of the actual experiment. From this video, we are able to extract the light-field signatures of individual neurons and their 3D positions, which together make up our dictionary. Since our extraction method requires spatial sparsity, an optional preprocessing step may be employed to exploit temporal correlations for generating sparse components from nonsparse video data.

B. Light-Field Signature Identification

Our training routine aims to extract the light-field signature for each neuron from video data of many neurons firing at random. We cannot assume that only one neuron is active in each frame, but we will assume, for now, that the active neurons in any one frame, $I(\mathbf{u},t)$, are sparse in 3D (the spatial sparsity condition). We can then use a compressed sensing approach to separate and localize individual neural structures in 3D from scattered and aberrated light-field measurements having multiple neurons active at once.

Every point source traces out a 2D hyperplane in the 4D light-field space (see Fig. 2). The $(x, y)$ position where this plane crosses the $(\theta_x, \theta_y)$ axes defines the source's lateral position, and the tilt of the plane defines its depth. Scattering repeatedly spreads information along the angle dimensions as light propagates, causing deeper sources to both tilt and spread. Neurons have fairly compact cell bodies, so calcium fluorescence in the cytoplasm is mainly confined to a 5 μm region around the nucleus, which behaves similarly to a point source. Using this forward model and searching for a sparse solution, it is possible to robustly estimate the 3D position of each active neuron.

Fig. 2. Extracting light-field signatures and 3D positions of individual neural structures. (a) One of the 40 sparse light-field components. (b) Light-field slice along the red dashed line. Each distinct structure traces a line in the space-angle plot, whose position and tilt indicate lateral position and depth, respectively. Individual neural structures are distinguished and localized, shown here as different colors. (c) Overlay of extracted light-field signatures for multiple neural structures, each with a different color. (d) Estimated 3D positions for each of the neurons in this component.

We use an accelerated proximal gradient algorithm to solve the following $\ell_1$-regularized optimization problem for $c$ [19]:

$$\min_{c>0}\left(\left\| I(\mathbf{u},t) - \hat{I}(\mathbf{u},t) \right\|^2 + \mu \sum_{\mathbf{r}_i} c(\mathbf{r}_i,t)\right), \tag{2}$$
where $\mathbf{r}_i = (x_i, y_i, z_i)$ discretizes the volume, $\mu$ is a hand-tuned regularization constant which enforces sparsity, and
$$\hat{I}(\mathbf{u},t) = \sum_{\mathbf{r}_i} \hat{I}_i(\mathbf{u},t) = \sum_{\mathbf{r}_i} c(\mathbf{r}_i,t)\, A(\mathbf{r}_i, u_x, u_y), \tag{3}$$
is the predicted measurement, given by applying our forward model to the current estimate of the spatial distribution of sources in the sample, $c(\mathbf{r}_i,t)$. Our forward model, $A(\mathbf{r}_i, u_x, u_y)$, describes the light-field measurement from a point source located at position $\mathbf{r}_i$ in the scattering medium (refractive index $n_r$) [19]:
$$A(\mathbf{r}_i, u_x, u_y) = \frac{z_0}{2\pi z_i^2}\, e^{-\frac{z_0}{2 z_i^2}\left((x_i - x)^2 + (y_i - y)^2 + z_i^2(\theta_x^2 + \theta_y^2)\right)}, \tag{4}$$
where $\mathbf{r}_i = (x_i, y_i, z_i)$ and $z_0 = n_r/(2\lambda\sigma)$. Here, the microscope is focused on the near-objective surface of the tissue, so $z_i$ is the distance traveled through the scattering medium to this surface. We optimize over $\sigma$, which describes the amount of scattering (assumed to be homogeneous and Gaussian-distributed in angle). Our model is based on wave-optical diffraction and accounts for multiple scattering [19].
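The two ingredients of this step, the Gaussian phase-space forward model of Eq. (4) and an accelerated proximal gradient (FISTA-style) solver for Eq. (2), can be sketched as follows. This is a simplified, dense-matrix illustration under stated assumptions (toy grids, hand-picked $\mu$, explicit sensing matrix), not the production implementation.

```python
import numpy as np

def forward_model(src, x, y, tx, ty, z0):
    """Eq. (4): light-field pattern of a point source at src = (xi, yi, zi).
    x, y, tx, ty are arrays of light-field sample coordinates."""
    xi, yi, zi = src
    r2 = (xi - x)**2 + (yi - y)**2 + zi**2 * (tx**2 + ty**2)
    return z0 / (2 * np.pi * zi**2) * np.exp(-z0 / (2 * zi**2) * r2)

def solve_sparse(A, b, mu, n_iter=200):
    """Eq. (2): min_{c>=0} ||A c - b||^2 + mu * sum(c), via FISTA.
    A: (n_pixels, n_voxels) matrix whose columns are forward_model outputs."""
    L = 2 * np.linalg.norm(A, 2)**2        # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1]); y = c.copy(); t = 1.0
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ y - b)
        c_new = np.maximum(y - (grad + mu) / L, 0.0)  # prox of mu*||.||_1, c>=0
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = c_new + (t - 1) / t_new * (c_new - c)     # momentum step
        c, t = c_new, t_new
    return c   # sparse voxel weights c(r_i, t) for one frame
```

In practice the sensing matrix need not be stored densely; products with the Gaussian model can be evaluated on the fly.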

We solve the optimization problem in Eq. (2) for $c$ at each time frame and add the resulting light-field signatures and 3D positions to our dictionary. Each solution provides the sparse set of $k$ neural structures active in that frame, along with their 3D positions. Each light-field signature $I_j(\mathbf{u})$ corresponding to neuron $j$ is a normalized, time-independent quantity:

$$I_j(\mathbf{u}) = \frac{1}{a_j(t)}\,\frac{\hat{I}_j(\mathbf{u},t)}{\hat{I}(\mathbf{u},t)}\, I(\mathbf{u},t), \quad \text{with} \quad a_j(t) = \int \frac{\hat{I}_j(\mathbf{u},t)}{\hat{I}(\mathbf{u},t)}\, I(\mathbf{u},t)\, d\mathbf{u}. \tag{5}$$
The sum of all the elements should match the measured data:
$$\int I_j(\mathbf{u})\, d\mathbf{u} = 1, \quad \text{and} \quad \sum_{j=1}^{N} I_j(\mathbf{u})\, a_j(t) = I(\mathbf{u},t). \tag{6}$$
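Given the fitted per-source predictions $\hat{I}_j(\mathbf{u},t)$ for one sparse frame, Eqs. (5) and (6) amount to apportioning the measured photons among sources and renormalizing. A compact sketch, with pixel sums approximating the integrals and illustrative array names:

```python
import numpy as np

def extract_signatures(I_meas, I_hat_parts, eps=1e-12):
    """Eqs. (5)-(6). I_meas: measured frame, shape (n_pixels,).
    I_hat_parts: per-source model predictions, shape (n_sources, n_pixels)."""
    I_hat = I_hat_parts.sum(axis=0) + eps       # predicted frame, Eq. (3)
    shares = (I_hat_parts / I_hat) * I_meas     # apportioned measured photons
    a = shares.sum(axis=1)                      # a_j(t): pixel sum ~ integral
    signatures = shares / a[:, None]            # I_j(u), each sums to 1
    return signatures, a                        # sum_j a_j * I_j recovers I_meas
```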
Figure 2 shows an example sparse frame from our zebrafish experiments (Fig. 4), along with the extracted light-field signatures and 3D positions of the detected neurons. Neurons near the surface have fairly compact light-field signatures, whereas deeper neurons spread and scatter to larger areas of the sensor, as expected.

1. Experiments

To demonstrate 3D detection and localization capabilities, we show experimental results for a simple test object with and without scattering. Our nonscattering sample is a static suspension ($5.0\times10^3\ \mu\mathrm{L}^{-1}$) of sparsely distributed 1 μm fluorescent beads in a 200 μm slice of agarose gel. As a proxy for ground-truth knowledge of the bead positions, we use two-photon microscopy to scan the imaging volume. We then record a single-shot light-field frame, which is shown in Fig. 3(a) along with several space-angle slices of the 4D light field. Each bead traces out a tilted line in the space-angle plot, as expected. After estimating the 3D bead positions using Eq. (2), we compare the results to our two-photon data [Figs. 3(d) and 3(e)]. Both detect the same set of beads, with a median difference in position of 1.3 μm in the $(x, y)$ plane and 12.8 μm along the $z$ axis. Assuming that the two-photon result is accurate, the error in our scheme is small enough to distinguish individual neurons and localize them.

Fig. 3. Single-shot experimental detection and 3D localization of sparsely distributed fluorescent beads, with and without scattering, as compared to two-photon microscopy scanned images. (a) Single-shot light-field measurement and several space-angle slices (along the red lines) without scattering. (b) Dataset recorded after placing a 100 μm slice of wild-type mouse brain tissue directly above the beads so as to introduce realistic scattering conditions without displacing the volume of interest. (c) 2D intensity images become blurred by scattering. (d) and (e) Comparison of localization capabilities for two-photon and our light-field microscopy, with and without scattering. (d) Estimated source positions are projected onto the (x, y) plane for visualization and (e) shown in 3D.

Fig. 4. Neural activity tracking in the telencephalon of a five-day-old live zebrafish restrained in agarose. (a) Light-field signatures were extracted for 802 neural structures and 10 s of spontaneous activity was recorded at 100 Hz. (b) The normalized change of fluorescence, dF/F, is displayed for each neuron as a function of time. Motion is quantified by digitally tracking the first moment of the 2D image, with visible motion artifacts at t = 1.9 s, t = 4.9 s, and t = 9 s. (c) For each identified neuron, the position in 3D space is estimated, with color showing time-averaged fluorescence activity across both telencephalic lobes of the forebrain.

Next, we test our method with scattering tissue by repeating the same experiment after placing a 100 μm slice of mouse brain tissue on top of the sample. This emulates conditions that would normally prevent good depth reconstructions. To convey the amount of scattering, we show 2D intensity images in Fig. 3(c). The scattered image is degraded, yet there is still structure in the 4D light-field measurement. Despite scattering, the median difference in detected positions between our algorithm and the two-photon data is 1.8 μm in the $(x, y)$ plane and 15.5 μm along the $z$ axis, only slightly worse than the nonscattering case. For comparison, traditional light-field refocusing (see Visualization 1) and threshold-based detection (see Visualization 2) both fail under these conditions. In contrast, our method's detection and localization capabilities are not significantly affected by the presence of optical scattering in this case.

C. Independent Component Analysis

The detection and localization of neurons described thus far requires spatial sparsity in each video frame. Very large dictionaries of light-field signatures can be mapped out by videos with many frames, but only a sparse set may be active in each frame. More sparsity also leads to better localization (see Supplement 1). Unfortunately, our raw data is not always spatially sparse, in particular when the brain experiences periods of intense activity such as responding to a strong stimulus.

To address this issue, we implement a preprocessing step on the video data: independent component analysis (ICA). The purpose of this optional first step is to take advantage of the temporal diversity of action potentials across many frames in order to computationally isolate time-correlated sets of neurons that can be spatially distinguished. With a large enough training dataset, ICA provides spatially sparse light-field components.

We represent this space–time separation as follows:

$$I(\mathbf{u},t) = \sum_{n=1}^{N_k} I^{(n)}(\mathbf{u})\, f_n(t), \tag{7}$$
where $N_k$ is the number of spatially independent components in the training dataset. The positive spatially independent components, $I^{(n)}(\mathbf{u})$, are modulated in time by positive coefficients $f_n(t)$. We consider a dataset of $N_t$ frames, each containing a single-shot light-field measurement from the camera sensor, arranged as a matrix $\mathbf{I}$ with entries $\mathbf{I}_{i,n} = I(\mathbf{u}^{(i)}, n\,\delta t)$, where $i = 1 \ldots N_l^2 N_p^2$ indexes pixels. The coefficients of matrix $\mathbf{I}$ correspond to photon counts from each neuron, and so must be non-negative (positivity constraint). For large values of $N_t$ it is fair to assume that, even despite strong activity correlations between neurons, the acquired dataset is a linear superposition of spatial components from one or more neurons. Equation (7) therefore becomes
$$\mathbf{I} = \mathbf{S}\mathbf{T}, \tag{8}$$
with $\mathbf{S}$ being an $N_l^2 N_p^2 \times N_k$ matrix of positive components,
$$\mathbf{S}_{i,k} = I^{(k)}(\mathbf{u}^{(i)}), \tag{9}$$
and $\mathbf{T}$ being an $N_k \times N_t$ matrix of positive temporal components,
$$\mathbf{T}_{k,j} = f_k(j\,\delta t). \tag{10}$$
The number of individual components in the training data, $N_k$, is determined by decomposing matrix $\mathbf{I}$ [Eq. (8)] into singular values (see Supplement 1). Equation (8) seeks to reduce the dimensionality of a dataset by finding a locally optimal choice of $\mathbf{S}$ and $\mathbf{T}$ such that $\mathbf{S}, \mathbf{T} \geq 0$, and is known as non-negative matrix factorization (NMF) [22]. NMF has been shown to be NP-hard [23], but there exist polynomial-time local-search heuristics that are guaranteed to converge. Traditional NMF is known to yield a sparse decomposition, yet does not exploit some of the characteristic properties of functional brain imaging. Here, in order to facilitate further processing of independent components, we slightly modify the standard NMF objective function by adding a sparsity-promoting $\ell_1$ ("lasso") penalty [24,25] on the temporal components:
$$\min_{\mathbf{S}_{i,k}>0,\ \mathbf{T}_{k,j}>0}\left(\left\|\mathbf{I} - \mathbf{S}\mathbf{T}\right\|^2 + \lambda_1 \sum_{k,j} |\mathbf{T}_{k,j}|\right), \tag{11}$$
where $\lambda_1$ is a hand-tuned parameter which depends on the level of spontaneous neural activity in the calibration dataset. Our implementation uses an active-set approach to alternating non-negative least squares in order to find a locally optimal solution in polynomial time [26]. The optimal solution is given by $\hat{\mathbf{T}}$ and $\hat{\mathbf{S}}$. Since temporal correlations between neurons' activity are very common, the independent component extraction does not guarantee identification of individual neurons but rather of sparse spatial components, $\hat{\mathbf{S}}$ (see Visualization 3).
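A compact way to realize an objective of this form is alternating non-negative least squares with the stacked-row construction of sparse NMF [26]; note that this stacking penalizes the square of each column sum of $\mathbf{T}$, a standard surrogate for the $\ell_1$ term when $\mathbf{T} \geq 0$. The sketch below (toy sizes only; per-pixel NNLS is far too slow at $4\times10^6$ pixels, and `lam` stands in for $\lambda_1$) uses SciPy's `nnls`:

```python
import numpy as np
from scipy.optimize import nnls

def sparse_nmf(I, n_components, lam=0.1, n_outer=30, seed=0):
    """Approximate Eq. (11): I ~ S T with S, T >= 0 and sparse columns of T."""
    rng = np.random.default_rng(seed)
    n_pix, n_frames = I.shape
    S = rng.random((n_pix, n_components))
    sqrt_lam_row = np.sqrt(lam) * np.ones((1, n_components))
    for _ in range(n_outer):
        # T-step: the appended row/zero pair adds lam * (sum_k T_kj)^2 per column
        A = np.vstack([S, sqrt_lam_row])
        T = np.column_stack([nnls(A, np.append(I[:, j], 0.0))[0]
                             for j in range(n_frames)])
        # S-step: plain NNLS, one row of S (i.e., one pixel) at a time
        S = np.vstack([nnls(T.T, I[i, :])[0] for i in range(n_pix)])
    return S, T
```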

To summarize, our method for building the dictionary of light-field signatures has two steps. First, a training video is acquired and decomposed into spatially sparse components with ICA. Second, each sparse component is decomposed into light-field signatures that represent the footprints of single neurons (Fig. 2). By acquiring enough data frames, it becomes possible to identify every active neuron in the volume of interest, as long as no two neurons are fully correlated in both space and time. Should the ICA step make the same neuron appear in multiple components, we identify and merge the duplicate signatures by evaluating the mutual distances between identified neurons and detecting overlaps within the typical size of one neuron (here we use a 4 μm mutual distance threshold), as sketched below. The method is naturally robust to optical scattering and aberrations, whose effects are included in the light-field signature. In fact, scattering may help to make each neuron's signature more distinguishable (Fig. 2).
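Under our assumptions, the merge step reduces to greedy clustering of the estimated 3D positions with the 4 μm threshold; a sketch with illustrative names:

```python
import numpy as np

def merge_signatures(positions, signatures, threshold_um=4.0):
    """Greedily merge signatures whose 3D positions fall within one cell body.
    positions: (N, 3) in micrometers; signatures: (N, n_pixels), rows sum to 1."""
    merged_pos, merged_sig = [], []
    used = np.zeros(len(positions), dtype=bool)
    for i in range(len(positions)):
        if used[i]:
            continue
        d = np.linalg.norm(positions - positions[i], axis=1)
        group = (d < threshold_um) & ~used    # neighbors closer than one neuron
        used |= group
        merged_pos.append(positions[group].mean(axis=0))
        sig = signatures[group].mean(axis=0)
        merged_sig.append(sig / sig.sum())    # keep the unit-sum normalization
    return np.asarray(merged_pos), np.asarray(merged_sig)
```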

We build the dictionary from the set of all extracted light-field signatures, each of which also comes with an estimated 3D position of the neuron. The dictionary of signatures for $N$ identified neurons is denoted by $\{I_j(\mathbf{u})\ |\ j = 1 \ldots N\}$ or, analogously to Eq. (9), with a matrix representation, $\mathbf{D}$, defined by $\mathbf{D}_{i,j} = I_j(\mathbf{u}^{(i)})$. Although the 3D position accuracy degrades with depth and scattering (see Supplement 1), the ability to distinguish neurons remains intact deep into the scattering tissue.

D. Neural Activity Reconstruction from Experimental Data

After the completion of the training step, the dictionary of light-field signatures can be used to efficiently decompose any single-shot measurement acquired by the light-field microscope (including the training data) into a linear positive combination of elements of the dictionary (see Supplement 1). The number of active neurons, $N$, in one frame should be smaller than the number of sensor pixels ($N_p^2 N_l^2 = 4\times10^6$). Even though a raw data frame may not necessarily show sparsity nor represent a sparse set of active neurons, the amount of fluorescence in each neuron, $a_1, \ldots, a_N$, in each frame is the solution of a non-negative least squares optimization problem given by

$$\min_{a_1 \ldots a_N > 0}\ \left\| I(\mathbf{u},t) - \sum_{j=1}^{N} a_j I_j(\mathbf{u}) \right\|^2. \tag{12}$$
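With the dictionary fixed, Eq. (12) is an ordinary non-negative least squares problem per frame; a minimal sketch using SciPy, with $\mathbf{D}$ stacked column-wise as in the text and illustrative names:

```python
import numpy as np
from scipy.optimize import nnls

def decompose_frame(D, frame):
    """Eq. (12): project one measured frame onto the signature dictionary.
    D: (n_pixels, N) with one light-field signature per column;
    frame: (n_pixels,) raw measurement I(u, t)."""
    a, res = nnls(D, frame)   # a[j] = fluorescence a_j(t) >= 0; res = ||error||
    return a, res
```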
Experimental results for neural activity tracking with both the ICA step and the compressive detection and localization are shown in Fig. 4. A five-day-old Tg(NeuroD:GCaMP6f) zebrafish expressing GCaMP6 in the telencephalon is placed in the microscope. The generation of the transgenic zebrafish line will be the subject of a future publication [27]. The fish is live, awake, and immobilized in 2% low-melting-temperature agarose. During the calibration step, 40 independent components are extracted from 500 diverse frames. Each independent component is then separated into single-neuron signatures. The final dictionary contains a set of 802 light-field signatures, as well as an estimated 3D position for each corresponding calcium source [see Fig. 4(c)]. We then record 10 s of spontaneous brain activity at 100 fps. The solution of Eq. (12) provides a quantitative measurement of fluorescence for all neurons in the field of view for which a light-field signature has been identified. Figure 4(b) shows color-coded lines representing the normalized change of fluorescence, $dF/F_0$, as a function of time, given by

$$\left(\frac{dF}{F_0}\right)_i(t) = \frac{a_i(t)}{\frac{1}{T}\int_0^T a_i(t)\, dt} - 1, \tag{13}$$
where $T$ is the time duration of the video. We display activity in each neuron as a function of time in Fig. 4 and show a video reconstruction of the 3D activity in Visualization 4.
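The traces of Fig. 4(b) then follow from Eq. (13) by normalizing each coefficient time series to its own temporal mean; a one-function sketch:

```python
import numpy as np

def dff(a):
    """Eq. (13): normalized fluorescence change dF/F0 per neuron.
    a: (N, n_frames) array of coefficients a_i(t) from Eq. (12)."""
    baseline = a.mean(axis=1, keepdims=True)   # (1/T) * integral of a_i(t) dt
    return a / baseline - 1.0
```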

Scattering is minimal in zebrafish ($\sigma = 0.02$), but motion is a limiting problem. In Fig. 4, artifacts appear at three time points when the zebrafish attempted to move, making the results inaccurate for the duration of muscular activity, $\sim 0.1\,\mathrm{s}$ (dark blue lines). Once the zebrafish returns to rest, the dictionary becomes valid again (the residual error drops; see Visualization 5). In this experiment, every neuron expresses GCaMP, and the resulting 802 detected active sources are most likely neurons but may also be, for example, dendrites whose size is comparable to the spatial resolution. In future applications, this issue can be solved by limiting the expression of functional markers of neural activity to a localized volume, for instance near the nucleus [28], or by capturing a two-photon structural scan before the experiment to disambiguate. Future work will explore ways to correct for motion by periodically recalibrating the dictionary or by implementing motion-correction algorithms in light-field space. Further improvements may come from taking into account the specific temporal dynamics of calcium fluorescence [29] and accounting for inhomogeneous scattering.

3. ANALYSIS

A. Spatial Resolution

In traditional light-field microscopy, where volume image reconstruction is the goal, resolution can be quantified by measuring the size of the point-spread function [12] or the spatial bandwidth [14]. In brain tissue, resolution is further complicated by a dependence on scattering and on the density of neurons. Here, our method operates without ever reconstructing a volume image. In order to experimentally demonstrate our ability to resolve closely spaced neurons in scattering tissue, the experimental setup is slightly modified [see Fig. 5(a)]. We place a slice of mouse brain tissue (which does not express any fluorescent label) above an artificial "neuron". The artificial neuron is implemented by a second objective focusing light (in the emission spectral range, $\lambda = 532\,\mathrm{nm}$) to a spot the size of a typical neuron cell body ($\sim 10\,\mu\mathrm{m}$). This source is controllably moved through 3D space $(x, y, z)$ in order to collect measurements through the tissue for each known source position. Two active neurons can be mimicked by adding the measurements from two positions of the source. This data is then input into our algorithm in order to compare and quantify localization performance. By removing the microlens array, we can also compare to conventional 2D fluorescence microscopy. The thickness of the slice is varied from 100 μm to 400 μm so as to mimic scattering from various depths.

Fig. 5. (a) Modified experimental setup for spatial resolution measurements. Slices of mouse brain tissue of varying thickness are placed above an artificial source (created by a second microscope objective) that mimics the fluorescence of an active neuron; the source can be precisely positioned at any location in 3D space. (b), (c) Comparison of distinguishability for light-field data versus 2D fluorescence data: measurements for two source positions are displayed simultaneously with red and green color maps, for separation (b) in the (x, y) plane and (c) along the z axis. (d), (e) Distinguishability of the two captured images as a function of separation distance between the two sources; light-field measurements outperform 2D fluorescence (d) in the lateral plane and (e) along the optical axis. (f) Estimated source positions through a 300 μm slice for controlled displacements along the y axis under strong scattering.

We define the spatial resolution along a given axis ($x$, $y$, or $z$) as the minimum allowed separation distance $\delta x$, $\delta y$, or $\delta z$ between two neurons for identification as two separate sources. This provides an upper limit on the number of neurons, $N$, that can be simultaneously observed in a volume, $V$, of brain tissue:

$$N \leq \frac{V}{\delta x\, \delta y\, \delta z}. \tag{14}$$
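For a feel of the numbers, here is Eq. (14) evaluated with hypothetical resolution values (the measured, depth-dependent values appear in Fig. 6):

```python
# Worked example of Eq. (14) with assumed (not measured) resolution values.
dx = dy = 4.0                 # um, hypothetical lateral resolution
dz = 15.0                     # um, hypothetical axial resolution
V = 200.0 * 200.0 * 200.0     # um^3, e.g., the 200 um field of view cubed
N_max = V / (dx * dy * dz)
print(f"at most ~{N_max:.0f} simultaneously resolvable neurons")  # ~33333
```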
Spatial resolution here corresponds to the ability to distinguish and localize neurons. At a minimum, two resolved neurons must produce different light-field measurements. In Figs. 5(b) and 5(c), we display the light-field measurements from two source positions (40 μm separation) simultaneously on a two-color scale, with one in green and the other in red. The two sources yield distinct light-field measurements, as expected, but also distinct 2D fluorescence images. Since our algorithm never reconstructs a 3D image, it can potentially be applied directly to 2D images without ever using the microlens array.

To compare these two situations, consider two sources, 1 and 2, and the corresponding measurements, $I_1$ and $I_2$. We compute a metric for distinguishability, $D$, given by

$$D(I_1, I_2) = 1 - \frac{\int I_1(\mathbf{u})\, I_2(\mathbf{u})\, d\mathbf{u}}{\sqrt{\int I_1^2(\mathbf{u})\, d\mathbf{u}\,\int I_2^2(\mathbf{u})\, d\mathbf{u}}}. \tag{15}$$
By definition, $D = 1$ (fully distinguishable) when the recorded images occupy two disjoint sets of pixels, and $D = 0$ (not distinguishable) for identical light-field signatures, as a direct consequence of the Cauchy–Schwarz inequality.
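Both the metric of Eq. (15) and the criterion of Eq. (16) below are straightforward to evaluate on a pair of recorded frames; a short sketch, with pixel sums again standing in for the integrals:

```python
import numpy as np

def distinguishability(I1, I2):
    """Eq. (15): 0 for identical measurements, 1 for disjoint pixel supports."""
    num = np.sum(I1 * I2)
    den = np.sqrt(np.sum(I1**2) * np.sum(I2**2))
    return 1.0 - num / den

def resolvable(I1, I2, snr=3.0):
    """Eq. (16): conservative identification criterion at the stated SNR."""
    return distinguishability(I1, I2) > 1.0 / snr
```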

Light-field measurements provide better distinguishability than 2D fluorescence images, particularly in the axial dimension. Figures 5(d) and 5(e) plot experimentally measured distinguishability as the separation distance between two sources is increased, for both the lateral and axial dimensions and with both light-field and 2D fluorescence data. To get a sense for how distinguishability relates to localization error in our algorithm, we recover 3D positions as the source is moved along y through 300 μm of scattering brain tissue, using the light-field data [see Fig. 5(f)].

In the absence of noise, accurate decomposition of a training dataset into light-field signatures is possible as soon as the light-field measurements associated with any two different neurons are not strictly identical. In theory, a single-pixel difference would be sufficient ($D > 0$). In practice, at full frame rate and in low-light conditions, a conservative condition for identification [30] is to compare the distinguishability, $D$, to the signal-to-noise ratio (SNR) of the light-field measurements:

$$D(I_1, I_2) > \frac{1}{\mathrm{SNR}}. \tag{16}$$
Here, at 100 Hz sampling rate and without significant photobleaching, fluorescence is excited with $\mathrm{SNR} \approx 3$, which sets the minimal separation distances $\delta x = \delta y$ in the focal plane and $\delta z$ along the optical axis that we defined as the spatial resolution. The experiment is repeated in several locations and for various thicknesses of brain tissue, with results summarized in Figs. 6(a) and 6(b). Overall, the light-field data provides 10× better localization resolution in all dimensions as compared to 2D fluorescence images parsed by the same algorithm. Light-field deconvolution methods [14] are expected to have performance somewhere in between these two.

Fig. 6. Spatial resolution analysis for our method, according to the minimal distance between two sources required for correct identification as separate neurons (a) in the lateral (x, y) plane and (b) along the optical axis, through a given depth of mouse brain tissue. Fluorescence microscopy (green) and light-field microscopy (blue) are compared on the same scale and show a tenfold difference in performance along all axes. (c) Estimated maximum resolvable neuron density as a function of depth in mouse brain tissue, compared to the typical neuron density observed in the mouse barrel cortex.

To understand how our resolution metric relates to functional imaging capabilities, we deduce from Eq. (14) the maximal density of neurons, $N/V$, that can be resolved by light-field data versus 2D fluorescence data, then compare both to the density of neurons typically observed in layers I to IV of mouse brain (primary somatosensory cortex) [31] [see Fig. 6(c)]. This plot confirms experimental observations: 2D fluorescence microscopy is unable to identify neurons located below layer I in the barrel cortex. However, light-field data enables a 1000-fold improvement (tenfold along each axis) in the resolvable neuron density as compared to 2D fluorescence, so it is a promising avenue toward neural activity tracking in all layers.

4. CONCLUSION

We have demonstrated compressive light-field microscopy as a path toward directly addressing the needs of neuroscience for accurate, quantitative measurement of fluorescence activity in the living brain. Our method enables single-shot capture of volumetric brain activity with neuron-scale resolution. We exploit both spatial and temporal sparsity in order to distinguish individual neural structures and localize them in 3D. Because the light-field signatures are calibrated in situ, the strategy is robust to optical scattering and allows for real-time readout of brain activity without ever reconstructing a 3D image. Conveniently, it does not require careful alignment or calibration and can be implemented with inexpensive lenslet arrays. Since the data requirements scale with the number of active neurons in a single frame, not the number of voxels reconstructed, we believe that this method can scale to extremely large networks of neurons and is amenable to use with patterned stimulation, enabling functional activity mapping of the entire mouse brain cortex.

Funding

David and Lucile Packard Foundation; New York Stem Cell Foundation (NYSCF); Arnold and Mabel Beckman Foundation.

Acknowledgment

The authors thank Andrew Prendergast and Claire Wyart for sharing the Tg(NeuroD:GCaMP6f) zebrafish transgenic line, as well as Benjamin Recht, Ehud Isacoff, Claire Oldfield, Elizabeth Carroll, Alan Mardinly, Evan Lyall, Ian Oldenburg, and Eric Jonas. L. W. acknowledges a fellowship from the David and Lucile Packard Foundation. H. A. is a New York Stem Cell Foundation Robertson Investigator and acknowledges support from the Arnold and Mabel Beckman Foundation.

See Supplement 1 for supporting content.

REFERENCES

1. T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499, 295–300 (2013).

2. C. Petersen, A. Grinvald, and B. Sakmann, “Spatiotemporal dynamics of sensory responses in layer 2/3 of rat barrel cortex measured in vivo by voltage-sensitive dye imaging combined with whole-cell voltage recordings and neuron reconstructions,” J. Neurosci. 23, 1298–1309 (2003).

3. A. Bègue, E. Papagiakoumou, B. Leshem, R. Conti, L. Enke, D. Oron, and V. Emiliani, “Two-photon excitation in scattering media by spatiotemporally shaped beams and their application in optogenetic stimulation,” Biomed. Opt. Express 4, 2869–2879 (2013).

4. W. Denk, J. Strickler, and W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).

5. T. Schrödel, R. Prevedel, K. Aumayr, M. Zimmer, and A. Vaziri, “Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans with sculpted light,” Nat. Methods 10, 1013–1020 (2013).

6. G. Katona, G. Szalay, P. Maak, A. Kaszas, M. Veress, D. Hillier, B. Chiovini, E. S. Vizi, B. Roska, and B. Rozsa, “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods 9, 201–208 (2012).

7. S. J. Yang, W. E. Allen, I. Kauvar, A. S. Andalman, N. P. Young, C. K. Kim, J. H. Marshel, G. Wetzstein, and K. Deisseroth, “Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing,” Opt. Express 23, 32573–32581 (2015).

8. P. Keller, A. Schmidt, J. Wittbrodt, and E. Stelzer, “Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy,” Science 322, 1065–1069 (2008).

9. E. Baumgart and U. Kubitscheck, “Scanned light sheet microscopy with confocal slit detection,” Opt. Express 20, 21805–21814 (2012).

10. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. Hillman, “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms,” Nat. Photonics 9, 113–119 (2015).

11. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York (ACM, 1996), pp. 31–42.

12. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in ACM SIGGRAPH 2006 Papers, Boston, Massachusetts (ACM, 2006), pp. 924–934.

13. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006).

14. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).

15. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).

16. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22, 24817–24839 (2014).

17. C.-H. Lu, S. Muenzel, and J. Fleischer, “High-resolution light-field microscopy,” in Computational Optical Sensing and Imaging, OSA Technical Digest (online) (Optical Society of America, 2013), paper CTh3B.2.

18. L. Waller, G. Situ, and J. W. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012).

19. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23, 14461–14471 (2015).

20. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11, 727–730 (2014).

21. M. Lax, “Multiple scattering of waves,” Rev. Mod. Phys. 23, 287–310 (1951).

22. D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems 13, T. Leen, T. Dietterich, and V. Tresp, eds. (MIT, 2001), pp. 556–562.

23. S. A. Vavasis, “On the complexity of nonnegative matrix factorization,” SIAM J. Optim. 20, 1364–1377 (2009).

24. R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. R. Stat. Soc. Ser. B 58, 267–288 (1996).

25. A. Y. Ng, “Feature selection, L1 vs. L2 regularization, and rotational invariance,” in Twenty-first International Conference on Machine Learning, Banff, Alberta (ACM, 2004), p. 78.

26. H. Kim and H. Park, “Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis,” Bioinformatics 23, 1495–1502 (2007).

27. C. S. Oldfield, A. R. Huth, M. Chavez, E. Carroll, A. Prendergast, T. Qu, A. Hoagland, C. Wyart, and E. Y. Isacoff, University of California, Berkeley, Berkeley, CA 94720 and CNRS-UMR-7225, 75005 Paris, France, are preparing a paper to be called “Experience shapes hunting behavior by increasing the impact of information transfer from visual to motor areas.”

28. C. K. Kim, A. Miri, L. C. Leung, A. Berndt, P. Mourrain, D. W. Tank, and R. D. Burdine, “Prolonged, brain-wide expression of nuclear-localized GCaMP3 for functional circuit mapping,” Front. Neural Circuits 8, 00138 (2014).

29. E. Pnevmatikakis, D. Soudry, Y. Gao, T. A. Machado, J. Merel, D. Pfau, T. Reardon, Y. Mu, C. Lacefield, W. Yang, M. Ahrens, R. Bruno, T. M. Jessell, D. Peterka, R. Yuste, and L. Paninski, “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89, 285–299 (2016).

30. M. E. Lopes, “Estimating unknown sparsity in compressed sensing,” arXiv:1204.4227 (2012).

31. H. Meyer, V. Wimmer, M. Oberlaender, C. De Kock, B. Sakmann, and M. Helmstaedter, “Number and laminar distribution of neurons in a thalamocortical projection column of rat vibrissal cortex,” Cereb. Cortex 20, 2277–2286 (2010).

Supplementary Material (6)

Supplement 1: PDF (4319 KB)      Supplemental document
Visualization 1: MP4 (2038 KB)      Light-field refocusing.
Visualization 2: MP4 (5817 KB)      Threshold-based detection.
Visualization 3: MP4 (6942 KB)      Independent component extraction guarantees sparse spatial components.
Visualization 4: MP4 (18244 KB)      Video reconstruction of the 3D activity.
Visualization 5: MP4 (12298 KB)      When the zebrafish returns to rest, the dictionary becomes valid again (residual error drops).
