
Variational attenuation correction in two-view confocal microscopy

Abstract

Background

Absorption- and refraction-induced signal attenuation can seriously hinder the extraction of quantitative information from confocal microscopic data. This signal attenuation can be estimated and corrected by algorithms that use physical image formation models. Especially in thick heterogeneous samples, current single-view models are unable to solve the underdetermined problem of estimating the attenuation-free intensities.

Results

We present a variational approach to estimate both the real intensities and the spatially variant attenuation from two views of the same sample recorded from opposite sides. Assuming noise-free measurements throughout the whole volume and pure absorption, this would in theory allow a perfect reconstruction without further assumptions. To cope with real-world data, our approach respects photon noise, estimates apparent bleaching between the two recordings, and constrains the attenuation field to be smooth and sparse to avoid spurious attenuation estimates in regions lacking valid measurements.

Conclusions

We quantify the reconstruction quality on simulated data and compare it to the state-of-the-art two-view approach and to commonly used one-factor-per-slice approaches like the exponential decay model. Additionally, we show its real-world applicability on model organisms from zoology (zebrafish) and botany (Arabidopsis). The results from these experiments show that the proposed approach improves the quantification of confocal microscopic data of thick specimens.

Background

Confocal microscopy has become a standard technique to record and localize fluorescent marker molecules within the 3-D context of organs and whole organisms at sub-cellular resolution. The confocal principle minimizes the blur introduced by the point spread function of the optics. However, signal degradations introduced by scattering and absorption within the inhomogeneous tissue still hamper many automatic image analysis steps like detection, registration, segmentation, or co-localization.

Light attenuation is a result of photon loss along the excitation and emission light paths. Photons get lost due to absorption, where the photons are converted to thermal energy, or due to scattering, where the photons leave the ray passing through the pinhole. Both effects result in a multiplicative reduction of the number of photons by a local, tissue-specific factor and can therefore be modeled by the Beer-Lambert law. The opposite effect, an intensity increase, is caused by scattered photons that hit the pinhole by chance. In most tissues this second effect is small compared to the photon loss, and its exact simulation would require an immense computational effort. Therefore, throughout the article we model only photon loss, using attenuation coefficients that account for both local absorption and scattering.
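To make the Beer-Lambert model concrete, the following minimal sketch (our own illustration, not code from the implementation described later) computes the transmission along a single discretely sampled ray; the attenuation values and step length are arbitrary example numbers.

```python
import numpy as np

def transmission_along_ray(alpha_on_ray, step_length):
    """Discrete Beer-Lambert transmission.

    alpha_on_ray : 1-D array of attenuation coefficients sampled along the ray
                   (per unit length), ordered from the ray origin outwards.
    step_length  : spacing between consecutive samples.

    Returns the fraction of photons surviving up to each sample position,
    i.e. exp(-integral of alpha from the origin to that position).
    """
    optical_depth = np.cumsum(alpha_on_ray) * step_length
    return np.exp(-optical_depth)

# Example: constant attenuation of 0.006 per voxel over 100 voxels
alpha = np.full(100, 0.006)
print(transmission_along_ray(alpha, 1.0)[-1])  # exp(-0.6) ~ 0.55
```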

Attenuation correction requires estimating two quantities at each recording position: the local attenuation coefficient and the true underlying intensity. Solving for both quantities without further assumptions would require two noise-free measurements per recording position. However, in most real-world applications only sparse measurements at the fluorescently marked structures are available (especially when imaging whole organs or organisms). Additionally, the measured signal is distorted by Poisson distributed photon noise and Gaussian distributed read-out noise.

Single-view approaches try to estimate both quantities from one recording that provides only one measurement per recording position. This requires strong prior assumptions to constrain the solution space. A common approach is to assume that the attenuation is dominated by aberrations introduced by a mismatch in immersion and embedding media [1, 2]. In the resulting models, local attenuation effects are neglected or constant absorption throughout the cuboid-shaped recording volume is assumed, resulting in an exponential decay with imaging depth [3]. Other approaches estimate the attenuation from the per-slice intensity statistics: the overall intensity distribution is adapted towards a reference, maximizing the overall coherence [4, 5].

One way of theoretically obtaining sufficiently many measurements to solve the problem is to record the sample from different angles (e.g. two views from opposite sides, see Figure 1). In [6] this has been done to increase the signal-to-noise ratio (SNR) of the reconstructed volume. The authors argue that previous approaches are only applicable given homogeneously distributed markers throughout the sample, which is hardly ever the case. They propose instead to directly relate the absorption to the observable fluorophore distribution. In [7], we go one step further and assume no relationship between attenuation and marker, since only in rare cases all absorbing material is also fluorescently marked. The confocal image formation model [7, 8] allows recovering the attenuation in areas that are not fluorescently marked as long as they cast “shadows” through the sample along the excitation and emission cones of the different views. An estimation is impossible only in regions where the light hits no fluorophores at all or in the case of full absorption.

Figure 1

The attenuation correction work flow. (a) yz-sections of raw confocal stacks from two views (top/bottom). (b) Estimated real intensities and attenuation coefficients in the lower processing resolution. (c) Independently reconstructed intensities (high resolution) of the top- and bottom-views. (d) Final result after fusion of the reconstructed views. Scale bars indicate 200 μm. Shown intensities are clipped to the [0, 500] range.

The multi-view approach can be applied to a wide range of data from biology and medicine. To underline this claim, we reconstruct recordings of 500 μm thick zebrafish embryos and of Arabidopsis root tips. Two-view recording of tissue sections was already demonstrated in [6].

Contributions

This work is a significant extension of the attenuation correction presented in [7], extending the ideas and the evaluation presented in [9]. First, we use an elastic registration to align the two views, which allows embeddings in viscous media without mechanical fixation. Second, we reformulate the image formation model to cope with the photon noise apparent in confocal microscopy and with the photo bleaching which cannot be avoided when recording the same sample multiple times. Third, we examine the effects of different priors (Tikhonov-Miller, total variation, and sparsity) on the attenuation field, and finally, we ensure the constraints on the variables (attenuation coefficients must be positive) directly in the optimizer. The effects of the different extensions are illustrated in Figure 2.

Figure 2

Overview of the extensions to [7]. (a-b): yz- and xz-sections of raw confocal stacks from two views (top/bottom); (c): optimal reconstruction without regularization; left: (d) with Tikhonov-Miller regularization (λ=10⁷), (e) with sparsity (μ=1000), (f) with bleaching correction (est. β=0.64); right: (g) with total variation regularization (λ=10⁴), (h) with sparsity (μ=1000), (i) with bleaching correction (est. β=0.63). Parts in the gray block are already implemented in [7]. Scale bars: 200 μm.

Methods

Image formation model

We use the image formation model and algorithms from [7] to simulate a confocal microscope with an ideal point spread function. Optimal reconstruction quality using this model requires that the refractive indices of immersion and embedding medium match the specification of the microscope lens. They should be adapted to the average refractive index of the imaged specimen. The signal F_i(x) for direction i (i=1: top, i=2: bottom) captured by the photomultiplier is modeled as the integral over a cone-shaped bundle of rays originating at the recording position x ∈ ℝ³. Each ray is attenuated by the attenuation coefficients along its path, so that

F_i(\mathbf{x}) = \beta_i \, I(\mathbf{x}) \cdot \left( \int_S s_i(\mathbf{r}) \cdot e^{-\int_0^\infty \alpha(\mathbf{x} + \ell\mathbf{r}) \,\mathrm{d}\ell} \,\mathrm{d}\mathbf{r} \right)^2 ,
(1)

where α: Ω ⊂ ℝ³ → ℝ≥0 denotes the spatially variant attenuation coefficients. s_i: S → {0,1} are the cone sensitivity functions for both directions, defined over the unit sphere S; s_i(r) is one for all ray directions within the cone and zero otherwise. I: Ω → ℝ≥0 denotes the attenuation-free intensities. The factors β_i ∈ ℝ+ can be used to additionally scale all intensities of the recordings. We use them to model photo bleaching-induced signal attenuation in the second recording. Only the focused beam leads to significant bleaching, since the excitation energy drops quadratically with the distance to the focal point. The assumption of constant bleaching for the whole volume is a zero-order approximation of the true bleaching function, which is non-linear and specific to the marker molecules used. We fix β1 := 1 and optimize β2 alongside the real intensities I and the attenuation coefficients α as described in the upcoming section.
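The following sketch illustrates how Equation (1) can be evaluated for a single recording position. It is a strongly simplified illustration under our own assumptions (thin-ray sampling of the cone with nearest-neighbor lookups and zero attenuation outside the volume); the actual implementation of [7] uses a different cone integration scheme (see the Implementation section).

```python
import numpy as np

def simulate_confocal_signal(I, alpha, x, ray_dirs, beta=1.0, step=1.0):
    """Sketch of image formation model (1) for one recording position.

    I, alpha : 3-D arrays of true intensities and attenuation coefficients.
    x        : recording position as integer voxel indices (ix, iy, iz).
    ray_dirs : (N, 3) array of unit vectors sampling the excitation/emission
               cone (the cone sensitivity s_i selects these directions).
    beta     : scaling/bleaching factor beta_i.
    """
    shape = np.array(alpha.shape)
    transmissions = []
    for r in ray_dirs:
        pos = np.array(x, dtype=float)
        optical_depth = 0.0
        while True:
            pos = pos + step * r
            idx = np.round(pos).astype(int)
            if np.any(idx < 0) or np.any(idx >= shape):
                break  # ray left the volume; assume zero outside attenuation
            optical_depth += alpha[tuple(idx)] * step
        transmissions.append(np.exp(-optical_depth))
    cone_transmission = np.mean(transmissions)   # approximates C_i(x) up to a constant
    return beta * I[tuple(x)] * cone_transmission ** 2
```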

Energy formulation

We want to maximize the posterior probability for the attenuation coefficients α, the attenuation-free intensities I, and the factor β2 given the observed intensities of the two recorded data volumes I1 and I2.

According to Bayes’ rule we get

(\alpha, I, \beta_2) = \arg\max_{\alpha, I, \beta_2} \frac{P(I_1, I_2 \mid \alpha, I, \beta_2) \, P(\alpha, I, \beta_2)}{P(I_1, I_2)} = \arg\max_{\alpha, I, \beta_2} P(I_1, I_2 \mid \alpha, I, \beta_2) \, P(\alpha, I, \beta_2) .
(2)

The prior P(I1,I2) of the recorded images is independent of the attenuation, the true intensities, and the bleaching, and can therefore be dropped from the equation.

We have no prior knowledge about the expected intensities but want the attenuation coefficients to vary smoothly within local neighborhoods. Additionally we prefer solutions with zero attenuation estimates in areas of insufficient data. This can be modeled in the prior probability as

P(\alpha) \propto e^{-\lambda \int_\Omega \psi\left(|\nabla\alpha|^2\right) \mathrm{d}\mathbf{x}} \cdot e^{-\mu \int_\Omega \sqrt{\alpha^2(\mathbf{x}) + \varepsilon_{\mathrm{sp}}^2} \,\mathrm{d}\mathbf{x}} ,
(3)

where Ω ⊂ ℝ³ is the recorded volume, λ, μ ≥ 0 are weighting factors, and ε_sp ∈ ℝ+ is a small constant which is added for reasons of numerical stability. The loss function ψ is either the identity function, leading to quadratic regularization (Tikhonov-Miller (TM)), or approximates total variation regularization (TV) when set to ψ_TV(x²) := √(x² + ε_TV²) with ε_TV ∈ ℝ+ being a small constant. This approximation is closely related to the Huber norm. During the experiments we set ε_sp = ε_TV = 10⁻¹⁰.

We assume that for all discrete recording positions x the observed attenuation coefficients and intensities are independent (except for their explicitly modeled common dependency on α, I, and β2), resulting in

(\alpha, I, \beta_2) = \arg\max_{\alpha, I, \beta_2} P(\alpha) \prod_{\mathbf{x} \in \Omega} \prod_{i=1}^{2} P\left(I_i(\mathbf{x}) \mid \alpha, I, \beta_2\right) ,
(4)

where Ω is the discretized recorded volume.

The noise in the image intensities is dominated by the Poisson distributed shot noise due to the quantum nature of light, but the measured intensities also contain additive Gaussian distributed read-out noise. The resulting noise model is the convolution of a Poisson process (scaled by a constant factor m ∈ ℝ+ and offset by b) with a zero-mean Gaussian distribution with standard deviation σ ∈ ℝ+. We eliminate the offset b by subtracting the intensity at the histogram peak from each recording prior to further processing. For reasons of computational efficiency, we approximate the Poisson process by a Gaussian process with variable variance (the variance is proportional to the mean of the distribution). We further assume that the measured intensities are a good estimator of this mean value. At each position x ∈ Ω this leads to a convolution of two Gaussian distributions, which results in the combined Gaussian distribution

P\left(I_i(\mathbf{x}) \mid \alpha, I, \beta_2\right) \propto \frac{1}{\sqrt{2\pi m^2 I_i(\mathbf{x})}} e^{-\frac{\left(I_i(\mathbf{x}) - F_i(\mathbf{x})\right)^2}{2 m^2 I_i(\mathbf{x})}} * \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{\left(I_i(\mathbf{x}) - F_i(\mathbf{x})\right)^2}{2 \sigma^2}} = \frac{1}{\sqrt{2\pi \left(m^2 I_i(\mathbf{x}) + \sigma^2\right)}} e^{-\frac{\left(I_i(\mathbf{x}) - F_i(\mathbf{x})\right)^2}{2 \left(m^2 I_i(\mathbf{x}) + \sigma^2\right)}} .
(5)

Note that for m=0 and σ=1 the new model coincides with the pure Gaussian model presented in [7]. The actual value of the Poisson scaling m (the number of collected photons per intensity level) and the standard deviation σ of the Gaussian noise can be estimated during a microscope calibration phase. If they are unknown, one of them can be fixed to an arbitrary value (we always fixed σ=1), and the other one can be adjusted to qualitatively obtain the optimum result. If additional sample information is available, e.g. if the recordings consist of large homogeneous regions of different intensities, one can also try to estimate the parameters from the images themselves as done in [10]. However, for biological samples this is rarely the case.
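As an illustration of this preprocessing and noise model, the sketch below (hypothetical helper functions, not part of the original implementation) removes the offset b via the histogram peak and builds the per-voxel variance m²·I + σ² of the Gaussian approximation; clipping negative values to zero after the subtraction is our own choice.

```python
import numpy as np

def subtract_histogram_peak(I_rec, n_bins=256):
    """Remove the intensity offset b by subtracting the histogram peak (mode)."""
    counts, edges = np.histogram(I_rec, bins=n_bins)
    k = np.argmax(counts)
    peak = 0.5 * (edges[k] + edges[k + 1])
    return np.clip(I_rec - peak, 0.0, None)   # clipping to zero is an extra assumption

def noise_variance(I_rec, m, sigma):
    """Per-voxel variance of the Gaussian approximation to the
    Poisson-Gaussian noise model: Var = m^2 * I_rec + sigma^2."""
    return m ** 2 * I_rec + sigma ** 2
```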

The final energy formulation is obtained by replacing the maximization of the probability by a minimization of its negative logarithm

\min_{\alpha, I, \beta_2} E(\alpha, I, \beta_2) := \min_{\alpha, I, \beta_2} \underbrace{\sum_{i=1}^{2} \int_\Omega \frac{\left(I_i(\mathbf{x}) - F_i(\mathbf{x})\right)^2}{m^2 I_i(\mathbf{x}) + \sigma^2} \,\mathrm{d}\mathbf{x}}_{E_{\mathrm{data}}} + \underbrace{\lambda \int_\Omega \psi\left(|\nabla\alpha(\mathbf{x})|^2\right) \mathrm{d}\mathbf{x}}_{E_{\mathrm{smooth}}} + \underbrace{\mu \int_\Omega \sqrt{\alpha^2(\mathbf{x}) + \varepsilon_{\mathrm{sp}}^2} \,\mathrm{d}\mathbf{x}}_{E_{\mathrm{sparse}}}
subject to β2 > 0 and α(x) ≥ 0 for all x ∈ Ω.
(6)
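A discrete counterpart of energy (6) can be sketched as follows. This is our own illustration, not the original C++ implementation: F1 and F2 stand for the simulated volumes produced by the image formation model (Equation (1)), and unit voxel spacing is assumed.

```python
import numpy as np

def energy(I1, I2, F1, F2, alpha, m, sigma, lam, mu,
           eps_tv=1e-10, eps_sp=1e-10, loss="TV"):
    """Discrete energy (6): data term + smoothness + sparsity.

    I1, I2 : recorded volumes (top/bottom); F1, F2 : simulated volumes;
    alpha  : attenuation field; loss : "TM" or "TV".
    """
    E_data = (((I1 - F1) ** 2) / (m ** 2 * I1 + sigma ** 2)).sum() + \
             (((I2 - F2) ** 2) / (m ** 2 * I2 + sigma ** 2)).sum()

    grads = np.gradient(alpha)                   # central differences in 3-D
    grad_sq = sum(g ** 2 for g in grads)         # |grad alpha|^2
    if loss == "TM":
        E_smooth = lam * grad_sq.sum()                          # psi = identity
    else:
        E_smooth = lam * np.sqrt(grad_sq + eps_tv ** 2).sum()   # psi_TV

    E_sparse = mu * np.sqrt(alpha ** 2 + eps_sp ** 2).sum()
    return E_data + E_smooth + E_sparse
```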

To simplify the notation, we introduce the shorthands

T_{\mathbf{r}}[\alpha](\mathbf{x}) := e^{-\int_0^\infty \alpha(\mathbf{x} + \ell\mathbf{r}) \,\mathrm{d}\ell}
C_i[\alpha](\mathbf{x}) := \int_S s_i(\mathbf{r}) \cdot T_{\mathbf{r}}[\alpha](\mathbf{x}) \,\mathrm{d}\mathbf{r}
F_i[\alpha, I, \beta_2](\mathbf{x}) := \beta_i \, I(\mathbf{x}) \, C_i^2[\alpha](\mathbf{x})
D_i[\alpha, I, \beta_2](\mathbf{x}) := I_i(\mathbf{x}) - F_i[\alpha, I, \beta_2](\mathbf{x})

where T r is the attenuation along the ray with direction r, C i is the cone transmission for recording direction i, F i are the simulated intensities, and D i are the differences between the recordings and simulations. Variables in square brackets indicate dependencies on the corresponding optimization variables.

For the optimization we employ the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with box constraints on the variables (L-BFGS-B) [11]. The solver minimizes the energy while respecting the positivity of the attenuations throughout the iterative optimization. L-BFGS-B implements a quasi-Newton method; therefore, we need to provide the derivatives of the energy with respect to the unknown intensities I, the attenuation coefficients α, and the bleaching factor β2. These are given by

\frac{\delta E[\alpha, I, \beta_2]}{\delta I(\mathbf{x})} = -2 \sum_{i=1}^{2} \frac{\beta_i \, D_i(\mathbf{x}) \, C_i^2(\mathbf{x})}{m^2 I_i(\mathbf{x}) + \sigma^2}
(7)
\frac{\delta E[\alpha, I, \beta_2]}{\delta \alpha(\mathbf{x})} = 4 \sum_{i=1}^{2} \beta_i \int_S s_i(\mathbf{r}) \int_0^\infty \frac{I(\mathbf{x} - \ell\mathbf{r}) \, T_{\mathbf{r}}(\mathbf{x} - \ell\mathbf{r}) \, C_i(\mathbf{x} - \ell\mathbf{r}) \, D_i(\mathbf{x} - \ell\mathbf{r})}{m^2 I_i(\mathbf{x} - \ell\mathbf{r}) + \sigma^2} \,\mathrm{d}\ell \,\mathrm{d}\mathbf{r} - 2\lambda \,\mathrm{div}\left(\psi'\left(|\nabla\alpha(\mathbf{x})|^2\right) \nabla\alpha(\mathbf{x})\right) + \mu \frac{\alpha(\mathbf{x})}{\sqrt{\alpha^2(\mathbf{x}) + \varepsilon_{\mathrm{sp}}^2}}
(8)
\frac{\partial E[\alpha, I, \beta_2]}{\partial \beta_2} = -2 \int_\Omega \frac{D_2(\mathbf{x}) \, I(\mathbf{x}) \, C_2^2(\mathbf{x})}{m^2 I_2(\mathbf{x}) + \sigma^2} \,\mathrm{d}\mathbf{x} ,
(9)

where the derivative of the loss function is ψ'_TM(x²) = 1 for the TM regularization and ψ'_TV(x²) = 1/(2√(x² + ε_TV²)) for the TV approximation. The detailed derivations are given in Additional file 1.
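As an illustration of the box-constrained quasi-Newton step, the sketch below feeds a hypothetical energy-and-gradient callable to SciPy's L-BFGS-B interface, which is based on the same algorithm as [11]. In the actual method, I and β2 are updated analytically in the inner loop described next, so only α is exposed to the optimizer here.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_attenuation(energy_and_gradient, alpha0, max_iter=50):
    """Box-constrained quasi-Newton optimization of the attenuation field.

    `energy_and_gradient(alpha_flat)` is assumed to return the energy (6) and
    its gradient w.r.t. alpha (Eq. (8)), with I and beta_2 updated internally
    by the fixed-point iteration (10)-(11).
    """
    bounds = [(0.0, None)] * alpha0.size          # alpha(x) >= 0
    res = minimize(energy_and_gradient, alpha0.ravel(), jac=True,
                   method="L-BFGS-B", bounds=bounds,
                   options={"maxiter": max_iter})
    return res.x.reshape(alpha0.shape)
```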

The bleaching factor β2 prevents a direct analytic optimization of the intensities at each quasi-Newton iteration as presented in [7]. Instead, we optimize the bleaching factor β2 and the intensities I within each quasi-Newton iteration in an inner fixed-point iteration loop that alternates between the analytic computation of I^[j] given β2^[j−1] and of β2^[j] given I^[j] in inner iteration j, where

I^{[j]}(\mathbf{x}) = \frac{I_1(\mathbf{x}) \, C_1^2(\mathbf{x}) \left(m^2 I_2(\mathbf{x}) + \sigma^2\right) + \beta_2^{[j-1]} I_2(\mathbf{x}) \, C_2^2(\mathbf{x}) \left(m^2 I_1(\mathbf{x}) + \sigma^2\right)}{C_1^4(\mathbf{x}) \left(m^2 I_2(\mathbf{x}) + \sigma^2\right) + \left(\beta_2^{[j-1]}\right)^2 C_2^4(\mathbf{x}) \left(m^2 I_1(\mathbf{x}) + \sigma^2\right)}
(10)
\beta_2^{[j]} = \frac{\int_\Omega \frac{I_2(\mathbf{x}) \, I^{[j]}(\mathbf{x}) \, C_2^2(\mathbf{x})}{m^2 I_2(\mathbf{x}) + \sigma^2} \,\mathrm{d}\mathbf{x}}{\int_\Omega \frac{\left(I^{[j]}(\mathbf{x}) \, C_2^2(\mathbf{x})\right)^2}{m^2 I_2(\mathbf{x}) + \sigma^2} \,\mathrm{d}\mathbf{x}} .
(11)
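A direct transcription of this inner loop into NumPy could look as follows; this is a sketch, and the number of inner iterations and the small safeguards against division by zero are our own choices.

```python
import numpy as np

def update_intensities_and_bleaching(I1, I2, C1, C2, m, sigma,
                                     beta2=1.0, n_inner=5):
    """Inner fixed-point loop alternating the analytic updates (10) and (11).

    I1, I2 : recorded volumes; C1, C2 : cone transmissions for the current
    attenuation estimate; m, sigma : noise parameters; beta2 : initial guess.
    """
    w1 = m ** 2 * I1 + sigma ** 2     # per-voxel noise variances, view 1
    w2 = m ** 2 * I2 + sigma ** 2     # per-voxel noise variances, view 2
    for _ in range(n_inner):
        # Eq. (10): noise-weighted combination of both views
        num = I1 * C1 ** 2 * w2 + beta2 * I2 * C2 ** 2 * w1
        den = C1 ** 4 * w2 + beta2 ** 2 * C2 ** 4 * w1
        I = num / np.maximum(den, 1e-12)
        # Eq. (11): bleaching factor of the second recording
        beta2 = (I2 * I * C2 ** 2 / w2).sum() / \
                np.maximum(((I * C2 ** 2) ** 2 / w2).sum(), 1e-12)
    return I, beta2
```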

Implementation

The variational attenuation correction was implemented in C++ and run under Linux (Ubuntu 12.04) on an Intel Xeon E5-2680 (2.7 GHz) dual-processor system. For the optimization we used the readily available FORTRAN implementation of the L-BFGS-B optimizer. One iteration for data sub-sampled to 80×80×80 voxels needed on average 1.8 seconds, so a full reconstruction can be computed within a few minutes. The computational complexity scales linearly with the number of voxels to process within each iteration (Additional file 1: Figure S3). The memory complexity also scales linearly with the raw data volume. Both quantities can be limited by sub-sampling the high resolution raw data. This has two advantages: First, less computational resources are needed, and second, the weighted averaging during the sub-sampling already considerably reduces the image noise. The cone transmission is computed in parallel for all ray directions, leading to a significant speed-up of the confocal microscope simulation. Depending on the random computation order introduced by the scheduling, the results can deviate slightly from the numbers reported in the Results section. For real-world data we observed deviations of the estimated intensities of up to 3% after convergence of the algorithm. However, these differences are not visually recognizable.

Discrete derivative and integral computation

The gradients needed in the derivatives of the TM smoothness term and in the sparsity term are computed using central differences. For TV regularization we extended the numerical differentiation scheme from [12] to 3-D to obtain the divergence term in the derivative of the smoothness term. The corresponding equations are given in Additional file 1. The cone integrals are approximated as in [7] with a ray spacing of six degrees. This approximation of the cone integrals requires high regularization to lead to good reconstructions. We also did experiments with an alternative ray integration scheme that uses thin rays instead of the incrementally widening conic rays of [7]. To still capture all attenuations, the ray sampling was increased so that the cone is sampled densely at the largest cone diameter with respect to the volume grid. The result is given in Additional file 1 and shows that the energy formulation leads to the desired solution, but that the numerical approximations have a crucial influence on the resulting reconstruction.
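For illustration, the divergence term of the TV regularization can be approximated with plain central differences as sketched below. Note that this is a generic finite-difference scheme, not the specific discretization from [12] used in the implementation.

```python
import numpy as np

def tv_divergence_term(alpha, eps_tv=1e-10):
    """Sketch of the divergence term in Eq. (8) for the TV approximation:
    div( psi'_TV(|grad alpha|^2) * grad alpha ),
    computed with central differences on a regular 3-D grid (unit spacing).
    """
    grads = np.gradient(alpha)                              # central differences
    grad_sq = sum(g ** 2 for g in grads)
    weight = 1.0 / (2.0 * np.sqrt(grad_sq + eps_tv ** 2))   # psi'_TV(|grad alpha|^2)
    return sum(np.gradient(weight * g, axis=i)              # divergence
               for i, g in enumerate(grads))
```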

Data generation

Synthetic data

To quantitatively evaluate our method, we generated two different synthetic datasets. The first consists of a solid sphere with constant absorption coefficients of 0.006 per voxel. The interior 60% of the sphere were set to an intensity value of 4094. We added a smooth random texture with a variance of approximately 30% of the maximum of the corresponding quantity. This corresponds to the “well-posed” case when intensity and absorption information is available in the whole domain. The second dataset is the well-known Shepp-Logan phantom [13] consisting of a set of overlapping ellipsoids with homogeneous intensities. We assigned absorption coefficients to the different regions avoiding direct correlation with the intensities. Some regions were assigned equal attenuations independent of their intensity difference. During the simulation we applied an anisotropic Gaussian smoothing to reflect the microscope’s point spread function and ensure Nyquist sampling. For both datasets two recordings from opposite sides were simulated using (1) modeling absorption and photo bleaching (β2=0.8, meaning 20% signal loss between the recordings). Then Poisson noise with scaling m=0.05 and Gaussian noise with standard deviation σ=10 were applied. The datasets are shown in Figure 3.
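One way to realize this noise model in a simulation is sketched below; this is our own illustration, in which the shot noise is generated as a scaled Poisson variable whose variance matches the m²·F term of Equation (5). The exact scaling convention used in the original simulation may differ.

```python
import numpy as np

def apply_noise(F, m=0.05, sigma=10.0, rng=None):
    """Apply Poisson-Gaussian noise to a noise-free simulation F.

    The shot noise has mean F and variance m^2 * F (consistent with the
    Gaussian approximation in Eq. (5)); the read-out noise is zero-mean
    Gaussian with standard deviation sigma.
    """
    rng = np.random.default_rng() if rng is None else rng
    F = np.asarray(F, dtype=float)
    shot = m ** 2 * rng.poisson(np.maximum(F, 0.0) / m ** 2)  # mean F, var m^2*F
    readout = rng.normal(0.0, sigma, size=F.shape)
    return shot + readout
```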

Figure 3

Simulated data. Synthetic ground truth and top/bottom simulations. (a) Solid sphere with smooth random texture added to attenuations and intensities (well-posed reconstruction problem). Intensity range: [0, 8500], attenuation range: [0, 0.012]; (b) Shepp-Logan phantom with large constant areas (ill-posed reconstruction problem). Intensity range: [0, 65535], attenuation range: [0, 0.01]. All views show the central xy-, xz-, and zy-sections of the corresponding 3D volumes. Each panel shows: First row: ground truth intensities I (left) and absorption coefficients α (right). Second/third row: confocal simulations from top/bottom; noise-free (left); with applied Poisson-Gaussian noise (right; m=0.05, σ=10; the rmse compared to the noise-free simulation is indicated).

Zebrafish

To show that the approach also copes well with real world data, we tested it on samples of the ViBE-Z database consisting of confocal recordings of whole zebrafish (Danio rerio) embryos, which were fixed 72h after fertilization. Sample preparation, recording setup and image preprocessing are described in detail in [7]. The processing was performed on sub-sampled data with isotropic voxel extents of 8 μm.

Arabidopsis thaliana

Finally we tested the approach on recordings of the root tip of the model plant Arabidopsis thaliana. The samples were fixed 96h after germination and the cell membranes marked with an Alexa antibody stain. Then they were embedded in SlowFade Gold Antifade (Invitrogen) and recorded from top and bottom using a confocal microscope equipped with a 40 × oil immersion objective. We applied the elastic registration algorithm from [7] to register the two views to each other. Finally we performed a background subtraction prior to applying the attenuation correction. The embedding medium had a refractive index of 1.42 compared to a refractive index of the immersion oil of 1.52 for which the lens was adjusted. The attenuation correction was performed on sub-sampled data with isotropic voxel extents of 2 μm.

Parameter setup

We want all terms in the energy to have approximately the same influence on the optimization process. This leads to rough rules of thumb for the selection of λ and μ. Since all terms integrate over the whole image domain, the choice is independent of the number of voxels. The energy contribution of the data term is in the order of the squared expected intensity differences between recording and simulation divided by the Poisson weights. The smoothness term's contribution is in the order of the magnitude of the expected attenuation gradient (TV) or its square (TM). Finally, the sparsity term's contribution is in the order of the expected attenuations. E.g. for intensity data with an expected residual intensity difference of 5 (corresponding to the average noise intensity) and pure Gaussian noise, with expected attenuation coefficients of 0.005 and gradient magnitudes of 0.0005, initial choices of λ = 5²/0.0005² = 10⁸ and μ = 5²/0.005 = 5000 (TM), resp. λ = 5²/0.0005 = 5·10⁴ and μ = 5000 (TV) are appropriate. The approximate estimates for the expected attenuations and their gradients were empirically confirmed on real-world samples. For higher Poisson weighting m the factors have to be decreased accordingly. The optimal values depend on the image content and should be optimized for specific types of data.
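These rules of thumb can be expressed as a small helper (a sketch with a hypothetical function name), reproducing the example values from the text:

```python
def suggest_weights(expected_residual, expected_alpha, expected_grad, loss="TM"):
    """Rule-of-thumb initial regularization weights: each prior term should
    contribute on the same order as the data term."""
    if loss == "TM":
        lam = expected_residual ** 2 / expected_grad ** 2
    else:  # TV
        lam = expected_residual ** 2 / expected_grad
    mu = expected_residual ** 2 / expected_alpha
    return lam, mu

# Example from the text: residual 5, attenuation 0.005, gradient 0.0005
print(suggest_weights(5, 0.005, 0.0005, "TM"))   # (1e8, 5000)
print(suggest_weights(5, 0.005, 0.0005, "TV"))   # (5e4, 5000)
```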

For the synthetic phantoms we chose for each fixed m the optimal λ and μ which minimize the root mean squared error (rmse) between the true and the estimated intensities. The parameters were empirically determined with an exponential grid search over a parameter range of λ ∈ {0, 10⁰, …, 10⁹} and μ ∈ {0, 10², 10³, 10⁴} for the textured sphere phantom, and λ ∈ {0, 10⁶, …, 10¹²} and μ ∈ {0, 10³, …, 10⁸} for the Shepp-Logan phantom. For all experiments we set σ := 1. For the real-world data we used a conservative parameter set of λ=10⁷, μ=0 and m=0.1 (Arabidopsis) or λ=10⁸, μ=10⁴ and m=0 (zebrafish) for all experiments with TM regularization. For the zebrafish experiments with TV regularization we set λ=5·10⁴, μ=0, and m=0. For the real-world data we stopped the iterative process when the visually optimal reconstruction of the intensities was reached, which was after 3 to 20 iterations. For the textured sphere phantom data we set a maximum of 50 iterations for TM regularization and ran the algorithm to convergence for TV regularization. All results reported for the Shepp-Logan phantom were reached at convergence of the algorithm.

Results and discussion

Figure 2 summarizes the influence of the different extensions to the original model in [7]. If no prior knowledge about the attenuations is introduced (Figure 2 (c)), the approach is already able to reasonably reconstruct the original intensities. However, the attenuation field is coarse and cannot be applied to the reconstruction of secondary channels. With regularization (Figure 2 (d) and (g)) the attenuation field is much smoother, but especially with Tikhonov-Miller regularization strong spurious attenuations outside the sample are estimated. Application of the sparsity term reduces these attenuation estimates (Figure 2 (e) and (h)). The residual apparent “bleeding” of the attenuation coefficients below the sample is the effect of different mean intensities in the top and bottom recordings, as e.g. introduced by photo bleaching. When this factor is additionally estimated during the optimization (Figure 2 (f) and (i)), the lower boundary becomes much clearer.

Detailed evaluation of the proposed model on the data sets described in the methods section and a comparison to [7] are given in the remainder of this section.

Synthetic data

As a first baseline measure we computed the best possible outcome of the traditional one-factor-per-slice methods using the ground truth intensities of the synthetically generated phantoms, i.e. no method that assumes the correction factors to be a function of the z-position in the recorded volume can perform better than this. The optimal correction factor for each slice was computed by minimizing the rmse of the estimated intensities compared to the true intensities. The reconstruction error for slice Ω_z ⊂ Ω is given by

E_z(c_1, c_2) := \sum_{i=1}^{2} \sum_{\mathbf{x} \in \Omega_z} \left(c_i \cdot I_i(\mathbf{x}) - \hat{I}(\mathbf{x})\right)^2 ,
(12)

where the c_i are the correction factors for the top- and bottom-view and Î are the true intensities. The corresponding optimal correction factors c1 and c2 can be computed analytically as

c_i = \frac{\sum_{\mathbf{x} \in \Omega_z} \hat{I}(\mathbf{x}) \, I_i(\mathbf{x})}{\sum_{\mathbf{x} \in \Omega_z} I_i^2(\mathbf{x})} .
(13)

The reconstructions are shown in the column “Slicewise” of Figure 4. In both cases the one-factor-per-slice model was not able to reconstruct the interior intensities even though the true intensities were given.
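For reference, the per-slice factors of Equation (13) can be computed as in the following sketch; this is our own illustration, and the (z, y, x) axis order of the volumes is an assumption.

```python
import numpy as np

def optimal_slicewise_correction(I_rec, I_true):
    """Best possible one-factor-per-slice correction (Eq. (13)) for one view:
    the factor c minimizing the squared error to the ground truth in each
    z-slice. Volumes are indexed as (z, y, x)."""
    num = (I_true * I_rec).sum(axis=(1, 2))
    den = np.maximum((I_rec ** 2).sum(axis=(1, 2)), 1e-12)
    c = num / den                                   # one factor per z-slice
    return c[:, None, None] * I_rec                 # corrected volume
```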

Figure 4

Reconstruction results on simulated data. (a) Textured sphere phantom; (b) Shepp-Logan phantom. Each vertically aligned pair of images shows an xy- and an xz-cut through the same volume. The cut position is indicated by the red line. Rows (top to bottom): I: real/estimated intensities, I−Î: difference of the reconstruction to the real intensities, Parameters: used parameters, rmse: root mean squared error of the reconstructed intensities compared to the true intensities.

As second baseline we applied the attenuation correction from [7] to all datasets and empirically determined the best regularization parameter λ for each of them. We also empirically optimized the parameters λ and μ for our proposed scheme using the exponential grid introduced in the parameter setup section. The best results regarding the rmse of the intensities are given in Tables 1 and 2 and are depicted in Figure 4.

Table 1 Results for the textured sphere phantom (+ noise)
Table 2 Results for the Shepp-Logan phantom (+ noise)

For the synthetic data, the new model clearly outperforms the baseline from [7] even for sub-optimal choices of the Poisson weight m. The increase in performance is clearer for the textured phantom in which our approximation to the real noise is less affected by suboptimal mean value estimates in low-intensity regions, but even for the Shepp-Logan phantom the reconstruction quality is increased almost by a factor of three. The noise model and the bleaching factor β2 both affect the reconstruction significantly. The sparsity term also plays an important role for the reconstruction in two ways: Firstly, it avoids high attenuation estimates in regions with insufficient information; Secondly, it suppresses errors introduced by the discrete numerical approximations.

We evaluated the reconstruction quality with respect to the three parameters λ, μ, and m (TM: Figure 5 (a) and (b), TV: Figure 6 (a) and (b)). As quality measure we used the rmse of the estimated intensities. We found that the results are stable over a wide range of parameters. The parameter having the highest impact on the result is the smoothness weight λ, followed by m and finally μ. We also evaluated the evolution of the rmse during the optimization process for different choices of the parameters (TM: Figure 5 (c) and (d), TV: Figure 6 (c) and (d)). For all parameter choices the rmse first decreases rapidly, reaching a very good reconstruction after 30 to 60 iterations (from practical observations we found that for real-world data the optimum is reached earlier). Beyond that point, boundary effects and inaccuracies in the applied numerical approximations lead to an increase in the rmse for the TM regularization. For high TV regularization the attenuations are well localized within the sample volume and therefore no significant attenuations are estimated at the boundaries. This results in monotonically decreasing rmse values with small local fluctuations.

Figure 5

Effects of different choices for λ , μ , and m on the rmse of the reconstructed intensities of the textured sphere phantom using TM regularization. (a) Effect of different combinations of λ and μ on the reconstruction. Residual parameters: m=0.05, nIter = 50. (b) Effect of different combinations of λ and m on the reconstruction. Residual parameters: μ=0, nIter = 50. (c) Evolution of the rmse of the intensities during the iterative process for different choices of λ. Residual parameters: μ=0, m=0.05. (d) Evolution of the rmse of the intensities during the iterative process for different choices of m. Residual parameters: λ=10⁷, μ=0.

Figure 6

Effects of different choices for λ , μ , and m on the rmse of the reconstructed intensities of the textured sphere phantom using TV regularization. (a) Effect of different combinations of λ and μ on the reconstruction. Residual parameters: m=0.05, run to convergence. (b) Effect of different combinations of λ and m on the reconstruction. Residual parameters: μ=0, run to convergence. (c) Evolution of the rmse of the intensities during the iterative process for different choices of λ. Residual parameters: μ=0, m=0.05. (d) Evolution of the rmse of the intensities during the iterative process for different choices of m. Residual parameters: λ=5·10⁴, μ=0.

Zebrafish

In addition to the result shown in Figure 1, we applied our method to other zebrafish samples with varying staining quality. The reconstructions with Tikhonov-Miller regularization are shown in Figure 7 and those with total variation regularization in Figure 8. The estimated attenuation coefficients clearly resemble the shapes of the embryos. The bright spots in the eyes stem from the strong refraction of the eyes' lenses, revealing the modeling limitations of the presented approach. However, due to the imposed priors the artifact is localized in a small region and affects the surrounding reconstruction only marginally.

Figure 7

Attenuation Correction on zebrafish data using TM regularization. Result of the application of the proposed method to four challenging samples of the ViBE-Z database (TM regularized, λ=10⁸, μ=10⁴, m=0). (a-d): Reconstruction results for four different zebrafish larvae. Top: yz- and xz-cuts through raw recordings from top and bottom, middle: Reconstructed intensities, bottom: Estimated attenuation coefficients. Red lines indicate the cut positions of the corresponding views. Scale bars: 200 μm.

Figure 8

Attenuation Correction on zebrafish data using TV regularization. Result of the application of the proposed method to four challenging samples of the ViBE-Z database (TV regularized, λ=5·10⁴, μ=0, m=0). (a-d): Reconstruction results for four different zebrafish larvae. Top: yz- and xz-cuts through raw recordings from top and bottom, middle: Reconstructed intensities, bottom: Estimated attenuation coefficients. Red lines indicate the cut positions of the corresponding views. Scale bars: 200 μm.

Figure 9 shows a comparison of the proposed approach to the one-factor-per-slice model and the baseline approach [7] for one fish. For [7] we set λ=10⁷; for the proposed approach we used λ=10⁷, μ=10³ and m=0. The one-factor-per-slice model is not able to recover the intensity spectrum since it cannot change the ratio between the boundary and interior intensities. In the brain region of the fish the baseline and the proposed approach estimate the same intensities, whereas the proposed approach emphasizes the tissue layers in the zebrafish eyes more strongly. The proposed approach shows slightly smaller intensity overshoots at the eyes' surfaces and around the nose, but overall both reconstructions are convincing. The apparent “bleeding” of the attenuation coefficients ventral to the fish is reduced. The staircasing artifacts along the back of the fish in the baseline approach, which were introduced by the orthogonal subspace projections, are effectively removed.

Figure 9

Comparison of the proposed method with [7] and the one-factor-per-slice model on one zebrafish dataset. Each panel shows xy-, xz- and zy-cuts through the volume. The cut positions are indicated by the red lines. (a) Raw recording with scaled intensities to match the intensities of the reconstruction on the eye surface. (b) Averaged intensity and attenuation profiles along the y-direction of the xy cuts. Cut positions and averaging width are indicated by colored bars. (c) Baseline reconstruction [7]. (d) Proposed method. Scale bars: 200 μm.

Arabidopsis thaliana

The mismatch in refractive indices of immersion and embedding media in the Arabidopsis sample preparation leads to an aberration-induced signal loss that is not modeled in the presented approach. However, Figure 10 shows that our method still accurately reconstructs the intensities of the root tip datasets.

Figure 10

Attenuation Correction on Arabidopsis thaliana data. Result of the application of the proposed method to four Arabidopsis root tip samples (TM regularized, λ=10⁷, μ=0, m=0.1). (a-d): Reconstruction results for four different Arabidopsis root tips. Top: yz- and xz-cuts through raw recordings from top and bottom, middle: Reconstructed intensities, bottom: Estimated attenuation coefficients. Red lines indicate the cut positions of the corresponding views. Scale bars: 100 μm.

Figure 11 shows a comparison of the proposed approach to the one-factor-per-slice model and the baseline approach [7] for one root tip. Again the one-factor-per-slice model cannot reconstruct the interior intensities. The reconstruction of [7] and the proposed approach both significantly enhance the root-internal contrast. However, an intensity gradient towards the root center remains visible. We assume that it is induced by imperfect marker distributions due to incomplete tissue penetration and the properties of the inner cell membranes. The estimated attenuation field closely resembles the root's shape and is homogeneous compared to the baseline approach. The baseline approach shows strong variation within the root and additionally estimates strong attenuation outside the root volume to cope with the bleaching effects. Such erroneous attenuation estimates may lead to reasonable reconstructions of the intensities of the channel they were estimated on, but they will fail in reconstructing secondary channels containing the markers to quantify, such as protein patterns.

Figure 11

Comparison of the proposed method with [7] and the one-factor-per-slice model on one Arabidopsis root tip dataset. Each panel shows xy-, xz- and zy-cuts through the volume. The cut positions are indicated by the red lines. (a) Raw recording with scaled intensities to match the intensities of the reconstructions at the root boundary. (b) Averaged intensity profiles along the y-direction of the xy cuts. Cut positions and averaging widths are indicated by colored bars. (c) Baseline reconstruction [7]. (d) Proposed method. Scale bars: 100 μm.

Limitations of the approach

The exponential decay model along a ray is only strictly valid for pure absorption. In most cases local random refractions can also be described by this model. However, in areas with clearly structured refraction, as e.g. in the eyes of the zebrafish, where the light is actively bundled, the model is violated and localized errors in the attenuation estimates are introduced. We minimize the influence of these errors with high regularization; however, a better modeling of refraction would be a desirable, though practically very challenging, extension.

Another source of error is the limited recording volume. Samples exceeding this volume introduce the problem of sensibly guessing the outside attenuations the rays pass before entering the recording volume. Boundary effects can lead to solutions with low energies which are qualitatively far from the optimum, especially when performing many iterations. In our image formation model we assume zero outside attenuations (natural boundary conditions), while for the regularization we assume Neumann boundary conditions. If possible, the recording volume should be increased to contain more background in cases of boundary problems. If this is not possible, the TV regularization with its sharp boundaries is preferable to the TM regularization. Additionally, a high weight on the sparsity term alleviates effects that lead to extreme attenuation estimates, which can occur when outside attenuations are explained by a thin, highly absorbing region at the image boundary. An alternative that leads to visually good but energetically suboptimal results is to restrict the number of iterations (fewer than ten iterations usually lead to qualitatively good results). This has the additional advantage of very low computation times.

Conclusions

We could significantly improve the results of the variational attenuation correction presented in [7] by additionally modeling photo bleaching and replacing the ad-hoc Gaussian noise assumption by the (approximate) Poisson-Gaussian statistics. The choice of the loss function in the smoothness term allows choosing between smoothly varying (TM) or piecewise constant (TV) attenuation fields. The choice of the appropriate regularization is application dependent. In our case both regularization strategies lead to equally plausible results in the rather inhomogeneous biological samples analyzed. TV regularization is more stable in practice because the attenuation is much better localized, and therefore fewer boundary artifacts, which may lead to convergence to undesired solutions, are introduced. For both regularizations the sparsity term also actively avoids boundary errors by keeping the attenuation field compact. However, high sparsity weights lead to an underestimation of the attenuation volume and should be avoided. Instead, an earlier manual termination of the iterative process leads to very good results without introducing this side-effect.

We showed the efficacy of the presented method on highly complex real world examples, where it was able to significantly increase the homogeneity of the measured signal and attenuation fields. This is crucial if the attenuation field is used to correct secondary channels containing sparse structures within the anatomy. Based on these findings, we conclude that the presented attenuation correction approach is an important step towards the quantification of confocal microscopic data.

References

  1. Egner A, Hell SW: Aberrations in confocal and multi-photon fluorescence microscopy induced by refractive index mismatch. In: Handbook of Biological Confocal Microscopy. Edited by: Pawley JB. New York: Springer; 2006: 404-413.

  2. Booth M, Neil M, Wilson T: Aberration correction for confocal imaging in refractive-index-mismatched media. J Microsc. 1998, 192 (2): 90-98. 10.1046/j.1365-2818.1998.00412.x.

  3. Guan YQ, Cai YY, Zhang X, Lee YT, Opas M: Adaptive correction technique for 3D reconstruction of fluorescence microscopy images. Microsc Res Tech. 2008, 71 (2): 146-157. 10.1002/jemt.20536.

  4. Čapek M, Janáček J, Kubínová L: Methods for compensation of the light attenuation with depth of images captured by a confocal microscope. Microsc Res Tech. 2006, 69 (8): 624-635. 10.1002/jemt.20330.

  5. Stanciu SG, Stanciu GA, Coltuc D: Automated compensation of light attenuation in confocal microscopy by exact histogram specification. Microsc Res Tech. 2010, 73: 165-175. 10.1002/jemt.20767.

  6. Can A, Al-Kofahi O, Lasek S, Szarowski DH, Turner JN, Roysam B: Attenuation correction in confocal laser microscopes: a novel two-view approach. J Microsc. 2003, 211: 67-79. 10.1046/j.1365-2818.2003.01195.x.

  7. Ronneberger O, Liu K, Rath M, Rueß D, Mueller T, Skibbe H, Drayer B, Schmidt T, Filippi A, Nitschke R, Brox T, Burkhardt H, Driever W: ViBE-Z: a framework for 3D virtual colocalization analysis in zebrafish larval brains. Nat Methods. 2012, 9 (7): 735-742. 10.1038/nmeth.2076.

  8. Visser T, Groen F, Brakenhoff G: Absorption and scattering correction in fluorescence confocal microscopy. J Microsc. 1991, 163 (2): 189-200. 10.1111/j.1365-2818.1991.tb03171.x.

  9. Schmidt T, Dürr J, Keuper M, Blein T, Palme K, Ronneberger O: Variational attenuation correction of two-view confocal microscopic recordings. In: IEEE International Symposium on Biomedical Imaging (ISBI), San Francisco, CA, USA. Piscataway: IEEE; 2013: 169-172.

  10. Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans Image Process. 2008, 17 (10): 1737-1754. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4623175

  11. Zhu C, Byrd RH, Lu P, Nocedal J: Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans Math Softw. 1997, 23 (4): 550-560. 10.1145/279232.279236.

  12. Brox T: Von Pixeln zu Regionen: Partielle Differenzialgleichungen in der Bildanalyse. PhD thesis, Universität des Saarlandes, Saarbrücken; 2005. http://lmb.informatik.uni-freiburg.de/people/brox/pub/brox_PhDThesis.pdf

  13. Shepp LA, Logan BF: The Fourier reconstruction of a head section. IEEE Trans Nucl Sci. 1974, 21 (3): 21-43. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6499235

Acknowledgements

We thank the members of our teams for helpful comments on the manuscript. We also gratefully acknowledge the excellent technical support from Roland Nitschke (Life Imaging Centre, ZBSA, Freiburg). We thank Meta Rath, Alida Fillipi and Wolfgang Driever for providing the zebrafish recordings. We finally want to gratefully acknowledge EMBO for the long-term postdoctoral fellowship at the University of Freiburg awarded to TB. This work was supported by the German Research Foundation (DFG), the Excellence Initiative of the German Federal and State Governments (EXC 294) and the Bundesministerium für Bildung und Forschung (BMBF) (Fkz. 031 56 90 A). The article processing charge was funded by the DFG and the Albert Ludwigs University Freiburg in the funding program Open Access Publishing.

Author information

Corresponding author

Correspondence to Thorsten Schmidt.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

TS, MK and OR developed the theory. TS and OR performed the implementation and did the experiments. JD and TB prepared the Arabidopsis samples and did the recordings. KP initiated the plant project. TS, MK and OR wrote the paper. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Schmidt, T., Dürr, J., Keuper, M. et al. Variational attenuation correction in two-view confocal microscopy. BMC Bioinformatics 14, 366 (2013). https://doi.org/10.1186/1471-2105-14-366
