
Time estimation with multichannel digital silicon photomultipliers


Published 5 March 2015 © 2015 Institute of Physics and Engineering in Medicine
Citation: Esteban Venialgo et al 2015 Phys. Med. Biol. 60 2435. DOI: 10.1088/0031-9155/60/6/2435


Abstract

Accurate timemark estimation is crucial in time-of-flight positron emission tomography in order to ensure high-quality images after reconstruction. Since the introduction of multichannel digital silicon photomultipliers, it is possible to acquire several photoelectron timestamps for each individual gamma event.

We study several timemark estimators based on multiple photoelectron timestamps by means of a comprehensive statistical model. In addition, we calculate the mean square error (MSE) of the estimators as a function of the system design parameters and compare it to the Cramér–Rao lower bound.

We investigate the effect of skipping some of the photoelectron timestamps, which is a direct consequence of the limited number of time-to-digital converters, and we propose a technique to compensate for this effect. In addition, we carry out an extensive analysis to evaluate the influence of dark counts on the detector timing performance.

Moreover, we investigate the improvement of the timing performance that can be obtained with dark count filtering and we propose an appropriate filtering method based on measuring the time difference between sorted timestamps.

Finally, we perform a full Monte Carlo simulation to compare different timemark estimators by exploring several system design parameters. It is demonstrated that a simple weighted-average estimator can achieve a performance comparable to that of the more complex maximum likelihood estimator.


1. Introduction

The coincidence resolving time (CRT) of a positron emission tomography (PET) scanner is a crucial parameter because of its direct impact on image quality. The signal-to-noise ratio (SNR) of the PET image is enhanced when the image reconstruction algorithms utilize time-of-flight (TOF) information (Moses 2003). The suppression of noise propagation during the image reconstruction is the main reason for the image improvement.

Since the introduction of multichannel digital silicon photomultipliers (MD-SiPMs), multiple photoelectron timestamps are available for gamma-photon timemark estimation; the use of several timestamps can in principle improve the CRT significantly (Seifert et al 2012, Mandai and Charbon 2013). Nevertheless, more complex estimators are required to extract the gamma-photon timemark from multiple photoelectron timestamps.

MD-SiPMs have many time-to-digital converters (TDCs) shared among single-photon avalanche diode (SPAD) pixels through column-parallel lines (Mandai and Charbon 2013). Consequently, MD-SiPMs can assign timestamps to multiple photoelectrons produced by scintillation photons. This type of silicon detector has opened a new field of study: gamma-photon timemark estimation based on multiple photoelectron timestamps.

In this paper, we explore the use of weighted-average estimators and the maximum-likelihood estimator (MLE) for gamma-photon timemark estimation. A comprehensive Monte Carlo model is implemented to evaluate the performance of these estimation methods. In addition, several datasets are generated under different system design parameters in order to compare the performance of the estimators against the Cramér–Rao lower bound (CRLB).

Since the number of available TDCs is smaller than the number of SPADs, once a TDC is occupied measuring the qth photoelectron, the (q + 1)th photoelectron has a lower probability of being detected. This is called the skipping effect and it accounts for the inability to measure a continuous set of photoelectron timestamps with the same detection probability. In this work, we present a likelihood function that models this effect.

Furthermore, we study the possibility of filtering out spurious dark counts by measuring the time difference between ordered timestamps. In addition, we carry out a realistic simulation that compares the simplest estimator in terms of computing power against the most complex one investigated in this work, namely the MLE corrected for the skipping effect and dark count rate (DCR) variations. In these last simulations we consider two light-coupling conditions between the LYSO pixels and the MD-SiPM array: in the first condition, we analyze a LYSO pixel coupled to one MD-SiPM, and in the second, a LYSO pixel coupled to four MD-SiPMs.

2. Statistical models of scintillator-based coincidence detectors

In the early 1950s, a statistical model that described the decay process of a scintillator was introduced for the first time (Post and Schiff 1950). It was based on Poisson statistics, assumed a single-exponential decay model and did not include the variation in transit times, or jitter, of the photosensor. Consequently, it predicted that the best timing performance for gamma timemark estimation was obtained when utilizing the first photoelectron.

In 1966, a review paper was published that compared experimental and simulation results of coincidence measurements with scintillation detectors. This work demonstrated that the lowest variance is not necessarily obtained with the first photoelectron because of the influence of transit time spread (TTS) in photomultiplier tubes (PMTs) and of the finite rise time in scintillators (Gatti and Svelto 1966).

Later, a broad theoretical background was established that allowed the use of any type of scintillation decay pulse shape or probability density function (PDF) (Gioacchino 1993). According to this model, the PDF of the qth photoelectron's time-of-registration, called pq, is given by

Equation (1)

where f(t) represents the photoelectron time distribution PDF and F(t) is the corresponding cumulative distribution function (CDF).
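For reference, the standard order-statistics form that is consistent with these definitions, assuming R independent and identically distributed photoelectron registration times (with R the total number of detected photoelectrons), reads

${{p}_{q}}(t)=\frac{R!}{(q-1)!\,(R-q)!}\,f(t)\,F{{(t)}^{q-1}}{{\left[1-F(t)\right]}^{R-q}},$

which reduces to $R\,f(t){{\left[1-F(t)\right]}^{R-1}}$ for the first photoelectron (q = 1).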

In 2010, the concept of order statistics was introduced to the scintillation decay process, in combination with a double-exponential decay model (Fishburn and Charbon 2010). It is possible to obtain the same theoretical framework as derived in Gioacchino (1993) using order statistics. The double-exponential decay model, in which τr and τd are the rise and decay time constants, respectively, and T0 is the timemark of the gamma photon, can be expressed as

Equation (2)
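A commonly used normalized form of this model, consistent with the description above (the normalization constant is our assumption), is

${{f}_{s}}(t)=\frac{1}{{{\tau}_{d}}-{{\tau}_{r}}}\left[\exp \left(-\frac{t-{{T}_{0}}}{{{\tau}_{d}}}\right)-\exp \left(-\frac{t-{{T}_{0}}}{{{\tau}_{r}}}\right)\right]$ for t ≥ T0, and fs(t) = 0 otherwise.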

According to the model proposed in that work, the distribution of timestamps generated by a system composed of a scintillator and a photosensor follows the distribution defined by

Equation (3)

The Gaussian distribution $\mathcal{N}\left({{\mu}_{\text{TT}}},{{\sigma}_{\text{TT}}}\right)$ models the total transit time and jitter of the instrumentation chain. Moreover, the PDF of the resulting photoelectron time distribution is the convolution between fs(t) and $\mathcal{N}\left({{\mu}_{\text{TT}}},{{\sigma}_{\text{TT}}}\right)$, since the total jitter is modeled as additive noise; μTT and σTT model the total transit time and jitter, respectively (Fishburn and Charbon 2010). σTT includes all of the sources of time uncertainty, such as TDC jitter, SPAD jitter, and electronic noise.
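Written out explicitly for the Gaussian case, this convolution takes the form

$f(t)=\left({{f}_{s}}*\mathcal{N}\left({{\mu}_{\text{TT}}},{{\sigma}_{\text{TT}}}\right)\right)(t)=\int_{-\infty }^{\infty }{{f}_{s}}(\tau )\,\frac{1}{{{\sigma}_{\text{TT}}}\sqrt{2\pi }}\exp \left(-\frac{{{\left(t-\tau -{{\mu}_{\text{TT}}}\right)}^{2}}}{2\sigma _{\text{TT}}^{2}}\right)\text{d}\tau ,$

a sketch intended only to make the additive-noise assumption explicit.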

Recently, Seifert et al (2012) derived the Cramér–Rao lower bound on the time resolution of scintillation detectors. Instead of the single bi-exponential function fs(t) given in (2), their analysis is based on a more accurate model of scintillation decay that is able to account for the multiple, simultaneous cascades of excitation and decay processes that occur in some scintillators, such as LaBr3:Ce. Their model can be used with a Gaussian function to describe the single-photon jitter of the instrumentation chain, as in (3), or with a more complex PDF in cases where a Gaussian approximation is considered insufficiently accurate. Moreover, Seifert et al (2012) analyzed in some depth the possibility of utilizing multiple photoelectron timestamps for gamma timemark estimation.

The present work is based on the analysis of Seifert et al (2012), using a Gaussian PDF to describe the single photon timing jitter as in (3).

3. Cramér–Rao lower bound

The CRLB is a scalar value that establishes the lower bound on the variance of any unbiased estimator of the parameter of interest (Kay 2013). The CRLB for the particular case of order statistics was analyzed in Park (2003) and Seifert et al (2012).

The regularity conditions that are required to obtain a valid CRLB are given in Arnold et al (1992). Common support is a condition for obtaining a valid Fisher information calculation. This regularity condition can be expressed as follows for the problem under consideration in this work

Equation (4)

In principle, one might question whether this condition is fulfilled, since our model assumes zero probability of scintillation photon emission before T0 (Hanggi and Carr 1985, Seifert et al 2012).

However, the condition is fulfilled when the scintillation decay function is convolved with the total Gaussian jitter of the system, as in (3). It is to be noted that this model in principle allows the (false-positive) registration of scintillation photons at times earlier than T0. Although this is not unphysical if the instrumentation chain indeed exhibits a white jitter spectrum, one might prefer to also truncate $\mathcal{N}\left({{\mu}_{\text{TT}}},{{\sigma}_{\text{TT}}}\right)$ at T0 in (3). Such truncation should be performed carefully in order to include a sufficiently large part of the left tail of $\mathcal{N}\left({{\mu}_{\text{TT}}},{{\sigma}_{\text{TT}}}\right)$ , so as to avoid significant breaching of regularity condition (4). Fortunately, this is easily achieved in practice because in a typical photosensor instrumentation chain, μTT −T0 is many times larger than σTT.

In general, it is not guaranteed that an unbiased estimator exists that is able to reach the CRLB for a given problem. To illustrate this, we investigate the simple case in which the registration time of only one of the scintillation photons is known and we compare the only possible unbiased estimator that exists in this case to the CRLB, as a function of several system parameters.

When estimating the gamma timemark utilizing the qth photoelectron only, the only possible unbiased estimator is given by

Equation (5)

where A is expressed as

Equation (6)

The accuracy of this estimator in terms of the root mean square error (root-MSE) is equal to the square root of the variance of the PDF of the time-of-registration of the qth photoelectron.

Equation (7)

Equation (7) provides a simple means to compare the performance of the only possible unbiased estimator against the CRLB for several system design parameter configurations.
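As a concrete sketch of this single-timestamp case (the notation is ours and may differ from (5)-(7)): the estimator subtracts the known mean delay of the qth order statistic,

${{\hat{T}}_{0}}={{t}_{q}}-A,\qquad A=\text{E}\left[{{t}_{q}}\right]-{{T}_{0}}=\int_{-\infty }^{\infty }t\,{{p}_{q}}\left(t\,|\,{{T}_{0}}=0\right)\text{d}t,$

so that $\text{E}[{{\hat{T}}_{0}}]={{T}_{0}}$ and the root-MSE equals ${{\sigma}_{q}}=\sqrt{\text{Var}\left[{{t}_{q}}\right]}$, the standard deviation of pq(t).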

In the single-photoelectron T0 estimation case, the Fisher information I(T0) is given by

Equation (8)

where the chain rule was applied to rewrite pq(t − T0) as follows

Equation (9)
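In sketch form, the standard expressions used here (Kay 2013) are

$I\left({{T}_{0}}\right)=\int{{\left(\frac{\partial \ln {{p}_{q}}\left(t-{{T}_{0}}\right)}{\partial {{T}_{0}}}\right)}^{2}}{{p}_{q}}\left(t-{{T}_{0}}\right)\text{d}t,\qquad \frac{\partial {{p}_{q}}\left(t-{{T}_{0}}\right)}{\partial {{T}_{0}}}=-\frac{\partial {{p}_{q}}\left(t-{{T}_{0}}\right)}{\partial t},$

with the bound $\text{Var}\left({{\hat{T}}_{0}}\right)\geqslant 1/I\left({{T}_{0}}\right)$ for any unbiased estimator.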

We calculated the Fisher information with a time step of 1 ps, for different values of the essential scintillation properties and swept the system design parameters (Ludziejewski et al 1995, Glodo et al 2005, Szczesniak et al 2009, Seifert et al 2012). Subsequently, we calculated the standard deviation of pq(t) with the same time step and compared the results.

Figure 1(a) shows σq and the root-CRLB as a function of the photoelectron order for different numbers of detected photoelectrons. Figures 1(b) and (c) depict similar comparisons for different scintillators and for different values of the total jitter of the system, respectively. In all of the plots, the total jitter of the system is given at FWHM level. The number of photoelectrons was intentionally kept low in figures 1(b) and (c) (corresponding to an overall photodetection efficiency of about 6%), since under these conditions the only possible unbiased estimator does not fully reach the CRLB, as in figure 1(a).


Figure 1. σq and root-CRLB as a function of the photoelectron order for different system design parameters. (a) τr was set to 89 ps and τd to 46.6 ns; the total jitter of the system was set to 300 ps (FWHM). (b) The number of detected photoelectrons was 1800 for LaBr3 and 800 for all of the other scintillators; the total jitter of the system was 300 ps (FWHM). (c) τr and τd were set as in (a); the total number of detected photoelectrons was 1300.


4. Maximum likelihood estimation

As demonstrated in the previous section, the only possible unbiased estimator in the single-photoelectron timestamp case does not reach the CRLB for certain system design parameters. In the multiple-photoelectron time estimation case, there are many possible unbiased estimators and it is computationally impractical to perform the previous analysis for all system design parameter combinations.

In the multiple photoelectron timestamp case, we therefore limited ourselves to the calculation of the MSE of the multiple-photoelectron MLE in two different conditions, which were defined based on the results obtained in the single-photoelectron time estimation case. We calculated the CRLB of the multiple-photoelectron estimation case for the same two conditions and compared it to the result obtained with the MLE.

The likelihood function defined for the timestamps t1:Q of the first Q photoelectrons, for the estimation of the location parameter T0, is expressed by

Equation (10)

This expression corresponds to a type II censored sample of order statistics (Arnold et al 1992). The likelihood functions were calculated with a time step of 1 ps. Subsequently, the root-MSE of the multiple-photoelectron MLE was evaluated utilizing random timestamps generated with a Monte Carlo code, which follows the same model as L1:Q(t1, ..., tQ | T0).
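For reference, the standard likelihood of a type II censored sample formed by the first Q of R order statistics (Arnold et al 1992), consistent with the model used here, is

${{L}_{1:Q}}\left({{t}_{1}},\ldots ,{{t}_{Q}}\,|\,{{T}_{0}}\right)=\frac{R!}{(R-Q)!}{{\left[1-F\left({{t}_{Q}}\,|\,{{T}_{0}}\right)\right]}^{R-Q}}\prod\limits_{q=1}^{Q}f\left({{t}_{q}}\,|\,{{T}_{0}}\right),\qquad {{t}_{1}}\leqslant \cdots \leqslant {{t}_{Q}}.$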

As observed in figures 1(a)–(c), the only possible unbiased estimator in the single photoelectron timestamp case does not fully reach the CRLB when the jitter level is low and the number of detected photoelectrons is small. Thus, we generated two random timestamp datasets, called I and II: dataset I with 100 ps FWHM jitter and 300 photoelectrons, and dataset II with 700 ps FWHM jitter and 3800 photoelectrons. The scintillation decay constants were the same for both datasets, namely those of LSO with properties according to Ludziejewski et al (1995) and Seifert et al (2012). In the multiple-photoelectron estimation case, the Fisher information can be calculated as follows (Park 2003)

Equation (11)

Equation (12)

The performance of MLE for estimating the location parameter T0 utilizing both datasets is depicted in figure 2. The number of TDCs is equal to the number of sorted timestamps utilized for the estimation and varies from 1 to 30. The root-MSE of the MLE differs from the CRLB by about 67 ps in the case of dataset I. However, for dataset II (see figure 2) the root-MSE of the MLE is very close to the CRLB. In conclusion, we observe that the MLE does not fully reach the CRLB if the number of detected photoelectrons is small and the total jitter of the system is low, similarly to what was observed for the only possible unbiased estimator in the single photoelectron timestamp case. The important difference between the single- and multiple photoelectron timestamp cases is that in the latter many different estimators can be defined. The fact that the MLE does not reach the CRLB under some conditions in the multiple photoelectron timestamp case implies that it may be possible to find more efficient estimators than the MLE under such conditions. However, obtaining good timing resolution in scintillation detectors requires the use of scintillators with high light output in combination with photosensors with high photoelectron detection efficiency (PDE) (Seifert et al 2012). Under those conditions, the present results indicate that the MLE is an efficient estimator.


Figure 2. Root-MSE of the multiple-photoelectron MLE and root-CRLB versus the size of the multiple-timestamp set, when utilizing datasets I and II.


5. Weighted-average timemark estimators

In general, a linear function of a sorted set of random samples provides an efficient estimator of the location parameter (Arnold et al 1992). We tested several weighted-average estimators such as the simple mean, the variance weighted and the best linear unbiased estimator (BLUE) by calculating the single detector root-MSE of the estimators using the Monte Carlo simulator.

The weighted-average timemark estimators are given by

Equation (13)

The first estimator calculates the average value of a group of ordered Q photoelectron timestamps. We call this estimator the simple mean estimator and the weights are given by (p = 1)

Equation (14)

The number of photoelectron timestamps Q utilized to calculate the mean value was varied from 1 to 48, since the MD-SiPM that was designed in our laboratory has 48 TDCs per MD-SiPM (Mandai et al 2012, Mandai and Charbon 2013). Furthermore, tq corresponds to the time-of-registration of the qth photoelectron.

The second method is a weighted-average estimator, in which the weights are calculated according to the variance of the corresponding tq timestamp (p = 2)

Equation (15)

Furthermore, the weights are normalized so that their sum is equal to unity, in order to preserve the linearity of the estimation. This method is called the variance-weighted estimator. Experimentally, each weight can be estimated during a calibration procedure.

The third method is also a weighted-average estimator, but the weights are calculated according to the covariance of the timestamps, as follows (p = 3)

Equation (16)

Equation (17)

where d is a column vector of ones with a length equal to the number of utilized timestamps and C is the covariance matrix of the timestamps. This estimator follows the BLUE methodology in order to obtain a weighted-average estimate with minimum variance. The last step in the derivation of this estimator for the case of a single detector would be to compensate for the bias of the estimate by subtracting the product of $\boldsymbol{w}_{\boldsymbol{q}}^{(3)}$ and the mean vector of the timestamps. However, in a coincidence setup such as a PET system this step is not required, since the biases of two equal, coincident detectors cancel against each other.
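To illustrate the computational simplicity of the three weighted-average estimators, the following sketch (Python with NumPy; the function names are ours, and the BLUE weights are written in the minimum-variance, unit-sum form suggested by the description of (16) and (17)) computes the weight sets from calibration data and applies them to a sorted timestamp set.

import numpy as np

def simple_mean_weights(Q):
    # Equal weights summing to one (p = 1).
    return np.full(Q, 1.0 / Q)

def variance_weights(sigma_q):
    # Inverse-variance weights, normalized to unit sum (p = 2).
    w = 1.0 / np.asarray(sigma_q, dtype=float) ** 2
    return w / w.sum()

def blue_weights(C):
    # Minimum-variance linear weights under the unit-sum constraint (p = 3):
    # w = C^-1 d / (d^T C^-1 d), with d a vector of ones and C the
    # covariance matrix of the sorted timestamps.
    C = np.asarray(C, dtype=float)
    d = np.ones(C.shape[0])
    Cinv_d = np.linalg.solve(C, d)
    return Cinv_d / d.dot(Cinv_d)

def weighted_timemark(timestamps, weights):
    # Weighted-average timemark estimate from sorted timestamps.
    return float(np.dot(weights, np.asarray(timestamps, dtype=float)))

In a coincidence setup the common bias of two identical detectors cancels, so no bias subtraction is applied in this sketch.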

In order to test the estimators, we generated random timestamps with the Monte Carlo code based on the models explained in Seifert et al (2012). In addition, we included the influence of the energy resolution (ER) into the Monte Carlo code by sampling R (number of detected photoelectrons) randomly according to the selected ER.

Figure 3(a) shows the root-MSE of all estimators as a function of the number of TDCs under two jitter conditions. Figures 3(b) and (c) show the root-MSE as a function of the number of photoelectron timestamps for two different energy resolutions and for two different numbers of photoelectrons, respectively. Figure 3(d) shows the root-MSE of the estimators for two types of scintillation crystals.


Figure 3. Single detector root-MSE (ΔT) for all of the estimators. (a) For two different jitter levels; the ER was set to 14%, the number of detected photoelectrons was set to 2400 and the scintillator was LYSO. (b) For two different ERs; the number of detected photoelectrons was set to 2400, the scintillator was LYSO and the total jitter of the system was set to 100 ps (FWHM). (c) For two different numbers of photoelectrons; the ER was set to 14%, the scintillator was LYSO and the total jitter of the system was set to 100 ps (FWHM). (d) For two scintillators; the number of detected photoelectrons was set to 2400 for the scintillation decay constants of LYSO and 4800 for those of LaBr3; the ER was set to 14% and the total jitter of the system was set to 100 ps (FWHM).


In figures 3(a)–(d), the LYSO decay constants were taken from Seifert et al (2012), table 1, entry 10. In figure 3(d), the LaBr3 decay constants were taken from Glodo et al (2005).

Table 1. System design parameters for conditions I and II.

               Scintillator size    MD-SiPMs   TDCs   SPAD cells   Total DCR (max)
Condition I    0.8 × 0.8 × h mm3    1          48     416          125 Mcps
Condition II   1.6 × 1.6 × h mm3    4          96     1664         500 Mcps

As can be observed in figure 3, the accuracy of BLUE and the MLE improves as more TDCs are included in the estimation. In contrast, the other estimators tend to degrade when the number of TDCs is increased. The MLE and BLUE are observed to have practically equal efficiency under all of the system conditions studied.

6. Skipping effect and likelihood function

In the previous sections, we did not yet take into account the fact that the TDCs are shared and, consequently, that the number of available TDCs per SPAD is less than one. This condition significantly modifies the time distribution of the ordered photoelectrons; therefore, the likelihood function must be readjusted in order to account for this effect.

In this implementation, each of the 3 TDCs in a column is shared by 8 or 9 SPADs through an OR gate (see figure 4); therefore, as soon as a TDC registers the qth photoelectron signal, it becomes unavailable and the overall probability that the next photoelectron is timestamped decreases. In other words, if a light photon generates an avalanche in a SPAD that is connected to an already-triggered TDC, its time information is lost. We call this decrease in detection probability the skipping effect. Assuming that the photoelectron detection probability is equal and constant for every group composed of 8 or 9 SPADs and a TDC, the probability to detect a continuous set of timestamps without skipping can be calculated straightforwardly as follows

Equation (18)

where NTDCs represents the total number of TDCs of the MD-SiPM and 1:Q refers to a set of sorted timestamps of size Q < NTDCs.
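Under the stated assumption of an equal and constant detection probability for every TDC group, one way of writing this probability (a sketch; the exact form of (18) may differ) is

$P\left(1:Q\right)=\prod\limits_{q=1}^{Q}\frac{{{N}_{\text{TDCs}}}-q+1}{{{N}_{\text{TDCs}}}}=\frac{{{N}_{\text{TDCs}}}!}{\left({{N}_{\text{TDCs}}}-Q\right)!\,N_{\text{TDCs}}^{Q}},$

i.e. the qth photoelectron must reach one of the NTDCs − q + 1 TDC lines that are still available.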


Figure 4. Block diagram of the MD-SiPM.


From (18), P(1:Q) decreases significantly if, for instance, the set size Q is larger than 20 and NTDCs is 48. Consequently, the resulting time distribution of the qth photoelectron no longer follows (1) and the likelihood function (10) is no longer valid. In order to obtain an accurate estimation with the MLE method, the likelihood function must account for this effect.

The photoelectron timestamp probability distribution including the skipping effect can be modeled as a two-stage order-statistics process if we assume that the photoelectron detection probability is equal for every TDC when all TDCs are in the non-occupied state. We first model the time distribution of the unsorted timestamps measured by the TDCs using a modified version of the function f(t), called fk, as follows

Equation (19)

Equation (20)

Thus, fk(t) represents the time distribution of the first photoelectron that is timestamped by any of the TDCs. In the second step, we model the sorting of the TDC timestamps. The resulting pq(t) that accounts for the skipping effect is given by

Equation (21)
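A minimal sketch of this two-stage construction, assuming that the R detected photoelectrons are divided evenly over the TDC lines so that each TDC group sees R' = R/NTDCs of them, is

${{f}_{k}}(t)={R}'f(t){{\left[1-F(t)\right]}^{{R}'-1}},\qquad {{F}_{k}}(t)=1-{{\left[1-F(t)\right]}^{{R}'}},$

i.e. the first-order statistic of the photoelectrons arriving at a single TDC line, followed by

${{p}_{q}}(t)=\frac{{{N}_{\text{TDCs}}}!}{(q-1)!\left({{N}_{\text{TDCs}}}-q\right)!}\,{{f}_{k}}(t)\,{{F}_{k}}{{(t)}^{q-1}}{{\left[1-{{F}_{k}}(t)\right]}^{{{N}_{\text{TDCs}}}-q}},$

the qth order statistic of the NTDCs TDC timestamps, obtained from (1) with f, F and R replaced by fk, Fk and NTDCs.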

Figure 5(a) depicts the pq(t) for several ordered photoelectrons with and without skipping effect modeling. Figure 5(b) shows the PDFs calculated by (21) and the normalized histograms generated from a Monte Carlo simulation.


Figure 5. pq(t) for several ordered qth photoelectrons. (a) pq(t) was calculated with skipping effect from (21) and without skipping effect from (1). (b) pq(t) was obtained from a Monte Carlo simulation, as compared with the calculated values accounting for skipping effect. Note the near perfect match between the calculated and simulated values of pq(t).


The next step is to derive a likelihood function that models the skipping effect. Such a function is given by

Equation (22)

where, in this case, 1:Q' represents the subset of timestamps that were registered by a system affected by the skipping effect.

This equation shows that, if all of the timestamps are utilized, the MLE does not require sorted timestamps. In addition, under this condition the assumption of equal detection probability for every TDC is no longer required; consequently, fk(t) can be replaced by a TDC-specific PDF (van Dam et al 2013).

7. Dark count rate

Dark counts can significantly affect the resolving time of a PET detector module composed of SiPMs. Several DCR filtering methods have been implemented in digital SiPMs (Frach et al 2009, Mandai et al 2012, Braga et al 2013, Mandai and Charbon 2013). In this section, we simulate the DCR rejection method described in Mandai and Charbon (2013), propose a new filtering method based on timestamp subtractions, and correct the likelihood function to account for DCR.

In this section, we consider two scintillator-photosensor coupling conditions. In condition I, a single LYSO pixel is coupled directly to one MD-SiPM. In condition II, we simulated a LYSO pixel with a four times larger footprint, coupled to four MD-SiPMs. Consequently, in condition II the number of initially available TDCs is twice as high as in condition I, while the DCR is four times higher; the number of TDCs is not four times larger because of the way the TDCs are shared in an array of MD-SiPMs (Mandai and Charbon 2013). The photoelectron dynamic range of condition II is four times that of condition I. Table 1 lists the system design parameters for conditions I and II. Figures 6(a) and (b) show a representation of the coupling between the LYSO pixels and the MD-SiPMs.


Figure 6. Condition I and II representation. In (a), the crystal pixel is attached to one MD-SiPM and in (b) the crystal pixel is attached to four MD-SiPMs.


In order to reduce the number of free parameters, we assume that the total amount of detected light is limited only by the MD-SiPM dynamic range; that is, it is limited neither by the crystal light output nor by the MD-SiPM's PDE. Consequently, the mean number of detected photoelectrons was fixed to 800 for condition I and 3200 for condition II. The total jitter of the system (σTT) was kept equal to 179 ps in both cases (Mandai et al 2012).

We included the DCR effect in the Monte Carlo simulation and simulated the smart-reset technique introduced in Mandai et al (2012). The gamma-photon timemark was randomly generated following a uniform distribution between 0 and 100 ns. The dark counts were generated following an exponential distribution with a given DCR, since the SPADs are reset in every new detection cycle; afterpulsing was ignored. The DCR was kept identical for every group composed of a TDC and 8 or 9 SPADs. If the total DCR of the MD-SiPM is above 100 Mcps, the probability that a dark count triggers a TDC within the 100 ns detection cycle is considerable. Consequently, an additional filtering technique should be applied before the time estimation is performed. We propose to filter out the dark counts based on the time differences between sorted timestamps.
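As an illustration of this part of the simulation, the following sketch (Python with NumPy; the function name and frame handling are ours) generates the gamma-photon timemark and the dark-count timestamps of a single 100 ns detection cycle.

import numpy as np

rng = np.random.default_rng()

def simulate_frame(total_dcr_cps, frame_ns=100.0):
    # Gamma-photon timemark: uniform over the detection cycle.
    t0 = rng.uniform(0.0, frame_ns)
    # Dark counts: exponential inter-avalanche times at the total DCR,
    # accumulated from the start of the frame (the SPADs are reset at
    # the beginning of every detection cycle).
    rate_per_ns = total_dcr_cps * 1e-9
    dark_counts = []
    t = rng.exponential(1.0 / rate_per_ns)
    while t < frame_ns:
        dark_counts.append(t)
        t += rng.exponential(1.0 / rate_per_ns)
    return t0, np.array(dark_counts)

# Example: condition I with a total DCR of 125 Mcps.
t0, dark_timestamps = simulate_frame(125e6)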

For known system design parameters, such as the crystal type and the total number of TDCs, it is possible to calculate the distribution of the time difference between consecutive timestamps. Utilizing fk(t) from (20), the joint distribution of order statistics can be expressed as follows

Equation (23)

where t1 < t2.

Using this result, the distribution of the time difference between two consecutive timestamps tq and tq + 1 is given by

Equation (24)
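For reference, the standard order-statistics expressions consistent with (23) and (24), assuming N = NTDCs timestamps drawn independently from fk(t) with CDF Fk(t), are

${{f}_{\left(q,q+1\right)}}\left({{t}_{1}},{{t}_{2}}\right)=\frac{N!}{(q-1)!\left(N-q-1\right)!}{{F}_{k}}{{\left({{t}_{1}}\right)}^{q-1}}{{f}_{k}}\left({{t}_{1}}\right){{f}_{k}}\left({{t}_{2}}\right){{\left[1-{{F}_{k}}\left({{t}_{2}}\right)\right]}^{N-q-1}},\qquad {{t}_{1}}<{{t}_{2}},$

${{f}_{\Delta \left(q,q+1\right)}}\left(\Delta \right)=\int_{-\infty }^{\infty }{{f}_{\left(q,q+1\right)}}\left(t,t+\Delta \right)\text{d}t,\qquad \Delta \geqslant 0.$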

It is important to mention that fk(t), and consequently Fk(t), must be recalculated for each of the possible skipping-effect conditions that may arise due to the presence of dark counts, since dark counts that accumulate after each reset reduce the number of available TDCs. Consequently, for each fΔ(q, q + 1)(t) there are NTDCs − 1 possible PDFs (see figures 7(a) and (b)). These PDFs are utilized to define a filtering time window for the time difference between subsequent photoelectrons in every skipping-effect condition. The time windows are defined as the time at which fΔ(q, q + 1)(t) has completely vanished. Figure 7(c) shows the normalized histograms of Δ(q, q + 1) calculated from Monte Carlo simulations, which overlap the fΔ(q, q + 1)(t) calculated using (24).


Figure 7. (a) fΔ(q, q + 1)(t) calculated for condition I and several skipping cases. (b) fΔ(q, q + 1)(t) calculated for condition II and several skipping cases. (c) Normalized histograms obtained from a Monte Carlo simulation, as compared with the calculated fΔ(q, q + 1)(t) from (24).


Before the arrival of the gamma photon, the TDCs are triggered by dark counts that follow a certain inter-avalanche time distribution. When the gamma photon produces a scintillation event, light photons are emitted, producing a higher avalanche rate in the photosensor. Consequently, the proposed filter is designed to remove the dark counts that accumulate at the beginning of the measurement frame, before the gamma-photon arrival. The filter measures the time differences between timestamps and discards the timestamps registered at the beginning of the frame, because after the gamma-photon arrival it is not possible to discriminate dark counts from actual photoelectron detections. The procedure is detailed as follows.

As shown in algorithm 1, the timestamp differences are calculated and compared to their time windows. If the first N timestamp differences are not inside their corresponding time windows, the first timestamp is discarded and the procedure is repeated utilizing the new windows that correspond to the new skipping condition. The size N of this set of leading timestamp differences is defined by the parameter length in algorithm 1.

Algorithm 1. DCR filter algorithm

Input : A timestamp set TM(q) of NTDCs size.
Input : A set of time windows ${{W}_{\text{N}}}\left({{\Delta}_{q}},\text{skipped}\right)$ of size (NTDCs − 1) × (NTDCs − 1)
Input : Number of required first timestamps within their corresponding time windows, length
Output : A filtered timestamp set FM(q) of (NTDCs − skipped) size.
Output : Amount of TDCs that are initially triggered by dark counts, skipped
 
skipped = NTDCs ;
deltaT = diff(TM) ;
for sweep = 1:(NTDCs − 1) do
  current = deltaT(sweep:end);
  condition = ${{W}_{\text{N}}}\left(:,\text{sweep}\right)$ − current > 0;
  if all(condition(1:length)) then
    skipped = sweep;
    FM = TM(sweep:end) ;
    break;
   end
end
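A minimal Python sketch of this procedure (our transcription of algorithm 1 using 0-based indexing; the window matrix WN is assumed to be precomputed from (24) for every skipping condition):

import numpy as np

def dcr_filter(tm, w_n, length):
    # tm     : sorted timestamp array of size N_TDCs
    # w_n    : (N_TDCs - 1) x (N_TDCs - 1) matrix of time windows,
    #          one column per skipping condition
    # length : number of leading time differences that must fall
    #          inside their windows for the set to be accepted
    tm = np.asarray(tm, dtype=float)
    w_n = np.asarray(w_n, dtype=float)
    n_tdcs = len(tm)
    skipped = n_tdcs                  # default if no valid start is found
    fm = np.array([])
    delta_t = np.diff(tm)
    for sweep in range(n_tdcs - 1):
        current = delta_t[sweep:]
        condition = w_n[:len(current), sweep] - current > 0
        if np.all(condition[:length]):
            skipped = sweep           # TDCs initially triggered by dark counts
            fm = tm[sweep:]
            break
    return fm, skipped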

Some dark counts are not discriminated because they randomly fall inside the time windows. In particular, among the first photoelectron timestamps there is a higher probability of registering dark counts even after filtering, because of the dark-count accumulation before the gamma event is detected. In order to account for this effect within the likelihood functions, we re-estimated fk(t) for several DCR levels by utilizing Monte Carlo simulations and a kernel density estimator based on the Epanechnikov kernel function. Figures 8(a) and (b) show the estimated fk(t) for conditions I and II. The magnified area shows an increase of probability at the beginning of the PDF that depends on the DCR. This effect is produced by dark counts that are not discriminated among the first photoelectron timestamps, as explained before.


Figure 8. fk(t) estimated with Monte Carlo simulations: (a) with condition I, (b) with condition II. The DCR is the total DCR of one MD-SiPM for condition I and of four MD-SiPMs for condition II. In (a), the dashed curve shows fk(t) estimated with a Monte Carlo simulation for condition I in which R' was fixed, without any random sampling; the total DCR was 250 kcps.


It is to be noted that we did not take into account the statistical Poisson variations of R' in equations (19) and (20). These fluctuations can potentially modify the likelihood function significantly if the number of photoelectrons per TDC is low. Consequently, we included this effect within the Monte Carlo simulation that estimates fk(t). Therefore, the likelihood functions depicted in figures 8(a) and (b) do take into account the statistical variations of R'. In addition, we calculated an fk(t) with a fixed value of R' for condition I and 250 kcps DCR; this fk(t) is depicted with a dashed line in figure 8(a).

8. MLE and BLUE including DCR and skipping effect

After including the DCR model in the Monte Carlo code, it appears that DCR filters and corrections of the likelihood function are required to account for the dark counts that are not discriminated. The last part of this study is focused on the performance of BLUE and MLE under DCR and skipping effect conditions. A new set of Monte Carlo simulations was run, which included the skipping effect, DCR and the proposed rejection method.

Random timestamps were regenerated under condition I and II (see table 1 and figures 6(a) and (b)), taking into account the influences of DCR and skipping effect. The coefficients of BLUE were calculated utilizing a different realization of the Monte Carlo data; consequently, they include the influence of DCR and skipping effect. MLE utilizes the fk(t) functions depicted in figures 8(a) and (b) in order to model the DCR influence.

In addition, the likelihood function was customized for each individual gamma event, since the skipping effect changes depending on how many TDCs were occupied by dark counts, see (20) and (21). For instance, if 10 TDCs are triggered by dark counts, NTDCs must be readjusted to 38. The number of TDCs that are initially triggered by dark counts is determined by the DCR filter for each individual gamma event (see the variable skipped in algorithm 1).

Figures 9(a) and (b) show the performance of several estimators for conditions I and II, respectively, when including DCR and the skipping effect. It appears that the MLE approaches a minimum root-MSE as the size of the multiple-photoelectron timestamp set is increased.


Figure 9. Single detector root-MSE (ΔT) for several estimators at different DCR levels: (a) with condition I, (b) with condition II. The DCR is the total DCR of one MD-SiPM for condition I and of four MD-SiPMs for condition II.


To investigate the efficiency of the estimators developed in this section, the CRLB that accounts for DCR and the skipping effect was calculated using the fk(t) estimated from Monte Carlo events. In (11) and (12), we replaced f(t) by fk(t), F(t) by the corresponding CDF, and Q by NTDCs. pq(t − T0) was calculated by performing the same replacements in (1); furthermore, in pq(t − T0) we replaced R by NTDCs. The CRLB calculated for the maximum number of available timestamps is depicted in figures 9(a) and (b).

9. Discussion and conclusions

In section 3, we first analyzed the single-photoelectron estimation case. We concluded that with a single timestamp, the only possible unbiased estimator does not fully reach the CRLB under certain conditions, namely when the number of registered photoelectrons is low and the jitter of the complete instrumentation chain is small (see figures 1(a)–(c)). In section 4, we tested the MLE in the multiple timestamp estimation case and it did not fully reach the CRLB under similar conditions (see figure 2). Nevertheless, under conditions that are typical for scintillation detectors intended for fast timing applications, i.e. detectors based on fast, bright scintillators and photosensors with a high photodetection efficiency, it appears that the MLE is an efficient estimator of the time of interaction of the gamma quantum.

It was shown in section 5 that BLUE can reach the same performance as the MLE in the multiple photoelectron timestamp case, under no-DCR conditions. In addition, both estimation methods are essentially insensitive to the energy resolution (see figure 3(b)). Furthermore, the largest improvement in time resolution due to the use of multiple timestamps is obtained in systems with a high level of total jitter (see figure 3(a)).

In section 8, it was shown that a favorable performance under high-DCR conditions (see figures 9(a) and (b)) can be achieved by a DCR filter. Thus, DCR robustness is a particular advantage of the MD-SiPM. Since this architecture is more DCR-tolerant, it can be implemented in a standard CMOS process instead of requiring a more expensive and less widely available image sensor CMOS process (Mandai and Charbon 2013).

In section 8, it was furthermore shown that the performance of BLUE and MLE are essentially equal in the presence of DCR and skipping effect (with a DCR filter applied). Moreover, both estimators appear to be efficient (i.e. they closely approach the CRLB), when the amount of detected photoelectrons is high (i.e. under condition II).

However, BLUE is much simpler than the MLE in terms of computing power and hardware implementation. BLUE requires just NTDCs multiply-accumulate operations (MACs), whereas the MLE requires several MACs per TDC, depending on the numerical resolution of fk(tq − T0), in addition to a maximum-value search. Hence, BLUE is considered the best of the different estimators tested for estimating the time of interaction of gamma quanta in MD-SiPM based scintillation detectors.

In conclusion, a comprehensive theoretical analysis of multiple-photoelectron time estimation in MD-SiPM based scintillation detectors was performed, supported with realistic Monte Carlo simulations. The statistical models that are described within this work can be applied to any time estimation problem based on MD-SiPMs by substituting the appropriate function for fs(t).

Acknowledgments

The research has received funding from the European Union Seventh Framework Program under Grant Agreement n°289355 (PicoSEC-MCNet). Dennis R Schaart is supported by the European Union Seventh Framework Program under Grant Agreement n°241711 (SUBLIMA).
