Abstract

Two datasets of points with known spatial positions and associated absorbed dose values are often compared for quality assurance purposes in External Beam Radiation Therapy (EBRT). Problems usually arise regarding the pass/fail criterion used to accept both datasets as close enough for practical purposes. Instances of this kind of comparison are fluence or dose checks for intensity modulated radiation therapy, modelling of a treatment unit in a treatment planning system, and so forth. The gamma index is a figure of merit that can be obtained from both datasets; it is widely used, as are other indices, as part of a comparison procedure. However, it is recognized that false rejections may take place (there are acceptable cases where a certain number of points do not pass the test), due in part to computational and experimental uncertainty. This work applies mathematical methods to analyse such comparisons so that uncertainty can be taken into account. As a result, false rejections due to uncertainty do not take place and there is no need to expand tolerances to account for uncertainty. The methods provided are based on the rules of uncertainty propagation and yield rigorous pass/fail criteria based on experimental information.

1. Introduction

Modern radiation therapy aims at a high level of accuracy and, as a consequence, becomes more demanding regarding quality assurance checks (even patient-specific checks) and measurement and computation performance. The use of comparisons of two datasets consisting of a sample of measured or computed absorbed dose points covering the treatment field or a patient tomographic slice is frequently performed on a routine basis. Therefore, the acceptance method should be both straightforward and reliable.

Traditionally, treatment goals in radiation therapy were achieved by choosing several directions around the patient so that the dose from all the beams was conformed to the target volume, sparing healthy tissues. Nowadays, it is possible to improve the homogeneity of absorbed dose in the planning target volume (PTV) and reduce the absorbed dose to healthy organs using several fields of non-uniform intensity (IMRT) designed to combine into an optimised dose distribution inside the patient [1, 2].

The process is more complex than the one involved in conventional radiation therapy. The way the different beam orientations are combined could lead to practical problems, due to several issues: small and elongated beams are used, there are high dose gradients inside the fields, some features of the linear accelerator could have a noticeable effect, and treatment planning computation may not be accurate enough. These issues can make a particular plan unsuitable for treatment, and this is the reason why a comprehensive quality control of the technique and checks for each plan are often recommended [3, 4].

Two main types of patient specific checks have been recommended in the literature [3, 5].

(1) The first one consists of recomputing the plan, substituting the representation of a suitable phantom for the patient representation, and obtaining the 2D dose distribution on several planes inside the phantom. Radiographic or radiochromic film is inserted in the phantom in the same positions where the 2D doses were computed, and it is irradiated with the whole treatment. These films are scanned with an appropriate device and compared with the computed dose planes; this is a way to check the combined dose distribution. Some 3D measurement devices are also available [6–9].

(2) Irradiations are carried out for each beam, with the film or 2D detector placed perpendicular to the beam direction. The dose distributions have been previously computed with the treatment planning system, and a corresponding set of 2D computed dose distributions (or fluence maps) has to be compared with the measured ones. A check of each fluence map is obtained with this technique.

In either case, a comparison of two datasets with a great number of points has to be performed. Similar situations arise when commissioning a treatment planning system, since computation results have to be checked against measurement results. This leads to the following problem.

Given two arrays of values (absorbed dose), possibly with different spacing, find a convenient criterion to decide whether or not they can be considered coincident for practical purposes. The dose distributions to be expected in radiation therapy can have sharp gradients at the field boundary, and possibly also inside the field, where the dose is not homogeneous. Wherever a sharp gradient is present, the result could be affected by geometrical errors (e.g., an error in the position of a collimator leaf, or an error in the computation of the dose on the edge of the collimator leaf), and the check method should be able to cope with this. A small geometrical error is considered acceptable, but a direct comparison of the measured and reference dose in this area could result in a value out of dose tolerance. This is the reason why acceptance criteria based on distance to agreement (DTA) were developed [3–5]. DTA is the distance from the measured point to the nearest point in the reference dose distribution with the same dose. DTA tolerances are usually set for penumbra regions (field edges), and tolerances based on absorbed dose differences are used for homogeneous regions inside or outside the field. Unfortunately, there is no reasonable criterion as to whether dose difference or DTA should be used for points inside a modulated field, because there can be gradients of very different magnitude.

A solution was proposed by Low et al. [10, 11], involving the computation of a single figure of merit for the quality of the match. It has become the method of choice for acceptance of IMRT plans. It involves an artificial distance in a 3D dose space. If the dose difference tolerance is $\Delta D$ and the spatial tolerance is $\Delta R$, then the gamma index is
\[
\gamma = \min \sqrt{\frac{D^2}{\Delta D^2} + \frac{R^2}{\Delta R^2}}, \tag{1}
\]
where $D$ and $R$ are the dose difference and the distance to the point in the reference dataset where the square root would reach a minimum. This minimum could be an interpolated point.
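As an illustration of how this index is typically evaluated, the following sketch in R (the language used later in this work) computes the classic gamma for one reference point by a brute-force search over the test dataset. It is not the authors' implementation; the function and argument names (classic_gamma, ref_point, test) are only illustrative.

```r
# Illustrative brute-force evaluation of the classic gamma index of equation (1)
# for a single reference point; ref_point and test hold columns x, y (mm) and d (dose).
classic_gamma <- function(ref_point, test, delta_D, delta_R) {
  dose_term <- (test$d - ref_point$d)^2 / delta_D^2
  dist_term <- ((test$x - ref_point$x)^2 + (test$y - ref_point$y)^2) / delta_R^2
  min(sqrt(dose_term + dist_term))  # minimum over the supplied test points
}

# gamma for every reference point and the usual pass rate (criterion gamma <= 1):
# gammas    <- sapply(seq_len(nrow(ref)), function(i) classic_gamma(ref[i, ], test, delta_D, delta_R))
# pass_rate <- mean(gammas <= 1)
```

In practice the search is usually restricted to test points within a few multiples of ΔR of the reference point, and interpolation is used to refine the minimum; the sketch omits both refinements.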

A point passes the check if this index is less than or equal to 1. ΔD and ΔR are no longer strict tolerances: the dose difference can exceed ΔD for a point passing the gamma test, and the DTA can exceed ΔR for a point with gamma less than 1; however, if the dose difference is greater than ΔD and the DTA is greater than ΔR at the same point, the gamma test fails [10]. At the same time, the value of γ at a point where the test is not passed is a measure of the severity of the failure.

The gamma index can be easily generalized to a 3D comparison, if DTA is computed with a 3D search [12]. A gamma filter method developed by Depuydt et al. [13] helps improve computation efficiency at the expense of not obtaining gamma values, but just checking whether or not every point is within tolerance.

It is widely acknowledged that only on few occasions do measured and computed datasets pass this gamma test for every measurement point, and it is customary to allow for some percentage of points failing the test [3–5, 14]. In practice, the pass rate (the percentage of points in the reference dataset that pass the test) is checked, and the tolerance for this rate is set according to previous experience. Therefore, the occurrence of failing points does not by itself mean that the plan has to be rejected; this is the reason to accept a pass rate that could be less than 100%. However, there are no grounds other than empirical evidence to accept this tolerance in pass rate, unless experimental uncertainty for the check procedure is somehow taken into account and propagated to the test indices. Basran and Woo [14] describe their method to set the acceptance pass rate: they review their history of previous checks, their pass rates, and whether they were accepted or not, in order to find the pass rate value corresponding to a 95% confidence level. This is a purely empirical method that ensures self-consistency, but it does not address the causes of the failing points.

Palta et al. [15] proposed a method to set tolerances in the process of commissioning, according to the observed variability. This recommendation was included in the recent report by AAPM Task Group 119 [3], to account indirectly for uncertainty in the tolerance levels. In this case, commissioning tests provide experience about variability of results that can be attributed to the experimental procedure. Analysis of the results can help set expanded tolerance levels of acceptance pass rates.

In other kinds of comparisons (like comparisons between computed datasets), statistical information is not available and a decision about the percentage of failing points that can be tolerated has to be based on other considerations.

In this work, a novel method is presented that modifies the gamma index check, introducing uncertainty features into its computation. This method has some interesting properties: first, it is a direct propagation of experimental uncertainty, allowing for uncertainty analysis. Second, tolerance levels are not modified because of uncertainty; using this method, tolerance levels can be set to values close to the accuracy actually sought for. And third, experimental devices, computations, and their uncertainties are characterized by simple and physically meaningful parameters. Therefore, the study of the check procedure is reduced to the a priori study of the devices and algorithms involved.

2. Methods and Materials

2.1. Theoretical Background

Each of the datasets is represented as several arrays of random variables. Their mean values are the values in the dataset, labelled as lower-case letters with subscripts for position and superscripts for the dataset: $x^r_{ij}$, $d^t_{kl}$, and so on. There is one array for each of the spatial coordinates and another for the dose. Test and reference datasets are allowed to have different values of uncertainty and array spacing, but spatial uncertainty within each of the two datasets is assumed to be isotropic. Therefore, spatial uncertainty is described by one parameter for each dataset: $\sigma^t_s$ for the test dataset and $\sigma^r_s$ for the reference dataset. Throughout this work, the symbol $\sigma$ stands for standard uncertainty (one standard deviation). Similarly, $\sigma^t_d$ and $\sigma^r_d$ are the dose uncertainties. $\Delta R$ and $\Delta D$ are the check tolerances for the comparison between both datasets (not related to dataset uncertainty). In the following paragraphs, the computation algorithm for the comparison with uncertainty evaluation is presented; the derivation of the algorithm can be found in the appendix.

2.1.1. Probability Check for Gamma Index (2D Datasets)

A pass/fail test has to be performed for each possible pair of points, one from each dataset: point 𝑖𝑗 from the reference dataset and 𝑘𝑙 from the test dataset.

Step 1. Compute the following parameters:
\[
c_{1\,ijkl} = \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} + 2\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} + \frac{\left(d^t_{kl}-d^r_{ij}\right)^2}{\Delta D^2} + \frac{\left(x^t_{kl}-x^r_{ij}\right)^2+\left(y^t_{kl}-y^r_{ij}\right)^2}{\Delta R^2}
= \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} + 2\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} + \gamma^2_{ijkl},
\]
\[
c_{2\,ijkl} = \left(\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\right)^2 + 2\left(\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\right)^2 + 2\,\frac{\left((\sigma^t_d)^2+(\sigma^r_d)^2\right)\left(d^t_{kl}-d^r_{ij}\right)^2}{\Delta D^4} + 2\,\frac{\left((\sigma^t_s)^2+(\sigma^r_s)^2\right)\left[\left(x^t_{kl}-x^r_{ij}\right)^2+\left(y^t_{kl}-y^r_{ij}\right)^2\right]}{\Delta R^4},
\]
\[
c_{3\,ijkl} = \left(\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\right)^3 + 2\left(\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\right)^3 + 3\,\frac{\left((\sigma^t_d)^2+(\sigma^r_d)^2\right)^2\left(d^t_{kl}-d^r_{ij}\right)^2}{\Delta D^6} + 3\,\frac{\left((\sigma^t_s)^2+(\sigma^r_s)^2\right)^2\left[\left(x^t_{kl}-x^r_{ij}\right)^2+\left(y^t_{kl}-y^r_{ij}\right)^2\right]}{\Delta R^6},
\]
\[
h_{ijkl} = \frac{c_{2\,ijkl}^3}{c_{3\,ijkl}^2},\qquad
y_{ijkl} = \left(g^2 - \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} - 2\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} - \gamma^2_{ijkl}\right)\frac{c_{2\,ijkl}}{c_{3\,ijkl}} + \frac{c_{2\,ijkl}^3}{c_{3\,ijkl}^2}. \tag{2}
\]
Here $g$ is the gamma value to be tested; for the acceptance test below, $g=1$.
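A minimal R sketch of Step 1 for one pair of points may clarify the bookkeeping. It is not the published function [16]; the names (gamma_imhof_params, sd_d2, sd_s2) are illustrative. The inputs are the dose and coordinate differences for the pair and the combined variances $(\sigma^t_d)^2+(\sigma^r_d)^2$ and $(\sigma^t_s)^2+(\sigma^r_s)^2$.

```r
# Step 1 parameters (2D case) for one reference/test point pair: illustrative sketch.
# dd, dx, dy : differences d^t - d^r, x^t - x^r, y^t - y^r
# sd_d2      : combined dose variance;  sd_s2 : combined spatial variance
gamma_imhof_params <- function(dd, dx, dy, sd_d2, sd_s2, delta_D, delta_R, g = 1) {
  lam_d <- sd_d2 / delta_D^2              # weight of the dose term (1 degree of freedom)
  lam_s <- sd_s2 / delta_R^2              # weight of the spatial term (2 degrees of freedom)
  ncp_d <- dd^2 / sd_d2                   # dose noncentrality parameter
  ncp_s <- (dx^2 + dy^2) / sd_s2          # spatial noncentrality parameter
  c1 <- lam_d * (1 + ncp_d) + lam_s * (2 + ncp_s)
  c2 <- lam_d^2 * (1 + 2 * ncp_d) + lam_s^2 * (2 + 2 * ncp_s)
  c3 <- lam_d^3 * (1 + 3 * ncp_d) + lam_s^3 * (2 + 3 * ncp_s)
  h  <- c2^3 / c3^2                       # effective degrees of freedom
  y  <- (g^2 - c1) * sqrt(h / c2) + h     # threshold for the central chi-square
  list(h = h, y = y)
}
```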

Step 2. Compute $P_{ijkl}=P[\Gamma_{ijkl}>1]=P[\chi^2_{h_{ijkl}}>y_{ijkl}]$ from the distribution function (or a table) of a central chi-square variable with $h_{ijkl}$ degrees of freedom.

For each point in the reference dataset, the value $P_{ij}=P\left[\max_{kl}\left(\Gamma_{ijkl}\right)>1\right]=1-\prod_{kl}\left(1-P\left[\Gamma_{ijkl}>1\right]\right)$ is computed, and the test is passed if it is less than a preset significance figure α.

A global modified pass rate can be reported with the results of this test for every point in the reference dataset. A value of 𝛼=0.05 is used in this study.
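For completeness, a sketch of Step 2 and of the per-point decision follows, reusing gamma_imhof_params from the sketch above; again the names are illustrative and this is not the published code. pchisq evaluates the central chi-square distribution with the (generally noninteger) $h_{ijkl}$ degrees of freedom, and the product formula assumes independence between pairs, as in the text. Whether test holds the whole test dataset or only a neighbourhood of the reference point is an implementation choice not fixed by the text and left open here.

```r
# Step 2: probability that Gamma_ijkl exceeds 1 for one pair of points.
p_gamma_gt_1 <- function(dd, dx, dy, sd_d2, sd_s2, delta_D, delta_R) {
  pr <- gamma_imhof_params(dd, dx, dy, sd_d2, sd_s2, delta_D, delta_R, g = 1)
  pchisq(pr$y, df = pr$h, lower.tail = FALSE)
}

# Per-point combination and decision at significance level alpha (0.05 in this work).
point_passes <- function(ref_point, test, sd_d2, sd_s2, delta_D, delta_R, alpha = 0.05) {
  p_kl <- mapply(p_gamma_gt_1,
                 dd = test$d - ref_point$d,
                 dx = test$x - ref_point$x,
                 dy = test$y - ref_point$y,
                 MoreArgs = list(sd_d2 = sd_d2, sd_s2 = sd_s2,
                                 delta_D = delta_D, delta_R = delta_R))
  p_ij <- 1 - prod(1 - p_kl)              # P[max_kl Gamma_ijkl > 1]
  p_ij < alpha                            # TRUE if the reference point passes
}
```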

2.1.2. Probability Distribution of Gamma (3D Datasets)

In a similar fashion, the test can be carried out for 3D datasets.

Step 1. Compute the following parameters:
\[
c_{1\,ijklmn} = \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} + 3\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} + \frac{\left(d^t_{lmn}-d^r_{ijk}\right)^2}{\Delta D^2} + \frac{\left(x^t_{lmn}-x^r_{ijk}\right)^2+\left(y^t_{lmn}-y^r_{ijk}\right)^2+\left(z^t_{lmn}-z^r_{ijk}\right)^2}{\Delta R^2}
= \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} + 3\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} + \gamma^2_{ijklmn},
\]
\[
c_{2\,ijklmn} = \left(\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\right)^2 + 3\left(\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\right)^2 + 2\,\frac{\left((\sigma^t_d)^2+(\sigma^r_d)^2\right)\left(d^t_{lmn}-d^r_{ijk}\right)^2}{\Delta D^4} + 2\,\frac{\left((\sigma^t_s)^2+(\sigma^r_s)^2\right)\left[\left(x^t_{lmn}-x^r_{ijk}\right)^2+\left(y^t_{lmn}-y^r_{ijk}\right)^2+\left(z^t_{lmn}-z^r_{ijk}\right)^2\right]}{\Delta R^4},
\]
\[
c_{3\,ijklmn} = \left(\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\right)^3 + 3\left(\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\right)^3 + 3\,\frac{\left((\sigma^t_d)^2+(\sigma^r_d)^2\right)^2\left(d^t_{lmn}-d^r_{ijk}\right)^2}{\Delta D^6} + 3\,\frac{\left((\sigma^t_s)^2+(\sigma^r_s)^2\right)^2\left[\left(x^t_{lmn}-x^r_{ijk}\right)^2+\left(y^t_{lmn}-y^r_{ijk}\right)^2+\left(z^t_{lmn}-z^r_{ijk}\right)^2\right]}{\Delta R^6},
\]
\[
h_{ijklmn} = \frac{c_{2\,ijklmn}^3}{c_{3\,ijklmn}^2},\qquad
y_{ijklmn} = \left(g^2 - \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2} - 3\,\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2} - \gamma^2_{ijklmn}\right)\frac{c_{2\,ijklmn}}{c_{3\,ijklmn}} + \frac{c_{2\,ijklmn}^3}{c_{3\,ijklmn}^2}. \tag{3}
\]

Step 2. Compute $P_{ijklmn}=P[\Gamma_{ijklmn}>1]=P[\chi^2_{h_{ijklmn}}>y_{ijklmn}]$ from the distribution function (or a table) of a central chi-square variable with $h_{ijklmn}$ degrees of freedom.

For each point in the reference dataset, the value $P_{ijk}=P\left[\max_{lmn}\left(\Gamma_{ijklmn}\right)>1\right]=1-\prod_{lmn}\left(1-P\left[\Gamma_{ijklmn}>1\right]\right)$ is computed, and the test is passed if it is less than a preset significance figure α.

As in the 2D case, a global modified pass rate can be reported with the results of this test for every point in the reference dataset. A value of 𝛼=0.05 is used in this study.
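The 2D sketch given earlier carries over with minor changes: the only differences are the extra z component and the 3 degrees of freedom of the spatial chi-square term. A hedged sketch, with illustrative names and not the published code:

```r
# 3D variant of the Step 1/Step 2 sketch: probability that Gamma_ijklmn exceeds 1.
p_gamma_gt_1_3d <- function(dd, dx, dy, dz, sd_d2, sd_s2, delta_D, delta_R, g = 1) {
  lam_d <- sd_d2 / delta_D^2
  lam_s <- sd_s2 / delta_R^2
  ncp_d <- dd^2 / sd_d2
  ncp_s <- (dx^2 + dy^2 + dz^2) / sd_s2   # three spatial components
  c1 <- lam_d * (1 + ncp_d) + lam_s * (3 + ncp_s)
  c2 <- lam_d^2 * (1 + 2 * ncp_d) + lam_s^2 * (3 + 2 * ncp_s)
  c3 <- lam_d^3 * (1 + 3 * ncp_d) + lam_s^3 * (3 + 3 * ncp_s)
  h  <- c2^3 / c3^2
  y  <- (g^2 - c1) * sqrt(h / c2) + h
  pchisq(y, df = h, lower.tail = FALSE)
}
```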

2.2. Application

A probabilistic method to check test datasets for coincidence with a reference dataset, taking uncertainty into account, was tested with an example. It has to be remarked that, for the new test to be passed, every point has to pass the test; that is, the probability criterion has to be satisfied for every point in the reference dataset, computed over all pairs drawn from the reference and test datasets. Common practice when using the classic gamma test is to allow a limited percentage of points to fail the test. For the application of the present method, the probability comparison is only passed if all points pass the test.

A practical example with 5 segments was set up. Figure 1(a) shows the whole reference composite field on film and Figures 1(b), 1(c), 1(d), 1(e), and 1(f) the segments used to obtain the composite image. The composite irradiation was modified in order to introduce controlled defects.

Case 1. A 1 mm shift along 𝑋 in the first segment, 1.5% more dose in the second, a 1 mm shift along 𝑌 in the third, and 0.5% less dose in the fourth.

Case 2. Same modifications, but the increment of dose in the second segment is 3% and the third is shifted 4 mm.

Case 3. The first segment has 2% less dose than the reference and is shifted 4 mm along 𝑋; the second segment has been delivered with 5% more dose; the third segment shift along 𝑌 is 4 mm, and the fourth has 2% less dose.

Case 4. All segments but the smallest one were shifted 4 mm along the 𝑋 axis.

Case 5. All segments but the smallest one were shifted 4 mm along the 𝑌 axis.

Therefore, each of the cases corresponds to a set of shifts and changes of intensities for every segment as exemplified in Figure 2. These controlled defects are simple enough as to make clear whether or not a test on coincidence with the reference unmodified image should pass or fail. However, the algorithm was applied with the same rigour as it would have been done for a more complex fluence pattern.

The modified planar distributions (test datasets) were compared with the original one (reference dataset) with the following uncertainty parameters: 0.2% dose and 0.5 mm, 0.5% dose and 0.5 mm, and 0.2% dose and 1 mm. Dose uncertainty is relative, and this fact has been taken into account in the computation of the indices. Tests were performed for tolerances 2% dose and 2 mm and 3% dose and 3 mm.

A function in 𝑅 statistical software [16] was used to perform all the computations. Graphs were obtained using the “rimage” package [17].
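The published function [16] is not reproduced here; the following driver is only a hypothetical sketch of how the per-point test described in Section 2.1.1 might be applied to two planar distributions. Treating the dose tolerance and the dose uncertainties as relative to the local reference dose is one possible convention, assumed here for illustration; other choices (e.g., relative to the maximum dose) are equally compatible with the method.

```r
# Hypothetical driver: modified pass rate for a reference/test pair of 2D distributions,
# reusing point_passes() from the earlier sketch. ref and test are data frames with
# columns x, y (mm) and d (dose); dose uncertainties and the dose tolerance are relative.
compare_with_uncertainty <- function(ref, test, rel_sd_t, rel_sd_r,
                                     sigma_s_t, sigma_s_r,
                                     rel_delta_D, delta_R, alpha = 0.05) {
  sd_s2 <- sigma_s_t^2 + sigma_s_r^2                         # combined spatial variance
  pass <- vapply(seq_len(nrow(ref)), function(i) {
    d_ref   <- ref$d[i]
    sd_d2   <- (rel_sd_t * d_ref)^2 + (rel_sd_r * d_ref)^2   # relative -> absolute dose variance
    delta_D <- rel_delta_D * d_ref                           # relative -> absolute dose tolerance
    point_passes(ref[i, ], test, sd_d2, sd_s2, delta_D, delta_R, alpha)
  }, logical(1))
  mean(pass)   # modified pass rate; acceptance requires 100%
}

# Illustrative call: 0.2% / 0.5 mm uncertainties in each dataset, 2% / 2 mm tolerances.
# compare_with_uncertainty(ref, test, 0.002, 0.002, 0.5, 0.5, 0.02, 2)
```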

3. Results

Results for the gamma test are shown in Table 1. Pass rates for a classical test are presented along with the modified test. Shaded cells contain acceptable values: 100% pass rate for the modified test and more than 98% pass rate for classical tests.

Figure 3(a) shows a graph with the points that fail the classic test for Case 2 and tolerances of 2 mm and 2%; Figure 3(b) shows the pass probability image for the new test with 0.2% dose uncertainty and 0.5 mm position uncertainty, with the same tolerance values as in the previous case.

Figure 4 shows a sequence of pass probability images for Case 4, tolerance 3 mm and 3%, and different uncertainty values: 0.2 mm/0.2%, 0.5 mm/0.2%, 0.5 mm/0.5%, and 1.0 mm/0.2%. These uncertainty values have been chosen to illustrate the method.

4. Discussion and Conclusions

Case 1 is a priori an acceptable result, Case 2 is on the limit of acceptability, and the other ones are a priori unacceptable. It is clear that the classic test failed to discard the wrong irradiations even when allowing for a percentage of failing points. The usual gamma index tests would have approved every case if a 97% pass rate had been allowed and 3%/3 mm tolerances had been used. For tolerances of 2%/2 mm, only Case 4 would have been rejected. Case 4 is an extremely undesirable plan, with an unacceptable global shift, but, interestingly enough, Case 5, with the same shift along the other axis, would have been accepted with a passing rate greater than 98%. Comparison of pass rates, as well as of the images in Figure 3, shows that the novel test developed in this work would have rejected cases where the standard gamma index test allows a great number of points in gradient areas to fail without rejecting the comparison. Therefore, the new test is less permissive than the classic one.

On the other hand, Figure 4 and the corresponding pass rates in Table 1 show the potentially misleading effect of using measurement or computing methods not suited to the task: as uncertainty grows larger, it becomes possible to accept an inadequate case (Case 4) if tolerances are also too large. It can be concluded that, for tolerances of 3% and 3 mm, uncertainties of 0.2% and 1 mm are enough to make the test insensitive. Tolerances of 3 mm and 3% are currently used, but these results show that they are a compromise to account indirectly for uncertainty. The test developed in this paper would work with accuracy and sensitivity with 2%/2 mm tolerances, which are closer to actual physical requirements.

The method presented in this work is potentially applicable to a broad set of comparisons: computed versus measured dose distributions for planning system commissioning, IMRT commissioning and patient checks, commissioning of measurement devices, and so forth. For any real experimental case, care should be taken to characterize its uncertainty. Furthermore, this method could be used to evaluate whether experimental uncertainties could deteriorate the sensitivity of a test. Accuracy requirements in IMRT patient plan checks are very high, and it is useful to know whether the uncertainty of the checking device could induce the checker to accept plans too easily.

Some alternative methods have been described in the literature to refine the standard gamma index test, but the result is essentially a consensus about tolerances and pass rate criteria. It is interesting to look at some conclusions in the ESTRO Booklet no. 9 [5], in the sense that it is hard to decide whether a test failure is related to the computer system, data transfer, linear accelerator, measurement, or data analysis. A document with a similar scope is the one published by the American Association of Physicists in Medicine [3]. Both et al. [18] performed a study of check results (dose difference and distance to agreement) in order to set reasonable acceptance values for the percentage of passing points (95% for prostate, 90% for other sites) and point dose error per field (3% for prostate and 5% elsewhere). Stock et al. [19] present a strategy of primary and secondary checks: they accept checks in which up to 5% of the points fail the γ test and prescribe further evaluation (the γ angle, e.g.) if the failure rate is greater than 5% but less than 10%. Moran [20] designed a method to allow for small-range failures in the test.

The survey performed by Nelms and Simon [21] presents current practice (September 2007) in the USA. It is far from clear that the consensus about how to accept results from a comparison check is actually followed. From these sources, it seems there is no rigorous, accepted method in the literature for taking measurement and computation uncertainty into account a priori.

This work shows a practical application of several results about the probability distribution of quadratic forms of normal random variables. Since no a priori relationship between dose and position uncertainty can be assumed, the expression for the gamma index cannot be reduced to a simple noncentral chi-square random variable. This is the reason why some more refined mathematics has been used. A Monte Carlo method [22] would require more than a million iterations for each pair of points, while the three-moment approximation used in this work is fast and accurate enough. Computation does not involve more iterations than a classic gamma check.

A classical test (with ΔD = 3%, ΔR = 3 mm, and a pass rate tolerance of 97%) accepts every case in this work, despite the fact that some of them were designed with controlled defects that should not be acceptable at all. Using tighter tolerances (ΔD = 2%, ΔR = 2 mm), only one case is rejected (Case 4) and, oddly enough, Case 5, with the same shift as Case 4 but along the central dose edge, passes this classical test. Figure 3(b) shows that this allowance in pass rate means that points in high-gradient regions are allowed to fail the test. Unless further investigation is carried out, it is not clear that those failing points are due to limitations in the measurement procedure.

When the new method is used, it becomes feasible to determine whether or not points failing a classic test are a consequence of measurement limitations. If the new test does not yield a 100% pass rate, it is possible to assert that the failing points cannot have been caused solely by the measurement procedure and that there is also a problem with the irradiation. Therefore, no failing points are allowed.

As pointed out previously, this novel method relates experimental features (uncertainty) to test results. A well-defined answer in terms of probability is obtained: whether or not the probability of failing a gamma test at the point ij is larger than α. As long as the uncertainty properties of the experimental or computational procedure have been investigated, the user is provided with a method to obtain a definite answer. On the other hand, feasibility studies become possible, and it can be evaluated whether or not the uncertainty features of a comparison procedure could affect the sensitivity of the test results.

Appendices

A. Probability Check for Gamma Index (2D Datasets)

Gamma is often described as a distance in an N+1 dimensional space. If the additional dimension were a spatial one, with the same tolerance and uncertainty value as every spatial direction, the problem of propagating uncertainty to gamma would be much easier. But the additional dimension, absorbed dose, has independent tolerance and uncertainty values, and the problem, in its simplest formulation, is the following one: find the probability that a weighted sum of the squares of two normal random variables, with zero mean and different variances and weights, is less than $g^2$.

Two sets of 2D dose distributions are defined: the reference one and the test one. Both are regular arrays, but their spacing could be different. The reference points are labelled with subscripts $(ij)$ and the test points with $(kl)$. For each of these positions, there are three quantities: dose $D^r_{ij}$, $x$ coordinate $X^r_{ij}$, and $y$ coordinate $Y^r_{ij}$. The notation for the test 2D set uses a $t$ superscript instead of $r$. The $X$ and $Y$ axes are the same for both point sets. These quantities are considered normally distributed random variables whose means are the measured or computed values. Capital letters refer to random variables and small letters to their means ($d^r_{ij}$, $x^r_{ij}$, $y^r_{ij}$, $d^t_{kl}$, $x^t_{kl}$, $y^t_{kl}$). Absorbed dose in the reference set has associated standard uncertainty $\sigma^r_d$, and $\sigma^t_d$ in the test set. Spatial uncertainty is isotropic in both datasets (same standard deviation for $X$ and $Y$) and is referred to with the symbols $\sigma^r_s$ and $\sigma^t_s$. Thus, we are assuming that
\[
D_{ijkl} = D^t_{kl} - D^r_{ij} \sim N\!\left(d^t_{kl}-d^r_{ij},\,(\sigma^t_d)^2+(\sigma^r_d)^2\right),\quad
X_{ijkl} = X^t_{kl} - X^r_{ij} \sim N\!\left(x^t_{kl}-x^r_{ij},\,(\sigma^t_s)^2+(\sigma^r_s)^2\right),\quad
Y_{ijkl} = Y^t_{kl} - Y^r_{ij} \sim N\!\left(y^t_{kl}-y^r_{ij},\,(\sigma^t_s)^2+(\sigma^r_s)^2\right). \tag{A.1}
\]
The squared gamma random variable $\Gamma^2_{ijkl} = D^2_{ijkl}/\Delta D^2 + (X^2_{ijkl}+Y^2_{ijkl})/\Delta R^2$ is a weighted sum of squares of normal random variables with different means and standard deviations. If this expression is written in the following way,
\[
\Gamma^2_{ijkl} = \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\,\frac{D^2_{ijkl}}{(\sigma^t_d)^2+(\sigma^r_d)^2}
+ \frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\left(\frac{X^2_{ijkl}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Y^2_{ijkl}}{(\sigma^t_s)^2+(\sigma^r_s)^2}\right), \tag{A.2}
\]
it becomes a weighted sum of noncentral chi-square random variables. The following random variables,
\[
\widetilde{D}_{ijkl} = \frac{D_{ijkl}}{\sqrt{(\sigma^t_d)^2+(\sigma^r_d)^2}} \sim N\!\left(\frac{d^t_{kl}-d^r_{ij}}{\sqrt{(\sigma^t_d)^2+(\sigma^r_d)^2}},\,1\right),\quad
\widetilde{X}_{ijkl} = \frac{X_{ijkl}}{\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2}} \sim N\!\left(\frac{x^t_{kl}-x^r_{ij}}{\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2}},\,1\right),\quad
\widetilde{Y}_{ijkl} = \frac{Y_{ijkl}}{\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2}} \sim N\!\left(\frac{y^t_{kl}-y^r_{ij}}{\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2}},\,1\right), \tag{A.3}
\]
have standard deviation 1. It is now possible to use some of the properties of the noncentral chi-square distribution. Let $U_n$ be a finite set of independent normally distributed random variables with means $\mu_n$ and standard deviation 1. Then $W=\sum_n U_n^2$ has a noncentral chi-square distribution $\chi^2(n,\lambda)$ with $n$ degrees of freedom and noncentrality parameter $\lambda=\sum_n \mu_n^2$. Thus,
\[
\widetilde{D}^2_{ijkl} = \frac{D^2_{ijkl}}{(\sigma^t_d)^2+(\sigma^r_d)^2} \sim \chi^2\!\left(1,\ \frac{\left(d^t_{kl}-d^r_{ij}\right)^2}{(\sigma^t_d)^2+(\sigma^r_d)^2}\right),\qquad
\widetilde{R}^2_{ijkl} = \widetilde{X}^2_{ijkl}+\widetilde{Y}^2_{ijkl} = \frac{X^2_{ijkl}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Y^2_{ijkl}}{(\sigma^t_s)^2+(\sigma^r_s)^2} \sim \chi^2\!\left(2,\ \frac{\left(x^t_{kl}-x^r_{ij}\right)^2+\left(y^t_{kl}-y^r_{ij}\right)^2}{(\sigma^t_s)^2+(\sigma^r_s)^2}\right). \tag{A.4}
\]
The squared gamma index is
\[
\Gamma^2_{ijkl} = \frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\,\widetilde{D}^2_{ijkl} + \frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\,\widetilde{R}^2_{ijkl}. \tag{A.5}
\]
Given a quadratic form of normally distributed variables, there always exists a transformation that reduces it to a weighted sum of noncentral chi-square variables, corresponding to the orthogonal transformation that reduces the form to its canonical form. As a matter of fact, the previous derivation is a very simple particular case of this general result [23]. In the gamma test problem, it is necessary to evaluate the probability of the event $\Gamma^2_{ijkl}>1$.

In the general case of a quadratic form, the noncentrality parameters are linear combinations of the means. Thus, a quadratic form of central normal variables results in a linear combination of central chi-squared variables. The normal variables involved in Γ are noncentral ones; their means are the differences between doses or between spatial coordinates in the test and the reference datasets.

Different expansions of the distribution function of a weighted sum of noncentral chi-square variables can be found in the literature, and they could be used for this problem. Shah and Khatri [24] found a power series expansion, Ruben [25] developed series of distribution functions of central and noncentral chi-square variables with recursively defined coefficients, and Shah and other authors [26] proposed series involving Laguerre polynomials. There is also a simple approximation, based on a study of relationships between chi-square and Poisson variables first proposed by Patnaik [27] and improved by Pearson [28], which gives accurate results especially in the tails [29]. Imhoff [23] rewrote this approximation for the weighted sum of noncentral chi-square variables. It uses probability values for a single central chi-square variable. This approach was used in the present work. The accuracy of this method makes a Monte Carlo approach based on ISO recommendations [22] unnecessary, since that would require a minimum of $10^7$ iterations for each pair of points.

According to Imhoff, if $Q=\sum_{r=1}^{m}\lambda_r\,\chi^2(n_r;\delta_r^2)$ ($\delta_r^2$ being the noncentrality parameters in his notation), then
\[
P[Q>x]\approx P\left[\chi^2_h>y\right], \tag{A.6}
\]
where
\[
h=\frac{c_2^3}{c_3^2},\qquad y=(x-c_1)\left(\frac{h}{c_2}\right)^{1/2}+h,\qquad c_j=\sum_{r=1}^{m}\lambda_r^j\left(n_r+j\,\delta_r^2\right),\quad j=1,2,3. \tag{A.7}
\]
Applying this approximation to the problem of finding $P[\Gamma^2_{ijkl}>g^2]$, and taking into account that in our case the chi-square parameters are those listed in Table 2, the set of parameters (2) in the main text is obtained. With those parameters, $P_{ijkl}=P[\Gamma^2_{ijkl}>g^2]=P[\chi^2_{h_{ijkl}}>y_{ijkl}]$; that is, the probability of gamma being greater than $g$ for the points $(ij)$ in the reference dataset and $(kl)$ in the test dataset, taking into account spatial and dosimetric uncertainties in both datasets, has been obtained.
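As a sanity check of the approximation (not part of the published analysis), the tail probability for one pair of points can be compared against a direct simulation of $\Gamma^2$; the numbers below are arbitrary illustrative values.

```r
# Monte Carlo check of the three-moment approximation for one pair of points (2D case).
set.seed(1)
n     <- 1e6
dd    <- 0.02; dx <- 1.0; dy <- 0.5        # mean dose (arbitrary units) and position (mm) differences
sd_d2 <- 0.005^2 + 0.005^2                 # combined dose variance
sd_s2 <- 0.5^2 + 0.5^2                     # combined spatial variance (mm^2)
dD    <- 0.03; dR <- 3                     # tolerances

# direct simulation of Gamma^2 from its definition
D <- rnorm(n, dd, sqrt(sd_d2)); X <- rnorm(n, dx, sqrt(sd_s2)); Y <- rnorm(n, dy, sqrt(sd_s2))
mc <- mean(D^2 / dD^2 + (X^2 + Y^2) / dR^2 > 1)

# three-moment approximation of equations (A.6) and (A.7)
lam <- c(sd_d2 / dD^2, sd_s2 / dR^2)           # weights
nr  <- c(1, 2)                                 # degrees of freedom
del <- c(dd^2 / sd_d2, (dx^2 + dy^2) / sd_s2)  # noncentrality parameters
cj  <- sapply(1:3, function(j) sum(lam^j * (nr + j * del)))
h   <- cj[2]^3 / cj[3]^2
y   <- (1 - cj[1]) * sqrt(h / cj[2]) + h
approx <- pchisq(y, df = h, lower.tail = FALSE)

c(monte_carlo = mc, three_moment = approx)
```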

It is possible to modify the original gamma test for the point $ij$ if the probabilities
\[
P_{ij}=P\left[\max_{kl}\left(\Gamma_{ijkl}\right)>1\right]=1-\prod_{kl}\left(1-P\left[\Gamma_{ijkl}>1\right]\right) \tag{A.8}
\]
are defined and the following criterion is set: $ij$ passes the test if $P_{ij}<\alpha$, $\alpha$ being a significance figure set by the user as the maximum probability to be accepted.

B. Probability Distribution of Gamma (3D Datasets)

The squared gamma random variable is now $\Gamma^2_{ijklmn}=D^2_{ijklmn}/\Delta D^2+(X^2_{ijklmn}+Y^2_{ijklmn}+Z^2_{ijklmn})/\Delta R^2$. As long as the spatial standard uncertainties are isotropic, the same rearrangement as in the 2D case can be done,
\[
\Gamma^2_{ijklmn}=\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\,\frac{D^2_{ijklmn}}{(\sigma^t_d)^2+(\sigma^r_d)^2}
+\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\left(\frac{X^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Y^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Z^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}\right), \tag{B.1}
\]
and, defining the new random variable $\widetilde{Z}_{ijklmn}=Z_{ijklmn}/\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2}\sim N\!\left((z^t_{lmn}-z^r_{ijk})/\sqrt{(\sigma^t_s)^2+(\sigma^r_s)^2},\,1\right)$, we obtain
\[
\widetilde{D}^2_{ijklmn}=\frac{D^2_{ijklmn}}{(\sigma^t_d)^2+(\sigma^r_d)^2}\sim\chi^2\!\left(1,\ \frac{\left(d^t_{lmn}-d^r_{ijk}\right)^2}{(\sigma^t_d)^2+(\sigma^r_d)^2}\right),
\]
\[
\widetilde{R}^2_{ijklmn}=\widetilde{X}^2_{ijklmn}+\widetilde{Y}^2_{ijklmn}+\widetilde{Z}^2_{ijklmn}
=\frac{X^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Y^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}+\frac{Z^2_{ijklmn}}{(\sigma^t_s)^2+(\sigma^r_s)^2}
\sim\chi^2\!\left(3,\ \frac{\left(x^t_{lmn}-x^r_{ijk}\right)^2+\left(y^t_{lmn}-y^r_{ijk}\right)^2+\left(z^t_{lmn}-z^r_{ijk}\right)^2}{(\sigma^t_s)^2+(\sigma^r_s)^2}\right),
\]
\[
\Gamma^2_{ijklmn}=\frac{(\sigma^t_d)^2+(\sigma^r_d)^2}{\Delta D^2}\,\widetilde{D}^2_{ijklmn}+\frac{(\sigma^t_s)^2+(\sigma^r_s)^2}{\Delta R^2}\,\widetilde{R}^2_{ijklmn}. \tag{B.2}
\]
Using Imhoff's three-moment approximation, the parameter values are the ones given in equations (3) in the main text, and $P_{ijklmn}=P[\Gamma_{ijklmn}>1]=P[\chi^2_{h_{ijklmn}}>y_{ijklmn}]$. The test is $P_{ijk}<\alpha$, with $P_{ijk}=P[\max_{lmn}(\Gamma_{ijklmn})>1]=1-\prod_{lmn}\left(1-P[\Gamma_{ijklmn}>1]\right)$.

Conflict of Interests

No conflict of interests exists for any of the authors.

Acknowledgments

The authors are indebted to C. Weatherill from Boston University for some very valuable suggestions. They are also indebted to E. Overton and A. Baker for their careful revision of this paper and valuable suggestions.