
CD4 Enumeration Technologies: A Systematic Review of Test Performance for Determining Eligibility for Antiretroviral Therapy

  • Rosanna W. Peeling ,

    rosanna.peeling@lshtm.ac.uk

    Affiliation London School of Hygiene and Tropical Medicine, London, WC1E 7HT, England

  • Kimberly A. Sollis,

    Affiliation London School of Hygiene and Tropical Medicine, London, WC1E 7HT, England

  • Sarah Glover,

    Affiliation London School of Hygiene and Tropical Medicine, London, WC1E 7HT, England

  • Suzanne M. Crowe,

    Affiliation Centre for Biomedical Research, Burnet Institute, Melbourne, 3004, Victoria, Australia

  • Alan L. Landay,

    Affiliation Department of Immunology/Microbiology, Rush University Medical Center, Chicago, IL, 60612, United States of America

  • Ben Cheng,

    Affiliation Pangaea Global AIDS Foundation, Oakland, CA, 94607, United States of America

  • David Barnett,

    Affiliation UK NEQAS for Leucocyte Immunophenotyping, Sheffield, S10 2QD, England

  • Thomas N. Denny,

    Affiliation Duke Human Vaccine Institute and Center for HIV/AIDS, Immunology and Virology Quality Assessment Center, Durham, NC, 27710, United States of America

  • Thomas J. Spira,

    Affiliation Division of AIDS, STD, and TB Laboratory Research, National Center for Infectious Diseases, Centers for Disease Control and Prevention, Atlanta, GA, 30333, United States of America

  • Wendy S. Stevens,

    Affiliation University of the Witwatersrand, Parktown, 2193, South Africa

  • Siobhan Crowley,

    Affiliation Director Health Programs, ELMA Philanthropies, New York, NY, United States of America

  • Shaffiq Essajee,

    Affiliation Clinton Health Access Initiative, Boston, MA, 02127, United States of America

  • Marco Vitoria,

    Affiliation World Health Organization, Geneva, Switzerland

  • Nathan Ford

    Affiliation World Health Organization, Geneva, Switzerland

Abstract

Background

Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration.

Methods and Findings

Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as the reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1–29% for upward misclassification, resulting in under-treatment, and 7–68% for downward misclassification, resulting in over-treatment. Less than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained.

Conclusions

A wide range of bias and percent misclassification around treatment thresholds were reported on the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.

Introduction

The increased availability of antiretroviral therapy (ART) has resulted in major reductions in morbidity and mortality in high HIV burden settings. Through significant global scale-up, access to ART is increasing, with around 10 million people in low- and middle-income settings receiving treatment as of the end of 2013, an estimated 65% of the global target of 15 million people set for 2015 [1].

CD4+ T-lymphocytes, also known as helper T-cells, coordinate the immune response that protects the body against microbial disease, a variety of autoimmune diseases and some forms of cancer. The destruction of CD4+ T-lymphocytes by HIV is the main cause of the progressive weakening of the immune system in HIV infection, and leads ultimately to acquired immune deficiency syndrome (AIDS). The CD4 count is a strong predictor of HIV progression to AIDS and death, and is considered the best laboratory marker for deciding when to initiate ART [2,3]. The use of clinical staging alone to determine the timing of ART initiation is limited by the unreliable correlation between asymptomatic or mild disease and short-term prognosis, and may result in dangerous delays in treatment initiation in those without symptoms but with severe immune suppression [4].

Prior to 2013, the World Health Organization (WHO) recommended ART initiation in all HIV-infected individuals whose CD4 count had dropped to ≤350 cells/μl, irrespective of clinical symptoms. The WHO 2013 guidelines raised the threshold for ART initiation to ≤500 cells/μl, with priority given to those with a CD4 count ≤350 cells/μl, consistent with emerging data indicating a clinical and public health benefit of earlier treatment, and as part of a global effort to get 15 million HIV patients on ART by the end of 2015 [5].

A number of technologies are available for CD4 enumeration, with considerable variation in cost, complexity, and operating requirements (Tables 1–5). The traditional approach to calculating absolute CD4+ T lymphocyte counts is to use the total leukocyte count (or lymphocyte count) obtained from the hematology analyzer and then use the percentage of CD4+ T lymphocytes from the flow cytometric analysis to calculate the absolute values—the so-called “dual platform” (DP) approach. Quite often, however, two separate samples are used in the procedure, one to obtain the total leukocyte count using a hematology analyzer and one to undertake the flow cytometry, each having its own in-built variation. Thus, when the results from each are combined to determine the absolute CD4+ T lymphocyte count, the variation is compounded such that inter-laboratory variation between centers can be as high as 40%. The need to derive accurate and precise absolute CD4+ T lymphocyte counts has therefore led to the development of instruments that can produce both percentage and absolute values, termed the “single platform” (SP) approach.
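
To illustrate why the DP approach compounds variation, the sketch below works through a simplified, hypothetical DP calculation and the approximate propagation of imprecision when independent measurements are multiplied; the function names and numbers are ours for illustration and are not drawn from any of the included studies.

```python
# Illustrative only: dual-platform (DP) absolute CD4 calculation and
# propagation of imprecision. All numbers are hypothetical examples.

import math

def dp_absolute_cd4(wbc_per_ul, lymph_fraction, cd4_fraction_of_lymphs):
    """Absolute CD4 count (cells/ul) from a haematology-analyzer WBC count,
    the lymphocyte fraction, and the flow-cytometric CD4+ fraction of lymphocytes."""
    return wbc_per_ul * lymph_fraction * cd4_fraction_of_lymphs

def combined_cv(*cvs):
    """Approximate %CV of a product of independent measurements:
    the individual CVs add in quadrature."""
    return math.sqrt(sum(cv ** 2 for cv in cvs))

# Hypothetical sample: WBC 6,000 cells/ul, 30% lymphocytes, 25% of lymphocytes CD4+.
print(dp_absolute_cd4(6000, 0.30, 0.25))        # 450.0 cells/ul
# If each measurement contributes ~10% CV, the combined DP result carries a larger CV.
print(round(combined_cv(10.0, 10.0, 10.0), 1))  # ~17.3%
```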

Table 1. Operating characteristics of flow cytometric methods for CD4+ T-cell enumeration, using conventional flow cytometers.

https://doi.org/10.1371/journal.pone.0115019.t001

Table 2. Operating characteristics of dedicated single platform CD4+ T-cell enumeration systems.

https://doi.org/10.1371/journal.pone.0115019.t002

Table 3. Operating characteristics of manual technologies for CD4+ T-cell enumeration.

https://doi.org/10.1371/journal.pone.0115019.t003

Table 4. Operating characteristics of other commercially available technologies for CD4+ T-cell enumeration.

https://doi.org/10.1371/journal.pone.0115019.t004

Table 5. Operating characteristics of Point of Care (POC) technologies for CD4+ T-cell enumeration.

https://doi.org/10.1371/journal.pone.0115019.t005

Two SP approaches are in widespread use today: volumetric and bead-based. The principle of the volumetric approach is that a known volume of sample is passed through the flow cell (and interrogated by the laser beam) in a known amount of time. The alternative approach is to use bead-based technologies, in which a known number of beads is added to a known volume of sample, allowing calculation of the bead-to-cell ratio and the subsequent calculation of the absolute cell count, in this instance of CD4+ T lymphocytes. Important features of any absolute counting system are pipetting accuracy and minimal sample manipulation. The introduction of SP technologies has had a beneficial effect and lowered inter-laboratory variation in CD4 enumeration.
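
The two SP calculations described above reduce to simple ratios. The following is a minimal sketch with hypothetical event counts and volumes; it is not a description of any particular instrument's internal algorithm.

```python
# Illustrative single-platform (SP) absolute-count calculations (hypothetical numbers).

def sp_bead_based(cd4_events, bead_events, beads_added, sample_volume_ul):
    """Bead-based SP: a known number of beads is added to a known sample volume;
    the bead-to-cell ratio gives the absolute count per microlitre."""
    cells_in_sample = cd4_events * (beads_added / bead_events)
    return cells_in_sample / sample_volume_ul

def sp_volumetric(cd4_events, volume_analysed_ul):
    """Volumetric SP: a known volume passes the laser in a known time, so events
    per analysed volume is the absolute count directly."""
    return cd4_events / volume_analysed_ul

# Hypothetical run: 2,500 CD4 events, 5,000 bead events, 50,000 beads added to 100 ul of blood.
print(sp_bead_based(2500, 5000, 50000, 100))  # 250.0 cells/ul
# Hypothetical run: 7,500 CD4 events counted in 25 ul of analysed sample.
print(sp_volumetric(7500, 25))                # 300.0 cells/ul
```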

It is critical that country programmes consider whether these tests can give accurate and reproducible results as well as being appropriate for the setting [6]. In particular, we focussed on how bias and misclassification probabilities of different CD4 assays may affect eligibility for ART initiation. Misclassification probabilities give clinically useful measures of test performance and should be reported in evaluations of CD4 technologies. An upward misclassification around a treatment threshold means that a patient who should be eligible for treatment would be denied treatment, while a downward misclassification means that a patient not yet eligible would be classified as eligible and started on treatment earlier than indicated. To date, there have been no systematic reviews of the performance of CD4 technologies. Here we provide an evaluation of the performance characteristics of CD4 technologies through a systematic review of published literature.
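
To make these definitions concrete, the sketch below shows how upward and downward misclassification probabilities around a treatment threshold can be computed from paired reference and index counts; the function name and example data are hypothetical and not taken from any included study.

```python
# Illustrative calculation of misclassification probabilities around a treatment
# threshold from paired (reference, index) CD4 counts. Example data are hypothetical.

def misclassification(pairs, threshold=350):
    """pairs: iterable of (reference_count, index_count) in cells/ul.
    Upward misclassification: reference below threshold but index at/above it
    (an eligible patient would be denied ART).
    Downward misclassification: reference at/above threshold but index below it
    (a patient would be started on ART earlier than indicated)."""
    below_ref = [(r, i) for r, i in pairs if r < threshold]
    above_ref = [(r, i) for r, i in pairs if r >= threshold]
    upward = sum(1 for r, i in below_ref if i >= threshold) / len(below_ref)
    downward = sum(1 for r, i in above_ref if i < threshold) / len(above_ref)
    return 100 * upward, 100 * downward

example = [(180, 210), (300, 360), (320, 310), (400, 330), (500, 520), (600, 590)]
up, down = misclassification(example)
print(f"upward {up:.0f}%, downward {down:.0f}%")
```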

Methods

We performed a systematic review of studies evaluating the performance of CD4 enumeration technologies. A search of the Cochrane Library and the Centre for Reviews and Dissemination databases, including the Database of Abstracts of Reviews of Effects (DARE), the National Health Service Economic Evaluation Database (NHS EED) and the National Institute for Health Research Health Technology Assessment (NIHR HTA) database found no existing reviews addressing the review objective.

We followed standard guidance in performing the review [7]. Objectives and methods of the review were documented in a review protocol, which is included as S1 Methods.

Eligibility criteria

Eligibility criteria were defined using the PICOS (Population, Interventions, Comparisons, Outcomes, Study Design) format. Studies evaluating the accuracy and/or precision of any CD4 technology commercially available at the time of the review were considered eligible for inclusion. Currently, no “gold” standard technology or internationally recognised reference preparation exists for CD4 enumeration, and a wide range of flow cytometric technologies have been used as comparators [6].

For the purposes of this review, we included studies that used as reference technologies any flow cytometric method considered to be acceptable by the WHO HIV diagnostics working group named in the review protocol (S1 Methods).

Information Sources

Studies were identified by searching two electronic databases (MEDLINE and EMBASE), by scanning the reference lists of the Nature supplement Evaluating diagnostics: the CD4 guide, and by inviting the WHO working group, whose members are authors of the Nature supplement, to identify relevant studies for the review [6,8].

Search Strategy

We used the following search terms to search the electronic databases: “CD4”, “technolog*”, “methodolog*”, “techn*”, “method*”, “test”, “evaluation”, “validation”, “accuracy”, “comparison”, “efficacy”, “performance”, “reproducibility”, “precision”, “flow AND cytometry”. Subject headings included: “CD4 antigen”, “antigens, CD4”, “CD4 lymphocyte count”, “CD4-positive T-lymphocytes”, “technology”, “methodology”, “technique”, “evaluation”, “clinical evaluation”, “economic evaluation”, “evaluation research”, “evaluation and follow up”, “instrument validation”, “validation process”, “validation study”, “evaluation studies as topic”, “validation studies”, and “flow cytometry”. Full electronic search strategies and review protocols are detailed in S1 Methods.

Study selection

Articles were exported from the search database to EndNote and screened for relevance (Fig. 1). Data were extracted by two independent reviewers (SG and KS) and disagreements resolved through consensus.

Fig 1. PRISMA flow diagram of study selection.

EQA: external quality assurance, FC: flow cytometry.

https://doi.org/10.1371/journal.pone.0115019.g001

Data extraction

The following data were extracted: study location, index test, reference test, and population (HIV positive or HIV positive and negative). Data on accuracy and precision included bias or mean difference and limits of agreement, misclassification probabilities (when sensitivity or specificity values were given, misclassification probabilities were calculated), and coefficient of variation. Where possible, HIV positive data alone were extracted. Where this was not possible, combined HIV positive and negative data were extracted. Only data within the clinically relevant range (thresholds of 200, 350, 500 cells/μl) were extracted as per the inclusion criteria.
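
Where a study reported only sensitivity and specificity for detecting counts below a threshold, misclassification probabilities follow directly from those values. The sketch below illustrates one such conversion with hypothetical numbers; it shows the relationship in general and is not necessarily the exact procedure used in this review.

```python
# Illustrative conversion: if sensitivity/specificity refer to detecting counts
# below a treatment threshold, misclassification follows directly (hypothetical values).

def misclassification_from_sens_spec(sensitivity, specificity):
    """Upward misclassification = 1 - sensitivity (reference below threshold, index above);
    downward misclassification = 1 - specificity (reference above threshold, index below)."""
    return 100 * (1 - sensitivity), 100 * (1 - specificity)

up, down = misclassification_from_sens_spec(0.94, 0.91)
print(round(up, 1), round(down, 1))  # 6.0 9.0 (percent)
```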

Studies should report not only percent misclassification around clinically important CD4 cell thresholds (e.g., 200, 350 or 500 cells/μl), but should also report the magnitude of these misclassifications.

The secondary outcome measure addressed precision or reproducibility. Precision is particularly important when following a patient’s serial measurements using the same technology. Precision can be measured within-laboratory or between-laboratories and is expressed as percent coefficient of variation (%CV).
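
As an illustration, %CV can be computed from replicate measurements of the same sample as in the short sketch below; the replicate values are hypothetical.

```python
# Illustrative %CV calculation from replicate CD4 measurements (hypothetical data).

import statistics

def percent_cv(replicates):
    """Percent coefficient of variation: sample SD divided by the mean, times 100."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical within-laboratory replicates of one fresh whole-blood sample (cells/ul):
print(round(percent_cv([512, 498, 530, 505, 521]), 1))  # ~2.5%
```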

Studies meeting inclusion criteria were also assessed for bias and quality on ten points drawn from the STARD guidelines (Fig. 2) [9]. This review has been reported following the PRISMA statement guidance for reporting of systematic reviews [10,11].

Fig 2. Methodological quality of included studies.

EQA: external quality assurance, IQC: internal quality control.

https://doi.org/10.1371/journal.pone.0115019.g002

Results

A summary of different commercially available CD4 technologies, including their assay principles, operational characteristics and compatibility with international external quality assurance programme reagents, is shown in Tables 1–5. Conventional flow cytometry based technologies include DP technologies such as the BD FACSCalibur (BD Biosciences, a division of Becton Dickinson, Franklin Lakes, NJ, USA [BD]) and the Epics XL (Beckman Coulter, Inc., Pasadena, CA, USA [BC]); and bead-based SP technologies such as the BD Trucount Tube, the BC Flow Count Tubes, the Cytognos Perfect Count (Cytognos S.L., Salamanca, Spain [Cytognos]), and the BC Pan-leucogating (PLG) FlowCare CD4. Dedicated SP CD4 technologies include the bead-based BD FACSCount, and three volumetric assays, the Millipore Guava PCA (EMD Millipore, Darmstadt, Germany [Guava]), the Partec CyFlow Counter (Partec, a division of Sysmex Corporation, Kobe, Japan [Partec]) and the Apogee Auto40 Flow Cytometer (Apogee Flow Systems, Hertfordshire, UK [Apogee]). There are two manual microscopy based counting methods, the BC Cytospheres and the Dynal T4 Quant (Dynal Biotech ASA, a division of Thermo Fisher Scientific Inc., Waltham, MA, USA [Dynal]). Point-of-care (POC) analysers include the Sysmex pocH-100i with Dynabeads from Dynal, the i+MED CD4 Select (i+ MED Laboratories, Bangkok, Thailand [i+ MED]), the Pima Analyzer (Alere Inc., Waltham, MA, USA [Alere]), and the PointCare NOW (PointCare Technologies, Marlborough, MA, USA [PointCare]).

Study selection

This systematic review was first performed in July 2009. Of the 433 studies in the search, 345 were excluded as they were not performance evaluations. After further triage, 20 studies that measured bias, misclassification and/or %CV were accepted for inclusion in this review (PRISMA flow diagram, Fig. 1). A second search was conducted in April 2013 using the same search strategy and review protocol with the goal of capturing more POC CD4 enumeration technologies in the review. An additional 12 studies were included. A summary of data extracted from all eligible studies with data on bias, misclassifications and/or %CV is shown in S1 Dataset.

Study characteristics

A summary of study characteristics is shown in Table 6.

At least one published performance evaluation study was found for each of the following technologies: BC Cytospheres [12–18], Dynal Dynabeads [14,18–22], Guava PCA [23–29], Partec CyFlow instruments [16,26,30–36], BD FACSCount [14,37–42], BD Trucount tubes [43–46], BC Flow-Count fluorospheres [47], Cytognos Perfect-Count microspheres [48], SP flow cytometry using flow rate calibration [49–52], BC PLG FlowCARE CD4 [53], Sysmex pocH-100i with Dynal Dynabeads [54], i+MED CD4 Select [55], Alere Pima Analyzer [56–58,61], PointCare NOW [60], Apogee Auto40 [59,63,64] and MBio Snap Count (MBio Diagnostics, Inc., Boulder, CO, USA [MBio]) [62].

No published, peer-reviewed performance evaluations of the commercially available technology Partec miniPOC were found by our literature search, and the MBio assay is not yet commercially available.

Methodological quality of included studies

The findings of the quality assessment of included studies are summarised in Fig. 2. Most studies reported the index test (test under evaluation) and the reference standard in sufficient detail to be reproduced, but few studies reported whether staff at the evaluation sites were proficient at performing the reference standard and/or sufficiently trained on performing the index test. Few studies reported internal quality controls being performed during the evaluation period. Without these quality measures, it would be difficult to differentiate whether the bias or misclassification between the index and reference tests was due to differences in inherent test characteristics or to operator error.

Manufacturer involvement was evident in a number of studies. Seven studies declared one or more authors to be affiliated with the manufacturer of the index test [12,17,39,42,46,54]. Four studies were partially sponsored by the manufacturer [25,26,45,53]. One study stated that the manufacturer’s site was one of the study sites [42], and four studies declared donation of reagents or equipment by the manufacturer [15,29,38,40]. A further four studies could be considered to be calibration or test developers’ papers [30,49,51,55]. In the absence of definitions for a sponsored study versus an independent evaluation, it is not clear to what extent the inclusion of manufacturers as co-authors of papers influenced the study results.

Accuracy

As there is no international standard for CD4 enumeration, a variety of reference standard technologies were used for evaluating the performance of new CD4 technologies, making it difficult to pool data on bias and misclassification across all studies.

Bias

Bias (mean difference) data were collated and represented graphically, but only from studies that compared the index test's mean difference to the same reference technology (Table 6 and Fig. 3a, b). The FACSCount and FACSCalibur were the most common reference technologies.
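
Bias and limits of agreement of the kind reported below are typically summarised Bland-Altman style, as the mean difference plus or minus 1.96 standard deviations of the differences. The sketch that follows illustrates the calculation on hypothetical paired counts; it is not a re-analysis of any study data.

```python
# Illustrative Bland-Altman summary: bias (mean difference) and 95% limits of
# agreement between an index CD4 technology and a reference technology.
# Paired counts below are hypothetical examples.

import statistics

def bland_altman(index_counts, reference_counts):
    """Return (bias, lower limit, upper limit) in cells/ul, where bias is the mean of
    index minus reference and the limits are bias +/- 1.96 x SD of the differences."""
    diffs = [i - r for i, r in zip(index_counts, reference_counts)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

index = [190, 320, 355, 410, 520, 610]
reference = [210, 330, 340, 430, 505, 640]
bias, lo, hi = bland_altman(index, reference)
print(f"bias {bias:.1f} cells/ul, limits of agreement {lo:.1f} to {hi:.1f}")
```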

Fig 3. A. Bias compared to the FACSCount as the reference technology. B. Bias compared to the FACSCalibur as the reference technology.

https://doi.org/10.1371/journal.pone.0115019.g003

Bias was evaluated at CD4 thresholds of 200 cells/μl and 350 cells/μl using the BD FACSCount as a reference method. For the Pima Analyser, studies found consistent underestimation of CD4 counts, with a bias of -16.6 cells/μl at counts <350 cells/μl (limits of agreement -88.4 to +55.3 cells/μl) and -70.7 cells/μl at counts >350 cells/μl (limits of agreement -216.5 to +75 cells/μl) [58]. However, when the Guava Easy CD4 was compared to the FACSCount, the bias for CD4 counts <350 cells/μl was +13 cells/μl (limits of agreement -27 to +53) and for counts >350 cells/μl the bias was -45.3 cells/μl [23,27].

The FACSCalibur, in comparison to the FACSCount, overestimated CD4 counts by +13.1 cells/μl at values of <350 cells/μl [41]. However, bias estimation of the manual bead-based methods showed underestimation of CD4 counts at <350 cells/μl of -35.2 cells/μl (limits of agreement -164.9 to +94.6) and -0.4 cells/μl (limits of agreement -126 to +125.2) for BC Cytospheres and Dynal Dynabeads respectively [18].

When the overall bias for all CD4 technologies was calculated, it ranged from -70.7 to +47 cells/μl for CD4 counts >350 cells/μl and from -35.2 to +13.1 cells/μl for CD4 counts <350 cells/μl, compared to the FACSCount as the reference method.

We then examined bias data at a threshold of 200 cells/μl, again using the FACSCount as the reference method. A total of six publications were identified, covering the following technologies: the Apogee Auto40 [59], the Guava Easy CD4 [23,25–27], and the Partec CyFlow Counter [26,52].

The four studies that reported data at <200 cells/μl and compared the Guava Easy CD4 to the FACSCount all found a positive bias, ranging from +10 to +45.5 cells/μl [23,25–27]. One study had data for CD4 counts >200 cells/μl and reported a bias of +44.9 cells/μl (limits of agreement -112.6 to +212.3) [25]. It is interesting to note that all studies reporting the performance of the Guava Easy CD4 showed that this assay overestimated CD4 counts compared to the FACSCount [23,25–27].

Two studies compared the Partec CyFlow Counter to the FACSCount. One showed an overestimation of +0.8 cells/μl (limits of agreement -21.7 to +23.2) while the other showed an underestimation of -5.8 cells/μl (limits of agreement -37.6 to +25.9) [26,52].

Overall, when comparing the technologies to the FACSCount as a reference method, bias ranged from +44.9 to -12.1 cells/μl for CD4 counts >200 cells/μl and from +45.5 to -5.8 cells/μl for CD4 counts <200 cells/μl. Bias using the FACSCalibur as a reference method showed similar results to that using the FACSCount (Fig. 3a and b).

Misclassification

Fig. 4 shows the range of misclassifications at thresholds of 200 and 350 cells/μl for new CD4 assays compared to different reference standards.

Fig 4. Misclassification (%), using CD4 thresholds of 350 cells/μl and 200 cells/μl.

https://doi.org/10.1371/journal.pone.0115019.g004

Nine studies provided data on misclassification probabilities using a cut-off of 350 cells/μl [16,18,27,56,60–64]. Data were available on the following assays: BC Cytospheres [16,18], Dynal Dynabeads [18], Guava Easy CD4 [27], Partec CyFlow Counter [16], Pima Analyzer [56,61], PointCare NOW [60], MBio Snap Count [62], and Auto40 Flow Cytometer [63,64].

Two studies [63,64] evaluated the Auto40 Flow Cytometer compared to the FACSCalibur in Cameroon. One study [63] reported upward and downward misclassification probabilities of 8% and 2%, respectively, while another [64] found the likelihood of under-treatment (upward misclassification) to be 3% and the probability of over-treatment (downward misclassification) to be 2%.

Karcher et al. conducted a large trial under field conditions in Uganda comparing BC Cytospheres and the Partec CyFlow Counter to DP flow cytometry [16]. HIV positive patients were recruited, the majority of whom had CD4 counts within a range from 0 to 1200 cells/μl (median 332 cells/μl). Of samples with CD4 counts <350 cells/μl measured by DP flow cytometry, BC Cytospheres misclassified 16% of the patients as having CD4 counts of >350 cells/μl, thereby denying them eligibility for treatment. Similarly, for those with CD4 counts >350 cells/μl measured by DP flow cytometry, BC Cytospheres misclassified 20%, indicating that 20% of patients not qualifying for treatment using the reference test would have done so if BC Cytospheres were used. Karcher et al. also compared the Partec CyFlow to the DP flow cytometer and found that of samples with counts <350 cells/μl, 29% were misclassified as having >350 cells/μl by the Partec CyFlow Counter, and 7% of samples with CD4 counts >350 cells/μl were misclassified downward as being <350 cells/μl [16].

Lutwama et al. conducted a large study of manual technologies in Uganda; they recruited only HIV positive patients, the majority of whom had CD4 counts within the clinically important range (range in study: 0–900 cells/μl) using the reference standard technology [18]. Of samples with counts of <350 cells/μl using the reference standard technology, BC Cytospheres misclassified only 1% as >350 cells/μl. However, of those with counts >350 cells/μl, 68% were misclassified as <350 cells/μl (indicating that 68% of patients not qualifying for treatment using the reference test would have done so if BC Cytospheres were used). Dynal Dynabeads had upward and downward misclassification probabilities of 6% and 30% respectively.

Renault et al. (2010) conducted a comparison study between the Guava Easy CD4 and FACSCount. Across a range of CD4 (0–1100 cells/μl), the upward and downward misclassification was calculated and found to be 6.1% and 9.4%, respectively [27].

One study reported on evaluations of the PointCare NOW assay (this instrument has since been re-marketed/rebranded as the HumaCount CD4now; Human Diagnostics Worldwide mbH, Wiesbaden, Germany) in five countries (Mozambique, Belgium, Canada, the USA and South Africa) [60]. Mozambique, Belgium, Canada and the USA used the FACSCalibur as a reference standard while South Africa compared the PointCare NOW to the Epics XL. Upward and downward misclassification were reported by country: Mozambique, 51% and 20%; Belgium, 62% and 4%; Canada, 50% and 0%; USA, 0% and 3%; and South Africa, 64% and 6%. Overall misclassification was also calculated: testing with PointCare NOW would have led to 47% of patients with CD4 counts less than 350 cells/μl being deemed ineligible to receive treatment (upward misclassification) and 6% of patients with CD4 counts greater than 350 cells/μl being deemed eligible to receive treatment (downward misclassification).

Of the two studies evaluating the Pima Analyzer, Herbert et al. (2013) used the BC Cytomics FC 500 as a reference standard while Jani et al. compared the Pima Analyzer to the BD FACSCalibur [56,61]. Across the clinically relevant range (60–1200 cells/μl), Herbert et al. found the upward and downward misclassification to be 6.1% and 9.4% respectively. MBio Snap Count was evaluated by Logan et al. (2013) and compared to the FACSCalibur [62]. Of the 94 samples, 2.1% were misclassified upward and 3.2% were misclassified downward at a threshold of 350 cells/μl.

Thirteen studies presented misclassification data using a CD4 cut-off of 200 cells/μl; these are presented in Fig. 4b [12,13,15,17,18,20,23,28,36,56,60,61,63,64].

Even though misclassification probabilities can be influenced by the number of patients with CD4 counts close to the threshold in each study, the PointCare NOW showed an overall tendency towards upward misclassification at both the 200 and 350 cells/μl thresholds. Most other technologies showed misclassification probabilities of <10%.

Precision

Forty-four percent of studies reported within-laboratory precision of absolute CD4 count measurement, using replicates of fresh whole blood [15,17,19,21,24,29,32–34,38–42,45–47,50,54,56,58]. Fig. 5 shows the inter- and intra-assay variation expressed as %CV for the Apogee Auto40 assay and the intra-assay variation for the MBio Snap Count [62,63].

Fig 5. Intra- and inter-assay variation (%CV) for the Apogee Auto40 and for the MBio Snap Count at thresholds <350 cells/μl and >350 cells/μl.

https://doi.org/10.1371/journal.pone.0115019.g005

Five studies reported between-laboratory precision using whole blood. Two evaluated the BD FACSCount [39,42], one evaluated panleucogating [50], and two evaluated bead-based SP technology (BD Trucount tubes and BC Flow-Count fluorospheres) [46,47]. Gernow et al. studied the reproducibility of BC Cytospheres and found poor precision, with a coefficient of variation of 58% [15]. The study by Landay et al., however, found precision levels more in keeping with the other technologies [17].

Overall, in studies addressing between-laboratory precision, SP flow cytometry using BD Trucount tubes or BC Flow-Count fluorospheres showed less inter-laboratory variability than the DP comparators [46,47]. In addition, Denny et al. demonstrated improved inter-laboratory precision using DP panleucogating compared with technologies that included DP or SP conventional flow cytometry [50]. External quality assurance data showed that the BD FACSCount, which is most commonly used as a reference standard for CD4 assay evaluations, has within-laboratory and between-laboratory precision of 15% or less [38,39,42,46].

Discussion

This review highlights the difficulties of answering clinically relevant questions about CD4 test performance from the published literature. A minority of studies reported clinically useful measures of accuracy, and few evaluations of POC tests were carried out under field conditions.

Whatever technology is chosen, there is variability associated with CD4 measurement. It should be noted that there is also significant physiological variability in the CD4 count, which may account for as much variability as, if not more than, the technical variability of CD4 measurement [65–67]. Performance characteristics vary between technologies, and for the same technology depending on the reference technology used as a comparator. These characteristics have important implications both for individual patient management and for HIV treatment programmes. It is essential to consider assay performance as well as operating characteristics when choosing a technology. However, these data are not always available in the literature, and currently, evaluations are not sufficiently robust or comprehensive to give a clear idea of the comparative merit of different technologies.

Misclassification probabilities describe the likelihood that a test will incorrectly categorise a result as higher or lower than a given cut-off value as measured by a reference standard. They are clinically relevant measures of accuracy, as they can be used to assess the likelihood that a patient will be incorrectly classified above or below a defined CD4 threshold used in clinical decision making. Misclassification probabilities for the same assay can vary not only because the test is compared to different reference standards, but also because the probabilities are affected by the number of samples clustered around the thresholds of 200 or 350 cells/μl.

Two types of misclassification can be defined—upward misclassification and downward misclassification. Upward misclassification around a treatment threshold may be the most clinically important, leading to a delay in starting ART in some patients, with potentially harmful consequences. Downward misclassification on the other hand would be expected to lead to ART use earlier than indicated, with potential implications for cost and drug exposure. Given the trend towards earlier initiation in global and national guidelines, a degree of over-treatment is likely to be preferred over significant under-treatment [68]. Furthermore, the use of CD4 counts alone to assess ART immunological failure in the absence of viral load monitoring will, because of the biases observed, lead to some patients not receiving the appropriate clinical intervention.

Misclassification data showed that manual technologies [18], particularly the method using BC Cytospheres, were associated with substantial downward misclassification. It would therefore be expected that the implementation of these tests would lead to the decision to treat potentially large numbers of additional patients who have CD4 counts above the guideline threshold when using the reference test. Less upward misclassification was seen, suggesting that under-treatment might be less of a problem with these technologies. Upward misclassification by either manual technology is however likely to be an underestimate as the majority of counts in this study were very low (less than 25% of samples had counts >200 cells/μl); if the tests were to be used in a population with counts closer to the treatment threshold (as might be the case if used primarily in asymptomatic patients), upwards misclassification would be expected to be higher.

Limited misclassification data were available for the Partec CyFlow instruments. Of concern, one study evaluating the Partec CyFlow Counter under field conditions found 29% upward misclassification; that is, 29% of patients potentially eligible for treatment may have been denied treatment if the Partec CyFlow Counter-determined CD4 counts were the only criteria for assessing eligibility [16]. No other studies of the Partec CyFlow Counter or other CyFlow instruments reported misclassification probabilities at 350 cells/μl. Therefore, we do not know if this finding was replicated elsewhere. The Guava PCA (using EasyCD4 reagents) and the Pima Analyzer showed acceptable upward and downward misclassification rates.

There is some disparity in the precision reported for the BC Cytospheres, and the reason for this disparity is not clear. The CD4 counts of the 19 samples used for replicate analysis in the study conducted by Gernow et al. were not stated; however, they included 12 HIV negative samples that might be assumed to have high counts [15]. Given that several papers found BC Cytospheres to have poorer performance at higher counts, this may be in keeping with poor reproducibility in these replicates. Another study, conducted by Landay et al., found better precision on a sample with a CD4 count of 1200 (%CV 3.5%) than on a sample with a CD4 count of 200 (%CV 10.8%) [17]. Manual methods, although employing simple technology, are labor intensive and require significant user skill. Inadequate training, lack of supervision, or user fatigue may lead to poor performance of these techniques. Neither study described the training received by technicians performing the manual tests, nor reported blinding. Between-laboratory precision is likely to be superior with SP technologies (using BD Trucount tubes or BC Flow-Count fluorospheres) than with conventional DP technologies.

As more point-of-care devices are introduced at lower levels of the health system, where training and supervision can be challenging, the lack of adequate training and supervision may introduce additional sources of error, contributing to decreased assay precision. This should be addressed through the development of a comprehensive training and supervision policy and implementation plan for the introduction of POC devices.

In addition to the studies presented in this review, evaluations have been performed by government agencies and other national bodies that have not been published in peer-reviewed journals. An evaluation of BC Cytospheres published by the Medical Devices Agency of the UK included samples from 17 HIV positive subjects and compared BC Cytospheres against DP flow cytometry [69]. Accuracy was reported using an assessment of bias; misclassification probabilities were not reported. Imprecision was assessed using 6–7 replicates of 6 samples, and the CV ranged from 3.2% to 17.6% (mean 8.5%). Unpublished evaluations have not been included in this review.

A recent review of external quality assurance (EQA) programmes involving 58,626 CD4 data sets from over 3,000 laboratories over a 12-year period showed that SP technologies consistently give lower relative errors and confidence limits than DP technologies at clinically significant absolute CD4 counts [70].

Limitations of this review

Limitations include the restriction to papers published in English, which may have led us to overlook relevant data. Limiting the search to the peer-reviewed literature may also have missed robust evaluations conducted by national reference facilities or similar institutions.

It is important to consider that the reference standard technologies themselves are not perfect. Misclassification assumes that the reference result is accurate, i.e., the closest approximation to the truth. Thus, a result considered a misclassification may in fact be correct. The reference technology, if performed once, may give a result of 340 cells/μl, but if performed in duplicate on the same specimen may give results of 340 and 360 cells/μl. It may therefore be important for the reference technology to be performed in duplicate, with the result used as the reference standard for the new test only when the duplicates are concordant around the 350 cells/μl threshold.
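
A minimal sketch of such a duplicate-concordance rule is shown below; the rule and values are illustrative only and were not specified by any of the included studies.

```python
# Illustrative concordance check for duplicate reference measurements around a
# treatment threshold (hypothetical rule, as suggested above).

def reference_usable(first, second, threshold=350):
    """Accept the reference result only if both replicates fall on the same side
    of the threshold; otherwise the sample would be re-tested or excluded."""
    return (first < threshold) == (second < threshold)

print(reference_usable(340, 360))  # False: discordant around 350 cells/ul
print(reference_usable(340, 345))  # True: both below the threshold
```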

What constitutes an “acceptable” margin of error and misclassification probability around a threshold remains undefined and may vary among sites, depending on local factors such as the distribution of CD4 counts among asymptomatic patients, how often patients undergo repeat CD4 testing, and the implications of potential overtreatment (e.g., cost, long-term risk of drug toxicity). However, given the move towards earlier treatment and the use of better-tolerated, less toxic drugs, misclassification that results in overtreatment may be more acceptable than would previously have been the case. It is relatively straightforward for national programmes to decide which technology best fits their needs based purely on cost and operating characteristics; it is harder to decide what performance characteristics are acceptable, and harder still to obtain data on test performance to inform that choice.

Given the potential for testing error, laboratory participation in EQA programmes and access to quality control (QC) reagents are essential. EQA participation was not mentioned in the publications included in this review. Without this information, the proficiency of the laboratory staff performing the testing may have contributed to the errors and variation, in addition to the assays themselves.

Conclusions

A wide range of bias and percent misclassification around treatment thresholds over the clinically relevant range was reported for the CD4 enumeration technologies included in this review. Less than half of the studies reported assay precision or reproducibility of the CD4 values obtained. This is a rapidly evolving field, with new tests under development and existing instruments and reagents being regularly replaced by updated versions. A systematic review of POC tests compared to laboratory-based technologies showed that POC CD4 testing can increase retention in care prior to treatment initiation and can also reduce the time to eligibility assessment, resulting in more eligible patients being initiated on life-saving treatment [71]. The lack of standardized methodology for test evaluation, including consensus on reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new POC assays in countries where they are most needed.

Acknowledgments

We thank Liam Whitby for his input into the study selection process and Maurine Murtagh, Ruth McNerney, Rhosyn Tuta and Chris Sanders for technical assistance.

Author Contributions

Conceived and designed the experiments: RWP SMC SC ALL BC DB TND TJS WSS. Performed the experiments: RWP KAS SG. Analyzed the data: RWP SMC SC ALL BC DB TND TJS WSS. Wrote the paper: RWP KAS SG SMC SC ALL BC DB TND TJS WSS SE MV NF.

References

  1. World Health Organization, UNAIDS, UNICEF (2013) Global update on HIV treatment 2013: results, impact and opportunities. Geneva: World Health Organization.
  2. Giorgi JV (1993) Characterization of T lymphocyte subset alterations by flow cytometry in HIV disease. Annals of the New York Academy of Sciences 677: 126–137. pmid:8494202
  3. Sterling TR, Chaisson RE, Moore RD (2001) HIV-1 RNA, CD4 T-lymphocytes, and clinical response to highly active antiretroviral therapy. Aids 15: 2251–2257. pmid:11698698
  4. Tayler-Smith K, Zachariah R, Massaquoi M, Manzi M, Pasulani O, et al. (2010) Unacceptable attrition among WHO stages 1 and 2 patients in a hospital-based setting in rural Malawi: can we retain such patients within the general health system? Transactions of the Royal Society of Tropical Medicine and Hygiene 104: 313–319. pmid:20138323
  5. Vitoria M, Vella S, Ford N (2013) Scaling up antiretroviral therapy in resource-limited settings: adapting guidance to meet the challenges. Current opinion in HIV and AIDS 8: 12–18. pmid:23188179
  6. Stevens W, Gelman R, Glencross DK, Scott LE, Crowe SM, et al. (2008) Evaluating new CD4 enumeration technologies for resource-constrained countries. Nature reviews Microbiology 6: S29–38. pmid:22745957
  7. Centre for Reviews and Dissemination (2009) Systematic Reviews: CRD’s guidance for undertaking reviews in health care. York: University of York.
  8. Barnett D, Walker B, Landay A, Denny TN (2008) CD4 immunophenotyping in HIV infection. Nature reviews Microbiology 6: S7–15. pmid:18923413
  9. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, et al. (2003) The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clinical chemistry 49: 7–18. pmid:12507954
  10. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, et al. (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS medicine 6: e1000100. pmid:19621070
  11. Moher D, Liberati A, Tetzlaff J, Altman DG, Group P (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS medicine 6: e1000097. pmid:19621072
  12. Balakrishnan P, Dunne M, Kumarasamy N, Crowe S, Subbulakshmi G, et al. (2004) An inexpensive, simple, and manual method of CD4 T-cell quantitation in HIV-infected individuals for use in developing countries. Journal of acquired immune deficiency syndromes 36: 1006–1010. pmid:15247552
  13. Carella AV, Moss MW, Provost V, Quinn TC (1995) A manual bead assay for the determination of absolute CD4+ and CD8+ lymphocyte counts in human immunodeficiency virus-infected individuals. Clinical and diagnostic laboratory immunology 2: 623–625. pmid:8548544
  14. Didier JM, Kazatchkine MD, Demouchy C, Moat C, Diagbouga S, et al. (2001) Comparative assessment of five alternative methods for CD4+ T-lymphocyte enumeration for implementation in developing countries. Journal of acquired immune deficiency syndromes 26: 193–195. pmid:11242190
  15. Gernow A, Lisse IM, Bottiger B, Christensen L, Brattegaard K (1995) Determination of CD4+ and CD8+ lymphocytes with the cytosphere assay: a comparative study with flow cytometry and the immunoalkaline phosphatase method. Clinical immunology and immunopathology 76: 135–141. pmid:7614732
  16. Karcher H, Bohning D, Downing R, Mashate S, Harms G (2006) Comparison of two alternative methods for CD4+ T-cell determination (Coulter manual CD4 count and CyFlow) against standard dual platform flow cytometry in Uganda. Cytometry Part B, Clinical cytometry 70: 163–169. pmid:16498672
  17. Landay A, Ho JL, Hom D, Russell T, Zwerner R, et al. (1993) A rapid manual method for CD4+ T-cell quantitation for use in developing countries. Aids 7: 1565–1568. pmid:7904449
  18. Lutwama F, Serwadda R, Mayanja-Kizza H, Shihab HM, Ronald A, et al. (2008) Evaluation of Dynabeads and Cytospheres compared with flow cytometry to enumerate CD4+ T cells in HIV-infected Ugandans on antiretroviral therapy. Journal of acquired immune deficiency syndromes 48: 297–303. pmid:18545154
  19. Diagbouga S, Chazallon C, Kazatchkine MD, Van de Perre P, Inwoley A, et al. (2003) Successful implementation of a low-cost method for enumerating CD4+ T lymphocytes in resource-limited settings: the ANRS 12–26 study. Aids 17: 2201–2208. pmid:14523277
  20. Idigbe EO, Audu RA, Oparaugo CT, Onwujekwe D, Onubogu CC, et al. (2006) Comparison of Dynabeads and Capcellia methods with FACScount for the estimation of CD4 T-lymphocyte levels in HIV/AIDS patients in Lagos, Nigeria. East African medical journal 83: 105–111. pmid:16863006
  21. Imade GE, Badung B, Pam S, Agbaji O, Egah D, et al. (2005) Comparison of a new, affordable flow cytometric method and the manual magnetic bead technique for CD4 T-lymphocyte counting in a northern Nigerian setting. Clinical and diagnostic laboratory immunology 12: 224–227. pmid:15643012
  22. Lyamuya EF, Kagoma C, Mbena EC, Urassa WK, Pallangyo K, et al. (1996) Evaluation of the FACScount, TRAx CD4 and Dynabeads methods for CD4 lymphocyte determination. Journal of immunological methods 195: 103–112. pmid:8814325
  23. Balakrishnan P, Solomon S, Mohanakrishnan J, Cecelia AJ, Kumarasamy N, et al. (2006) A reliable and inexpensive EasyCD4 assay for monitoring HIV-infected individuals in resource-limited settings. Journal of acquired immune deficiency syndromes 43: 23–26. pmid:16885780
  24. Kandathil AJ, Kannangai R, David S, Nithyanandam G, Solomon S, et al. (2005) Comparison of Microcapillary Cytometry Technology and Flow Cytometry for CD4+ and CD8+ T-Cell Estimation. Clinical and diagnostic laboratory immunology 12: 1006–1009. pmid:16085920
  25. Pattanapanyasat K, Phuang-Ngern Y, Lerdwana S, Wasinrapee P, Sakulploy N, et al. (2007) Evaluation of a single-platform microcapillary flow cytometer for enumeration of absolute CD4+ T-lymphocyte counts in HIV-1 infected Thai patients. Cytometry Part B, Clinical cytometry 72: 387–396. pmid:17474130
  26. Pattanapanyasat K, Phuang-Ngern Y, Sukapirom K, Lerdwana S, Thepthai C, et al. (2008) Comparison of 5 flow cytometric immunophenotyping systems for absolute CD4+ T-lymphocyte counts in HIV-1-infected patients living in resource-limited settings. Journal of acquired immune deficiency syndromes 49: 339–347. pmid:19186347
  27. Renault CA, Traore A, Machekano RN, Israelski DM (2010) Validation of Microcapillary Flow Cytometry for Community-Based CD4+ T Lymphocyte Enumeration in Remote Burkina Faso. The open AIDS journal 4: 171–175. pmid:21253463
  28. Spacek LA, Shihab HM, Lutwama F, Summerton J, Mayanja H, et al. (2006) Evaluation of a low-cost method, the Guava EasyCD4 assay, to enumerate CD4-positive lymphocyte counts in HIV-infected patients in the United States and Uganda. Journal of acquired immune deficiency syndromes 41: 607–610. pmid:16652034
  29. Thakar MR, Kumar BK, Mahajan BA, Mehendale SM, Paranjape RS (2006) Comparison of capillary based microflurometric assay for CD4+ T cell count estimation with dual platform Flow cytometry. AIDS research and therapy 3: 26. pmid:17042936
  30. Cassens U, Gohde W, Kuling G, Groning A, Schlenke P, et al. (2004) Simplified volumetric flow cytometry allows feasible and accurate determination of CD4 T lymphocytes in immunodeficient patients worldwide. Antiviral therapy 9: 395–405. pmid:15259902
  31. Dieye TN, Vereecken C, Diallo AA, Ondoa P, Diaw PA, et al. (2005) Absolute CD4 T-cell counting in resource-poor settings: direct volumetric measurements versus bead-based clinical flow cytometry instruments. Journal of acquired immune deficiency syndromes 39: 32–37. pmid:15851911
  32. Fryland M, Chaillet P, Zachariah R, Barnaba A, Bonte L, et al. (2006) The Partec CyFlow Counter could provide an option for CD4+ T-cell monitoring in the context of scaling-up antiretroviral treatment at the district level in Malawi. Transactions of the Royal Society of Tropical Medicine and Hygiene 100: 980–985. pmid:16542690
  33. Lynen L, Teav S, Vereecken C, De Munter P, An S, et al. (2006) Validation of primary CD4 gating as an affordable strategy for absolute CD4 counting in Cambodia. Journal of acquired immune deficiency syndromes 43: 179–185. pmid:16940854
  34. Manasa J, Musabaike H, Masimirembwa C, Burke E, Luthy R, et al. (2007) Evaluation of the Partec flow cytometer against the BD FACSCalibur system for monitoring immune responses of human immunodeficiency virus-infected patients in Zimbabwe. Clinical and vaccine immunology: CVI 14: 293–298. pmid:17267593
  35. Pattanapanyasat K, Shain H, Noulsri E, Lerdwana S, Thepthai C, et al. (2005) A multicenter evaluation of the PanLeucogating method and the use of generic monoclonal antibody reagents for CD4 enumeration in HIV-infected patients in Thailand. Cytometry Part B, Clinical cytometry 65: 29–36. pmid:15800883
  36. Zijenah LS, Kadzirange G, Madzime S, Borok M, Mudiwa C, et al. (2006) Affordable flow cytometry for enumeration of absolute CD4+ T-lymphocytes to identify subtype C HIV-1 infected adults requiring antiretroviral therapy (ART) and monitoring response to ART in a resource-limited setting. Journal of translational medicine 4: 33. pmid:16907973
  37. Bergeron M, Ding T, Elharti E, Oumzil H, Soucy N, et al. (2010) Evaluation of a dry format reagent alternative for CD4 T-cell enumeration for the FACSCount system: a report on a Moroccan-Canadian study. Cytometry Part B, Clinical cytometry 78: 188–193. pmid:19847883
  38. Johnson D, Hirschkorn D, Busch MP (1995) Evaluation of four alternative methodologies for determination of absolute CD4+ lymphocyte counts. The National Heart, Lung, and Blood Institute Retrovirus Epidemiology Donor Study. Journal of acquired immune deficiency syndromes and human retrovirology: official publication of the International Retrovirology Association 10: 522–530.
  39. Lopez A, Caragol I, Candeias J, Villamor N, Echaniz P, et al. (1999) Enumeration of CD4(+) T-cells in the peripheral blood of HIV-infected patients: an interlaboratory study of the FACSCount system. Cytometry 38: 231–237. pmid:10516609
  40. Nicholson JK, Velleca WM, Jubert S, Green TA, Bryan L (1994) Evaluation of alternative CD4 technologies for the enumeration of CD4 lymphocytes. Journal of immunological methods 177: 43–54. pmid:7822837
  41. Sitoe N, Luecke E, Tembe N, Matavele R, Cumbane V, et al. (2011) Absolute and percent CD4+ T-cell enumeration by flow cytometry using capillary blood. Journal of immunological methods 372: 1–6. pmid:21787779
  42. Strauss K, Hannet I, Engels S, Shiba A, Ward DM, et al. (1996) Performance evaluation of the FACSCount System: a dedicated system for clinical cellular analysis. Cytometry 26: 52–59. pmid:8809481
  43. Higgins J, Hill V, Lau K, Simpson V, Roayaei J, et al. (2007) Evaluation of a single-platform technology for lymphocyte immunophenotyping. Clinical and vaccine immunology: CVI 14: 1342–1348. pmid:17761524
  44. Jeganathan S, Bansal M, Smith DE, Gold J (2008) Comparison of different methodologies for CD4 estimation in a clinical setting. HIV medicine 9: 192–195. pmid:18366442
  45. Nicholson JK, Stein D, Mui T, Mack R, Hubbard M, et al. (1997) Evaluation of a method for counting absolute numbers of cells with a flow cytometer. Clinical and diagnostic laboratory immunology 4: 309–313. pmid:9144369
  46. Schnizlein-Bick CT, Spritzler J, Wilkening CL, Nicholson JK, O’Gorman MR (2000) Evaluation of TruCount absolute-count tubes for determining CD4 and CD8 cell numbers in human immunodeficiency virus-positive adults. Site Investigators and The NIAID DAIDS New Technologies Evaluation Group. Clinical and diagnostic laboratory immunology 7: 336–343. pmid:10799443
  47. Reimann KA, O’Gorman MR, Spritzler J, Wilkening CL, Sabath DE, et al. (2000) Multisite comparison of CD4 and CD8 T-lymphocyte counting by single- versus multiple-platform methodologies: evaluation of Beckman Coulter flow-count fluorospheres and the tetraONE system. The NIAID DAIDS New Technologies Evaluation Group. Clinical and diagnostic laboratory immunology 7: 344–351. pmid:10799444
  48. Storie I, Sawle A, Goodfellow K, Whitby L, Granger V, et al. (2004) Perfect count: a novel approach for the single platform enumeration of absolute CD4+ T-lymphocytes. Cytometry Part B, Clinical cytometry 57: 47–52. pmid:14696063
  49. Storie I, Sawle A, Whitby L, Goodfellow K, Granger V, et al. (2003) Flow rate calibration II: a clinical evaluation study using PanLeucoGating as a single-platform protocol. Cytometry Part B, Clinical cytometry 55: 8–13. pmid:12949954
  50. Denny TN, Gelman R, Bergeron M, Landay A, Lam L, et al. (2008) A North American multilaboratory study of CD4 counts using flow cytometric panLeukogating (PLG): a NIAID-DAIDS Immunology Quality Assessment Program Study. Cytometry Part B, Clinical cytometry 74 Suppl 1: S52–64. pmid:18351622
  51. Glencross D, Scott LE, Jani IV, Barnett D, Janossy G (2002) CD45-assisted PanLeucogating for accurate, cost-effective dual-platform CD4+ T-cell enumeration. Cytometry 50: 69–77. pmid:12116348
  52. Pattanapanyasat K, Lerdwana S, Noulsri E, Chaowanachan T, Wasinrapee P, et al. (2005) Evaluation of a new single-parameter volumetric flow cytometer (CyFlow(green)) for enumeration of absolute CD4+ T lymphocytes in human immunodeficiency virus type 1-infected Thai patients. Clinical and diagnostic laboratory immunology 12: 1416–1424. pmid:16339065
  53. Sippy-Chatrani N, Marshall S, Branch S, Carmichael-Simmons K, Landis RC, et al. (2008) Performance of the Panleucogating protocol for CD4+ T cell enumeration in an HIV dedicated laboratory facility in Barbados. Cytometry Part B, Clinical cytometry 74 Suppl 1: S65–68. pmid:18228556
  54. Briggs C, Machin S, Muller M, Haase W, Hofmann K, et al. (2009) Measurement of CD4+ T cells in point-of-care settings with the Sysmex pocH-100i haematological analyser. International journal of laboratory hematology 31: 169–179. pmid:18177434
  55. Srithanaviboonchai K, Rungruengthanakit K, Nouanthong P, Pata S, Sirisanthana T, et al. (2008) Novel low-cost assay for the monitoring of CD4 counts in HIV-infected individuals. Journal of acquired immune deficiency syndromes 47: 135–139. pmid:18091046
  56. Jani IV, Sitoe NE, Chongo PL, Alfai ER, Quevedo JI, et al. (2011) Accurate CD4 T-cell enumeration and antiretroviral drug toxicity monitoring in primary healthcare clinics using point-of-care testing. Aids 25: 807–812. pmid:21378535
  57. Mtapuri-Zinyowera S, Chideme M, Mangwanya D, Mugurungi O, Gudukeya S, et al. (2010) Evaluation of the PIMA point-of-care CD4 analyzer in VCT clinics in Zimbabwe. Journal of acquired immune deficiency syndromes 55: 1–7. pmid:20622679
  58. Sukapirom K, Onlamoon N, Thepthai C, Polsrila K, Tassaneetrithep B, et al. (2011) Performance evaluation of the Alere PIMA CD4 test for monitoring HIV-infected individuals in resource-constrained settings. Journal of acquired immune deficiency syndromes 58: 141–147. pmid:21709568
  59. Dieye TN, Diaw PA, Daneau G, Wade D, Sylla Niang M, et al. (2011) Evaluation of a flow cytometry method for CD4 T cell enumeration based on volumetric primary CD4 gating using thermoresistant reagents. Journal of immunological methods 372: 7–13. pmid:21835181
  60. Bergeron M, Daneau G, Ding T, Sitoe NE, Westerman LE, et al. (2012) Performance of the PointCare NOW system for CD4 counting in HIV patients based on five independent evaluations. PloS one 7: e41166. pmid:22912668
  61. Herbert S, Edwards S, Carrick G, Copas A, Sandford C, et al. (2013) Evaluation of PIMA point-of-care CD4 testing in a large UK HIV service. Sexually transmitted infections 88: 413–417.
  62. Logan C, Givens M, Ives JT, Delaney M, Lochhead MJ, et al. (2013) Performance evaluation of the MBio Diagnostics point-of-care CD4 counter. Journal of immunological methods 387: 107–113. pmid:23063690
  63. Mbopi-Keou FX, Mion S, Sagnia B, Belec L (2012) Validation of a single-platform, volumetric, CD45-assisted PanLeucogating Auto40 flow cytometer to determine the absolute number and percentages of CD4 T cells in resource-constrained settings using Cameroonian patients’ samples. Clinical and vaccine immunology: CVI 19: 609–615. pmid:22336291
  64. Mbopi-Keou FX, Sagnia B, Ngogang J, Angwafo FF 3rd, Colizzi V, et al. (2012) Validation of a single-platform, volumetric, flow cytometry for CD4 T cell count monitoring in therapeutic mobile unit. Journal of translational medicine 10: 22. pmid:22309994
  65. Fei DT, Paxton H, Chen AB (1993) Difficulties in precise quantitation of CD4+ T lymphocytes for clinical trials: a review. Biologicals: journal of the International Association of Biological Standardization 21: 221–231. pmid:7906948
  66. Korenromp EL, Williams BG, Schmid GP, Dye C (2009) Clinical prognostic value of RNA viral load and CD4 cell counts during untreated HIV-1 infection—a quantitative review. PloS one 4: e5950. pmid:19536329
  67. Raboud JM, Haley L, Montaner JS, Murphy C, Januszewska M, et al. (1995) Quantification of the variation due to laboratory and physiologic sources in CD4 lymphocyte counts of clinically stable HIV-infected individuals. Journal of acquired immune deficiency syndromes and human retrovirology: official publication of the International Retrovirology Association 10 Suppl 2: S67–73.
  68. World Health Organization (2013) Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection. Geneva: World Health Organization.
  69. Wilson GA, Barnett D, Reilly JT (1995) Evaluation of the COULTER Manual CD4 Count Kit for the enumeration of absolute CD4 T-lymphocyte counts. London, UK: Medical Devices Agency.
  70. Whitby L, Whitby A, Fletcher M, Helbert M, Reilly JT, et al. (2013) Comparison of methodological data measurement limits in CD4(+) T lymphocyte flow cytometric enumeration and their clinical impact on HIV management. Cytometry Part B, Clinical cytometry 84: 248–254. pmid:23788473
  71. Wynberg E, Cooke G, Shroufi A, Reid SD, Ford N (2014) Impact of point-of-care CD4 testing on linkage to HIV care: a systematic review. Journal of the International AIDS Society 17: 18809. pmid:24447595