Published on December 9, 2016

Using a well-characterized HIV-1 subtype C incidence cohort to compare the performance of two biomarker assays for recent HIV infection

Infection with the human immunodeficiency virus (HIV) remains one of the biggest global health threats, and significant investments have been made in funding prevention, care and treatment programmes to control and manage HIV. UNAIDS has set a goal to end AIDS by 2030 (1), which includes reducing HIV incidence to less than 1 per 1,000 adults per annum.

Measuring HIV incidence directly requires repeatedly testing cohorts of people who are initially HIV-negative, but this is expensive and logistically challenging, and those tested may not be representative of the general population. Laboratory assays that identify recent infections among samples collected in a single cross-sectional survey could provide a cheaper and easier way of measuring HIV incidence. However, ongoing evaluations of candidate laboratory assays have highlighted both the strengths and weaknesses of individual assays in correctly identifying recent infections (2-5).

Two main characteristics define the performance of a recency assay: the mean duration of recent infection (MDRI) and the false recency rate (FRR). The MDRI is the mean time that an infected person remains in a state of recent infection, counted only up to a pre-defined time T. The assay measures the concentration of HIV antibodies in a blood sample as a normalised optical density (ODn) or avidity index (AI). If the ODn or AI is below a pre-selected cut-off, C, the infection is classified as recent. The FRR is then the probability that a person appears to be recently infected when they have in fact been positive for longer than T. An ideal test should have a cut-off low enough to minimise the FRR but high enough to give a suitably long MDRI, so that enough samples test recent and the sample size needed to estimate incidence at a given level of precision is not too great (6,7). The MDRI depends on antibody kinetics, which vary among individuals, by HIV subtype and by geographical region.
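The way the MDRI and FRR enter an incidence calculation can be sketched as follows. This is a minimal illustration of the standard adjusted cross-sectional estimator (Kassanjee et al.); the survey counts, the choice of T, and the function name are hypothetical, not taken from the ZVITAMBO analysis:

```python
def incidence_per_person_year(n_neg, n_pos, n_recent, mdri_days, frr, T_days):
    """Adjusted cross-sectional HIV incidence estimator (a sketch).

    n_neg    -- HIV-negative participants in the survey
    n_pos    -- HIV-positive participants
    n_recent -- positives classified as recent by the assay
    mdri_days, T_days -- MDRI and recency time horizon T, in days
    frr      -- false recency rate as a proportion (0.011 for 1.1%)
    """
    mdri_yrs = mdri_days / 365.25
    T_yrs = T_days / 365.25
    # Subtract the expected number of false-recent results, then divide by
    # the person-time "window" that a recent classification represents.
    return (n_recent - frr * n_pos) / (n_neg * (mdri_yrs - frr * T_yrs))

# Hypothetical survey: 9,000 negatives, 1,000 positives, 30 classified recent,
# using the BioRad characteristics reported below (MDRI 141 days, FRR 1.1%)
# and T = 2 years.
lam = incidence_per_person_year(9000, 1000, 30, 141, 0.011, 730)
print(f"{100 * lam:.2f} infections per 100 person-years")
# → 0.58 infections per 100 person-years
```

The estimator makes the trade-off in the text concrete: a longer MDRI enlarges the denominator (more precision from the same survey), while a higher FRR shrinks the numerator and, if mis-specified, biases the estimate.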

Following an evaluation by Kassanjee et al. (4) of samples used by the Consortium for the Evaluation and Performance of HIV Incidence Assays (CEPHIA), we chose two candidate assays, the Sedia Limiting Antigen (LAg) and BioRad avidity (BioRad) assays, for further evaluation in cases infected with clade C virus, the dominant clade of HIV found in southern Africa. We characterised the two assays using samples from postpartum women recruited between 1997 and 2000 into a prospective cohort trial, the Zimbabwe Vitamin A for Mothers and Babies Project (ZVITAMBO), carried out in Harare, Zimbabwe. The samples were taken within 96 hours of delivery and then at three-month intervals for two years, and had been used in an earlier evaluation of the BED capture enzyme immunoassay (8). We determined the MDRI using 591 samples from 184 seroconverting women, and the FRR by testing 2,825 cases known to have been HIV-positive for more than 12 months, and used these results to estimate HIV incidence over the first 12 months postpartum.

Using the recommended values of the cut-offs, C, the BioRad assay has a longer MDRI (141 days) than the LAg assay (104 days) but a higher FRR (1.1% versus 0.6%). These estimates of the MDRI and FRR were both lower than the estimates obtained for the clade C samples from the general population used by CEPHIA. The FRRs are also 4.4 and 8.0 times lower than the 4.8% FRR observed when the same samples were analysed using BED. The primary goal of an incidence assay is to classify an infection accurately as either recent or long-term on the basis of a predefined ODn/AI cut-off. In this respect, the much reduced FRRs estimated for BioRad and, particularly, LAg compared with BED make these newer assays a more attractive option for the estimation of HIV incidence.

The major difference between the two new assays lay in the variability, relative to the cut-off, of the BioRad AI trajectories compared with the LAg optical density trajectories. For the BioRad assay, although the AI should plateau at 100%, we observed some instances where the AI increased to a maximum and then fell back below the cut-off (Figure 1).

Figure 1: Distribution of ODn for BED and LAg and AI for BioRad by days since seroconversion for the three assays

We concluded that the much-reduced FRRs associated with the BioRad and LAg avidity assays mark a major improvement in performance relative to assays such as the BED. The major difference between the performance of the BioRad and LAg assays, as measured in this evaluation, arises from the greater variability in the pattern of increase of the BioRad AI than of the LAg normalised OD. This implies better precision for LAg estimates than for BioRad. Whichever method is used in the field, it will be necessary to ensure that the MDRI and FRR are appropriate for the population being studied. Thus the lower values of the MDRI we observed in our study, compared with published results, are consistent with the fact that all of the ZVITAMBO clients were postpartum women. This serves as a general caution that physiological status may affect the observed values of the MDRI.

Funding Acknowledgements: This project was supported by funding from the President’s Emergency Plan for AIDS Relief (PEPFAR) through the US Centers for Disease Control and Prevention under the terms of grant 5U2GGH00315-04 to the Department of Community Medicine, University of Zimbabwe, and a sub-grant to the ZVITAMBO Project. Ms Elizabeth Gonese received a bursary from the South African Centre for Epidemiological Modelling and Analysis (SACEMA), Stellenbosch University, for her PhD studies.

The views expressed in this article are solely those of the authors and do not represent those of the manufacturers of LAg or BioRad, the CDC or SACEMA.