In 2013, the WHO recommended viral load (VL) testing as the preferred monitoring approach for the diagnosis and confirmation of virological failure in HIV-positive patients on antiretroviral therapy (ART) (1). Evidence suggests that the current strategies of CD4 count and clinical monitoring lack sensitivity for detecting virological failure and do not accurately predict true virological failure (1).
In 2016, the WHO updated their recommendations: “Routine viral load monitoring can be carried out at 6 months, at 12 months and then every 12 months thereafter if the patient is stable on ART to synchronize with routine monitoring and evaluation reporting” (2). In addition, the WHO indicated that, where routine VL monitoring is available, treatment monitoring using CD4 cell count can be stopped in patients who are stable on ART and virally suppressed (2). Both updated recommendations are conditional and based on low-quality evidence. The WHO guidelines also indicate that, where routine VL testing is not available, CD4 count and clinical monitoring should be used to diagnose treatment failure, supplemented by targeted VL testing to confirm failure where possible.
There is evidence that CD4 count monitoring can lead to premature (1) or late (3) switching to 2nd-line treatment, resulting in ineffective resource utilisation. Patients who are switched to 2nd-line treatment too early, while they still have sufficient virological suppression, receive 2nd-line treatment unnecessarily and at a higher cost than 1st-line treatment. Patients who switch to 2nd-line treatment too late have higher VLs, which leads to higher transmission rates. CD4 count is, however, an indicator of the risk of developing severe opportunistic infections (OIs), as the incidence of severe OIs is higher in patients with lower CD4 counts. Furthermore, VL testing is more expensive than CD4 count testing.
Based on the quality of the evidence regarding the VL and CD4 count monitoring recommendations, more research in this field is required.
Mathematical models to compare different monitoring scenarios
Mathematical models were developed to investigate several policy options related to the public health and economic impact of VL, CD4 count and hybrid VL/CD4 count monitoring strategies for assessing treatment failure.
Disease progression was modelled from 6 months on ART until death. This modelling was performed in TreeAge Pro software. Costs, transmissions and OIs were modelled in Microsoft Excel. The models use a 5-year time horizon, with patients entering the model at a start age of 33 years.
Seven scenarios were investigated with different frequencies of VL testing per year:
1. One VL test vs two CD4 tests
2. Two VL tests vs two CD4 tests
3. Two CD4 tests plus one VL test vs one VL test (hybrid scenario)
4. Two CD4 tests plus two VL tests vs two VL tests (hybrid scenario)
5. Two CD4 tests plus two VL tests vs two CD4 tests plus one VL test (hybrid scenario)
6. Two CD4 tests plus one VL test vs two CD4 tests
7. Two CD4 tests plus two VL tests vs two CD4 tests
For each scenario, a benefit-cost ratio was calculated by dividing the difference in benefits (converted to a monetary value) by the difference in costs between the intervention and comparator.
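As a minimal sketch (assuming the monetised benefit and cost totals for each arm have already been computed; the numbers below are purely illustrative, not values from the study), the ratio calculation for any scenario reduces to:

```python
def benefit_cost_ratio(benefit_intervention: float, benefit_comparator: float,
                       cost_intervention: float, cost_comparator: float) -> float:
    """Incremental (monetised) benefit divided by incremental cost.

    A ratio above 1 indicates the intervention is cost-beneficial
    relative to the comparator.
    """
    incremental_benefit = benefit_intervention - benefit_comparator
    incremental_cost = cost_intervention - cost_comparator
    return incremental_benefit / incremental_cost

# Hypothetical values: 40 units of extra monetised benefit for 25 units of extra cost
print(benefit_cost_ratio(140.0, 100.0, 75.0, 50.0))  # → 1.6
```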
VL failure was defined as a persistent VL >1000 copies/ml. The benefit of CD4 count monitoring was defined as knowledge of the CD4 count, and therefore the ability to provide co-trimoxazole prophylaxis against some OIs (toxoplasmosis and Pneumocystis pneumonia) in persons with a CD4 count below 350 cells/mm3. Treatment of severe OIs was modelled; however, deaths from severe OIs were not. All patients on 1st-line ART receive isoniazid (INH) prophylaxis to prevent tuberculosis, regardless of monitoring strategy; therefore, there is no additional benefit in terms of INH prophylaxis.
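The two threshold rules above can be written down directly; a sketch, using the thresholds as stated in the text (VL >1000 copies/ml for failure; a known CD4 count below 350 cells/mm3 for co-trimoxazole eligibility):

```python
VL_FAILURE_THRESHOLD = 1000      # copies/ml, persistent
CD4_PROPHYLAXIS_THRESHOLD = 350  # cells/mm3

def virological_failure(viral_load: float) -> bool:
    """Persistent VL above 1000 copies/ml defines virological failure."""
    return viral_load > VL_FAILURE_THRESHOLD

def eligible_for_cotrimoxazole(cd4_known: bool, cd4_count: float) -> bool:
    """Prophylaxis requires a monitored (known) CD4 count below 350 cells/mm3."""
    return cd4_known and cd4_count < CD4_PROPHYLAXIS_THRESHOLD
```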
The benefit of VL monitoring was defined as the number of transmissions that could be averted if switching were based on VL, with unsuppressed persons switched to 2nd-line treatment before transmitting the virus. Transmissions were only calculated for persons with a VL >1000 copies/ml, as the transmission risk below 1000 copies/ml is negligible. In this model, transmissions are calculated only at the time of switching to 2nd-line treatment. The transmission probability with a single partner was calculated using the average VL in each VL category, assuming 100 sexual contacts with a single partner per year. The transmission probability for multiple partners was calculated by multiplying the single-partner transmission probability by the weighted average number of partners.
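Assuming a per-contact transmission probability derived from the average VL in a category (the per-contact value below is purely illustrative, not from the study), the single- and multiple-partner calculations described above can be sketched as:

```python
def single_partner_probability(per_contact_prob: float,
                               contacts_per_year: int = 100) -> float:
    """P(at least one transmission) over independent contacts with one partner."""
    return 1.0 - (1.0 - per_contact_prob) ** contacts_per_year

def multi_partner_probability(single_partner_prob: float,
                              weighted_avg_partners: float) -> float:
    """Model simplification: scale the single-partner probability
    by the weighted average number of partners."""
    return single_partner_prob * weighted_avg_partners

# Illustrative only: per-contact probability of 0.001 for some VL category
p_single = single_partner_probability(0.001)        # ~0.095
p_multi = multi_partner_probability(p_single, 1.4)  # ~0.133
```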
Patients with immunological failure before virological failure (CD4 <100 cells/mm3 but VL <1000 copies/ml) under the hybrid monitoring strategy were considered to have been switched to 2nd-line treatment unnecessarily (switched too soon). These patients consume more resources, as the more expensive 2nd-line treatment is provided for a longer time.
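Under the stated definition, flagging a premature switch is a single rule (a sketch; thresholds taken from the text above):

```python
def switched_too_soon(cd4_count: float, viral_load: float) -> bool:
    """Immunological failure (CD4 < 100 cells/mm3) without virological
    failure (VL < 1000 copies/ml): an unnecessary switch to 2nd-line ART."""
    return cd4_count < 100 and viral_load < 1000
```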
Costs considered in the model included 1st and 2nd line ART cost, VL and CD4 count monitoring costs, outpatient visits, laboratory tests for diagnosis of OIs, the cost of treatment of OIs and the cost of prophylaxis.
Public health impact
In the hybrid scenario comparing 2 CD4 tests + 1 VL test versus 2 CD4 tests + 2 VL tests, public health benefits include a difference of 25 580 transmissions between switching to 2nd-line treatment based on 1 VL test per year versus 2 VL tests per year. The difference in cases treated with co-trimoxazole prophylaxis when switching is based on 1 VL vs 2 VL tests was 810 061, and the difference in cases treated with INH prophylaxis was 1 787 716.
Two VL tests per year appeared more viable than one VL test per year when compared to biannual CD4 monitoring. The best hybrid scenario was two VL tests plus two CD4 tests per year, compared to one VL test plus two CD4 tests per year; this scenario resulted in the highest benefit-cost ratio (Table 1). From these results, it can be seen that:
- Scenarios 1, 2, 4, 6 and 7 were not cost-beneficial (benefit-cost ratio < 1). For scenarios 4 and 7, there was a gain in health care benefits, but the benefits did not outweigh the increased cost of the intervention. For scenarios 1, 2 and 6, there was a loss in health care benefits while more was spent on the intervention.
- Scenarios 3 and 5 were cost-beneficial (benefit-cost ratio > 1). The benefits of the intervention outweighed the increased cost of the intervention.
Table 1. Benefit-cost ratios for different HIV treatment monitoring scenarios
1. One VL test vs two CD4 tests
2. Two VL tests vs two CD4 tests
3. Two CD4 tests plus one VL test vs one VL test
4. Two CD4 tests plus two VL tests vs two VL tests
5. Two CD4 tests plus two VL tests vs two CD4 tests plus one VL test
6. Two CD4 tests plus one VL test vs two CD4 tests
7. Two CD4 tests plus two VL tests vs two CD4 tests
Considering the above results from an economic perspective, biannual CD4 testing remains the preferred monitoring strategy, given its lower cost and its role in preventing OIs. However, within a hybrid monitoring strategy, adding two VL tests instead of one to a biannual CD4 monitoring strategy would be beneficial, delivering additional benefits at a slightly increased cost.
Establishing the optimal monitoring strategy
The study attempted to evaluate several VL monitoring strategies based on the 2013 and 2016 WHO Guidelines. We addressed this by developing a mathematical model that mimics the natural history of people on ART in South Africa.
Firstly, the results indicate that, from both a public health and an economic perspective, replacing the current CD4 count monitoring with VL testing alone is not supported. Likewise, replacing two annual CD4 tests with two VL tests is not a good investment in health, although it is an improvement on the previous scenario.
The best economic results were achieved in the hybrid strategies, notably when comparing combinations of CD4 and VL tests (Scenarios 3 and 5).
These results are not in agreement with the recommendations from the WHO. It must be considered that some assumptions made during modelling might have impacted the results. Firstly, deaths during 2nd-line treatment were not modelled, which means the cost of 2nd-line treatment might have been overestimated. Secondly, deaths from OIs were not modelled, which could have affected the number of persons progressing through the model. Thirdly, the assumption that only persons who know their CD4 count and have a CD4 count below 350 cells/mm3 receive co-trimoxazole prophylaxis might overestimate the benefit of CD4 monitoring, as this might not reflect practice. Lastly, as transmissions were only calculated at the time of switching, the number of transmissions might have been underestimated.
In this study, we did not consider the budget impact of the different scenarios; however, it is fair to assume that, given the cost of VL testing, any scenario that includes a VL test will be costlier than a CD4-only testing strategy. It is also fair to assume that some of these additional VL testing costs will be offset by savings related to improved viral suppression.
In conclusion, more cost modelling studies are required to evaluate the cost-benefit of the 2016 WHO Guidelines and to compare to the results obtained in this study.