HIV Prevention Clinical Trials: Size Matters
In February 2009 the Microbicide Trials Network (MTN) announced the main finding of its Phase II/IIb Safety and Effectiveness Study of the Vaginal Microbicides 0.5% PRO 2000 Gel for the Prevention of HIV Infection in Women (a.k.a. HPTN 035).
This study, conducted in Africa and the US, was set up to find out whether a chemical compound called PRO 2000 could prevent HIV infection in women. The compound came as a gel that women applied vaginally prior to having sex. A product that can be used in this way to protect against HIV infection is called a microbicide. Several microbicides have been tested in what are called "clinical trials" and to date none has been proven successful in preventing HIV infection.
A rather large number of women took part in the HPTN 035 study (more than 3,000). HIV-negative participants were randomly given PRO 2000 with condoms, a placebo (gel without active substance) with condoms, or condoms alone. They also received counselling about condom use and safe sex. None of the women knew what they were given, since the trial was double-blinded (even those handing the gel to the women did not know what they were giving). Women were followed at regular intervals for one year. At the end of the study the number of HIV infections in each group was compared.
When the results came out there was quite a lot of excitement. The HPTN 035 study showed a 30% reduction in the number of HIV infections in the group of women who were given PRO 2000 compared to the group of women who were given the placebo (there were 36 HIV infections in the PRO 2000 group compared to 51 in the placebo group; HR = 0.70, 95% CI: 0.46, 1.08, p = 0.10). This was promising, as it was the first time that a compound that was not an antiretroviral had shown a protective effect in the real world.
At the time, Salim S. Abdool Karim, University of KwaZulu-Natal, HPTN 035 Protocol Chair said "These findings provide the first signal that a microbicide gel may be able to prevent women from HIV infection. Indeed, for the millions of women at risk for HIV, especially young women in Africa, there is now a glimmer of hope. But these findings also indicate that more research is needed; we can't yet say that we have an effective microbicide."
Indeed, there was one setback: the result was not “statistically significant”.
In plain English, this means that the "observed" result may simply have happened by "chance". Toss a coin 10 times and count the number of heads and tails. You would expect 5 of each (assuming you have a perfectly well-balanced coin and that you are not cheating!), but because the number of times you toss the coin is small, this is probably not what will happen. Toss the coin 100 or 1,000 or 1 million times, and the numbers of heads and tails will come closer to being equal. The more times the coin is tossed, the better the chance of making a truthful observation. Statistics are useful mathematical tools for determining whether what you see is really what you think you see (but they won't tell you more than that). In clinical science, statistics help assess the results of often subjective human observation (think "homeopathy" or "voodoo science").
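The coin-toss intuition is easy to check for yourself. Here is a minimal simulation sketch in Python (standard library only; the function name and seed are just illustrative choices):

```python
import random

def heads_fraction(n_tosses, seed=42):
    """Toss a fair coin n_tosses times; return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} tosses: fraction of heads = {heads_fraction(n):.3f}")
```

With only 10 tosses the fraction routinely strays far from 0.5; with a million tosses it is pinned very close to it. That is the same reason a trial with 9,000 participants can settle a question that a trial with 3,000 cannot.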
So, was it chance?
Today, the Microbicides Development Programme (MDP) released the main findings of its Phase III trial of PRO 2000 vaginal microbicide gel (a.k.a. MDP 301). More than 9,000 women participated in the study and were again randomly given PRO 2000 or a placebo without knowing what they were given.
The MDP 301 and HPTN 035 studies were very similar. Both assessed how well women used the gel (adherence) and confirmed that they did indeed use it, and even liked it. They also confirmed that there were no differences in behaviour and no side-effects resulting from the use of the products.
The result of the MDP 301 is clear and definitive: PRO 2000 is not effective in preventing the transmission of HIV. The number of infections in the group given the microbicide was not statistically different from the number of infections in the group receiving the placebo.
Statistics for dummies
Adapted from AVAC
No study can produce a simple “yes” or “no” on whether a product worked. To make sense of the headlines and discussions regarding the data from this or any other clinical trial, it is useful to understand some statistical terms used to describe the result. For the MDP 301 trial, the data analysis will include comparisons of the numbers and rates of infections in the microbicide and placebo group during the study.
One key term is statistical significance. If a result is described as statistically significant, it means that an observed difference (for example, between rates of new infections in two groups of a trial) is very likely due to the product and is not a coincidence. Significance is always given with a confidence level. A 95 percent confidence level, which is standard for many trials, means that if the product truly had no effect, there would be at most a 5 percent likelihood of such a result occurring solely by chance.
The trial team will also report the confidence intervals associated with its findings. A confidence interval is a way of describing the reliability of a finding, which is given as a point estimate, such as a 50 percent reduction in risk of infection. The confidence interval is a range of values within which the real value is likely to lie. The narrower the confidence interval around a point estimate, the more likely it is that the result is accurate and would be seen again if the trial were repeated.
This can be confusing because all these values are interrelated, but to fully understand the strength of a result, one must know (1) the point value; (2) whether the result is statistically significant; (3) the confidence level, which may be expressed as a percent (95 percent or more) or a p-value (.05 or less); and (4) the confidence interval.
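To make these terms concrete, the HPTN 035 numbers quoted earlier (36 infections on PRO 2000 versus 51 on placebo) can be run through a simple count-ratio approximation. This is only a sketch, not the trial's actual time-to-event analysis (which is why it gives 0.71 rather than the published 0.70), but it reproduces the published confidence interval almost exactly:

```python
import math

def rate_ratio(events_treated, events_placebo, z_crit=1.959964):
    """Crude hazard-ratio estimate from raw event counts in two
    equally sized arms, with a normal-approximation 95% CI and a
    two-sided p-value computed on the log scale."""
    log_hr = math.log(events_treated / events_placebo)
    se = math.sqrt(1 / events_treated + 1 / events_placebo)
    lo = math.exp(log_hr - z_crit * se)
    hi = math.exp(log_hr + z_crit * se)
    # two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(log_hr) / se / math.sqrt(2))))
    return math.exp(log_hr), lo, hi, p

hr, lo, hi, p = rate_ratio(36, 51)  # HPTN 035 event counts
print(f"HR = {hr:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {p:.2f}")
```

The computed interval, (0.46, 1.08), straddles 1.0, the value meaning "no effect". That is the formal reason the HPTN 035 result was called "not statistically significant": the data are compatible both with a genuine 30% reduction and with no reduction at all.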
Are the results of the two studies different?
No, the results are not different. The HPTN 035 investigators accepted that the results of their study were encouraging but inconclusive (because they were not statistically significant). The MDP 301 study simply confirmed what HPTN 035 had started to see but could not establish, because the number of participants was not large enough. In fact, the results of the MDP 301 study in February 2008, before all the participants had joined the study, were very similar to those of the HPTN 035 study. Had the MDP study stopped at that point, its conclusion would have been very similar.
Does it matter and why?
Without the results of the MDP 301 study, another trial would have been necessary to confirm or refute the 30% reduction observed in HPTN 035. It also means that a lot of money could have been invested in the development of a product that simply does not work for HIV prevention. It means that small trials are not "good enough" to test the efficacy of an HIV prevention product. It also confirmed that animal models of HIV are not good gatekeepers for studies in humans (i.e. the results of animal studies should not determine whether a study in humans should go ahead; other sources of information and data should also be considered).
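How big is "big enough"? A standard back-of-the-envelope rule for this kind of trial is Schoenfeld's approximation, which estimates the number of HIV infections (events, not participants) a trial must observe to detect a given hazard ratio. The sketch below is illustrative only; the actual trials' sizes were set by their own design calculations:

```python
import math

def events_needed(hazard_ratio, z_alpha=1.959964, z_beta=0.841621):
    """Schoenfeld approximation: total events required to detect
    `hazard_ratio` with 1:1 randomisation, a two-sided 5% significance
    level (z_alpha) and 80% power (z_beta)."""
    return 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2

# Detecting the ~30% reduction (HR = 0.70) suggested by HPTN 035...
print(round(events_needed(0.70)), "events needed for HR = 0.70")
# ...versus a hypothetical much stronger product (HR = 0.50)
print(round(events_needed(0.50)), "events needed for HR = 0.50")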
There will be a lot more to learn from these two studies in the coming months, but a major lesson is that when it comes to clinical trials for HIV prevention, size matters. Together, these two studies send a clear message to the funders of HIV prevention research: big trials are needed and necessary. Large trials come at a cost, but it is crucial to invest in studies that can deliver answers we can trust.