To date, clinical studies conducted worldwide have not demonstrated that hydroxychloroquine is effective either in preventing infection with the new coronavirus Sars-CoV-2 or in improving the course of Covid-19 symptoms.
Now, two recently published articles have concluded that hydroxychloroquine (a less toxic derivative of chloroquine; chloroquine itself has not been evaluated) is effective when used early in outpatients, and a third article says that, at the very least, its use cannot be ruled out. The quality of these works, however, has been questioned.
The works are two meta-analyses (combined analyses of data from several previously published studies) and a systematic review (a qualitative description of the available literature) and, therefore, did not evaluate new patients.
The apparent contradiction between the lack of clinical studies showing the drug's effectiveness against the new coronavirus and analyses strongly indicating its benefit is explained by mathematics.
For Daniel Tausk, associate professor at USP's Institute of Mathematics and Statistics, it is important to separate two things: the result the data point to and the strength of the evidence. In the case of the studies published so far, there is not enough evidence to say that hydroxychloroquine works.
According to the mathematician, this is where meta-analyses come in. "Even if before there was no good reason to believe that the drug works, combining several weak pieces of favorable evidence can produce strong favorable evidence."
In both meta-analyses, as in the individual studies, no antiviral action of the drug was found in preventing infection. The positive effect found concerned the development of symptoms.
"All randomized studies on prophylaxis or early treatment show favorable results for clinical symptoms, but none of them alone was powerful enough to establish a statistically significant conclusion of efficacy," says Tausk.
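Tausk's point about pooling can be sketched numerically. Below is a minimal fixed-effect, inverse-variance meta-analysis using invented numbers (not data from the actual trials): three small trials whose individual 95% confidence intervals all cross a risk ratio of 1 combine into a narrower pooled interval that does not.

```python
import math

# Invented (log) risk ratios and standard errors for three small trials,
# each individually non-significant (its 95% CI crosses RR = 1).
studies = [
    (math.log(0.65), 0.30),  # (log risk ratio, standard error)
    (math.log(0.70), 0.28),
    (math.log(0.72), 0.26),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se**2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

rr = math.exp(pooled_log_rr)
lo = math.exp(pooled_log_rr - 1.96 * pooled_se)
hi = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints "pooled RR = 0.69, 95% CI [0.51, 0.95]"
```

Because the pooled standard error is smaller than any individual study's, the combined interval stays entirely below 1, which is exactly how weak pieces of favorable evidence can add up to a formally significant result.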
In the view of other experts, however, the meta-analyses are technically poor and do not support this result.
Márcio Bittencourt, who holds a master's in public health from Harvard University and is a physician at USP's University Hospital, explains that meta-analyses bring together data from other studies, and their first basic premise is that the studies are homogeneous with one another, that is, that they use the same variables.
In addition, the included studies should have comparable outcomes: to assess hospitalization, all studies that measure hospitalization rates should be included; if the interest is in prophylaxis, all randomized controlled studies that measure this variable should be included. Bittencourt says the two meta-analyses do not fulfill these basic premises.
"Some important studies with results in the opposite direction were not included, which is what we call selection bias. These analyses have no scientific merit or value to be presented or discussed," he says.
In this sense, one error that can occur in bad meta-analyses is the so-called type 1 error, or false positive: by chance, a positive result is found even though it is not true; in a larger sample of the population, the result would not appear.
"This is worrying because any statistical analysis has a chance of generating false positives. The fact of selecting subgroups within a sample increases this chance," explains Natália Pasternak, PhD in microbiology from USP and president of the Instituto Questão de Ciência.
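The multiple-comparisons effect Pasternak describes is easy to simulate. The sketch below uses a deliberately simplified model in which each of 10 subgroup tests is an independent 5% chance of a false positive; the numbers are illustrative, not drawn from any real trial.

```python
import random

random.seed(42)

# Simulate a drug with NO real effect, then test 10 subgroups of the same
# sample separately at the 5% level. The chance of at least one "positive"
# subgroup (a type 1 error) is far higher than 5%.
def one_trial(n_subgroups=10, alpha=0.05):
    # Under the null, each subgroup test is "significant" with probability
    # alpha; each test is modeled as an independent draw for simplicity.
    return any(random.random() < alpha for _ in range(n_subgroups))

runs = 100_000
false_positive_rate = sum(one_trial() for _ in range(runs)) / runs
print(f"chance of at least one false positive: {false_positive_rate:.2f}")
# Theoretical value: 1 - 0.95**10, about 0.40
```

With ten independent looks at the data, the chance of at least one spurious "positive" climbs from 5% to roughly 40%, which is why selecting subgroups inflates false positives.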
Another frequent error is the so-called type 2 error, or false negative. "It occurs when a benefit exists, but the study does not find it. Both errors are likely to happen in a meta-analysis when studies with large sample sizes are mixed with others with smaller samples," explains Luís Correia, cardiologist, professor at the Federal University of Bahia and author of the blog Medicina Baseada em Evidência.
The researcher urges caution with meta-analyses. "Systematic reviews and meta-analyses serve to describe the universe of knowledge about a subject and can highlight uncertainty, saying what we don't yet know, or increase the precision of what we already know. Meta-analyses do not serve to generate knowledge that did not exist before."
One of the meta-analyses, authored by a professor at Yale University, mixes variables by including five randomized clinical studies that evaluate the use of hydroxychloroquine as prophylaxis both before and after exposure to the virus.
Separately, the studies did not show the drug's efficacy, but when analyzed together, the meta-analysis favored its use, with a reduction both in the risk of adverse effects in patients with Covid-19 and in the risk of contracting the disease (a relative risk of 0.76, with a 95% confidence interval).
A relative risk below 1 means, in this case, that the group receiving hydroxychloroquine had fewer infections than the placebo group. A 95% confidence interval means that, if the study were repeated on 100 hypothetical samples, the computed interval would contain the true value in about 95 of them.
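For illustration, here is how a relative risk and its 95% confidence interval are computed from a two-arm trial. The counts are invented, not taken from the studies discussed; the point is to show how a small trial can yield a risk below 1 whose interval still crosses 1.

```python
import math

# Invented counts for illustration: infections out of total participants
# in each arm of a hypothetical prophylaxis trial.
events_drug, n_drug = 12, 100        # hydroxychloroquine arm
events_placebo, n_placebo = 16, 100  # placebo arm

risk_drug = events_drug / n_drug
risk_placebo = events_placebo / n_placebo
rr = risk_drug / risk_placebo  # relative risk; below 1 favors the drug

# Standard error of log(RR) and the resulting 95% confidence interval.
se = math.sqrt(1/events_drug - 1/n_drug + 1/events_placebo - 1/n_placebo)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints "RR = 0.75, 95% CI [0.37, 1.50]"
```

Here the point estimate (0.75) favors the drug, but the interval runs from 0.37 to 1.50: because it includes 1, the trial by itself cannot distinguish a real benefit from chance, which is the "very wide margin of error" described below.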
In the other analysis, by Harvard researchers, the included studies individually pointed to a relative risk of contracting the virus below 1 (with a 95% confidence interval), but with a very wide margin of error, indicating that the data analyzed may simply be behaving randomly.
The authors themselves conclude that, based on the individual evidence from the included studies, it is not possible to assert hydroxychloroquine's lack of efficacy merely because the results are not "statistically significant".
For a result to be statistically significant, the so-called p-value must be equal to or less than 0.05 (5%); that is, in the analyzed data set, the chance that the observed correlation is due to chance alone is low.
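The idea behind a p-value can also be shown by simulation: assume the drug does nothing, and count how often chance alone produces a gap between the arms at least as large as the one observed. The counts below are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical trial: 12/100 infected on the drug vs 16/100 on placebo.
# Under the null hypothesis (no drug effect), how often does chance alone
# produce a difference at least this large? That frequency approximates
# the p-value.
obs_diff = 16/100 - 12/100
pooled_rate = (12 + 16) / 200  # null hypothesis: both arms share this rate

def simulate_diff():
    a = sum(random.random() < pooled_rate for _ in range(100))
    b = sum(random.random() < pooled_rate for _ in range(100))
    return abs(a - b) / 100

runs = 20_000
p_value = sum(simulate_diff() >= obs_diff for _ in range(runs)) / runs
print(f"simulated p-value: {p_value:.2f}")
```

For these invented counts the simulated p-value comes out far above 0.05: a 4-percentage-point gap between two arms of 100 people is entirely compatible with chance, which is why small trials so often fail to reach significance on their own.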
In the case of hydroxychloroquine, one of the studies included in the meta-analyses obtained a p-value of 35%, failing to support the hypothesis of the drug's efficacy. In the meta-analysis, however, it was possible to bring this value down to 5%, although the margin of error for the relative risk of contracting the disease still remained slightly above 1.
The lead author of that study, David Boulware, an infectious disease specialist and professor at the University of Minnesota, told Folha that the meta-analyses showed selection bias both in the studies and in the variables chosen. In their article, Boulware and colleagues found greater clinical improvement among participants in the hydroxychloroquine group who did not take the drug correctly because they stopped treatment.
"Our study had an initial protocol designed to detect a halving of cases in the hydroxychloroquine group compared to placebo, but we changed the protocol to measure a reduction in symptom severity. We observed a greater reduction in the disease among individuals in the intervention group who did not take hydroxychloroquine, that is, there is a strong indication that the improvement occurred by chance."
For him, the inclusion of his trial in the meta-analyses amounted to "cherry picking", which in scientific jargon means selecting only those studies known to give a favorable result or, after the research has begun, eliminating studies that would add noise to the final analysis; in other words, harvesting only the good cherries.
Tausk agrees that cherry picking can be a serious problem in a meta-analysis, but does not believe the studies in question engaged in it, since they all started from the same selection criteria: randomized controlled studies in outpatients.
Boulware disagrees and states that the inclusion of his clinical study, along with the exclusion of an article published on the 30th that also showed no efficacy for prophylactic use of hydroxychloroquine, tipped the scales to one side.
"There are choices the authors make that influence the results. Some can be overly biased, leading to wrong conclusions. So far, there are not many randomized controlled trials for treatment or prevention in outpatients with hydroxychloroquine, so the results are not robust. This means that one or more new studies could radically change them," he concludes.