
Covid-19: Masks still don't work


Masks are back in San Diego, California, where the school board has just decreed that students must cover their faces or be barred from entering classrooms. Never mind that, by CDC statistics and Census population figures, more than 99.99% of children in California (where Governor Gavin Newsom has regularly imposed masks) and more than 99.99% of children in Florida (where Governor Ron DeSantis let children live mask-free) did not die from Covid, either because they never caught the disease or because they caught it and survived. It doesn’t matter that more than 99.99% of children nationwide have not died from Covid, either. Nor does it matter that, again based on CDC statistics, those over 85 are more than two thousand times more likely to die from Covid than those under 18, or that of every 40 school-age children (ages five to 17) who died during the Covid era, just one of those deaths involved Covid. Despite all this, school authorities have decided that everyone should wear a mask.

Schools are not alone in bringing back mask mandates. The military has been one of the most masked institutions of all. Right off the bat, the Navy announced that everyone, uniformed or not, must wear masks indoors at its San Diego-area bases. Further north, the San Francisco Bay Area Rapid Transit (BART) system has reimposed masks. Meanwhile, many universities across the country have announced that they will require masks when classes resume this fall.

Such impositions ignore the fact that masks are physically uncomfortable, make breathing harder, and sharply compromise human social interaction. But none of that matters to mask fanatics, who are convinced that the benefits far outweigh any potential costs. So where is the evidence?

The nature of the public health establishment’s embrace of masks is captured perfectly in an article published last spring and hosted on the website of the National Institutes of Health. The article, authored by Seán M. Muller, speaks of a “failure of randomized controlled trials (RCTs) to provide supporting evidence” that masks work to reduce viral transmission, an issue I discussed at length last summer.

Muller deserves credit for being more honest than most mask advocates. He points out that the World Health Organization said in March 2020 that “there is no evidence” that masks work, and adds that “it was the absence of significant positive results from the pre-pandemic RCTs that informed the WHO [anti-mask] starting position.” Even so, Muller laments the reliance on RCTs rather than “mechanism-based reasoning,” a fancy term for the use of one’s mental faculties. Muller’s reasoning leads him to become convinced that masks must work. But that, of course, is why we have RCTs: to test people’s notions of what works and what doesn’t.

Muller recognizes that people “can transfer infectious material by touching their faces with non-sterile hands to put on or take off a mask,” but this important insight does not seem to affect his conclusions. Instead, he says that “mechanism-based reasoning provides a justification for the stance ultimately advocated by the WHO and adopted by many countries.” He admits that the “logic” that follows from such reasoning “is based only on a very simple germ theory of disease.” Yet, amazingly, he claims that such reasoning “puts the burden of proof on those who would argue against recommending masks.” Thus, even if the RCTs provide no evidence that masks work, indeed even as they continue to suggest the contrary, health officials must still recommend masks, and likely mandate them, because the claim that they work seems logical to some.

This stance is fundamentally unscientific. Yet it effectively captures the thinking that has animated mask mandates for more than two years now. This kind of thinking persists despite the remarkable similarity in Covid results (as detailed by John Tierney) between states with and without mask mandates, and between countries with and without mask mandates, which strongly suggests that masks do not work, just as the RCTs have indicated.

The lone, leaky scientific canoe supporting mask advocates, at least in terms of RCTs, is a recent study from Bangladesh. Published well over a year after the CDC and others had already enthusiastically embraced masks, the study claimed to have found statistically significant benefits from wearing surgical masks. The study’s listed first author, Yale economics professor Jason Abaluck, had publicly taken sides in the mask debate before the study went into the field. In the early days of Covid, he opined that the federal and state governments should provide free masks and perhaps impose fines on those who refuse to wear them. To the regret of mask advocates, the very small differences the study found, and the questionable methodology behind those results, gave little more scientific support for masks than mechanism-based reasoning had.

The Bangladesh RCT found that 1,086 people in the study’s mask group and 1,106 people in the control group [without masks] caught Covid. Incredibly, these numbers did not come from the study’s authors, even though they answer the central question the study was designed to address. Instead, Ben Recht, a professor of electrical engineering and computer science at the University of California, Berkeley, computed them from the figures the authors actually released, and Abaluck later confirmed Recht’s calculation of a difference of 20 people between the two groups.

This difference of 20 people (out of more than 340,000 participants) meant that about one in every 132 people in the control group caught Covid, versus about one in every 147 in the mask group. That is equivalent to 0.76% of the people in the control group and 0.68% of the people in the mask group getting Covid, a difference of 0.08 percentage points that the authors preferred to describe as a 9% reduction. Abaluck and colleagues also described their study as providing “clear evidence” that surgical masks work, although the claimed benefit of those masks registered as statistically significant only after the researchers “adjusted” the fraction of people who caught Covid in each group by adding “baseline controls” whose nature they do not transparently describe. (That this adjustment was made, and that it was needed to achieve statistical significance, is clear, however.)
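For readers who want to check the arithmetic, these headline figures can be reproduced in a few lines of Python. The group denominators below are back-calculated from the reported case counts and rates, so they are approximations rather than the study’s exact enrollment numbers:

```python
# Back-of-the-envelope check of the Bangladesh study's headline numbers.
# Denominators are approximated from the reported case counts and the
# reported 0.68% / 0.76% rates; the study's exact figures differ slightly.
cases_mask, cases_ctrl = 1_086, 1_106

n_mask = round(cases_mask / 0.0068)   # ~159,700 people
n_ctrl = round(cases_ctrl / 0.0076)   # ~145,500 people

rate_mask = cases_mask / n_mask * 100
rate_ctrl = cases_ctrl / n_ctrl * 100

print(f"mask group:    {rate_mask:.2f}% infected")
print(f"control group: {rate_ctrl:.2f}% infected")
print(f"absolute gap:  {rate_ctrl - rate_mask:.2f} percentage points")
# The raw relative reduction works out to about 11%; the smaller ~9%
# figure the authors quote comes from their adjusted model.
print(f"raw relative reduction: {(1 - rate_mask / rate_ctrl) * 100:.0f}%")
```

Note that the unadjusted relative reduction comes out near 11%, not the roughly 9% the authors quote, which is itself a product of the adjustments discussed above.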

This reported difference of 0.08 percentage points passed the test of statistical significance only because of the gigantic sample size claimed by the authors, which allowed small differences to register as significant rather than being attributed to chance. It is far from clear, however, that the study could actually deliver such precision.

Imagine if researchers randomly divided 340,000 individuals, no matter where they lived, into a mask group (170,000 people) and a no-mask control group (the other 170,000). One would expect this random division to yield two quite similar groups. That is part of the essence of an RCT: if you randomly assign people to one group or the other, the two groups will end up essentially similar simply by chance. It would be very different, however, to assign entire cities of 170,000 inhabitants to one group or the other, with every inhabitant of a given city landing in the same group. In that case, it would not be clear whether any differences in outcomes were due to the intervention (here, masks) or to differences between the cities (in virus exposure rates, cultural norms, and so on).

The Bangladesh study’s approach falls somewhere between these two scenarios. The researchers assigned 300 villages to the mask group (in which mask use was encouraged) and 300 villages with similar characteristics to the no-mask control group (in which it was not). Every inhabitant of a given village was placed in the same group. As a result, says Recht, “although the sample size seems huge (340,000 individuals), the effective number of samples was only 600 because of the treatment applied to individual villages.”

The researchers, however, did not analyze their findings at the village level. Instead, they analyzed them as if they had randomly assigned 340,000 individuals to the mask and control groups. Recht says that because “individual results are not independent” and “results within a village are correlated,” analyzing the study this way is “certainly wrong.” In other words, when individuals are randomly assigned to groups in an RCT, one person’s outcome shouldn’t affect another’s; that is hardly the case, though, when examining the effects of a highly contagious virus among people living in the same village, all of whom were allocated to the same group. In layman’s terms, each throw of a die should be independent, with no effect on subsequent throws. But in the Bangladesh study, each throw of the die actually altered subsequent throws.
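Recht’s figure of 600 effective samples follows from the standard design-effect formula for cluster-randomized trials. The sketch below uses hypothetical intra-cluster correlation (rho) values, chosen purely for illustration, to show how the effective sample size shrinks as outcomes within a village become more correlated:

```python
# Standard design effect for a cluster-randomized trial: with clusters of
# size m and intra-cluster correlation rho, the variance of the estimate
# is inflated by DEFF = 1 + (m - 1) * rho, so the effective sample size
# is the nominal N divided by DEFF. The rho values are hypothetical.
def effective_sample_size(n_total: int, cluster_size: int, rho: float) -> float:
    return n_total / (1 + (cluster_size - 1) * rho)

N = 340_000        # nominal participants in the Bangladesh study
m = N // 600       # roughly 567 people per village across 600 villages
for rho in (0.0, 0.01, 0.05, 1.0):
    # rho = 1.0 is the extreme where a whole village shares one outcome
    print(f"rho = {rho:<4}: effective N = {effective_sample_size(N, m, rho):,.0f}")
```

At rho = 0 the clustering costs nothing; at the opposite extreme, where everyone in a village shares one outcome, the 340,000 nominal participants carry no more information than the 600 villages themselves.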

Recht cites an older RCT on masks (which I discussed in my 2021 piece) that adjusted for such correlation; that is, it accounted for the fact that one person’s outcome might influence another’s. Although that older RCT randomly assigned families rather than villages to a given group, it still assumed correlation and adjusted for it. The Bangladesh study, whose correlation was far greater, assumed there was none. After adjusting for correlation, Recht found that the Bangladesh study showed no statistically significant benefit from masks.

The danger of pretending to have randomly assigned 340,000 individuals is that huge sample sizes, which suggest great precision, allow small differences to register as statistically significant, since there is a smaller probability that they reflect chance. That is fine if a trial really delivers such precision, but not if it inflates its sample size by a factor of more than 500 (600 versus 340,000), or even by a factor of five. Such inflation risks producing “statistically significant” results that are in fact the product of pure chance. That is exactly what seems to have happened in the Bangladesh study.
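This mechanism can be demonstrated with a toy simulation in which every number is hypothetical: two arms of 300 villages each, no mask effect at all, but infection rates that vary modestly from village to village. A test that treats the villagers as independent individuals assumes far less variability than actually exists:

```python
# Toy simulation (all parameters invented for illustration): under a null
# effect, village-to-village variation makes chance differences between
# arms much larger than an individual-level analysis assumes.
import random
from math import exp, sqrt

random.seed(0)

VILLAGES, PEOPLE = 300, 567     # one arm: 300 villages of ~567 people
N_ARM = VILLAGES * PEOPLE

def poisson(lam: float) -> int:
    """Knuth's method; adequate for the small means used here."""
    if lam <= 0:
        return 0
    threshold = exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def arm_cases() -> int:
    # Each village gets its own underlying infection rate (hypothetical
    # spread: mean 0.7%, sd 0.4%), then cases are drawn around it.
    total = 0
    for _ in range(VILLAGES):
        rate = max(0.0, random.gauss(0.007, 0.004))
        total += poisson(PEOPLE * rate)
    return total

# Simulate many "null" trials: both arms identical, no mask effect.
diffs, rates = [], []
for _ in range(300):
    a, b = arm_cases(), arm_cases()
    diffs.append(a / N_ARM - b / N_ARM)
    rates.extend([a / N_ARM, b / N_ARM])

mean_diff = sum(diffs) / len(diffs)
empirical_sd = sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1))

p_hat = sum(rates) / len(rates)
naive_sd = sqrt(2 * p_hat * (1 - p_hat) / N_ARM)   # individual-level assumption

print(f"true spread of chance rate differences: {empirical_sd:.5f}")
print(f"spread assumed by the naive test:       {naive_sd:.5f}")
print(f"variability understated by a factor of ~{empirical_sd / naive_sd:.1f}")
```

Because the naive analysis understates how much the two arms can differ by pure chance, differences well within the range of noise can clear the nominal significance bar.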

The mainstream press trumpeted the study as confirming that surgical masks work and as suggesting that cloth masks (which, overall, showed no statistically significant benefit) should perhaps be abandoned. But the study’s actual findings were more interesting. It found no statistically significant evidence that masks work for people younger than 40. For those in their forties, it showed statistically significant evidence that cloth masks work, but no corresponding evidence to support surgical masks. For those over 50, it found statistically significant evidence that surgical masks work, but no corresponding evidence to support cloth masks. To complicate matters further, the researchers handed out red and purple cloth masks. Recht, citing study data that the authors did not include in the manuscript or tables, says that under the study’s method of analysis, “purple cloth masks did nothing, but red masks ‘work.’” He adds: “In fact, the red masks were more efficient than the surgical masks!” When a study starts producing findings of this type, its results begin to look like random noise.

Moreover, since there were only 20 fewer Covid cases in the mask group than in the no-mask control group, most of the difference between the rates of 0.68% in the former and 0.76% in the latter was due to differences in the sizes of two groups that were supposed to be the same size. The researchers omitted from their analysis thousands of people, disproportionately in the control group, whom they were unable to contact. Maria Chikina of the University of Pittsburgh, Wesley Pegden of Carnegie Mellon, and Recht found that the study’s unblinded staff (that is, workers who knew which participants were assigned to which groups) “approached” participants in the mask group at much higher rates than those in the control group. Indeed, Chikina, Pegden, and Recht say that “the main significant difference” producing an “imbalance” between the groups was the “behavior of the study team.”

Under the “intention-to-treat” principle, everyone originally randomized to each group should have been included in the analysis, whether or not the team managed to contact them. Eric McCoy, a physician at the University of California, Irvine, explains that intention-to-treat analysis “preserves the benefits of randomization, which cannot be assumed when using other methods of analysis.” Echoing McCoy, Recht says that “for medical statistics experts, the intent-to-treat principle dictates that individuals who are unreachable or who refuse to respond should be included in the study. Omitting them invalidates the study.” Yet that is exactly what the authors of the Bangladesh study did. When Chikina, Pegden, and Recht reran the study’s results using intention-to-treat analysis, they found no statistically significant difference between the number of people who got Covid in the mask group and in the control group.
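To get a rough sense of how much the exclusions matter, one can run a naive two-proportion test both ways. The denominators below are approximations (back-calculated from the reported rates, with 170,000 per group as the intention-to-treat sketch), and this naive test also ignores the village-level clustering discussed earlier, which would weaken both results further:

```python
# Naive two-proportion z-test, run on approximate denominators: once on the
# reported post-exclusion figures, once on equal intention-to-treat groups.
from math import sqrt, erfc

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for a naive two-proportion z-test
    (ignores clustering, so it overstates significance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return erfc(abs(p1 - p2) / se / sqrt(2))

# Post-exclusion figures (denominators back-calculated from the reported
# 0.68% and 0.76% rates; approximate):
print(f"as analyzed:            p = {two_prop_z(1_086, 159_706, 1_106, 145_526):.3f}")

# Intention-to-treat sketch: same case counts, equal denominators:
print(f"intent-to-treat sketch: p = {two_prop_z(1_086, 170_000, 1_106, 170_000):.3f}")
```

On the post-exclusion figures, even this naive test comes out significant (p well under 0.05); restoring equal intention-to-treat denominators pushes p above 0.5, nowhere near significance.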

Thus, to show a statistically significant benefit from masks, the Bangladesh study had to depart from intention-to-treat analysis and also to treat 340,000 people who were never individually randomized as if they had been. Doing only one of these two things would not have produced a statistically significant result.

Moreover, the study was openly pro-mask by design, mounting an active campaign to persuade people in half the villages to wear them. The researchers found that physical distancing was 18% higher in the mask villages than in the control villages, which muddies any effort to distinguish the effects of masks from those of distancing. The study also offered financial incentives to some people, raising the possibility that, since participants and staff knew which group each person was in, some participants gave answers meant to please the researchers (and only those who reported symptoms suggestive of Covid were given antibody tests). Finally, the study did not test how many people had Covid antibodies beforehand, even though its main findings about masks rested on how many people had Covid antibodies afterward. That is like determining whether a family bought butter on its last trip to the supermarket by checking whether there is butter in the refrigerator.

In short, the Bangladesh study found minuscule differences in how many people got Covid in the mask and no-mask (control) groups, and those minuscule differences registered as statistically significant only through several questionable methodological choices. The researchers conducted their analysis as if they had randomly divided 340,000 individuals between the mask group and the control group, when in fact they had randomly divided 600 villages. They departed from intention-to-treat analysis, without which they would not have shown statistical significance even with this inflated sample size. They adjusted the ratio of Covid cases between the mask and no-mask groups only by adding poorly explained baseline controls, without which the surgical masks would not have tested as beneficial with statistical significance. And they based their main findings on whether people had Covid antibodies at the end of the study, without having tested whether they already had them at the start.

Despite all this, the CDC cites this study favorably and calls it “well designed.” And even before the effort had been peer-reviewed and published, Abaluck proclaimed, “I think this should end any scientific debate about the effectiveness of masks.”

Remember, there is no principled basis for cherry-picking the Bangladesh study’s results. If the study convinces people that masks work, then it should also convince people in their forties to wear cloth masks (only red ones, not purple!) and then to switch to surgical masks when they turn fifty. All of these statistically significant findings flow from the same abandonment of intention-to-treat analysis and the same determination to analyze 340,000 people as if they had been individually randomized, when in fact they were lumped in with the rest of their villages. In layman’s terms: garbage in, garbage out.

The best scientific evidence continues to suggest that masks don’t work. Meanwhile, the public health establishment continues to ignore the evidence. Public health officials also remain almost completely blind to the profoundly adverse effects of masks on human interaction and quality of life. Seeing others’ faces and showing one’s own is the heart of human social life. In the words of political philosopher Pierre Manent, “Visibly presenting a refusal to be seen is an ongoing aggression against human coexistence.”

Using the power of government to prevent individuals from showing their faces to others is even worse: it is a continual attack on human freedom. Indeed, as Manent says, “the visibility of the face is one of the fundamental conditions of sociability, of that [mutual] awareness that precedes any declaration of rights and conditions them.” It may be that the only thing worse than denying the rights of free men and women is persecuting their children.

Jeffrey H. Anderson is president of the American Main Street Initiative, a think tank for ordinary Americans. He served as director of the Bureau of Justice Statistics at the United States Department of Justice from 2017 to 2021.

©2022 City Journal. Published with permission. Original in English.