Dear Reader,
The wearing of a face mask to protect against a respiratory virus is an act of grand deceit. It is a behavior that defies research on the topic. Wearing a face mask, as this article (one of many) points out, is both unsafe and ineffective.
Until the narrative around mandatory masking has changed, each day by 6am Eastern, I will both post here and send out a science-based reason why no one should wear a face mask.
I ask that you help me circulate these pieces to those around you who you believe could most benefit from them. It is important not to remain silent on this topic. These are important discussions to be having with friends, family members, business owners, healthcare practitioners, public servants, and others in the community.
-Allan
All research must foremost be logically and philosophically sound in its design, or else its outcome will be needlessly biased and perhaps even inaccurate. Below is a list of biases that have run amok over the last year of claims made about Covid-19 and the allegedly necessary public health interventions. Traditionally, even one of these, if severe enough, may be enough to discredit the work of a researcher and send them back to the drawing board, though identifying these biases honestly and accounting for them goes a long way.
Contrary to that intellectually honest behavior, since the Ides of March 2020, public health officials have practically held a contest to see how many of these biases and fallacies they can cram into a body of research and into each policy recommendation white paper.
Confirmation Bias — Focusing on outcomes that align with expectations. Data may therefore be interpreted to support a hypothesis or dismissed if it opposes a hypothesis.
Conformity Bias — This is pressure to be like those around you. It can often be described as peer pressure.
Halo Effect — Encountering positive information about a person makes you value their opinion more.
Horns Effect — The opposite of Halo Effect, hearing something bad about a person causes you to value their opinion less.
Both of these — Halo Effect and Horns Effect — are examples of Reactive Devaluation — We judge an idea based on how we feel about a person with the idea.
Ad Hominem Fallacies arise from this — the tendency to attack the person with the idea and not the idea itself.
Anchor Bias — Being unable to let go of a specific piece of information and thereby building an understanding around that not necessarily relevant piece of information. This is particularly harmful with over-specialization and lack of research outside of one’s field.
Authority Bias — Information is given more weight because it comes from an authority figure.
Overconfidence Bias — Being too confident in one’s own ability. This often leads to being wrong.
Bandwagon Effect — One tends to believe in something because others believe it.
Groupthink — The goal is to avoid conflict and pursue harmony, so a working group ultimately produces strange, disparate results, far different from anything a single member of the group would individually consider good work. Some would call this compromise; a more accurate description is awful output.
Ambiguity Effect — If a benefit is clear, one is more likely to want that outcome and will take risks accordingly; if the benefit is unclear, one is less likely to pursue that outcome and less likely to take a risk at achieving it.
Curse of Knowledge — The better informed will not listen to the less informed. This is an inability to put oneself into the shoes of another. The more educated one is, the harder it becomes to empathize. This is a common outcome of knowledge accumulation and education. It is a problem with placing decision-making into the hands of the most knowledgeable — they tend to lack empathy. Almost all political debate since 2015 has fallen into these two camps. 1.) I know more than you; you must listen to me. 2.) You do not understand me, and your solution does not work for me.
Observer-Expectancy Effect — A researcher believes X will happen, thereby influencing research towards X outcome. This may be overt or subconscious.
Compassion Fade — A preference for easily identifiable, easily recognizable sources rather than less personal data sets, despite the fact that the less personal data sets may be even more effective at getting to underlying truth.
Law of the Instrument — To he who has a hammer, everything looks like a nail.
Ostrich Effect — Burying one’s head in the sand in response to criticism, rather than embracing criticism as an opportunity to more ardently pursue truth.
Stereotyping — Taking a set of characteristics and using them to create an artificial problem that does not exist. Applicable example: if SARS-CoV-2 causes Covid-19 and breathing is one way that a virus is transmitted, then restricting breathing will lead to less virus transmission.
Illusory Correlation — Just because a solution looks like a solution, does not mean it actually is a solution.
Framing Effect — The answer is based on how research is framed rather than what truth is.
Sponsor Bias — Did a participant know who was funding the study? Did they research it? What impact may that have had? Was a participant motivated by strong feelings about a research sponsor?
Habituation Bias — Asking a series of questions in a similar way may lead to similar answers.
Many aspects of bias relate to the asking of questions, which is why the headline of a study should be set aside until its methodology has been examined. Reading the fine print of a study is vital.
Habituation Bias is a variation of the broader question-order bias, by which the answer to a question may change based on how, and in what order, the questions are asked.
This is related to wording bias, such as the loaded phrasing push pollsters use intentionally to see to it that outcomes match those which are desired.
Social Desirability Bias — This is responding a certain way to be liked by others, including the researcher, which is an example of friendliness bias — a participant wanting to agree with the researcher.
There may be a tendency to over-report that which is seen as socially desirable and under-report that which is seen as socially undesirable.
Recency Bias — More weight is put upon that which is recent than on facts that may be more relevant but are less recent.
Hindsight Bias — “I knew it all along.”
Irrational Escalation — Also called the sunk cost fallacy. So much has been invested into an idea that you cannot simply abandon it, so you keep pushing for more investment in it, despite all indications that it is a losing proposition.
Knowledge Bias — Choosing what you know rather than what is best or better or even good.
As Christopher J. Pannucci and Edwin G. Wilkins point out in their article “Identifying and Avoiding Bias in Research”:1
“Bias is not a dichotomous variable. Interpretation of bias cannot be limited to a simple inquisition: is bias present or not? Instead, reviewers of the literature must consider the degree to which bias was prevented by proper study design and implementation. As some degree of bias is nearly always present in a published study, readers must also consider how bias might influence a study's conclusions.”2
Anyone reading research with a critical eye is called on to be conversant in bias and to be comfortable identifying that bias. Bias exists. Bias does not make a study inaccurate. An informed evaluation of bias helps one to gauge the usefulness of an outcome.
Looking at the various times the term “scientific consensus” has been rolled out over the past decade to silence all debate, it is often apparent that so much bias is present that the process can hardly be called scientific in its search for optimal approaches. We are instead left with a great deal of bullying, as can be expected where irrational escalation operates alongside such an influential curse of knowledge.
Any mainstream reporting on one-size-fits-all public health approaches has become so heavily biased as to virtually guarantee that truth will not be reported on.
One may choose to ignore this research, then, waiting for a time in the future when such research may again become usable. “Evidence-Based Medicine” has attracted near-cult-like fanaticism. In theory it makes a great deal of sense, but it can be taken to pedantic lengths that crowd out both dialogue and dissent, leaving little room for individual outcomes and ridiculing notions as quaint as intuition. Yet certainly, those are important concepts that need consideration alongside evidence. Taken to such lengths, as is now almost always the norm, Evidence-Based Medicine feels much more like a form of control than a scientific pursuit of truth. It is understandable, then, why some would choose to dismiss the entire process of such data-heavy control and to rely instead on faculties that have long carried humanity through the trials of life.
Alternately, one may choose to engage in the process for keeping such research honest.
Using the powerful tools above, observers can and should poke holes in available research; identifying biases is an effective way to do so. We are all now consumers of public health research, whether we like it or not. Having such research so present in daily life, almost always in click-worthy headline format, is largely a bad thing, but it is the reality we live in, and correspondingly not a reality anyone should be naive about or complacent regarding.
Many research teams list a corresponding author with a publicly available email address who has agreed to reply to questions about the research. A consumer of research should feel free to ask the questions they find pressing. You must often reach out multiple times if you expect to hear back. If the correspondent does not respond after three or four follow-up attempts, carbon copy the whole team. If that does not work, reach out to the correspondent's department head. Then reach out to the department heads of all of the study's authors. If that does not work, reach out to the study's funding source. Eventually they will respond.
The same can be done with journalists or anyone else who reports on poorly done research.
Do not bother to do this if it is a passing fancy. There are too many passing fancies and busy researchers will feel incentivized to ignore you. Push firmly, however, against those topics that are important to you and which are dishonestly portrayed, and you may be surprised by how effectively one vocal, committed person can move the needle and help to keep researchers as diligent as possible.
Censoring such poorly compiled news and research is not a useful tool, but being savvy in response to it and realizing its many limitations is very useful.
Pannucci CJ, Wilkins EG. Identifying and Avoiding Bias in Research. Plastic and Reconstructive Surgery. 2010;126(2):619-625. doi:10.1097/prs.0b013e3181de24bc
Gerhard T. Bias: Considerations for research practice. Am. J. Health. Syst. Pharm. 2008;65:2159–2168.
The bestselling book "Face Masks In One Lesson" by Allan Stevo describes how to never wear a face mask again. The follow-up to the book, "Face Masks Hurt Kids," describes why to never wear a face mask again. We must defeat the awful narrative around the mandates.
Examples of how face masks hurt kids will be posted to the Lockdown Land Substack each morning by 6am Eastern until the narrative around this ineffective and harmful medical intervention has shifted. Face masks are, in fact, not just harmful to children. Face masks are harmful to everyone. Thank you so much for helping me circulate this research.