Any animal that uses emotions to focus its attention has an evolutionarily relevant fear of death. When something seemingly death-related appears in its immediate surroundings, the animal must attend to identifying and responding to possible threats, each in its own way. We humans have inherited this fear of death, but human intelligence allows many new ways for environmental cues to suggest death to us. This is a clear comparative advantage, but it brings us some problems too. So we usually have a hard time managing our wandering thoughts, both to avoid triggering this fear too often and to avoid responding more strongly than necessary to the identified threat. This trouble in thinking about death is often used to explain "irrational" behaviors like delay in writing wills or arranging for life insurance or HIV tests, or interest in odd religious beliefs.
Death as a great mystery
People in every culture believe in some kind of afterlife or, at the very least, are unsure about what happens to the mind at death. Why do we wonder where our mind goes when the body is dead? After all, the mind is what the brain does - i.e., it's more a verb than a noun.
The common view of death as a great mystery is usually brushed aside as an emotionally fueled desire to believe that death isn't the end of the road. According to the proponents of Terror Management Theory, a school of research in Social Psychology, we possess a "secret" arsenal of psychological defenses designed to keep our death anxieties at bay. Writing books, for example, would be interpreted as an exercise in "symbolic immortality", even if we are not believers in the afterlife: We write them for posterity, to enable a concrete set of our ephemeral ideas to outlive our biological organisms. Other, less obvious, beliefs, behaviors and attitudes would also exist to assuage what would otherwise be crippling anxiety about the end of the ego's existence. Actually, this is only part of a wider machinery: our innate, dangerous and necessary art of self-deception, something that, e.g., Adam Smith, Freud and Sartre have placed at the core of their theories of human behavior and emotions. But the concept of self-deception, in its various forms, has a longer history in Western thought. E.g., Penelope doesn't really want Odysseus to come back home, nor is he truly so sad to be leaving again at the end of the story; Romeo and Juliet were not quite sure of their love.
Here is a brief list of everyday examples of common human self-deception. "Marital aggrandizement" is an unrealistically positive assessment of one's spouse and marriage, a technical term psychologists often use to refer to the all-important selective forgetfulness, one of the basic elements of a satisfactory marriage. Most people think they are smarter than average; if necessary, we redefine the dimension of the "competition" so we can "win". People talk down some of their skills in some areas to signal and maintain a self-image of modesty. We look with an artificial skepticism at our previous selves, seeking to confirm the (usually false) impression that we are capable of taking a long, hard look at ourselves and, most of all, to support our current and largely positive self-image. When people cannot claim credit themselves, they often seek the glory of affiliation, and then they cannot fathom that their affiliated causes might not always be worthy ones. Debates and exchanges of information between experts often polarize opinion rather than producing even partial convergence. This reluctance to defer does not always appear to be a well-grounded suspicion of others. Many people are deliberately dismissive and irrational in their points of view, while maintaining a passionate self-righteousness. They are the opposite of how "information gatherers/processors" would behave; i.e., people don't behave as if they had "rational expectations", so their biased perceptions of the world are one of the most important factors for understanding incentives.
We usually don't feel very afraid of death, and we are not aware of thinking about death much. However, it seems that we are just not very aware of how much the fear of death directs our thinking. Psychologists have found that even weak death cues measurably change our thinking, usually in the direction of greater self-deception. Some people think that religion and other supernatural beliefs exist primarily to help people deal with death fears. However, a small but growing number of researchers in psychology now argue that these irrational beliefs are an inevitable by-product of the evolution of self-consciousness.
All the above-mentioned behaviors/beliefs were fully functional for our distant ancestors: By believing more in themselves and in their affiliations, they would credibly signal their ability and loyalty, which might inspire their neighbors to help and protect them more from threats. They also "suffered" the illusion that their minds were immortal. Unmistakably, we have inherited all this gross irrationality from them, so that we, by virtue of our evolved cognitive architectures, had trouble conceptualizing our own psychological non-existence from the start. E.g., be you an "extinctivist" or a "continuist", your own mortality is unfalsifiable from the first-person perspective. Instead of the usual "cultural indoctrination hypothesis", this new theory assumes the falsifiable "simulation constraint hypothesis": As we have never consciously been without consciousness, even our best simulations of "true nothingness" just aren't good enough, so in order to imagine what it's like to be dead we appeal to our background of conscious experiences(1). Indeed, from an evolutionary perspective, a coherent theory of psychological death is not necessarily vital; it suffices that we understand the cessation of "agency", something even a baby can recognize.
Whatever the underlying reasons, some kind and level of self-deception is often OK or even necessary. E.g., depressed people, even though their thought processes are often quite irrational, tend to have more accurate views about their real standing in the world. However, self-deception can also work to our great disadvantage. Individuals often stick with their political views even when a contrary reality stares them in the face. Entire sectors of today's modern economies are based on self-deception. E.g., people spend their money on gym memberships because they are unwilling to confront their illusions about how much they dislike exercise. (Or they might think, incorrectly, that the flat fee will get them to go to the gym more often.) This clear cost of a specific self-deception (about exercise) demonstrates how strong an incentive we have to self-deceive. Add to this the higher and not so measurable long-run health costs of actually not exercising.
Fairy Tales and Quack Cures
We usually believe that all those above-mentioned tales of self-deception are not that applicable to modern society's ever-advancing medicine and the resulting health/lifespan improvements. As we understand a great deal about the mechanisms of life, we have generated reasoned hypotheses about the causes of death and the interventions that might prevent it. But biological systems are far too complex for us to have so much confidence in such hypotheses. Furthermore, informal medical judgments often fall short of statistical standards, so that clinical experience cannot by itself assure much. Indeed, doctors mostly just copy what doctors around them do and don't rely on specific scientific studies.
Drugs approved by the regulatory body to treat a certain condition can be freely prescribed to treat any other condition, regardless of what studies say. Even within the approved uses there are serious doubts. Today's blinded randomized clinical trials (RCTs) have been under harsh criticism for more than 10 years. Three common problems are most often cited:
- drug companies often do a bunch of trials, but only report the trials that make their drugs look good (see the sketch right after this list);
- drug companies usually run best-case trials - i.e., on patients most likely to benefit and under doctors most likely to make good decisions;
- patients who experience side-effects become convinced that they are getting the real treatment (the enhanced placebo effect, or unblinding).
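To see how much damage the first problem alone can do, here is a minimal sketch of selective reporting; all numbers are illustrative assumptions, not data about any real drug: many small trials of a compound with zero true effect, of which only the "positive" ones get reported.

```python
# Minimal sketch of selective reporting: a "drug" with zero true effect is
# tested in many small two-arm trials, but only trials that come out both
# statistically significant and in the drug's favor get "published".
# All numbers here are illustrative assumptions, not data about any real drug.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm = 500, 30        # hypothetical: 500 small trials, 30 patients per arm

published = []
for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)      # the true effect is exactly zero
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05 and drug.mean() > placebo.mean():   # report only favorable "wins"
        published.append(drug.mean() - placebo.mean())

print(f"published: {len(published)} of {n_trials} trials")
if published:
    print(f"mean published 'effect': {np.mean(published):.2f} (true effect: 0.00)")
```

By construction there is nothing to find, yet the published subset shows a consistent, sizeable advantage for the drug; the other two problems (best-case patients, unblinding) only push the published numbers further in the drug's favor.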
The antidepressants (ADs) debate is a particularly "hot" instance of this problem, with formally published populist titles such as "Listening to Prozac but hearing placebo" and "The Emperor's new drugs". Although the pooling of various studies, including previously unreported "negatives", results in a highly statistically significant effect in favor of ADs (against placebo), the cause and the size of this effect are much disputed. Even if we assume or conclude that at least the Emperor's clothes are made of "real cloth"(2), there remains the "weaker" argument that the effect is small enough to be clinically unimportant.
Aside from the discussion about how big an effect needs to be in order to be clinically important, the fact is that today's standard RCTs are not designed to determine the size of the effect in usual clinical practice (i.e., their effectiveness). Indeed, estimating the real added value of ADs in particular patient groups is generally recognized to be no easy task.
Generally speaking, there's increasing concern that in modern medical research, false findings may be the majority, or even the vast majority, of published research claims.
In August 2005, PLoS Medicine published a paper by John P. A. Ioannidis, in which he argues that this should not be surprising, as it can be "easily" proved. Ioannidis derives the following list of corollaries about the probability that a research finding is actually true (some of them are pretty intuitive; a sketch of the underlying bookkeeping follows the list):
- the smaller the studies conducted (i.e., the smaller the sample sizes), the less likely the findings are to be true;
- the smaller the effect sizes, the less likely the findings are to be true;
- the greater the number and the lesser the selection of tested relationships, the less likely the findings are to be true;
- the greater the flexibility in designs, definitions, and analytical modes, the less likely the findings are to be true;
- the greater the financial and other interests/prejudices, the less likely the findings are to be true;
- the greater the number of scientific teams involved (i.e., replications), the less likely the findings are to be true(3).
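These corollaries all fall out of a simple formula for the post-study probability that a claimed finding is true (its positive predictive value, PPV), given the pre-study odds R that a tested relationship is real, the significance level alpha, the power 1 - beta, and a bias term u. A rough sketch of that bookkeeping, with illustrative numbers chosen in the spirit of the paper's examples (the function name and the specific inputs are mine, not the paper's):

```python
# Post-study probability (PPV) that a claimed finding is true, in the spirit of
# the Ioannidis framework. R = pre-study odds a tested relationship is real,
# alpha = significance level, power = 1 - beta, u = bias (share of analyses
# that would not otherwise come out "positive" but get reported as such anyway).
def ppv(R, alpha=0.05, power=0.80, u=0.0):
    beta = 1.0 - power
    true_positives = ((1 - beta) + u * beta) * R   # real relationships claimed positive
    false_positives = alpha + u * (1 - alpha)      # null relationships claimed positive
    return true_positives / (true_positives + false_positives)

# Illustrative case: an underpowered early-phase trial with 1:5 pre-study odds.
print(round(ppv(R=1/5, power=0.20), 2))          # ~0.44 with no bias
print(round(ppv(R=1/5, power=0.20, u=0.20), 2))  # ~0.23 with modest bias: about 1 in 4
```

Each corollary corresponds to pushing one of these inputs in the unfavorable direction: smaller studies and smaller effects mean lower power; more flexibility and stronger interests mean a larger u; testing many relationships with little pre-selection means lower pre-study odds R.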
In his framework, findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. A fact: The majority of modern (bio)medical research is operating in areas with very low pre- and post-study probability of true findings. Furthermore, assuming a "null field" (i.e., one with absolutely no yield of true scientific information(4), and therefore one in which all observed effect sizes would be expected to vary by chance around the null in the absence of bias), all deviations from what's expected by chance alone would simply be a pure measure of the prevailing bias. It follows that, among many "null fields", the fields that claim stronger effects (often with accompanying claims of medical and public health relevance) are simply those that have sustained the worst biases. This means that effects that are too large and too highly significant may actually be more likely to be signs of large bias, and should lead researchers to think carefully about what may have gone wrong with their data, analyses, and results.
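The "null field" point can be put in the same terms as the PPV sketch above (same notation, numbers purely illustrative): with R = 0, chance alone should yield positives at rate alpha, so any excess of claimed positives over alpha is a direct estimate of the prevailing bias u.

```python
# In a "null field" (pre-study odds R = 0), the expected share of positive
# claims is alpha + u * (1 - alpha); the observed excess over alpha therefore
# estimates the prevailing bias u. Purely illustrative numbers.
def implied_bias(positive_share, alpha=0.05):
    return (positive_share - alpha) / (1 - alpha)

# Hypothetical field where 30% of published tests claim an effect despite
# (by assumption) there being nothing real to find:
print(round(implied_bias(0.30), 2))   # ~0.26: the stronger the claims, the larger the implied bias
```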
Correlational studies of the real world usually see little to no effect at all of marginal medical care on health - i.e., the care that some people get and others don't, which is on average useless - while gender, exercise and social status, for example, can change lifespans by 10 to 15 years or more. Even a large, randomized trial conducted in the late 1970s by RAND found no significant difference in general health between people with more (public) health care and those with less(5).
Furthermore, it can also be concluded that common/basic care (roughly two thirds of the spending) and marginal care (the remaining third) had the same fraction of "inappropriate" hospital admissions and care-days, and the same fraction of major and catastrophic disease presentations (relative to moderate and asymptomatic disease manifestations, as reviewed a posteriori).
Thus, the strange fact is that we cannot be so sure about why we now live so much longer.
However, this type of reasoning is largely ignored by society as a whole, as the idea that we mostly don't understand and actually cannot control death, despite all the advancements of medicine and technology in general, is just not a message people want to hear. Thus, nations and families now spend a significant proportion of their incomes on medicine, even though at least a third of that has been clearly demonstrated to be on average useless (marginal care), and even though we have good reasons to doubt the value of most of the other two thirds (common care). Meanwhile, we seem relatively uninterested in living longer by trying the things the evidence suggests actually work at an individual level, such as gaining high social status, exercising more, smoking less, and living in rural areas. We eagerly hand over all our life/death decisions to doctors, in order to think about other things. It's easier just to trust our doctors than to have to think about death, or any serious disease at all.
In summary, humans evolved particular health behaviors because of specific contingent features of our ancestors' environments, features that are largely irrelevant today. Furthermore, evolution has left us largely unaware of how self-serving and in-group-oriented (e.g., favoring our families) the functions performed by those behaviors were.
Psychiatry is being led by the siren call of semiotics, and it's saying, follow me, I'm made of words
A phrase by a psychiatrist whose online identity is called "Alone".
For quite obvious reasons, psychiatry provides one of the best examples of the viciousness of current medical research and clinical practice, and of how a completely unresponsive demand side has contributed to this state of affairs.
Like everything else in life, the problem is on the demand side. It clearly follows from the above description that the demand side of the market for medicines, or, more generally, for medical treatment, is actually the doctors, rather than the ultimate users of the medical products and services - i.e., you, the patient. It was also mentioned that doctors are highly biased and don't actually merit our blind trust in them, at least in general terms. Not to mention the preposterously biased and uninformative medical journals.
As a matter of fact, plugging the supply side into the equation, it appears that the actual efficacy of some would-be drug or treatment can work heavily against it: The doctors (the real demand side) who would use it are, frequently, the least likely to want to do so. Not necessarily because they are bad or corrupt, but because they usually hold a rigid mini-paradigm and simply cannot see anything beyond it.
Medicine is politics, not science: Doctors are generally resistant to published data, journal studies and even to logic, so that to break those mini-paradigms drug companies often have to resort to key opinion leaders in the specific field, usually a small group of distinguished academicians, to change perceptions and practices.
So medicine is practiced according to what works for the doctors, not the patients, and all the incentive drug companies have is to "discover" drugs that will sell to the doctors (and that can be approved by the regulatory body). And that's all the incentive medical researchers have ever had, within drug companies or even in government-sponsored studies: to gain regulatory approval for their would-be drugs, not really future sales, not to mention usefulness for the patient.
Doctors currently have no incentive to consider the cost(-effectiveness) of the meds they prescribe, so drug companies have no incentive to create cost-effective products and services. Within this framework, the best business model the entire supply side can adopt is the "blockbuster drug model".
A common criticism here is that it entices other companies to invest in "me too" drugs, so we always have quite a large number of products of the same class, addressing the same problem/condition, usually the "condition of the moment", without clear differences between them. This means there is no real incentive to invest in innovation.
As a feedback effect, the model makes doctors think that its mechanism of action is the only or the most important one, creating a paradigm that is hard to think outside of. That is, the model confuses science.
There are staggering social consequences attached to this state of affairs. In psychiatry, at least from 1980 to 1998, doctors were all obsessed with Selective Serotonin Reuptake Inhibitors (SSRIs), to the point that they tried to explain nearly all psychiatric phenomena by means of serotonin mechanisms - i.e., they prescribed SSRIs for everything. 1998 marks the beginning of the above-mentioned AD debate, at least in the US. Although increasing over time, the vociferous group of critics was, until recently, just a minority. The cited "Alone" used to refer to April 4th, 2007, as the day when ADs finally died. On that day, the prestigious New England Journal of Medicine published a paper by a group of very distinguished academicians concluding that ADs provided no additional benefit over mood stabilizers. In practice, they were declaring the actual abandonment of ADs and of their support for the diagnosis of "major depressive disorder" (MDD) as a whole. They were also signaling that the future was indeed in the bipolar spectrum and in the wide use of atypical antipsychotics. After that, the arguments of that vociferous minority subtly became mainstream: SSRIs aren't that effective at all, the old "10% better than placebo" is just a statistical trick with little clinical utility, MDD is overdiagnosed, etc. Three interesting facts are worth mentioning here:
- these are the same authors who pushed psychiatry into polypharmacy with ADs (for "everything") long ago;
- the data used to corroborate their conclusions are 10 years old;
- this was not a big drug-company-sponsored study, but an NIMH one.
In fact, distinguished and manipulative academicians are no longer getting drug company money; they are now getting the easier and more flexible government money, and so they are dangerously pushing the government line.
In summary, all this means some squeezing of MDD out into "life" and the bipolar spectrum, and atypical antipsychotics are now first-line agents. The clear signal to psychiatry is to use them to replace everything.
According to said "Alone", there are not many changes like this in psychiatry, maybe one every 10 years; the last one was the beginning of the Depakote era, and before that came the SSRIs, each with its own egregious semantics (e.g., "the kindling model", "the serotonin model of depression"). To be more precise, the recent semantic trick was the following: "mood stabilizers" now include atypical antipsychotics, and we've gone from "polypharmacy is not better" to "monotherapy with mood stabilizers - i.e., atypical antipsychotics in this case - is just as good as two drugs at once." This little tinkering with language and loyalties brings psychiatry from the (serotonin-based) depression mode to the manic(-depressive) mode. Suffice it to say that these developments reach the rest of the world with only some degree of delay.
Furthermore, note that psychiatry doesn't explain, it only identifies. What it commonly does is call the symptom itself a disorder, which makes it definitional, safely axiomatic, and thus irrefutable. You're not depressed because you lost your house; you always have depression, and one of the triggers was losing your house. The real problem here is that people are demanding an easy relief, instead of confronting the root causes. And religion is not alone here. When they go to a psychiatrist, their socioeconomic problems get demoted to "factors" and their feelings become pathologized. Society needs that illusion/lie, because it has created unrealistic expectations in people, and no real ways of fulfilling them. Furthermore, people "cannot look directly at either the sun or death" (La Rochefoucauld).
Footnotes:
(1) Note that, as the "nothingness" of unconsciousness cannot be an experienced actuality, the common assumption that we experience periods of unconsciousness, e.g., in dreamless sleep, describes something that is actually impossible.
(2) I.e., if we admit that RCTs of ADs are producing meaningful results, in the scientifically and regulatorily important sense that they are telling us which compounds work (i.e., have efficacy) safely.
(3) The rapid early succession of contradictory conclusions is usually called the "Proteus Phenomenon."
(4) At least based on our current understanding.
(5) Since this was a randomized but not blinded clinical trial, this no-effect result also includes any health benefits people may get from feeling that they are being cared for more.