So, idly pondering again Sam Harris’s offer to pay $20,000 to anyone who
can change his mind about the thesis of his almost-universally panned The
Moral Landscape, I revisited the book the other night only to put it aside
in frustration.
Its problems start, it seems to me, with the polarised contrast he sets
up between religious people who believe that moral truth exists and
non-religious ones who believe that 'good' and 'evil' are merely subjective and
non-binding products of evolution and culture; one wonders where Kant would
fall in this scenario, let alone the many proponents of virtue ethics from
Aristotle on, perhaps most notably in recent decades Elizabeth Anscombe and
Alasdair MacIntyre. Indeed, it’s striking that neither Anscombe’s ‘Modern Moral Philosophy’ nor MacIntyre’s After Virtue even appears in Harris’s bibliography, let alone is engaged with
meaningfully in the text.
Anscombe and MacIntyre might be wrong, of course – I don’t think they
are, but that’s neither here nor there – but wrong or not, their views are
serious ones and have been enormously influential; Harris’s failure to engage
with virtue ethics is a glaring failure of the book, and I don’t think it’s
good enough to say that the language of philosophy 'directly increases the
amount of boredom in the universe'.
Given his background, it’s baffling how weak on philosophy Harris seems
to be, and his hand-waving dismissal of Hume’s is/ought distinction – a
distinction pointed to by Kant and Kierkegaard too, the former merely nodded to
and the latter wholly absent from the book – suggests that, far from Harris
having refuted Hume, he simply hasn’t understood him.
Harris’s thesis, ultimately, is a utilitarian one, and I think it’s fair
to say that his claim that science can help us discern moral questions is a
reasonable one, assuming utilitarianism is true and leaving aside the question
of how practical it would be to do this in real-life situations.
However, it is far from a given that utilitarianism offers the best
approach to morality, and indeed Harris’s thesis contains all the
long-identified problems of utilitarianism; this is one reason why it doesn’t
work for Harris to say that he didn’t arrive at his views from reading
philosophy, but from 'considering the logical implications of our making
continued progress in the sciences of mind'. That’s all very well, but
regardless of what path he took there, the position he reached is one of
utilitarianism, and that’s a position long-established as vulnerable to some
very serious criticisms.
Bentham and others took a similar approach to Harris’s long ago,
starting from the premise that the only real motives for human action are
attraction to pleasure and aversion to pain, and from this argued that moral
choices should always ultimately entail taking the action that will produce the
most pleasure and the least pain for the largest number of people; Mill added a
bit of nuance later by effectively recognising that 'happiness' is not a simple
thing, and that there are different sorts of pleasure.
In practice, Harris does exactly the same thing as Bentham, save that
he’s replaced the concept of 'pleasure' with that of 'well-being'. Given,
however, that despite deploying the term well over a hundred times, Harris
refuses to define 'well-being', it’s hard to see how he’s improved one jot upon
Bentham and Mill.
'Well-being' is, it would seem for Harris, something you know when you
see it; constantly open to redefinition, it’s akin to a sense of fulfilment and
happiness, while not, as far as I can gather, being identical with either. That
which contributes to well-being is, Harris says, the only intelligible basis
for morality and values; it’s clear, he claims, that 'most of what matters to
the average person – like fairness, justice, compassion, and a general
awareness of terrestrial reality – will be integral to our creating a thriving
global civilization and, therefore, to the general well-being of humanity.'
An obvious problem with this is how one quantifies something that is
constantly open to redefinition; if morality is, ultimately, a scientific
question, then it’s hard to see how anyone can do the maths when one of the
variables in any given moral equation resists a fixed value. How can you
measure that which you cannot define? It won't do to say that 'health' is a
similarly flexible concept, but that this doesn't stop us from pursuing
'medicine' in a scientific fashion; at its bluntest level, we know that a dead
person is not a healthy one, giving us a clear demonstration of what health
certainly is not; is there any comparable state for Harris's 'well-being'?
Another serious problem is that when Harris speaks of 'the average
person', it’s not clear whom he has in mind. He says that 'a general awareness
of terrestrial reality' matters to the average person, but also says that 'a
majority of Americans believe that the Bible provides an accurate account of
the ancient world'. Does he mean the ancient world in general, or is he in
particular speaking of the creation account in Genesis? Certainly,
there are large numbers of Americans – not far off half, I gather – who believe
in creationism rather than evolution, such that for the average American, an
awareness of terrestrial reality would entail a denial of certain scientific
discoveries and their implications. Might it, therefore, be morally better, to
Harris’s mind, to encourage such denial in order to foster national, or even
global, harmony?
(There’s a question: does truth matter? Is honesty, for Harris, a
virtue, even if it may sometimes be a dangerous one? Or is truth only a morally
good thing when it contributes to well-being, however that may be defined or
quantified?)
‘A thriving global civilization’ is something Harris claims to aspire
to, and as such one might think he’d seek to engage with cultural differences
in the whole field of morality; I don’t mean specific cultural practices which
we might find admirable or reprehensible, but rather how cultural differences
affect how we approach moral questions. As Jonathan Haidt demonstrates in The Righteous Mind, compared to the vast majority of people around the world, comfortable
westerners tend to have a limited moral palate; while Harris might assert that
Haidt’s 'ethic of sanctity', for instance, isn’t really a moral issue as it
doesn't affect 'conscious minds', this merely shows that he’s defining morality
in an idiosyncratic fashion, at odds with most people in the world, just as
Bentham did, although Bentham, at least, was open about what he was doing.
Of course, that’s a big part of the difficulty in engaging with this
sort of argument nowadays. The whole language of morality is, at least in the
West, inherited from a Classical tradition that we largely abandoned between
the mid-fifteenth and the mid-seventeenth centuries, such that we’ve inherited
words without inheriting meanings; MacIntyre demonstrates that our moral
language has come so far adrift of our shared historical moorings that we can
no longer even disagree with each other in a meaningful sense!
Still, moving from theory to practice for a moment, Harris observes in a
footnote that many people assume that a moral emphasis on human 'well-being'
would lead us towards the reintroduction of slavery, the harvesting of the
organs of the poor, periodic nuclear bombing of the developing world, and other
such monstrosities; such expectations, he says, are the result of not thinking
about these things seriously, as there are clear reasons not to do such things,
relating to the immensity of suffering that they’d entail and the possibilities
of future happiness that they would foreclose. 'Does anyone really believe,' he
asks, 'that the highest possible state of human flourishing is compatible with
slavery, organ theft, and genocide?'
I’m fairly confident plenty of people have believed precisely that, and
suspect that there is no shortage of people who’d think it now – in recent
years there may well have been Hutus and Serbs, for instance, who thought the
world would be a better place bereft of Tutsis or Bosnian Muslims – which
returns us to one of the basic problems here.
Harris’s thesis is, when you get down to it, an argument that science can tell
us how to be nice, for some value of 'nice', and while I don’t think many
people would contest that science certainly can help us to
make moral decisions, it doesn’t really say why there's an obligation on us to be nice in the first place. There's no sense in
which it's normative, in which it imposes a duty towards niceness upon us; the
best Harris manages is to say that it stands to reason that it's good for the
species if we do things that are good for the well-being of the species, whatever
that may be.
(The fact that there have been and will be plenty of people who’d
readily have put the good of a subset of the species above the good of the
species as a whole is something that really doesn’t fit into his paradigm;
neither does he engage with those thinkers – Machiavelli and Nietzsche spring
very obviously to mind – who have argued that we’re most certainly not under
any obligation to niceness.)
In any case, when Harris talks both of the immensity of present-day
suffering and the foreclosure of future happiness, it seems to me that his
concept of 'well-being' is so broad as to be, in practical terms, useless; we
need to evaluate the suffering of people directly affected by an action, and
the happiness denied to potential people whose very existence would be
prevented by said action, and weigh this up against the increased happiness of
those whose lives might have been improved by, say, having slaves, or
replacement organs, or reduced competition for resources. But then, of course,
we’d need to factor in how people might be plagued by guilt because of the
atrocities they’d committed, or how their children might feel…
Good luck with that, especially given that 'well-being' still awaits a
definition.
I was never a fan of Star Trek: Voyager, but even so, I’ve
seen quite a few episodes over the years; one episode features a Cardassian
scientist who saved thousands of lives by discovering a cure for a virus;
unfortunately he’d achieved this medical breakthrough by experimenting on
hundreds of prisoners in concentration camps. Obviously, we have real-world
analogues for this in recent history, but the episode in question thrashed out
the issues in a useful way that’s always stuck in my mind. So here’s the
question: how does this kind of scenario play in Harris’s analysis?
Imagine, if you like, that to find the cure to something will entail
experimenting on 100 people, each one dying in a horrible and humiliating
fashion, their wills, minds, and bodies broken; I think we can probably say
that their well-being would have deteriorated 100 pc as a result of this. The
experiments therefore would have cost humanity at large 10,000 'well-being
points', for want of a better term. But what if, as a result of these
experiments, medical progress meant that the lives of 10,001 people were
improved by 1pc each, such that the net effect of the experiment would be that
the totally of human well-being would have increased of 1 'well-being point';
could we, therefore, say that the experiments had been morally right?
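Harris offers no formula for any of this, but the back-of-the-envelope tally above can be written out as simple arithmetic; the 'well-being points', the 100 per cent loss, and the 1 per cent gains are my own invented units for the thought experiment, not anything Harris defines:

```python
# A crude utilitarian tally of the hypothetical experiment above.
# 'Well-being points' are an invented unit for illustration only.

subjects = 100          # people experimented on
loss_each = 100         # each loses 100 per cent of their well-being
beneficiaries = 10_001  # people whose lives the resulting cure improves
gain_each = 1           # each gains 1 per cent

total_loss = subjects * loss_each        # 10,000 points lost
total_gain = beneficiaries * gain_each   # 10,001 points gained
net = total_gain - total_loss

print(net)  # prints 1: a net gain of one 'well-being point'
```

On a strictly aggregative reading, the sum comes out positive, which is precisely what makes the example uncomfortable.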
Obviously, we can tweak the numbers in various ways – the ever-amusing Bluff Your Way in Philosophy envisages a
situation where three people are suffering from the terminal collapse of a
vital organ, asking whether a fourth healthy person ought to agree to give up
his life for donation purposes to ensure ‘a net gain of two lives’ – but I
think the core questions stand: does the end justify the means, and is it ever
acceptable to treat human beings as things?
Lending real interest to that question is how Harris flatly denies that
human lives are intrinsically equal in value. Now, 'value' here is a term that
needs unpacking, with Harris seeming to think a person’s value measurable based
on how much suffering and happiness would be generated or prevented by their
death, but given that he juxtaposes his observations on human beings differing
in value with the statement that it is 'worse to run experiments on monkeys
than on mice,' it's worth asking whether it would – to Harris's mind – be worse
to run experiments on intelligent, sensitive, educated, gregarious people than
on foolish, insensitive, ignorant, shy ones. Or, putting it another way, would
it be better to experiment on less valuable human beings?
I don’t want to misrepresent Harris; he does, after all, say that it’s
probably good that laws ignore the fact – as he sees it – that all people are
not equally valuable, but he qualifies this by saying he might be wrong on
this. In any case, he says, he’s confident that the question of whether or not
laws should treat people as though they’re equal is one that has a scientific
answer. As well he might, given that he thinks moral questions always – in
principle – have scientific answers.
How he squares this with his observations on 'utility monsters', I don't
know. Saying that it would be entirely 'ethical for our species to be
sacrificed for the unimaginably vast happiness of some superbeings', he
imagines the distinction between us and these superbeings as analogous to that
between bacteria and us, but really, these are differences of degree, not of
kind. This raises the question of whether it would be ethical, by Harris's
scheme, for the 'less valuable' members of our species to be sacrificed – in,
say, the kind of medical experiments considered earlier – for the increased
well-being of the 'more valuable' members of the species.
I'm not sure whether his answer to that would be 'clearly, yes', or
whether it would be to say that he doesn't know, but he's sure that the answer,
as ever, can be found through science.
-- From the files, October 2013.