Cosmic Ethics: a philosophical primer

July 26, 2017 | Author: Kelly Smith | Category: Ethics, Applied Ethics, Astrobiology, Environmental Ethics



A revised version of this paper appeared in print as:

Smith, Kelly C. (2007) "Cosmic Ethics" in C. Bertka, N. Roth and M. Shindell (Eds.), Workshop Report: Philosophical, Ethical, and Theological Implications of Astrobiology, Washington, AAAS.








Cosmic Ethics:

a philosophical primer





Kelly C. Smith
Department of Philosophy & Religion and the Rutland Center for Ethics
Clemson University
Hardin Hall
Clemson, SC 29634
[email protected]











I. Introductory Musings
A philosopher always needs to know his audience. In this case, I am
going to assume that most readers of this piece will be scientists and
engineers working in the space sciences – that is, non-philosophers who
have relatively little familiarity with the way we philosophers deal with
ethics and relatively little patience with prolonged and elaborate
conceptual analysis that does not clearly impact practical decision making.
As a result, I am taking a very different approach than I would take if I
were writing for a group of fellow philosophers.
It's been my experience that non-philosophers often have some very basic
misconceptions about the nature of ethics which get in the way of fruitful
discussion of ethical issues. Certainly we all know about ethics in the
sense that we all know it has to do with rules about our behavior,
especially our behavior towards other people. We all have ethical rules of
one sort or another as well, so what more do we need? In this sense,
ethics is rather similar to evolution – people think they know what it is
and they do in the sense that they have the basic idea. However, there is
much more to evolution than can be captured in a paragraph explaining
natural selection, just as there is much more to ethics than can be
summarized in a few simple statements about rules of behavior. Indeed,
when people attempt to give simplified versions of what ethics is, they
often undercut the whole ethical enterprise without even realizing it.
Therefore, I will begin this piece with some common points of confusion
concerning the nature of ethics and try to show why they are mistaken and
why we should care about them. After that, I will discuss how moral
value is assigned, again because there are confusions here about the
relative merits of the two basic approaches. After this, I will move on to
sketch my own, ratio-centric approach to moral value, which I will then
apply to the specific case of whether we should consider terraforming Mars.

II. Some Basic Ethical Concepts
A. Ethics, Science and Reason
Ok, so what is ethics then? Ethics is a field of study which attempts
to systematically justify principles governing how we should treat others.
Ethicists attempt to formulate general ethical principles which cover as
wide a range of situations as possible. They spend a lot of time making
these principles very clear (e.g., distinguishing them from similar
principles, clarifying when they do and do not apply, etc.). They justify
the principles using general considerations of epistemology, reason,
psychology, etc. Finally, they elucidate the practical implications of the
general theories (though some philosophers worry about this last step more
than others).
What I mean to suggest here is that ethical reasoning is really not so
different from scientific reasoning (which is not surprising, since science
has its origins in philosophy). Scientists and philosophers both have a
tendency to emphasize the differences between their respective disciplines,
but there is also important common ground. For example, the emphasis in
both disciplines is on explaining what happens (or what should happen) by
recourse to very general principles or laws. It is not enough to talk
about what happens in a particular case, whether one is doing physics or
ethics. Rather, one seeks always to show how what happens in one case is
the result of more general considerations of a theory. Both scientists and
philosophers have to clarify the theory as best they can, though in both
cases this can be complicated by the fact that the theory itself is not
directly testable, often contains vague elements in need of clarification,
etc. Both disciplines place great emphasis on logical argument and
terminological clarity to minimize irrelevancies and confusions. The
bottom line is that both scientists and ethicists are in the business of
generating systematic justification of their claims.
A scientist may have a personal opinion about something, but he should
be careful to differentiate this from what science tells us. Similarly, we
all have personal opinions about what constitutes ethical behavior and what
does not. However, we are not all as careful here to distinguish personal
opinion from what can be justified in this more general and systematic
sense. This is the first important confusion about ethics – it is not
merely a collection of personal opinions any more than science is a
collection of personal opinions. Ethical answers are the products of a
general normative theory of human action. I freely grant, of course, that
the question of which theory is correct or best is a very complicated
debate. However, that is a separate question from whether ethical
questions have defensible answers, just as the debate about which possible
explanation of quantum gravity is the best is distinct from an explanation
of why physics is a science.
Part of the reason people tend to think of ethics as "just opinion" is a
distressing tendency to approach complex issues in a dichotomous, black or
white fashion. Children are taught in elementary school to differentiate
between "statements of fact" and "statements of opinion" as if this were an
easy thing to do. We get used to thinking this way and thus easily
classify science as dealing with facts and ethics as dealing with mere
opinions[i]. But if "fact" is taken to mean something which we are so
certain of it could not be false, then science does not have any. If
"opinion" is taken to mean something for which we have no evidence at all
beyond mere personal intuition, then ethics is not about that. A more
realistic way to think about both science and ethics is that evidence comes
in many different forms with many different degrees of weight. As rational
creatures, we try to apportion our beliefs in accord with the weight of the
evidence. Many, if not most, of our beliefs are more "theoretical" than we
generally acknowledge, but this is really not a problem.

B. Got Facts?
When listening to an interminable ethical dispute, we have all been
tempted to roll our eyes in disgust. Sometimes when we feel this way,
however, we try to take the easy way out by avoiding the ethical argument
and simply following the rules. While this is psychologically
understandable, the reasoning behind it is fundamentally mistaken, and in a
dangerous way.
One important difference between science and ethics is that science
attempts to generate true descriptive statements about the universe. Its
job is primarily to correctly describe how things actually work. Ethics,
on the other hand, is not about describing the world, but about prescribing
how it should be. The normative statements of ethics are thus
fundamentally different from the descriptive statements of science.
Now this is a point most of us either grasp already or can quickly
learn. However, it's also a point that people routinely lose sight of in
the midst of ethical argument. This is perhaps especially true of
scientists, who are so accustomed to thinking about descriptive statements
and their empirical support that they instinctively seek shelter in them
when things get vague. For example, trying to determine the ethically
correct procedures for protecting Martian life is very difficult. It is
neither empirically tractable nor subject to easy consensus. The rules
NASA has written to govern such procedures are another matter, however.
Anyone who knows where to look can read these and figure out how to apply
them. The tendency to run from difficult normative questions to tractable
descriptive ones is thus understandable, if misguided.
Consider this though: What precisely are NASA's rules based on? More
to the point, what should they be based on? Would we think they provided
an appropriate answer to questions about how to manage Martian exploration
missions if we knew they represented merely the personal opinion of a
particular NASA administrator? I think not. Rather, they have the force
they do because we believe (or at least are willing to assume when we are
frustrated and overworked) that they are based on legitimate ethical
principles. Therefore, rules of behavior, whether we are talking about
NASA regulations or federal law, should be based on ethical principles, not
vice versa. Taking the rule itself as the final word rather than engaging
in the discussion of ethics, painful as the latter may sometimes be, is
ultimately nothing more than rule worship.
A slightly different way in which people blur the line between ethical
and descriptive analysis is to attempt to derive ethical principles
from descriptive ones (a move philosophers refer to as the naturalistic
fallacy). Suppose we are discussing whether we should terraform Mars for
the benefit, not of terrestrial lifeforms like us, but of native Martian
organisms. A scientist or engineer might well point out that this proposal
isn't going to get very far politically because people are selfish and they
aren't going to vote to spend money on things which do not directly benefit
them. This is usually put in such a way that it implies the discussion is
over and we should act in accord with how people do act. Whatever you may
think of the proposal's merits, this argument is fundamentally wrongheaded.
Let's suppose without argument that the preceding is a true descriptive
claim about human psychology. We still need to know what bearing this has
on the question, "What should we do on Mars?", since this is not a
descriptive claim but a normative one. With a little reflection, the
answer is clearly that it should have very little bearing at all - unless
we also assume, not only that human psychology works this way, but that it
is impossible to change it. But that seems clearly false. And consider
the wider implications: if we truly adopt such a counsel of despair
regarding the possibility of ethical progress, no ethical progress would
ever be made. For example, it was certainly true in the America of 1750 that the
vast majority of citizens were in favor of slavery. To conclude, however,
that this made slavery moral or that debate concerning the morality of
slavery was a waste of time would be deeply confused. Therefore, as long
as there is a possibility for change, statements to the effect that we
should do things we do not in fact do have real meaning.

C. It's all Relative?
Another response one is sometimes tempted to offer when listening to
philosophers wrangle about ethics is something like this: "Who's to say what
is right and what is wrong? It's all just a matter of social convention."
This sort of position is what philosophers call ethical relativism. One of
the most difficult challenges facing someone who wishes to have a serious
ethical discussion is the widespread and entirely uncritical acceptance of
ethical relativism, especially here in the US. Basically, an ethical
relativist claims that there is nothing even close to truth in ethics, but
rather that what is ethical is entirely relative. If ultimately the only
thing "This is ethically good" means is that I personally like it or that the
society I happen to live in approves of it, then it's not sensible to
think one ethical position is objectively any better or worse than another.
One of the reasons this view is so popular is that it sounds so
intuitively correct – until and unless you examine its consequences. The
bottom line is that relativism is a far, far more radical position than
most of its casual adherents realize.
What do I mean? Well, the basic relativist claim is that ethical
judgments are ultimately just like subjective judgments of taste. Thus,
"It is immoral to kill innocents" is in the same essential basket as "I
don't like North Carolina style barbeque". Let's suppose briefly that this
is true and consider where it leads. For one thing, if
ethical judgments are really just subjective matters of taste, ethical
arguments are, quite literally, meaningless. There is no way I can
verbally convince another that killing innocents is wrong any more than I
can argue them into believing that NC style BBQ doesn't taste good. It's
just not a rational process. Isn't it deeply strange then that people who
say they are relativists waste enormous amounts of time debating ethical
points – why bother, after all? It's even stranger for a relativist to
applaud someone who sacrifices greatly for an ethical principle - say,
gives up their life for their country. Isn't this person really just
confused rather than heroic? Would we applaud someone willing to die for
the gustatory honor of NC style BBQ or view them as deeply misguided?
Finally, because there is no rational process behind ethical opinions, a
relativist must in effect endorse a "might makes right" view of ethical
disputes. After all, the only way I can "win" an ethical dispute is to get
the other person to agree with me. There are no common rational principles
guiding personal opinions, so there is no room for reasoned argument.
Therefore, if I truly need to get agreement, I will have to force it. To a
relativist then, might really does make right.
Now, you can be a relativist and bite these bullets. If, however, you
find it too radical to believe that ethical discussions are silly, that
people who stand up for ethical principles are chumps and that might really
does make right, then you need to reconsider. Again, what precisely we
should put in place of relativism is a complex question, but we can't even
address that until people see the need for an answer. The solution, it
seems to me, is to keep firmly in mind that science and ethics are not
really so different after all. Both must believe that there is something
like "truth" out there and that we can, however haltingly, make progress
towards it by applying basic tools of reason.
I want to make one final point along these lines: believing in
"something like truth" is not necessarily the same as believing that there
is a uniquely correct answer to any particular question. Indeed, confusion
on this point is part of the reason for the popularity of ethical
relativism. After all, if I have to choose between believing that there is
a single correct answer even when people disagree widely and believing that
all answers are equally arbitrary, then the relativist position might look
attractive. Fortunately, however, this is a false alternative. All that
we really have to believe in order to have a concept of moral progress is
the idea that not all answers to ethical questions are equally good. There
need not be a single best answer any more than there need be a single best
design for a bridge. The fact that there is no objective way to defend one
bridge design as "simply the best" does not mean all bridge designs are
equal or that there are no objective standards which can be applied.
Whether we are talking about bridges or ethical positions, we can reject
many popular and intuitively appealing alternatives for very good reasons,
thus making progress of an important sort. We might wish the situation
were clearer still, but if physics can live with the particle/wave duality
of light, surely ethics can live with its own type of pluralism.

III. Building an Ethical Framework
A. On Valuing
Having discussed the possibility and basic nature of an ethical
system, we now turn to a more specific examination of what kinds of ethical
values can be assigned. One of the most important axes of debate in
modern ethics has to do with the "moral community question". That is, to
whom precisely do we have moral obligations? In the good old days, this
question did not arise much since it was so obvious that only healthy,
wealthy, European males had a serious claim to moral standing. The demise
of this certitude may have made the world a better place in many ways, but
it greatly complicates the lives of us ethicists. No longer is it so
obvious where to draw the line between things which have moral value and
things which do not: Do we have obligations to future generations not yet
born? Do non-human animals have ethical standing? Do rivers and
mountains?
Of course, any attempt to draw clear lines is going to have a difficult
time defending its precise location against all possible alternatives.
There are two points here it's crucial to keep in mind, however. First,
the fact that the precise location of a line is not clear does not indicate
that there is no meaningful distinction to be made. This is an example of
something philosophers call a Sorites paradox. For example, it is
difficult to say exactly when a collection of rocks becomes a "pile", but
this does not mean the term "pile" is meaningless. Rather, it simply means
that we will have to accept some ineliminable degree of fuzziness about the
boundary between piles and other sorts of collections. Fuzzy boundaries
are actually quite common, as when, to take an astrobiological example, one
tries to distinguish between living and non-living systems. Second, this
is a problem faced by any account which imposes distinctions. People often
avoid this problem, consciously or unconsciously, by failing to make clear
distinctions at all since, if I fail to take a clear position, it is much
harder to criticize me. I submit that it is far better to have a
distinction which flows from general principles we can agree to and makes
sense, even if it doesn't perfectly deflect all objections, than to take a
position which sidesteps the problem by not saying anything specific at
all. We must always keep in mind that ethical theories (and scientific
ones, for that matter) represent choices between imperfect alternatives.
The question we must always focus on therefore is not whether a theory can
avoid all criticism, ambiguity, etc., but rather whether it fares better
than any available alternative. This is again a similarity between science
and ethics since no scientific theory is without its warts – it's just
that each is the best we have currently managed to come up with.
Before we can have a fruitful discussion of moral value, however, it is
important to be clear on a critical distinction. Things can be
valued in at least two crucially different ways. On the one hand, I can
value something because it serves my needs. If I want to slice a fifty
pound sack of carrots, a food processor might be seen as quite valuable.
However, the value of a food processor is entirely dependent on its utility
as a means to my own ends. If I were to find myself stranded on a desert
island with only a food processor for company, most of us would allow that
its value is now very close to zero because I have no real use for it (and,
even if I found items to slice, there are no electrical outlets to power
the thing). Philosophers call this sort of valuation "instrumental", since
it is a function of the way in which I can use something as an instrument
to meet my own goals.
On the other hand, most of us believe that we should value other human
beings, not because of the uses to which they can be put, but because they
are the sorts of things which have value in and of themselves. This sort
of value is called intrinsic because it has to do with the intrinsic nature
of a thing, not its relation to other things or its utility. Thus, while
it's true that I may value a plumber in part because I need my toilet
fixed, this is not the most important reason to value her. If the plumber
were stranded on a desert island all by herself and we somehow knew for a
fact that she would never be of any further use to anyone else, we do not
think this reduces her ethical value at all. Thus we think it would be
immoral to kill her even if doing so would not further inconvenience any
other creature.
Traditionally, philosophers think of intrinsic value as a kind of line
in the sand. Those things with intrinsic value are the fundamental units
of ethical analysis. Instrumental value is viewed as either irrelevant or
at least of decidedly secondary importance. One reason this is enormously
important is that the values of items which have a merely instrumental
value are subject to the forces of the market and thus fluctuate greatly
depending on external circumstances. If we wish to prevent, say, haggling
over the ethical value of a human being, we need to forestall any
instrumental calculation by claiming it misses a larger point. As Immanuel
Kant puts it, "People have a dignity, not a price."
In more modern times, it has become fashionable to argue that the
positioning of the line distinguishing things with intrinsic value from
those with merely instrumental value should be moved again. We have
already moved it so as to include female humans, humans without much money,
sick humans and humans other than pink ones as intrinsically valuable.
With the benefit of hindsight, we can say this move seems perfectly
justified since the differences between these types of humans are not ones
which are important to their intrinsic ethical status. Why should skin
color make any difference at all in whether it's moral to kill another
human, for example? Wherever we might say the true essence of the ethical
value of a human lies, it seems extremely implausible to locate it in such
a superficial character as color.
As often happens in philosophy, however, blurring a formerly clear
distinction opens up an annoyingly large can of worms. Indeed, the entire
field of environmental ethics can be characterized without too much
oversimplification as an attempt to fix the boundaries of our moral
community without begging the question of the status of humans in the grand
scheme of things. Ethics must do this systematically, however, and thus
the sixty-four million dollar question becomes "Precisely what is it that
gives something its moral status?"

B. The Intrinsic Value Shuffle
Nobody is completely objective, not even philosophers. It's my view
that the environmental ethics movement has historically suffered from a
kind of selection bias which affects its character in an odd way. The fact
of the matter is that most people who go into environmental ethics have a
deep interest in, and even reverence for, natural things. As a result,
there's an understandable push to establish the significance of non-human
"things" in the environment: animals, plants, ecosystems, mountains,
rivers, etc.
Now this in itself is not necessarily a problem. One could argue that a
bias in the current state of environmental ethics is a much needed
corrective to the prior bias against any claim that non-humans entities
could have moral value of any kind. As an exercise in the sociology of
academia, this certainly makes some sense. However, it can cause real
problems when people outside of philosophy read, say, introductory texts in
environmental ethics. They are not aware of the deeper movements within
ethical theory and thus take the views presented in these highly
specialized texts as being representative of the discipline. I first
became aware of this problem when I started presenting my own ratio-centric
account to astrobiological audiences. When I told people that I thought
only creatures with reason have ethical standing (more on this later), the
response I often got was something like, "Yeah, but surely that's not your
ethical position?" The clear implication is that this sort of view has
long since been discredited and is currently considered something of a
fringe position. In fact, nothing could be further from the truth.
Whatever else might be said about the merits of a position like ratio-
centrism, it is most certainly not an outlier within the community of
professional ethicists.
Indeed, some version of ratio-centrism is probably the position adopted
by the vast majority of professional philosophers today. Of course, I have
no hard data on this, but it is instructive to note the kind of reaction I
get from my ethicist colleagues when I describe some positions taken in
astrobiology. For example, Carl Sagan's famous idea that we should leave
Mars to the Martians, even if it is inhabited merely by microbes, is
routinely met with expressions of incredulity. The typical reaction is
first to question whether that is truly what he meant, and when it's clear
that it is, to dismiss it with a rueful shake of the head as simply absurd.
Note, however, that even more radical positions routinely pass without the
slightest murmur of dissent among astrobiologists discussing issues like
terraforming. My point here is not to argue that the popularity of a
position is evidence of its truth, but simply to illustrate a very wide gap
between the way ethical theory is portrayed by those in the environmental
movement and how it's viewed by a more general sample of the ethics
community.
In any event, the real difficulty arises when the zeal to establish the
value of non-human entities causes people to embrace intrinsic value
arguments in a very uncritical fashion – in particular, without paying
sufficient attention to their ultimate implications. The reason
environmental ethicists tend to be fond of the intrinsic value approach is
because of the widely acknowledged superiority of intrinsic value over
instrumental value in moral arguments. If a river has merely instrumental
value, then I can only argue weakly for its conservation, since there are
many different ways to use the river, all valuable in an instrumental
sense. Moreover, nothing I do to a river is "simply wrong" because it is
not the kind of thing which really has ethical status in its own right.
One who views the river as merely of instrumental value might argue, for
example, that people derive pleasure from boating on the river and
companies derive value from dumping pollutants in the river, etc. In order
to decide what to do with the river, I need to objectively weigh all these
different uses to which it might be put. If, on the other hand, the river
has intrinsic moral value, then I can use this as a trump card against utility
arguments of this sort. Something with intrinsic value simply should not
be treated in certain ways, irrespective of the possible benefits to others
of doing so. If the river is intrinsically valuable, then I might well
have an ethical duty to protect the river which is not weakened by other
considerations like the efficiencies of pollution.
As I said earlier, what sets ethics apart from mere personal intuition
is the attempt to justify ethical positions in the larger context of a
general ethical theory. For our purposes, this has two important
implications. First, we can't simply assert the intrinsic value of
non-human entities; rather, we must put forward a principle of division which
draws a line between those things which have intrinsic value and those
which have merely instrumental value. To be sure, there are a great many
ways one might argue the principle of division should work. Throughout
much of Western philosophy it was thought that the possession of reason is
the sine qua non of intrinsic ethical value. More recently, alternative
concepts have been put forward, including the ability to feel pleasure/pain,
self-awareness, being alive, or a certain level of organized
complexity, etc.
Second, we have to seek to show that our principle of division is not
simply an ad hoc move that happens to result in an intuitively appealing
division. Both in science and in ethics, the best way to guard against ad
hoc moves is to require them to flow from a more general theory. If we
want to view a particular principle of division as general rather than ad
hoc, we must look carefully at the implications of applying it in a wide
variety of cases, not just those of immediate interest. It's in this
second aspect of assessment that many of the positions I wish to critique
fall down. If I can show that a principle which seems to result in an
intuitively appealing division between intrinsic and instrumental values in
one sort of situation also results in a division which does great violence
to our most cherished intuitions in another, this is a problem. Perhaps
the problem ultimately can be overcome somehow, though I personally doubt
it. What is so disturbing about the uncritical acceptance of intrinsic
value accounts in environmental contexts (particularly when applied by non-
philosophers) is that they often simply do not recognize there is a problem
at all.
Let me illustrate what I mean. If we cast the net of intrinsic moral
value widely, we always generate uncomfortable cases philosophers sometimes
classify as lifeboat cases. Suppose, for example, that I am stranded in a
lifeboat with my son, Sam, and my dog, Cleo. Suppose further that we are
all on the verge of dying of dehydration and it seems rescue cannot
possibly arrive in time to save us. What should we do? Pretty much any
alternative to reason as a principle of division will classify all three
occupants of the boat as having intrinsic moral status, since it seems
pretty clear that Cleo is alive, can feel pleasure/pain, is a complex
entity suitable for a proper name, etc. We like this in some contexts
because it gives us compelling moral reasons not to exploit Cleo. However,
in this context we have to make a difficult decision. This situation pits
Cleo's moral status against that of myself and my son – someone is going to
lose, the only question is who[ii].
Now, someone who wants to defend Cleo's intrinsic moral status has a
real problem. She has two basic options. On the one hand, she could stick
to her guns and insist that Cleo has intrinsic moral value and that such
value trumps any considerations of Cleo's utility as a source of water for
the humans in the boat. Traditionally, intrinsic value is thought of as
something which does not come in degrees. Something either has intrinsic
value or it doesn't. Thus, Immanuel Kant argues that humans have intrinsic
value because they are rational, but he does not think smarter humans have
more value than dumb ones. This is important because viewing intrinsic
value this way prevents market haggling over who is more valuable, etc.
From the environmental ethics perspective, it typically places non-human
animals in the same basic category as humans and thus makes it very
difficult to argue that humans should be allowed to use other animals for
their own selfish ends. Presumably the moral recommendation from a
defender of Cleo's intrinsic value to the boat's occupants would either be
to do nothing and pray for rescue or to devise some fair and impartial way
of deciding who will give up their water for the sake of the others. I
think it's safe to say the vast majority of people would find it
exceedingly difficult to accept these consequences and in fact the word
"absurd" probably occurs to most people at this point. At least, however,
an intrinsic value theorist is attempting to clearly elucidate the broader
implications of their ethical theory and always has the option of biting
the proffered, highly counter-intuitive, bullet.
Another option that might be taken is to waffle on the practical value
of non-humans with intrinsic moral value. One might point out that there's
no reason in principle why intrinsic value could not come in degrees.
Perhaps we can admit that Cleo has intrinsic moral value, yet still allow
that her value is not as great as mine or my son's. This would open the
door to allowing that sacrificing Cleo for the sake of the humans is
morally permissible. In effect we are saying that, though all creatures
with intrinsic moral value are equal, some are more equal than others.
This sort of move is very intuitively appealing, because it seems to let us
have our cake and eat it too.  The problem here is that such a move robs
the intrinsic value account of its original motivation. The whole point of
distinguishing intrinsic from instrumental value in the first place was to
prevent any kind of market haggling over who is more important than whom in
different circumstances. In the eyes of a traditional defender of
intrinsic moral value like Kant, to even enter into a discussion about
which humans are more valuable morally than others is to have missed the
whole point of moral value in a fundamental way. The solution is not to
more precisely determine the conditions which set the moral price, but to
reject the very idea that intrinsic values depend on circumstance at all.
Dignity is non-negotiable.
To put it another way, it may make us feel that we are being truly
serious about the environment to say that Cleo has intrinsic moral value.
Bravo. However, is that really changing anything if every time Cleo's
supposedly intrinsic value is pitted against significant human interests,
she loses out? We are still drawing an implicit line in the sand which
separates humans from non-humans, we are just not being clear about that up
front. As J.S. Mill observed, what good does it do to recognize human
rights in the abstract if they have no application in practical
circumstances? I submit that if we really want to make this move, then the
argument over whether moral value is intrinsic or instrumental becomes
purely academic, since it ultimately does not affect the practical
decisions moral theory must make. I for one share the practical
scientist's distaste for discussions of abstract meta-ethical points which
have no real bearing on decision-making.
Actually, there is at least one other option open to the defender of
Cleo's intrinsic value that I have not discussed. It's not really a
position so much as a refusal to take a position, but it is nevertheless a
common way to "solve" the problem. I could always maintain that Cleo's
moral status is intrinsic just like that of the human castaways, but then
refuse to take any clear position on how we should adjudicate the conflict
in the lifeboat. Typically this is not put as baldly as I state it, but
rather by the less obvious route of issuing very vague ethical guidance
such as "We must respect the rights of all concerned." This certainly
sounds good, but ultimately we have an obligation on behalf of the theory
to state at least something about what it means to "respect rights",
including some mechanism for how we can decide when we should respect the
rights of some entities more than others. If we don't do that, then we have
not really engaged the problem at all. This is certainly effective in a
rhetorical sense, which is why this kind of move is the darling of
politicians, but it really does nothing to further our understanding of our
moral obligations. For philosophers with a practical bent (yes, we do
exist), this is extremely frustrating. Using ethics to argue for a
position but then failing to provide clear guidance precisely when it is
most needed – when the decisions are difficult and hard choices must be
made – gives ethics a deservedly bad name.


C. The Ratio-Centric Account
1) The Account Outlined
If we are serious about actually getting to the truth of the matter in
ethics, we must insist on some basic requirements for any ethical theory.
For our purposes, the two most important are that: 1) The theory must
unflinchingly explore its implications in a wide variety of cases and 2)
The theory must be specific enough to be of practical utility in difficult
situations. It seems to me that these are absolutely minimal necessary
conditions. If a theory makes no effort to be general or does not deal
with difficult situations, it should not even be seriously considered. If
it does do these things, then it's a further question as to whether it's
the best alternative amongst competing theories. And of course, any
ethical theory, like any scientific theory, is going to be a choice between
imperfect alternatives. No theory is going to be perfect in the sense that
it fully incorporates everyone's intuitions or that it draws all lines
without ambiguity. However, as Churchill famously observed, "Democracy is
the worst form of government - except for all those others that have been
tried."
I would now like to offer for the reader's consideration an account
which avoids the sort of problems I have discussed above while also
allowing for a real environmental ethic. It is true that this account
also, depending on your ethical intuitions, might have some counter-
intuitive consequences, but at least it does not buy respectability at the
price of clarity. I must emphasize too that this is merely a sketch, since
ratio-centrism draws on such an enormous body of prior work that doing it
justice is far, far beyond the scope of this paper.
The position I want to sketch is called ratio-centrism because it views
the possession of reason as the essential feature dividing those things
with intrinsic moral value from those things with merely instrumental
value.  So the first issue I need to address is why reason should be the
sine qua non of ethical value? I would argue that the ultimate point of
ethical systems in general is to provide the means for the smooth
functioning of societies. The basic idea is that ethics are a product of
cultural evolution in complex societies and their function is to stabilize
and enrich such societies[iii]. I will not defend this claim here in
detail, though I will note that part of the reason for finding it
attractive is the paucity of alternatives if on the one hand you reject any
kind of supernatural basis for ethical objectivity while on the other hand
recognizing the dangers of ethical relativism.
In any event, if the purpose of ethics really is to provide for the
smooth functioning of societies, then it seems to follow that its primary
objects are those entities capable of functioning in such a society. In
other words, the primary moral community is composed of those with the
requisite abilities to form a society. Some level of rationality seems a
prerequisite for social functioning[iv] since, at a minimum, participation
in a society implies the ability to conceive of rules of behavior, to
communicate with each other concerning those rules and then to constrain
one's behavior accordingly. Entities which have these abilities can
participate in a moral community and thus the community must recognize
their moral worth as a necessary requirement for its formation. Entities
which cannot participate in the moral community may certainly be valued –
indeed, their value in some circumstances can be incredibly high - but this
value is always of a fundamentally different character than that of
rational creatures.
From an ethical point of view, then, a ratio-centrist world is populated
with three basic classes of things. First, there are entities which
clearly possess reason sufficient for participation in a moral community
and have intrinsic moral value precisely because of this (e.g., humans,
space-faring aliens, God). Second, there are entities which clearly do not
possess reason and thus have merely instrumental value (e.g., mountains,
rocks, rivers). Finally, there are entities which possess at least some of
the aspects of what might be called reason, but seem to fall significantly
short of what would be needed for participation in a moral community (e.g.,
dolphins, octopi, Cleo, other primates). Entities with intrinsic moral
value are more important, ethically speaking, than entities with merely
instrumental value.  Entities without reason really have no moral standing
at all save indirectly through their utility to those with reason. The
third class of entities, those which have some aspects of reason, are, of
course, a problem. Perhaps the simplest thing to say about them is that
they do not seem to have intrinsic moral status but we are less certain of
this judgment than we are in the case of rocks. Certainly if the interests
of such entities are pitted directly against the interests of rational
creatures, as in the lifeboat case above, the rational creatures should
win. However, we should remain open to the possibility that our judgment
about a dolphin's rational capacity could be wrong.
2) Anthropocentrism
I could go on to elucidate this theory at great length, but I think the
best way of restricting myself to the salient points of comparison with
more inclusive ethical theories is to respond to a series of common
criticisms. The most obvious criticism is that this account is
unacceptably anthropocentric, or in other words, it rather too conveniently
justifies the same old, tired, "humans are the best" view of ethics which
has been so destructive in the past in so many ways. The first thing to
say in my defense here is that the account is not anthropocentric, but
ratio-centric. This is more than a semantic game too. An anthropocentric
account would accord humans special status simply because they are human.
This is clearly indefensible, but not because it results in a bias in favor
of humans. Rather, it's because the principle itself is indefensible. Why
should membership in a particular species, in and of itself, have any
bearing on moral standing at all? In general, any time we want to make a
claim that there has been morally unacceptable discrimination, we need to
establish not just that different entities have been treated differently,
but that they were treated differently for indefensible reasons.
Suppose, for example, I hire a woman for a job in preference to a man.
Suppose I do this all the time, so that the only people I have ever hired
for this job in the past 30 years are all women. It is tempting to accuse
me of sexism here, but that would be premature. The question we need to
answer is whether the principle of division I am using is justifiable, and
we don't know this simply from the fact that it results in a skewed pattern
of hiring. If the job for which I am hiring is that of an exotic dancer in
a men's club, for instance, we readily admit that there has been no morally
problematic discrimination. Similarly, the question in evaluating the
ratio-centric account is not simply whether it places humans in a
privileged ethical class, but whether the reasons for doing so are
defensible.
So, privileging humans simply because they are human is one thing,
privileging them because they are rational is another matter entirely.  We
must keep in mind here that humans need not be the sole occupants of the
intrinsic moral value category. First, there could well be extra-
terrestrial creatures with rational capacities equal to or even exceeding
our own. Second, even on Earth, it is an open empirical question whether
any non-human animals possess reason sufficient for participation in a
moral community. I personally think this possibility unlikely, but if we
could establish that dolphins really are capable of the sort of reasoning
necessary for moral interaction, I would welcome them to the club.
I think one reason people react so viscerally towards ratio-centrism is
that they feel it won't be possible to accord non-human entities any
serious weight in our moral deliberations. In one sense, of course, this
is true – non-rational entities have only instrumental value. However,
this is far from saying they are not valuable. Modern science has revealed
much about how the interests of rational humans are intimately tied up with
the interests of non-human entities in the environment. A ratio-centrist
should argue quite forcefully for the preservation of the world's remaining
tropical rainforests, for example, even when this means imposing pain on
humans who would wish to farm the land. The reason for this has nothing to
do with the intrinsic moral value of trees or rainforest ecosystems,
however. Rather, it's simply not very smart to destroy that which provides
you with what you need to stay alive, even when doing so might meet some
short term goals. Killing the goose that lays the golden eggs because you
are feeling peckish is just not smart. Thus, a thoughtful and
scientifically literate ratio-centrist will agree on many points with
someone who wishes to accord non-rational entities intrinsic value, though
for very different reasons. What makes ratio-centrism so powerful is that
it not only provides a principle of division, it also establishes a common
system of measuring moral worth and thus a means for adjudicating ethical
disputes.  Intrinsic value theories, as discussed above, cannot do this
well at all.
3) Abuse
A slightly different criticism is that a ratio-centric account is likely
to encourage people to do horrible things to non-human entities.  To some
extent, I agree.  It's undeniable that the history of ratio-centrism is
nothing to be especially proud of.  We have
tended to take the most naïve, most short-sighted and most narrow possible
view of the value of anything other than humans. People often think that
any time human interests conflict with the interests of non-human
entities, humans should win, regardless of the circumstances. It may even
be that people who are likely to adopt these simplistic analyses tend to
flock to ratio-centrism because it seems to provide them with what they
want. Note, however, that the problem here lies not with the theory of
ratio-centrism, but with its application.  People who think this way are
either sloppy ratio-centrists who place far too much value on short term
rather than long term analysis or they fail to understand what we have
learned about the importance of ecological interactions. In any event, the
way to show that these positions are indefensible is to adopt a good
general ethical theory.  If we use ratio-centrism, we can easily show
people that in many cases what they think is in their best interests is
not in fact.  Far from being the problem, a proper version of
ratio-centrism is the cure.
Moreover, it's important to keep in mind as well that any ethical
position is likely to be abused. Certainly intrinsic value accounts of
environmental ethics have been used to support radical, even terrorist
activities such as spiking trees. No doubt the list of abuses attributable
to people who subscribe to an intrinsic value type account will grow as those
positions become more popular and dominant. However, it's quite unfair to
criticize the theory for the abuses of its most radical zealots.

IV Issues Astrobiological
A. The insights of astrobiology
We can at long last turn to the application of all this theory to
ethical issues in astrobiology, especially to the question of whether it's
ethical to terraform Mars. Before we do that, I do want to talk briefly
about how astrobiology enriches ethical theory.  One thing I find especially
fascinating about astrobiology is that it throws old questions in ethics
into a new light. For one thing, when you are talking about what sorts of
ethical obligations we might have to extra-terrestrial life, it forces you
to take the distinction between ratio-centrism and anthropocentrism very
seriously indeed. In more traditional discussion in ethics, we really only
knew of one case of rational creatures – humans. True, God might exist and
might be rational, but this seems a very different thing and, in any event,
we are not sure of Her existence.  True also that there are non-human
animals with some rational abilities, but there is no good evidence that
even our primate cousins have rational abilities very close to what would
be required for true moral participation. If, however, you are addressing
a group of people whose goal is to actually find life on other planets, you
are forced to explicitly consider the possibility that some of this life
may be fully rational and thus have intrinsic moral value.
There is also the point that the distinction between intrinsic and
instrumental value is much more stark in astrobiological contexts. One
problem you always have in terrestrial contexts is that people often stake
out the same position for very different reasons. Because they are
ultimately defending the same position, they have the luxury of being less
than clear about precisely what their reasons are and how they differ from
alternative justifications. As I pointed out above, a ratio-centrist might
be a vociferous advocate of rainforest conservation, but for completely
different reasons than some of his fellow travelers. The bottom line is
that Earth is such an interconnected biosphere that fiddling with any part
of it is also going to affect other parts, including ourselves. Thus there
is always rational self-interest at stake on both sides of terrestrial
environmental issues. This is why, for example, environmental publications
often freely mix very different kinds of ethical justifications – here an
appeal to economic considerations, there a plea for common moral standing.
If we do find life beyond Earth, however, we will not have to worry about a
shared biosphere. There will still be instrumental value to extra-
terrestrial life, of course, but it will be much, much easier to separate
this from intrinsic values. We will not be in the same ecological
lifeboat, so to speak.
Finally, at least for the next few hundred years, there are quite likely
to be fewer problem cases in astrobiology than on Earth. That is, there
will be fewer cases like dolphins - entities which seem to possess some but
not all aspects of reason and thus present real challenges to any rational
principle of division. Part of the reason for this is that we will not
share a common evolutionary history (presumably) with extra-terrestrial
life and thus the differences will be much more pronounced. There's also
this, though: if we think of the search for extraterrestrial life as a
random walk through the possibility space of reason, we are far more likely
to encounter entities with very high or very low levels of reason relative
to us than entities which are very similar to us. Thus, we are likely
either to encounter indisputably rational extra terrestrial life or we will
discover interesting puddles of slime on a planet's surface. To a ratio-
centrist, this is nice because it means most of the ethical disputes will
be much less messy than on Earth. They will either involve interactions
between two rational species, in which case both sides clearly have
intrinsic moral value and we can treat potential conflicts of interest the
same general way we would treat any ethical dispute between humans, or they
will involve interactions between rational creatures and entities which
clearly have no reason at all and we can treat them as straightforward
questions of maximizing utility for those within the moral community.

B. On Terraforming Mars
So, what does a ratio-centrist have to say about a major manipulation of
the environment beyond Earth for human purposes, like Terraforming Mars?
Should we attempt to remake Mars in the image of Earth so that we can more
efficiently exploit its resources, or should we leave large portions of the
solar system as unspoiled wilderness?
Let me begin by pointing out what a ratio-centrist would not do. He
would not argue that we should rush in immediately and begin building
parking lots and strip malls on Mars, destroying the Martian landscape and
any potential ecosystem without compunction and dismissing the objections
of environmentalists as silly. True, when the technology is at hand, I am
quite confident there will be people saying we should do this sort of
thing, but they are not really ratio-centrists. In fact, I'm not sure they
have an ethical position at all as opposed to a very short-sighted kind of
self-interest. There is absolutely no reason why a ratio-centrist can't
defend a pretty robust sense of environmental conservation. The question
we have to ask is why would he do this?
A ratio-centrist is primarily concerned with how the interests of
rational creatures are impacted by decisions. If there are rational
creatures on Mars, then he is going to argue that we should leave the red
planet entirely alone, since it is not ours to exploit (whether this advice
will be listened to is another matter entirely). Given what we know, this
seems an extremely remote possibility, however. There are clearly rational
creatures on Earth, however, and we must seek to serve their interests.
However, the rational creatures of Earth do not speak with one voice. Some
of them want strip malls and parking lots, some want areas of pristine
wilderness to contemplate, some want scientific investigation of the
unknown, etc.  Even if we grant that a question about
Terraforming Mars is exclusively about serving human interests, we are
still pulled in many different directions.
The first thing we have to keep firmly in mind is that Mars is a
treasure trove of scientific information. This is true even if there has
never been a single speck of life anywhere on the planet.  Of course, the
scientific value is multiplied many times over if there is life, even more
if the life is of an independent evolutionary origin, and still more
if it is organized in complex ecosystems, etc. Indeed, we will not even
know what Mars contains, living or otherwise, until a significant piece of
science has been conducted there. It does not require a view that Mars or
its potential inhabitants have intrinsic moral value to see that we should
not destroy, willy nilly, that which we do not even understand. Therefore,
we should all be able to agree that a strong conservative approach to
exploration is in order initially. We should not do things to Mars,
especially things which we can never reverse, until we have a much better
picture of what is at stake. This is going to require an extended period
of scientific exploration without large scale intervention.
The second important point we must consider is that some rational
creatures on Earth wish us to leave Mars in pristine condition. To the
ratio-centrist, it doesn't matter in some sense whether their reasons are
correct. All that matters is that they desire this. Even though as a
ratio-centrist I feel I have no ethical duty at all towards the Martian
landscape or possible Martian life, I do have obligations to my fellow
rational creatures on Earth. Failing to take their feelings into account,
even if I don't share them, will harm them and I have to care about this.
Kant argues, for example, that though I have no direct moral obligation to
your cat, I do have an obligation to you. Thus killing your cat is
immoral, not because it harms the cat but because it harms you, a rational
creature.
For these kinds of reasons, we should be cautious about Terraforming
Mars. We need to take adequate time to develop a clear picture, both of
what Mars has to offer and of how rational creatures on Earth really feel
about the use of Mars. It's important to emphasize when rational
convergence occurs, and here we have convergence: both an intrinsic value
theorist and a ratio-centrist will agree that we should leave off large
scale manipulations of Mars until we are better able to answer these
questions. Of course, they will immediately get into an argument about how
long we must wait and when precisely we will have answered the questions
adequately, but certainly a period of 50 years or so is not at all
unreasonable.
On the other hand, we can't lose sight of the fact that Mars does offer
very enticing possibilities. Sooner or later, the ratio-centrist will
argue that the scales have tipped and we need to end the relatively passive
exploration phase. Assuming it's technologically feasible, if we were to
terraform and colonize Mars, it would offer enormous advantages to billions
of humans. To begin with, it would establish a second home base for the
human race, thus significantly reducing the likelihood of our perishing in
some disaster or other. It's hard to calculate the value of something like
this, but certainly avoiding the destruction of our species has to be one
of the most important considerations any ratio-centrist could possibly
contemplate. A large colony on Mars would certainly offer access to all
sorts of resources which we could use to fuel our economies.  Money itself
is of no moral value, of course, but efficient and thriving economies
improve standards of living for rational creatures and this is certainly
important. Finally, expanding human habitation to Mars would allow us to
maintain larger populations of humans. This might seem like a bad thing,
but it does have real advantages such as a faster rate of progress in
science and technology (via larger absolute numbers of scientists), a more
diverse and rich culture, and larger economies of scale. To those who
still remain unconvinced of the utility, I ask them to consider a question
like this: what is the relative ethical value of contemplating a pristine
Mars versus contemplating a new human culture on another planet?
Now I grant that things are not as simple as I paint them. Terraforming
may not be feasible for a very long time, determining when we have learned
enough to progress to active measures on Mars will be controversial, etc.,
etc. However, at least with ratio-centrism we have not just an abstract
theory of ethical value, but a clear means of making a decision. True, one
ratio-centrist may disagree with another about the details since these are,
after all, horrendously complex issues. However, they at least are
speaking the same language and attempting to assess ethical value with a
common scale. In short, ratio-centrism offers a framework for making
difficult decisions which the intrinsic value account simply can never do.

-----------------------
[i] This is why, for example, the claim that evolution is "just a theory"
is such an effective tool in the hands of the creationists. Evolution is
certainly a theory, but adding the word "just" implies that theories are
mere opinions which at best serve as stopgaps while we await the discovery of
irrefutable facts.
[ii] This is, of course, an unlikely scenario designed to elicit a strong
intuitive reaction. However, conflicts between individuals with intrinsic
moral value will be commonplace and, the wider you cast the net of
intrinsic value, the more often you will be faced with such dilemmas
(albeit usually less poignant than this one).
[iii] Honesty compels me to point out that, unlike ratio-centrism in
general, this is not a popular view amongst my fellow ethicists.
[iv] I am also assuming a certain notion of society here, namely one which
is capable of producing culture. This rules out insect "societies", but
might well include some non-human primate groups.