Valuing Knowledge: A Deontological Approach


Ethic Theory Moral Prac (2009) 12:413–428 DOI 10.1007/s10677-009-9185-3

Valuing Knowledge: A Deontological Approach Christian Piller

Accepted: 1 June 2009 / Published online: 16 July 2009 © Springer Science + Business Media B.V. 2009

Abstract The fact that we ought to prefer what is comparatively more likely to be good, I argue, does not, contrary to consequentialism, rest on any evaluative facts. It is, in this sense, a deontological requirement. As such it is the basis of our valuing those things which are in accordance with it. We value acting (and believing) well, i.e. we value acting (and believing) as we ought to act (and to believe). In this way, despite the fact that our interest in justification depends on our interest in truth, we value believing with justification on non-instrumental grounds. A deontological understanding of justification thus solves the Value of Knowledge Problem.

Keywords Value of knowledge · Epistemic value · Consequentialism · Virtue epistemology · Reliabilism

There is no immediate problem in explaining why we care about knowledge. If you know how to get the things you want, then you are in a position to get them, and you will get them if you act on such means-end beliefs. If you are curious about something, knowledge will satisfy you. And your friends might be impressed and hold you in even higher regard because of all the things you know. Knowledge, it is clear, can benefit you in these and other ways, and so you value your knowing things. In the same way, my knowledge benefits me and I value my knowing things. (Whether I value your knowing things and you mine depends on whether we meet in a cooperative or in a competitive context.) In this sense, it seems true, we all value knowledge. Exceptions are allowed. 'I wish I had not known that he keeps a dead cat in his shed to study its decay', I say before the dinner guest arrives. 'I wish Tiresias had not told me', says Oedipus to Jocasta. Everyone interested in crime novels (of a particular kind) will be annoyed by first-page annotations which reveal who did it. Similarly, as long as we are well, we have no interest in knowing how and when we are going to die (and, in the case of a violent death, in who did it).

C. Piller (*)
Department of Philosophy, University of York, York YO10 5DD, UK
e-mail: [email protected]


There is no immediate problem in explaining what is good about knowledge until Socrates arrives (Meno 97a-c).

Socrates: If someone knows the way to Larissa, or anywhere else you like, then when he goes there and takes others with him he will be a good and capable guide, you would agree?
Meno: Of course.
Socrates: But if a man judges correctly which is the road, though he has never been there and does not know it, will he not also guide others aright?
Meno: Yes, he will.
Socrates: And as long as he has a correct opinion on the points about which the other has knowledge, he will be just as good a guide, believing the truth but not knowing it.
Meno: Just as good.
Socrates: Therefore, true opinion is as good a guide as knowledge for the purpose of acting rightly.

If you have true beliefs about how to get the things you want, then you are in a position to get them, and you will get them if you act on such true means-end beliefs. If you are curious about something, a true belief will satisfy you, and your friends will be equally impressed by all your true beliefs as they were when you knew – they may not even notice the difference.1 Knowledge, I said, can benefit you in many ways and so you value your knowing things. The same holds for true beliefs. The Meno Problem is the problem of explaining why knowledge is better than mere true belief.2

In Section 1, I argue that knowledge is not always better than true belief. However, a restricted version of the Meno Problem remains. In Section 2, I try to show that the restricted Meno Problem is an instance of a more general problem, which relates to our choice of good means. Having a choice between something which is likely to be good (and evaluatively neutral if not good) and something else which is unlikely to be good (and evaluatively neutral if not good), we should choose the former. This deontic intuition regarding what we ought to choose, I will argue in Section 3, is not supported by any evaluative truth.
In Section 4, I apply this lesson to the case of knowledge, arguing that a deontic notion of justification, though not itself based on what is good, will provide a basis for understanding why we value knowledge.

1 Explaining What the Problem Is

Berit Brogaard (Brogaard 2006, 335) starts her paper as follows: 'A fundamental intuition about knowledge is that it is more valuable than mere true belief.' Erik Olsson (Olsson 2007,

1 Knowing that p goes beyond truly believing that p. Nevertheless, there is a clear sense in which knowledge cannot outrun true belief (and this explains why your friends may not notice the difference between your knowledge and your true beliefs). Take any fact F which, when added to truly believing that p, will make it a case of knowing that p. (Obviously, there are different views about which fact will play this role.) As long as the person truly believes that F obtains (whatever F is), he or she will satisfy the condition imposed on him or her for knowing that p. In order to know that p one has to truly believe p and one has to truly believe some further things.

2 I tackle the evaluative problem – why is knowledge better than true belief? – via the analogous question about attitudes – why do we prefer knowledge to (or value it more than) mere true belief? This need not disturb anyone who has a more robustly objectivist view of values, as long as the reasonableness of our attitudes is allowed to count in favour of the respective evaluative claims (however we understand the metaphysical status of what they are about). What we, after reflection, desire might not be the only evidence for what is good, as Mill thought, but it will be some evidence.


343), another representative of a view common amongst philosophers, observes that 'knowledge is more valuable than true belief', and, he continues, 'any account of knowledge that failed to make room for this common-sense observation would be defective'. Olsson cites Plato as his witness. He is right about Plato; common sense, however, is not on his side. Let me explain.

Not all of my friends are philosophers, and those who are not do not care much about it. Nevertheless, when we talk, they pick up some details about my professional life. 'Are you still working on your paper about Oldman and Golson?' my friend asks me. 'Alvin Goldman and Erik Olsson', I say. 'Olson, Schmolsen', he replies, 'whatever'. He does not care. It is a detail about my life which is of no consequence for our friendship. I do not blame him. In fact, I share this indifference when it comes to details of the glorious past of Barnsley Football Club. (He is from Barnsley.) Friends have things in common, but people can be good friends without sharing all of each other's interests. Suppose my friend had remembered these names correctly. Whether it would have been knowledge or mere true belief would have made no difference to him. As I said already, his indifference goes deeper than this. He is indifferent between believing truly and believing falsely; he is indifferent between having no epistemic attitude about the philosophers I discuss and having some, be it true or false. Would it be better, as Olsson suggests, if my friend knew? It would not be better for him, for me, or anyone else. I cannot think of any relevant sense in which it would be better if he knew.3 Knowledge need not be better than mere true belief, I say, because when we lack interest, any epistemic attitude, or the lack thereof, will be as good as any other.
In (Goldman and Olsson 2009, footnote 3), the authors disagree with my claim that, lacking any interest in a question, one can be indifferent between having a true belief and having a false belief. True belief, they say, is generally better than ignorance. 'For, if p is true, it is generally better to believe p than not to believe it, especially if the truth of p is something which the agent cares about.' There are many more things no one is interested in than there are things which interest at least someone. For example, no one is (or ever will be) interested in the string of letters we get when we combine the third letters of the family names of the first ten passengers who fly on FR2462 to Bydgoszcz no more than seventeen weeks after their birthday with untied shoe laces. Is it maybe 'IDONOTCARE'? Was there a day on which the number of words uttered by John Wayne's favourite aunt matched exactly the number of onions peeled in red-coloured kitchens in Budapest by people who like John Wayne movies (or, at least, know someone who really loves them)? If there was such a day, was it, maybe, a Wednesday? And then there are old phone books, for example the telephone directory of Bydgoszcz from 1983. What is the sum of all the numbers to be found there (including house numbers)? Is it divisible by three? I don't think so, but be this as it may, I really do not care (nor could anyone else care). The realm of what is devoid of any conceivable interest is simply huge. Disjunctive expansion is a device that makes it even bigger. Thus, I do not agree with Goldman and Olsson that an interest in a subject matter makes the general betterness of true belief over ignorance especially salient; it rather is the case that without

3 I exaggerate. There is a sense in which it would be better to know – it would be 'epistemically better'. The axioms of 'epistemic axiology', I suppose, could simply put knowledge ahead of mere true belief. I want to open this can of worms on another occasion. One of the problems is whether epistemic betterness is a kind of betterness, analogous to, let us say, aesthetic betterness, or whether it is supposed to be built on Geach's idea that goodness is an attributive and not a predicative notion.


any interest, it does not matter at all whether you are ignorant, misinformed or whether you have, by whatever odd means, a true belief about such things.4

Paul Horwich is another philosopher who disagrees. He thinks that it is always a good thing to have a true belief. For one, you never know, it might turn out to be useful. And furthermore, it has intrinsic value. 'What do we mean when we say that truth is valuable for its own sake?' Horwich asks, and he answers as follows: 'A plausible answer is that we have in mind a moral value. We think that someone who seeks knowledge for its own sake displays a moral virtue' (Horwich 2006, 351f). I could not disagree more. Someone who adds up the numbers of a phone book (but is otherwise like us) does not display a moral virtue (in doing so). He is simply an idiot. Suppose you have been visiting me for hours and I offer to show you some of my holiday pictures. You tell me that, unfortunately, you have seen them already. I answer, 'I am sure you will notice some previously unnoticed features by looking at them again. Furthermore, we could put them in a completely new order.' For Horwich, there is always a moral reason not to go to bed and to continue your studies of phone books, look at boring pictures and so on. When reflecting on these examples, I do not find Horwich's view persuasive. (And nor would he, I think.) What about his first point, the potential usefulness of true beliefs? Sure, God could check whether you have added all these phone numbers correctly. However, it strikes me as equally likely that he punishes the resulting knowledge as that he rewards it. Many things can arouse a legitimate interest and, being limited as we are, no one can be interested in everything. Furthermore, there is nothing wrong with people who develop their interests in very different directions.
Therefore, there is nothing wrong with people who do not care about whether some of their beliefs happen to be true or false, nor do they exhibit any shortcoming if the difference between mere true belief and knowledge (about the subject matter they are indifferent about) does not matter to them. If there is nothing wrong about not caring about a difference, it would be odd to claim that, nevertheless, the difference is of deep evaluative significance.

Samuel Taylor Coleridge (1772–1834) was on to something when he wrote in Aids to Reflection (1825), 'the worth and value of knowledge is in proportion to the worth and value of its object'. Strictly speaking, however, Coleridge was wrong. There are bad things – especially those that are very bad – we ought to know about. The badness of what is known does not make knowing it bad.5 Despite this point's triviality, there is a lesson to be drawn. The interest, of which I say that it is a condition for the value of having a true belief, is not an interest in p. It is an interest in whether p or not-p, i.e., it is an interest in a question. More precisely, it is an

4 For GE Moore only some cases of knowledge (or of true belief) are important, namely knowledge of the existence of something good. He writes in Principia Ethica (Moore 1903, 248), '...it appears that knowledge, though having little or no value by itself, is an absolutely essential constituent in the highest goods, and contributes immensely to their value.' Sosa also seems to share the view expressed here when he writes, '...take some bit of trivia known to me at the moment: that it was sunny in Rhode Island at noon on October 21, 1999. I confess that I will not rue my loss of this information, nor do I care either that or how early it will be gone. As interpreted so far, the view that we rationally want truth as such reduces to absurdity, or is at best problematic.' (Sosa 2000, 49). Jane Heal, see (Heal 1987/88), has forcefully made a similar point.

5 Goldman and Olsson claim (ibid.) that '...if p is believed, it is better that it be true than false'. This is certainly false. No one wants any of the bad things to happen that he or she believes are going to happen. They are, after all, bad things, and bad things do not become any better by having been expected. See (Piller 2009) for how this point affects our understanding of our epistemic aims. Note that, in contrast to (Goldman and Olsson 2009), (Goldman 1999) emphasizes the importance of interests for the evaluation of (social) epistemic practices. I have no disagreement with what he says there.


interest in answering a question, obviously in answering it correctly.6 The lesson to draw is that we have not come very far. We value knowledge and true belief over ignorance only in those cases in which we have an interest in answering a question correctly. Thus, we end up with a triviality: we value knowledge and true belief if we are interested in having knowledge or, at least, a true belief about a subject matter.

For many things it does not matter whether we know them or not. Some things, however, we ought to know. As a philosopher, I ought to know quite a few things about philosophy. Furthermore, I ought to know my name, the names of my children, the location of my office; I ought to know that birds can fly but pigs cannot, and so on. In such cases, it is more natural to talk about obligations to know than about obligations to have true beliefs and, according to the most straightforward line we can take here, it is more natural to talk like this because we actually have obligations to know and not only to truly believe. This demands an explanation. When we are interested in a question, why do we want to know the answer and not merely to have a true belief? When we ought to be interested in something, why is it that we ought to know? The restricted Meno Problem is the problem of providing such an explanation.

Remember what, according to many philosophers, explains our preference for knowledge over mere true belief: knowing something is better (more valuable) than having merely a true belief about it. Why is it better? Suppose knowledge is justified true belief. The difference in value, according to what is certainly a natural thought, must reside in what distinguishes knowledge from mere true belief. The value we place on our beliefs' being justified would then be what explains the evaluative difference between knowledge and mere true belief.
Why do we want our beliefs to be justified?7 Some philosophers, for example (De Paul and Grimm 2007, 507), think that our preference for justified over unjustified beliefs can only be explained by the fact that justification simply is a good thing. This evaluative fact explains the psychological fact of our evaluations; itself, however, it escapes any further explanation. They agree that 'it remains mysterious why it is a good thing for a person to believe what he or she is justified in believing and a bad thing to believe otherwise.' It remains mysterious because the value of justification cannot be explained, they think, by appeal to something else that is valuable. This kind of mysteriousness, however, is shared by all claims that something is intrinsically valuable and, thus, they do not worry about this lack of explanation.

Our preference for being happy over being miserable is not mysterious, and, leaving complete scepticism about values aside, neither is the idea that it is better to be happy than to be miserable. Claims about what has and what lacks intrinsic value are not all on a par when it comes to whether or how mysterious they are. In the case of justification, assigning intrinsic value to it is problematic because of the role justification plays and how this role relates the concept of justification to the attainment of true beliefs. Our interest in justification seems to depend on an interest in truth. If we were not interested in answering some question correctly, we would not be interested in the normative status which different methods of answering this question assign to the resulting beliefs. In this way, our interest

6 Whenever there is a (significant) evaluative difference between p and not-p, our interest in the presence or absence of p can ground an interest in the question of whether p. Curiosity, however, will often go beyond what is evaluatively significant.

7 If the assumption of this paragraph, namely that knowledge is justified true belief, is correct, explaining why we value justified beliefs over unjustified ones would explain the value of knowledge. If, however, an anti-Gettier condition would need to be added to turn justified true belief into knowledge, then we would have explained the value of something that falls short of knowledge. I will briefly revisit this issue in the penultimate footnote. For details of how this point has affected the contemporary debate see (Pritchard 2007).


in justification depends on our interest in truth. Furthermore, this dependency is unsurprising and has a straightforward explanation. On most theories of justification, justified beliefs are, or look, more likely to be true than unjustified beliefs.8 We engage in epistemic practices which ensure (or seem to ensure) that our beliefs are justified because we are interested in answering questions correctly. The fact that a belief is justified cannot cause it to be true – justified beliefs simply are true or not – but the fact that we use methods to ensure that our beliefs are justified seems to be related purely instrumentally to our aim of getting the right answers to the questions we are interested in. If this relation is indeed purely instrumental, then it becomes doubtful whether the feature that beliefs acquire simply in virtue of being produced by such methods (namely being, or looking, likely to be true) can add to the value these beliefs achieve in virtue of being correct (or incorrect) answers to our questions. If we know that p, we have a true belief which, furthermore, is likely to be true. How can this be better (or valued higher) than simply having a true belief?9

2 The General Problem

In the literature on this problem, authors focus on the following case. The fact that a true belief is also likely to be true cannot add to its value.10 More precisely, the fact that a true belief is, given the way it was acquired, also likely to be true cannot add to its value. Moving to the general problem, Kvanvig illustrates the point made about knowledge with a

8 I choose this formulation to make room for both externalist and internalist accounts of justification. Externalists will emphasise the actual likelihood, internalists the apparent likelihood of justified beliefs. Let me add that my focus on this aspect of justification – its comparatively higher likelihood of being true – leaves other important aspects aside. For example, we seek justification not only to get things right but also to make our views publicly defensible. Without justification we would not be respectable participants in practices of epistemic cooperation. Thus, more could be said about the value of justification than I attempt to say here.

9 For developments of this problem see, for example, (Zagzebski 2003) and (Kvanvig 2003, Kvanvig 2004). Let me add here an idea for a quick solution of this problem. It is often true that when I aim at something, I also want to find out (when the time comes) whether I have achieved my aim or not. If I aim at owning a Ferrari one day, then, should my efforts be successful, I also want to find out that I own a Ferrari. (Sometimes finding out whether and to what extent one's goal has been achieved might interfere with others of one's ends. This is why I claim that one's ends are typically (but not always) conjoined with a desire to find out whether one's goal has been fulfilled.) Suppose this holds for our epistemic interests. Wanting to have a true belief regarding whether p or not-p, I also want to find out whether I have achieved my aim, once I have settled on a belief. Arguably, if one finds out that one's belief is true, then one knows that p. We value knowing that p over mere true belief because in knowing we satisfy a typical further concern we have, namely the concern to find out whether we have achieved our aim.

10 There is an awkwardness in presenting the case in this manner, as one might be puzzled by the fact that a true proposition need not have a probability of one. Actually, true propositions can have any probability, even a very low one, as long as it is higher than zero. We are interested in the probability of a belief being true given the method which has been used in arriving at this belief. If an unreliable method has been used, the probability of the belief being true will be small, even if it is true. This sounds odd because we talk of 'the probability of a proposition being true' when we actually mean a conditional probability. Once clarified, no harm should come from continuing to talk in this way. Kvanvig's favourite example (Kvanvig 2004, 203) involves two computer-generated lists, one which tells you where chocolate is sold and another which tells you where it is likely that chocolate is sold. (His point is that a third list, which is the intersection of the first two, is no better than the first.) Analogous to the puzzlement above, one will wonder why the first list is not a proper subset of the second. To make his example plausible, Kvanvig has to assume that the list generator uses different programmes to compute each of the two lists, one generating a list of conditional probabilities of chocolate-selling places given, for example, what else they sell, and the other generating, from a different set of data, a list of chocolate-selling places. The information used for the list of chocolate sellers has to be unavailable to the probability list generator.
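Kvanvig's two-list point can be sketched in a few lines. The shop names and list contents below are made up purely for illustration; the only point is structural: once the intersection is taken, the third list contains nothing that the first list did not already contain.

```python
# Kvanvig's two-list example, with made-up data.
# List 1: places where chocolate is in fact sold.
sellers = {"Shop A", "Shop B", "Shop C"}

# List 2: places where chocolate is *likely* to be sold, computed from
# a different data set, so it can include a non-seller ("Shop D") and
# miss an actual seller ("Shop C").
likely_sellers = {"Shop A", "Shop B", "Shop D"}

# List 3: the intersection of the first two lists.
both = sellers & likely_sellers

# For the purpose of finding chocolate, list 3 is no better a guide
# than list 1: everything on it was already on list 1.
assert both <= sellers
print(sorted(both))  # ['Shop A', 'Shop B']
```

The sketch also shows why the two programmes must use different data: if list 2 were computed from the information behind list 1, list 1 would simply be a subset of list 2 and the puzzle about the intersection would not arise.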


different example. He writes (Kvanvig 2004, 203), 'Beautiful people are not more beautiful because they have beautiful parents.' This is true. Let me spell out the relevance of this observation. The conditional probability of being beautiful given that one's parents are or were beautiful is higher than if one's parents were not beautiful. In this sense, people with beautiful parents are likely to be beautiful. If they are beautiful, their being likely to be beautiful does not add to their aesthetic value. Kvanvig (ibid.) calls this 'the swamping problem' and describes its general nature as follows: 'The problem is that when a property is valuable by being instrumentally related to another property, the value of the instrumental property is swamped by the presence of that for which it is instrumental.'

Let me adjust Kvanvig's presentation of the general problem in three respects. First, something can be valuable because it has a certain property. It is not the property itself which is valuable, but that which has it. Beauty is a value (not valuable) and beautiful things are valuable because they have the property of being beautiful. Secondly, it might be misleading to talk about an instrumental relation here. I already mentioned that a justified belief does not cause itself to be true – a justified belief is or is not true. Similarly, something which is likely to be good does not cause itself to be good – either it is or it is not. Thirdly, it might be misleading to say that the value of, let us say, beauty swamps the value of being likely to be beautiful, if by 'swamping' we mean something like overpowering or overwhelming or even undermining. If something is very dangerous but slightly entertaining (crossing a busy road with your eyes closed and your ears plugged), the disvalue of the danger swamps the entertainment value.
If, however, the road had been closed and, thereby, any danger had been eliminated, the entertainment value (which, we have to assume, does not depend on the presence of danger) would become noticeable again. Swamping takes place if the presence of one thing overpowers the evaluative relevance of another. Were the 'swamper' absent, however, the value of what has been swamped would become noticeable again. If this second condition did not hold, the difference between having its value swamped and having no value whatsoever would be lost. Given such an understanding of 'swamping', the value of the good does not swamp the value of what is likely to be good.

Let us consider beauty as an example of a good. True, when beauty is there, likelihood of beauty does not increase the value of what is beautiful. What is also true (and hardly ever mentioned) is the fact that the value of something's being likely to be beautiful is absent also in cases where beauty is not present. Not only do beautiful people not become any more beautiful by having had beautiful parents, ugly people also do not become any more beautiful by having had beautiful parents. More generally, what is bad is not made any better by being such that its likelihood of being good was high.

The problem in explaining why knowledge is better than true belief is the following: if we accept a direct link between truth and justification, it looks like the difference between knowledge and mere true belief rests in the higher likelihood of a justified belief's being true. This difference, however, seems evaluatively insignificant. The general problem is to explain the contribution facts about the likelihood of achieving some good can make to the evaluation of the resulting state of affairs, whether the good is achieved or not. Instrumental value – and I will come back to this point later on – will not do because it never shows up.
Intrinsic value would do, but then we cannot explain the value of the likely good (or the value of justification) in terms of the value of the good (or the value of true belief). We need to add one more ingredient to this puzzle. It points to a hidden premise, namely a consequentialist frame of mind, the rejection of which will, in my view, solve the puzzle. Being justified or being likely to be true, I said, does not add to the value of a true belief, just as having beautiful parents (and so being likely to be beautiful) does not add to one's


beauty. I said ugly people are not any less ugly for having had beautiful parents. It was unlikely for them to turn out ugly, but bad things happen. However, is the same true for false beliefs? Are justified false beliefs not in some way better than stupid false beliefs? If we answer affirmatively, as I think we should, this might be taken to illustrate 'the intrinsic value' of justification. However, we need not understand it in this way. We are dealing with a general problem about the relation between the good and that which is likely to be good. Truth and beauty are just instances of the good. Thus, we should try to preserve the parallel between the cases of truth and beauty. The obstacle is that justified false beliefs seem to be preferable to unjustified false beliefs, whereas the aesthetic value of ugly people does not change when we vary the aesthetic value of their parents.

Nevertheless, there is an important aspect, namely a deontic aspect, with respect to which the cases of beauty and truth do run in parallel. Think about what you would choose when your options are being likely to be beautiful and being unlikely to be beautiful (when beauty is, in this context, the only relevant value). I hope we all agree that you should prefer being likely to be beautiful to being unlikely to be beautiful. The likely good is preferable to the unlikely good. Similarly, if you can choose between a reliable method of belief acquisition and an unreliable one, you should (if the belief answers an interesting question) prefer the reliable method. Other things being equal, a higher likelihood of truth given the method is preferable to a lower likelihood of truth. In this way, we preserve a parallel between the goods of beauty and truth. Likely goods are preferable to unlikely goods. Now, the consequentialist premise, which, in my view, creates the problem, is the following: the preferability of the likely good has to be understood in terms of value.
The right, according to consequentialism, is determined by the good. Let me spell out the general puzzle, which underlies the problem of explaining the value of knowledge, in more detail. It is a puzzle about how to understand the normative force of what is likely to be good and of what is such that it is likely to produce something good. The rationale for doing what has a chance of bringing about something good is the same, I want to suggest, as the rationale for doing what has a chance of being good.11 To be able to talk about these two cases at the same time, I introduce the notion of 'probabilistic goodness'. Means to good ends have a chance of bringing about something good. I say 'they have probabilistic goodness' or 'they are probabilistically good'. Things that are likely to be good, I say, have probabilistic goodness as well; they are, like means, also probabilistically good. A further reason for using this term of probabilistic goodness is the ambiguity of the term 'probabilistic', which allows subjectivist and objectivist interpretations.

The following three claims (A)–(C), all of which look initially quite plausible, seem to be in tension with each other. Why should we take the means which promises to achieve some good? (Alternatively, why should we prefer what is likely to be good to what is not likely to be good?) One answer is that means that are likely to bring about something good derive their normative status – namely that we ought to take or prefer them – from their relationship to what is good.

(A)

The normativity of probabilistic goodness (i.e., of what is probabilistically good) is derived from (and depends upon) the normativity of goodness.

11 Kvanvig and others use the notion of instrumental value with some justification or ‘in a wide sense’ because the same problem which arises in the case of likely to be good and good also arises in the case of likely to produce some good and good.


Furthermore, we (seem to) know the following: (B)

(What has) probabilistic goodness need not be good (it will on occasion fail to be good).

It seems, however, that we always ought to care about probabilistic goodness or, more precisely, about what is probabilistically best. For example, suppose we can choose between getting one free ticket in a fair lottery and getting all the other tickets for free. I think we ought to choose what has the higher chance of producing the good (even if, in fact, the one ticket will win). (C)

We always ought to care about (the things that have) probabilistic goodness, i.e. probabilistic goodness is always normative in the sense that probabilistic betterness will determine what we ought to choose.12
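The lottery case behind (C) can be made explicit with a small expected-value calculation. Assuming, for illustration, a fair lottery of 100 tickets with a single prize of value $G$ (the numbers are mine, chosen for simplicity), the expected goodness of the two options is:

```latex
E[\text{one ticket}] \;=\; \tfrac{1}{100}\,G
\qquad\qquad
E[\text{the other 99 tickets}] \;=\; \tfrac{99}{100}\,G .
```

Probabilistic betterness favours the ninety-nine tickets, and (C) says this is what we ought to choose, even in the run of the lottery in which the single remaining ticket happens to win.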

These three claims are in tension with each other. According to claim (A), the normativity of probabilistic goodness needs a ground. It ‘comes from’ the normativity of the good. According to (C), however, we always ought to care about probabilistic goodness, even if the basis of its normative significance is not present, which, according to (B), might well be the case in particular instances.

3 Solving the General Problem

How should we react to this tension? We can distinguish the following three positions, each characterized by the denial of one of the three claims. G.E. Moore denies (C). According to Moore, only success or actual goodness (and whatever actually brings it about) has normative significance. John Broome, in his book Weighing Goods, argues that decision theory provides the structure of the good. According to Broome, there is a notion of goodness which is a probabilistic notion. Therefore, we can understand him as rejecting claim (B). I side with Moore in his acceptance of (B) and with Broome in his acceptance of (C). In my view, we need to reject the consequentialist outlook shared by Moore and Broome. We need to reject (A). I start with Moore's position. It can be denied that we always ought to care about the things that are probabilistically good. G.E. Moore has denied it. Let me motivate this denial of claim (C) by looking at an analogous set of claims (for which the denial of (C*) is obvious). (A*)

The moral obligatoriness of helping John is derived from and depends upon the fact that helping John maximizes general happiness.

(B*) Helping John need not maximize general happiness.

(C*) We always ought to help John.

If the presence of one feature (the obligatoriness of helping John) depends on the presence of another feature (the maximization of general happiness), then, if this other feature is not present, the first will not be present either. Here it seems clear that claim (C*) has to be denied. Moore thinks the same holds for our original set of premises.

12 I assume that in the choice between the likely and the unlikely good the same state of affairs results not only if they are successful (you win the lottery) but also if they are unsuccessful (you do not win the lottery). In such circumstances, a comparison between the expectation of goodness for both options reduces to a comparison of the likelihood by which they will be good.


In his book Ethics, Moore takes up the following objection to his version of utilitarianism. What we ought to do is, according to this objection, not what actually will have the best consequences but what we can reasonably expect to have the best consequences. (To make room for a notion of probabilistic goodness built on objective probabilities we can run a parallel objection. What we ought to do, according to this version of the objection, depends on what is objectively most likely to have the best consequences.) We have already met one example that supports this objection, namely the case of a lottery in which, intuitively, we ought to take, let us say, the 99 tickets, not the 1 ticket. Moore would disagree. He sticks to his view: we ought to do what is actually best, and if, against all odds, the one ticket wins, then in taking the 99 (or, in a bigger lottery, the 999,999) we did not do what we ought to have done. G.E. Moore vehemently defended the view that what actually happens, and not what we, even reasonably, expect to happen (or what is most likely to happen), determines what we ought to do. 'The only possible reason that can justify any action', Moore writes, 'is that by it the greatest possible amount of what is good absolutely should be realized' (Moore 1903, 153). He considers the case of the lottery. 'Unlucky Failure': Suppose, then, that a man has taken all possible care to assure himself that a given course will be best, and has adopted it for that reason, but that owing to some subsequent event, which he could not possibly have foreseen, it turns out not to be best: are we for that reason to say that his action was wrong? (Moore 1912, 81) Moore is uncompromising when it comes to his account of duties, but other kinds of moral judgments are supposed to absorb the pressure of this example. The agent did act wrongly, Moore insists, but he is not to be blamed for what he did.
The excuse of not having been in a position to know some of one's action's consequences affects an action's blameworthiness, but, being an excuse, it does not affect the action's wrongness. Moore also considers the reverse case. 'Lucky Failure': Or suppose that a man has deliberately chosen a course, which he has every reason to suppose will not produce the best consequences, but that some unforeseen accident defeats his purpose and makes it actually turn out to be best: are we to say that such a man, because of this unforeseen accident, had acted rightly? (ibid.) Moore makes the same move as above: Sometimes we have to blame an agent for having acted rightly. Moore agrees that this sounds odd, but, hard-nosed, he adds: 'I do not see why we should not accept this paradox' (Moore 1912, 82). This view denies that we should always take the best probabilistic means to our ends. It denies that the notion of probabilistic goodness has any normative significance. In the end, it denies any account of normativity which does not equate it with success. Expediency and duty, or, in my terminology, success and normativity, are one and the same thing. Applying such a view to beliefs, we end up with the claim that all and only true beliefs are justified. Moore's view is consistent but harsh. What is harsh about it? Nothing short of success will do. In most of our endeavours, however, we need the cooperation of others (or of the world in general) to achieve success. We think we can only do so much and, once we have done it, we have done all we can be asked to do. In my view, a theory of rationality is supposed to build a bridge from our own limited resources to success. If the aim is ambitious or the evidence is weak, the bridge might not hold our weight. Still, why wait for a lucky gust of wind to blow us over to the other side, when we have the material to start


working on the bridge? Moore's view is that the only thing that counts is to get to the other end. He is not interested in how we build our bridges. In his view, there is no theory of rationality that would have genuine normative significance. In contrast to Moore, I think of a theory of rationality as the very answer to the normative question 'what should I do' or 'what should I believe'. In the lottery case, it is obvious to me that I should take what I see as (and what is) so much more likely to succeed. This does not dislodge Moore's view. I have no good argument against it but the appeal to the plausibility of (C). Nevertheless, this appeal, I hope, will carry some weight. Let me turn to the next way in which we could resolve the tension between claims (A)–(C). According to claim (B), what is probabilistically good need not be good. How can this claim be denied? Decision theory is, in its most common interpretation, a theory of rational preferences (and, building on this, a theory of rational choice). If preferences satisfy certain intuitively plausible axioms, like transitivity, completeness, and what is called the sure-thing principle, then these preferences can be represented by a numerical function, the utility function, which has the expected utility property, i.e. we can evaluate options by mapping them into a set of evaluated consequences so that the utility of an option is equal to the probability-weighted average of the value of its consequences. We can abstract from this interpretation and see decision theory as a set of axioms and a proof (the representation theorem). Decision theory then shows that any relation (it need not be the preference relation) which satisfies the axioms can be represented by an expectational utility function. John Broome applies decision theory to the betterness relation. What he gains is a theory of the structure of the good. The notion of good (or of betterness) can be represented by an expectational utility function.
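The expected utility property can be stated compactly. In notation that is mine rather than Broome's: if an option $a$ leads, in each state $s$ (with probability $p(s)$), to the consequence $c(a,s)$, then its utility is

```latex
U(a) \;=\; \sum_{s \in S} p(s)\, u\big(c(a,s)\big),
```

i.e. the probability-weighted average of the values of its possible consequences. Applying this structure to the betterness relation itself, as Broome does, is what makes goodness come out as a probabilistic notion.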
On this view, the notion of good (introduced on the basis of the betterness relation) is a probabilistic notion. Therefore, claim (B) exhibits a misunderstanding of the very notion of goodness and its probabilistic structure. Goodness is always probabilistic, thus it is wrong to say that what is probabilistically good need not be good. Anything that is probabilistically good is simply good. The criticism that follows is not a criticism of decision theory as an uninterpreted theory, nor is it a criticism of its standard interpretation. All I want to do is to put some pressure on the idea that goodness is a probabilistic notion. If goodness were a probabilistic notion, this would make it highly unusual. What do I mean by this? In all other cases in which we apply a probability modifier to some feature, the claims analogous to (B) are correct. What is probably blue need not be blue, what is probably true need not be true, what is probably probable need not be probable. These examples invite their generalization: anything that is probably F need not be F. On Broome's view, goodness is an exception. Why should a modifier behave generally one way and another way when applied to goodness? We need, at least, an explanation of this curiosity. Any such explanation, I suspect, will have to appeal to the normative force of probabilistic goodness, i.e. it will have to appeal to (C). Thus a defence of Broome's view would start like this: Something positive can be said about preferring the likely good over the unlikely good. It would continue by asking why we could not introduce a notion of value which captured what positively discriminates between the likely and the unlikely good. Remember that the position I defend here accepts (C) as well. What is 'positive' about the likely good is that it is the thing we ought to prefer. And the reason why we should not describe this fact in evaluative terms is simple: evaluative talk is Moorean.
Whether the lottery ticket we have is good or not depends on whether it is going to win. If the fact of whether the ticket wins settles its evaluative status, we should not introduce a second layer of evaluations (which would be in conflict with the first


layer) because what we use it for is already captured by deontic facts. The more likely good is what we ought to choose.13 Let me consider another attempt to deny (B). One looks to the notion of instrumental value in arguing for the idea that what is probably good is, thereby, good. Goodness, many people think, comes in different forms. One form is to be good in itself. Another way to be good is to be instrumentally good, i.e. to have instrumental value. Can this notion of instrumental value support a denial of (B), i.e. a denial of the idea that what is likely to be good need not be good? In other words, what is likely to be good is, thereby, good, namely instrumentally good. Remember what role the notion of instrumental value is supposed to play. We are trying to defend (C) – a general concern for the things that are probabilistically good – within a consequentialist framework. Why should we prefer what is likely to be good to what is unlikely to be good? Given consequentialism, there has to be a sense in which our preference for the likely good is a preference for what is better. Can we say the likely good is better than the unlikely good because it has more value, i.e. instrumental value? This would provide an explanation of the right form. However, despite having the right form, the explanation lacks any substance. Having instrumental value simply is being likely to be good. The notion of instrumental value provides a pseudo-explanation. It uses value words but it leaves no conceptual space in which any explanatory work could be done. This last observation might not reach the real issue. Let us simply introduce the required conceptual space. We do not equate being likely to be good with having instrumental value. Instead, we say that something has instrumental value because, or in virtue of, its being likely to be good. Something is of value if it makes a difference at least in some contexts in which other goods are either present or absent.
One sign of the fact that your happiness is of value is the following: it makes a positive difference to an overall evaluation of our states of wellbeing. Whatever my happiness, if we add yours, we improve the situation overall. Comparing the likely and the unlikely good, however, we realize that whether the good has been achieved or not, the presence of the likely (or unlikely) good makes no such difference. The likely good is as good as the unlikely good in the presence of the good. The

13 I have made my disagreement with Broome look bigger than it actually is. He might actually be an ally when it comes to the rejection of (A). Broome, if I understand his view correctly, would not object to my claim that deontic facts come first. Broome says, 'The pursuit of good may give to ethics, not an objective, but a structure. It may fix the way in which ethical considerations work and how they combine together. It may provide not a foundation or an objective, but an organising principle for ethics' (Broome 1991, 17). Thus, the normativity of probabilistic goodness is not derived from goodness as an outside objective, as it was for Moore. Broome appeals to the structure of rationality and, thereby, to deontic facts when he tries to justify his probabilistic notion of goodness. 'The structure of rationality', he says, 'must tell us something about the structure of ethics' (Broome 1991, 18). Thus, his consequentialism is also only structural and not, as in claim (A), foundational. This means that the normativity of probabilistic goodness is not derived from the normativity of goodness. The disagreement that remains is, therefore, less important for the purposes of this essay. I object to a deontically based notion of goodness on the grounds that it leads into evaluative conflicts. Broome would say that there is a sense in which the likely good is only good if the likely things which make it good happen. But there is also another sense in which the likely good is good independently of what happens. It is good in the sense which expresses in evaluative terminology the fact that we ought to choose it. The tension between these notions surfaces when Broome talks about 'the different status of probabilities'. 'Lower status goodness [goodness determined by a probability less than 1] is a sort of interim goodness, which has to be revised in the final account.' (Broome 1991, 130) Here Broome gives priority to a notion of goodness that is not probabilistic. In the end, it seems to me, Broome is, like me, a Moorean about goodness. Not in the sense that goodness is a non-natural property – we have made no commitment regarding the metaphysical status of evaluative facts – but in the sense that, in contrast to their normative status, the goodness of probabilistic means is settled by facts about their effectiveness.


likely good is as good as the unlikely good in the absence of the good. This suggests that there is no evaluative difference between the likely and the unlikely good.14 Moore's position, the denial of (C), is normatively implausible. Defending (C) by using decision theory offends against the general truth that probable Fs need not be Fs.15 Thus, I conclude, we have to abandon the consequentialist premise (A). In support, I can point to Kant. He did not think that the Hypothetical Imperative rested on the goodness of what following it would achieve. Neo-Kantians, like Korsgaard (1997), try to derive both Imperatives from ideas about what it is to be an agent. I do not take any stance on the success of their project. To take the means to one's ends is, for me, a free-standing deontological requirement. I say 'free-standing' to mark the idea that it is not grounded in considerations of value. Whether other considerations succeed in anchoring its plausibility is, for the purposes of this paper, irrelevant.

4 Applying the Solution to the Value of Knowledge Problem

What can we learn about the value of knowledge from these considerations about probabilistic goodness? We find an analogous triad of claims when we consider the normative status of justification, as it applies to beliefs. (A')

The normative significance of justification (in cases of interest) is derived from the value we place on (or the value of) having true beliefs.

(B') What is justified need not be true.

(C') When interested in arriving at a true belief regarding some question, we always ought to care about justification, i.e. justification is always normatively significant (in issues we take an interest in).

Analogous to the solution of the previous puzzle, I think we have to abandon (A'). The normative significance of justification (in questions of interest) is not derived from anything valuable it might lead to. It is, like the fact that we ought to care about probabilistic goodness generally, a free-standing deontological requirement that, when interested in some matter, we ought to have justified beliefs about it. The pressure the puzzle exerts should ease the move towards a deontological way of thinking about justification. We ought to have justified beliefs (whatever the right theory of justification turns out to be). When Marian David considers such a proposal, he writes, 'If there is a proper place for absolute oughts - for oughts that have a hold on us no matter what - then their place must surely be in ethics' (David 2005, 309). Remember the point with which I started. It would be implausible to assume that knowledge is always better than true belief. We have to restrict such a claim to cases where we are interested in the right answer. This

14 Thus, instrumental value is a 'value' in name only. The consequentialist had to invent an evaluative notion, which mirrors basic deontic facts, namely that we ought to prefer the likely good to the unlikely good, in order to defend the idea that evaluative facts explain deontic facts. I read Ross (1939, 257) as defending a similar position.

15 Goldman's recent attempt to solve the Meno Problem from a reliabilist basis can be seen as yet another attempt to deny claim (B). He calls his view 'type-instrumentalism' and explains it as follows, 'When tokens of type T1 regularly cause tokens of type T2, which has independent value, then type T1 tends to inherit (ascribed) value from type T2. Furthermore, the inherited value accruing to type T1 is also assigned or imputed to each token of T1, whether or not such a token causes a token of T2' (Goldman and Olsson 2009, 16). I am unconvinced. A train which derails and kills you does not seem to inherit any goodness from its safe relatives. Similarly, the winning lottery ticket did not inherit the property of being a useless piece of paper from its million useless cousins. For more details see Piller (2009a).


restriction is still in place. My recommendation is not to find a place for an epistemological Categorical Imperative. The normativity of justification (as it applies to beliefs) is hypothetical. When we are interested in a question, then we ought to choose a reliable method of answering it (or, more generally, we should prefer having a justified to an unjustified belief). Justification is only normative in those cases in which we have such an interest in truth. How does such a deontological understanding of justification answer the problem we started with? Can we now explain why knowledge is often preferred to true belief? We need to take two more steps. First, it is a general fact about us that (not always but in many cases) we value being active. We care about our children, our careers and, maybe, our creditworthiness, but there is more. We do not want to be passive recipients of the goods we care about. We want to be active in their pursuit. If we act stupidly and still get what we are after, we are very lucky. If, however, we pursue our aims efficiently and rationally, i.e. if we pursue them as we ought to pursue them, then we do the best we can do in terms of active participation in the realization of the things we care about. In other words, when we act, we not only want to achieve whatever it is we are after, we also want to do it well, because in acting well, we satisfy our aim of being involved and of being active. The second step is already hinted at in what I said above. A deontological conception of doing what is likely to be or to produce the good tells us what it is to act well. A deontological conception of justification tells us what it is to believe well. We value acting well and we value believing well. Nothing seems to be wrong with such attitudes; therefore, we can express the same point in realist terminology: it is a good thing to act well and to believe well. Knowledge is better than mere true belief because in knowing we believe well, i.e.
we have satisfied the deontological requirement the right theory of justification imposes on us. The crucial move in my attempt to solve the Value of Knowledge Problem is the following. Compared to the dominant consequentialist way of thinking about these matters, we have reversed the order of explanation. We do not need to explain the normative significance of justification in terms of the good it might lead to (it might not lead to any good); rather, we explain the value of justification, and why we value justification, in terms of having satisfied a free-standing deontological requirement. Instead of looking for a value that would ground an ought, we use the deontic fact expressed in (C) as the basis for our evaluation. Reliabilists, like Goldman and Olsson, object to virtue epistemologists, like Sosa and Greco, on grounds of parsimony. For them true belief is the only fundamental value and the value of knowledge needs to be explained in terms of what matters fundamentally, which is getting things right. Do I just fall on the side of Sosa (2007) and Greco (2002), or have I offered an independent account? I agree with Sosa and Greco that when we know, we believe well. Sosa's favourite analogy is the skilled archer who hits the target because of his or her own skill. The person who knows achieves a true belief in a likewise skilful way. Sosa and Greco emphasize the causal link between ability and success, which, in their view, increases the status of such a belief. I have not yet mentioned it, but nothing I said here is in conflict with the importance of this causal condition. Not only does the knower believe truly and as she ought to believe; there is more, if Sosa and Greco are correct: the knower believes truly because she believes as she ought to believe.16

16 If Sosa is right in thinking that his account of knowledge does not need a separate anti-Gettier condition, then we would indeed have solved the restricted Meno Problem. Although the deontological account offered here is compatible with Sosa's condition of aptness, which is the idea that a belief is true because in holding it one believes as one ought to believe, I am not convinced that a deontological account profits from the addition of Sosa's condition.


My account is at heart deontological. Epistemological rules – and I have only mentioned one of them, namely that we ought to believe what is (or looks to us) more likely to be true – give some substance to what epistemic abilities and virtues are. The virtue theorist and the deontologist share their opposition to consequentialism. The ground they have to fight over is whether we need rules to understand virtues or whether independently understood epistemic abilities and virtues are the only ground for epistemological rules. In the context of the debate about the normative force of probabilistic goodness, the first option looks more plausible to me. Both virtue theory and deontology will oppose the reliabilist's ideal, which is parsimony in the ways in which beliefs can be better or worse. General considerations about instrumental rationality show us that we have to accept requirements which are independent of considerations about the good. The value that justification adds to true beliefs is the value of doing things well. I wanted to show that doing things well (or, as the case may be, not doing things badly) is a value that rests on the satisfaction of a deontological requirement. If this account of instrumental rationality is correct, then in accepting it one thereby accepts a new source of value. Facts about what we ought to do and how we ought to believe tell us what it is to act and to believe well. When some activity is important to you, either because of its own features or because of what it leads to, and, furthermore, you want to perform this activity well, then you might well want to do it well, because thereby you are active in the pursuit of some good. Doing it well is being active and, I suggested above, this is something we value for its own sake. Nevertheless, we have to value something else, namely to believe truly, in order to value justification.
It is still true that if we were not interested in a question, we would not value choosing the best method to answer it. This dependency is preserved, but it does not show that we value justification for the sake of believing truly. We value it for its own sake under the condition that we are interested in truth. Think about it this way. You do not want to write well simply in order to write. Nevertheless, you would not want to write well if there were no point in writing.17

References

Brogaard B (2006) Can virtue reliabilism explain the value of knowledge? Can J Philos 36:335–354. doi:10.1353/cjp.2006.0015
Broome J (1991) Weighing goods. Blackwell, Oxford
David M (2005) Truth as the primary epistemic goal: a working hypothesis. In: Steup M, Sosa E (eds) Contemporary debates in epistemology. Blackwell, Oxford, pp 296–312
De Paul M, Grimm S (2007) Review essay on Jonathan Kvanvig's The value of knowledge and the pursuit of understanding. Philos Phenomenol Res 74:498–514. doi:10.1111/j.1933-1592.2007.00034.x
Goldman A (1999) Knowledge in a social world. Clarendon Press, Oxford
Goldman A, Olsson E (2009) Reliabilism and the value of knowledge. In: Haddock A, Millar A, Pritchard D (eds) Epistemic value. Oxford University Press, Oxford
Greco J (2002) Knowledge as credit for true belief. In: DePaul M, Zagzebski L (eds) Intellectual virtue: perspectives from ethics and epistemology. Oxford University Press, Oxford, pp 111–134
Heal J (1987/88) The disinterested search for truth. Proc Aristot Soc 88:79–108
Horwich P (2006) The value of truth. Nous 40:347–360. doi:10.1111/j.0029-4624.2006.00613.x
Korsgaard C (1997) The normativity of instrumental reason. In: Cullity G, Gaut B (eds) Ethics and practical reason. Oxford University Press, pp 215–254

17 I have presented material from this paper in talks in Amsterdam, Dresden, Krakow, York, and Bristol. I thank the audiences on these occasions for their helpful comments. Special thanks to Johan Brännmark, John Broome, Dorothea Debus, and Wlodek Rabinowicz.


Kvanvig J (2003) The value of knowledge and the pursuit of understanding. Cambridge University Press, Cambridge
Kvanvig J (2004) Nozickean epistemology and the value of knowledge. Philos Issues 14:201–218. doi:10.1111/j.1533-6077.2004.00028.x
Moore GE ([1903] 1993) Principia ethica. Baldwin T (ed). Cambridge University Press, Cambridge
Moore GE ([1912] 1965) Ethics. Oxford University Press, Oxford
Olsson E (2007) Reliabilism, stability, and the value of knowledge. Am Philos Q 44:343–355
Piller C (2009a) Reliabilist responses to the value of knowledge problem. Grazer Philosophische Studien, forthcoming
Piller C (2009b) Desiring the truth and nothing but the truth. Nous 43:193–213
Pritchard D (2007) The value of knowledge. In: Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/
Ross WD ([1939] 1963) The foundations of ethics. Clarendon Press, Oxford
Sosa E (2000) For the love of truth. In: Fairweather A, Zagzebski L (eds) Virtue epistemology: essays on epistemic virtue and responsibility. Oxford University Press, Oxford, pp 49–62
Sosa E (2007) A virtue epistemology. Oxford University Press, Oxford
Zagzebski L (2003) The search for the source of the epistemic good. Metaphilosophy 34:12–28. doi:10.1111/1467-9973.00257
