Saturday, March 17, 2007

Conflicting conceptions of what it takes to have knowledge vs. what it takes to have *mathematical* knowledge?

Suppose someone is doing a bunch of really long sums, e.g. adding 12-digit numbers, with a blunt pencil and in a hurry. Under these circumstances they are quite likely to make at least one mistake during the course of each sum, so (as they learn when they check over their answers) overall they get only about one sum in ten correct. Now, after doing this for a while, suppose they do one more sum and, being the confident person they are, believe that the answer to it is 1789200056911, as their calculations suggest. And suppose that this is, in fact, the right answer, and that in this case they have been lucky enough not to make any mistakes along the way. Do you think that they know that whatever + whatever = 1789200056911? Is their belief justified?
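For concreteness, the "one sum in ten" figure is consistent with a fairly modest per-step error rate. As a rough illustration (the independence assumption is mine, not part of the example), suppose each of the twelve column additions independently goes wrong with probability p; then a whole sum comes out right with probability (1 - p)**12, and a 10% success rate corresponds to p of roughly 0.17:

```python
# Back-of-envelope check of the "one sum in ten" figure, assuming
# (purely for illustration) that each of the 12 column additions
# independently goes wrong with probability p.

# Per-column error rate implied by a 10% overall success rate:
p = 1 - 0.1 ** (1 / 12)
print(round(p, 3))        # roughly 0.175

# Sanity check: this per-column rate recovers the 10% success rate.
success = (1 - p) ** 12
print(round(success, 2))  # 0.1
```

So even a computer that botched roughly one column in six would match the blunt mathematician's track record, which is part of why a single run through the algorithm is such weak evidence.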

On the one hand, it seems like they know, since they have gone through and been convinced by a correct process of reasoning which entails that this is the right answer. On the other hand, it seems like they don't know, because they usually make so many mistakes that the mere fact of their computing a certain result is very little evidence that that result is correct.

My impulse would be to say that this shows that we have two different standards, one for mathematical knowledge and one for knowledge in general, which clash in this case. Maybe one can also get a conflict between these standards in the opposite direction: setting computers to check the first few billion cases of Goldbach's conjecture (that every even number greater than 2 can be written as the sum of two primes) could eventually give you very strong justification for believing it (and hence perhaps knowledge in the ordinary sense), but it would be strange to say that you know the conjecture is true if you don't have a proof.
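Here is a minimal sketch of the kind of finite check described above. The function names are my own, and the real searches use far more efficient primality sieves than this trial division; the point is just that each even case can be verified mechanically.

```python
# Verify Goldbach's conjecture (every even number > 2 is the sum of
# two primes) for even numbers up to a small bound, by brute force.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Check every even number from 4 up to 1000.
for n in range(4, 1001, 2):
    assert goldbach_witness(n) is not None, f"counterexample at {n}!"
```

However long such a loop runs without hitting the assert, it only ever delivers inductive evidence for the universal claim, never a proof, which is exactly the gap at issue.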

Also, this case seems similar to the familiar lottery example ('do you know that you won't win the lottery, when you have evidence that your chances of losing are overwhelmingly good?'), so maybe the lottery example is further evidence that our conception of knowledge is fragmented/highly context dependent.


Simon said...

I don't think the mathematician with the blunt pencil (the blunt mathematician, shall we say?) knows the answer, or has a justified belief about it. He has only a 1 in 10 chance of being correct, and he ought to know this. So, I don't think the belief about the answer would be justified for him.

Why are you inclined to think he does know? Two things: First, his answer is *much* more likely to be correct than a pure guess, even though it's still very unlikely to be correct. This is enough to make us think an answer reached this way deserves some special status - it's not just a stab in the dark.

Second, and more importantly, there's an ambiguity in the phrase "process of reasoning" you used: does it refer to the naturalistic process involving fumbling with a pencil that led to the result (this only has a 1/10 chance of getting the right result), or does it refer to the ideal, mathematical procedure (which always gets the right result)? I'd argue that the former process was the one the blunt mathematician chose; the latter was followed only by chance (even though the blunt mathematician was - in a blunt kind of way - aiming at it).

We could argue for an additional level of reliabilism here; it's not only that you need to follow a reliable method, but you need to reliably follow a reliable method. But I don't think this will work - because I'm still inclined to think the beliefs wouldn't be justified if it were just accidental that he happened to be even reliably following a reliable method.

My sense is that it's not just the method you use to reach your beliefs that justifies them, but your choosing it. We're inclined to think the blunt mathematician is justified only because we're not careful enough about distinguishing which method he's chosen.

CPF said...

I guess I'm not sure what is being asked.

Does he know? Am I inclined to say that he knows? Do I find within myself some sort of oracular urge to apply the word 'knowledge' to this case?

What kind of progress or increase in our understanding would we achieve by answering one of these questions?

As you've described the case, it seems perfectly clear already: our guy is a competent but fallible arithmetician who, in similar circumstances, tends to be rather unreliable but has been lucky in the present case and gotten the right answer. Now you want us to tell you whether this is a case of knowledge. Why? As I said, the facts of the case seem perfectly clear; confusion, unclarity and a sense of mystification arise only when you insist on re-describing the case using the word 'know' (or one of its cognates). But we were clear about the case already; why should we have to re-describe it?

Epistemology isn't my area so perhaps I'm missing something basic. But I don't know what it could be. Can someone enlighten me?

oblomovitis said...

I definitely sympathize with the feeling that 'does y apply to x weird thing?' questions are often boring and futile (e.g. does a virus count as being alive?). For example, there may just be legitimate extensions of our present linguistic practice which would say 'yes' or say 'no', and no definite fact given how we actually use words.

However, let me say three things about this question:
1) I do often find it interesting (and this is pretty much a taste judgment) to hear *general theories* of what factors are relevant to our counting something as knowledge/life/a fork/whatever. To use an extremely simplifying example: I don't care to quibble about whether mr. so-and-so is bald, but I am amused and interested to hear people point out things like the fact that the arrangement of hairs matters to whether someone is bald as well as the number. It's not The Meaning Of Life, but imo it's interesting :)

2) As a person who aspires to answer various questions about how mathematical knowledge is possible (I would have thought that you were such a person too, is this right?), I want to know general things about what mathematical knowledge is supposed to be like. This isn't a matter of tracking down the plus in JTB+ and going into ever so many epicycles and borderline cases. Rather, it *seems* like there are radically different notions (do you have the right kind of relationship to certain canonical kinds of proof vs. do you have reason to believe) associated with 'knowledge' and 'mathematical knowledge'. Whether this is right or wrong (which I would like to find out) seems like the kind of thing an epistemologist of math should know.

3) I have exactly the same feeling as you that everything relevant about the mathematician's situation has been specified. That is what makes me feel like the wide disagreement in people's answers comes from something deep and interesting (ambiguity in our conception of knowledge) and not just people imagining the case differently from one another or drawing the border for a sorites-vague term in a slightly different place.

But if you are asking 'why should I care about the most general features of our concept of knowledge?' there is no categorical obligation ;). If you don't find thinking about this kind of thing fun it would be very irrational of you to keep thinking about it!

oblomovitis said...

Simon: I think your point about ambiguity regarding what method the person is using is interesting. If I understand it correctly it can be broken down into two pieces:
1) People are inclined to give different answers to the question of whether the blunt mathematician knows depending on what method they think he is employing. If they think of him as employing the ideal mathematical procedure, they say he knows (this is a good procedure). If they think of him as writing stuff down in a way that follows the algorithm for addition as best he can, then they say he doesn't know (this is a bad, unreliable procedure, at least for this person at this time).
2) The correct way to think about the guy is the second way just listed, because "the [pure mathematical procedure] was followed only by chance".
Could you say something a little more detailed about that? What relation *do* you have to have to a good method in order for acting in accordance with it to give you knowledge?

Maybe there is some distinction between merely acting in accordance with a rule vs. following it, and/or between following a rule vs. following it for the right epistemic reasons? I feel like there must be some sophisticated understanding of this issue in moral philosophy which could be transplanted to the epistemic case...

logicnazi said...

So let me sketch my intuitions/theory of knowledge. First of all I adopt a contextual theory of knowledge (as many people do). Whether one has knowledge of something is relative to a certain assumed context which dictates the basic premises that are accepted without justification. For instance we might be willing to say that I know my car is in the driveway because I can see it while not being willing to say that I know that my sense experiences aren't being manufactured by Descartes's demon. The context for the first question will (normally) let us take the reliability of sense data for granted while that for the second will not.

Now we model people as believing things pursuant to certain mental arguments. The arguments they believe some proposition pursuant to are pretty much the justifications they would offer if you asked 'why do you believe this?' and 'if that justification was invalid, would you still believe it?' (note that counterfactually they might not believe it if they found out the first argument was invalid). This is only more or less right, because we don't count overly clever people like me who might list every possible finite argument as they kept being asked for justifications. More precisely, the person must really believe that the arguments they offer are valid justifications.

The person knows the proposition just if one of the arguments they offer is valid and establishes that the conclusion is true with a sufficiently large confidence (determined by context). An argument is valid only if it proceeds by acceptable inferences (context determined) AND EACH INTERMEDIATE STEP ASSERTS A TRUE STATEMENT (this avoids Gettier cases).

So now let's apply this theory to Sharon's question. If the person who did the computation thinks 'the answer to this sum is such and such because the way one calculates a sum is by doing blah, and this number is the result of doing blah to the components', then indeed they know that the answer is the number they calculated. On the other hand, if the only justification they have for that answer is 'I did some process and I generally apply mathematical processes reliably', then they do not know the answer.

As far as the lotto question I think this is a trick question because the context of the question implicitly sets the degree of confidence much lower than normal. The question is implicitly asking whether you have reason to believe your probability of having won the lotto is very low compared to the normal chances of having won the lotto.

Frankly I think it's a mistake to try and force these natural language terms into overly simple pseudo-mathematical definitions. We can clean them up a bit but that's about it.

logicnazi said...

BTW I think context also determines whether we are asking 'do we know this is true' or 'do we know it is true that this follows from the axioms' or 'do we know that it is true that this follows from a simple set of axioms that would be generally accepted as describing arithmetic.'

CPF said...

A quick response to Simon and "oblomovitis." (can I just call you Sharon?)

First, I didn't mean to be forwarding some sort of radical Wittgensteinian skepticism according to which all philosophical problems are pseudo-problems arising from the misuse of language. And I sympathize with you that there seems to be a genuine mystery about how "mathematical knowledge" is related to "empirical knowledge" - or, to avoid that contentious word, how our relation to mathematical propositions differs from our relation to empirical ones.

(I'm sure there is a better way to formulate the problem but I'm not going to try that now - I've always found Benacerraf's formulation quite compelling)

But my worry is that I don't see how querying our intuitions about when to apply the word 'knowledge' (especially if we're talking about outlying, borderline cases) is going to be helpful in addressing the problem.

Simon claims that the blunt mathematician doesn't know because he has only a 1 in 10 chance of being right. I don't understand that inference. I suppose we could *decide* to start using our word 'know' in that way, but it's not clear that we do already. And I don't see what use we would have for a concept with a sharp boundary like that.

And suppose we do use the word that way already. Suppose someone did some real empirical studies and it turned out that an overwhelming majority of native speakers agreed that a belief arrived at through a process that is only 10% reliable does not deserve the honorific label 'knowledge.' What would that show?

-It would tell us something about our *concept* of knowledge.

Well, it would tell us something about how native speakers use the word. Is that what you mean?

And to connect this back to Sharon's point, such an empirical study would certainly not tell us anything about how we can know about things from which we are causally isolated. That, I think, is a genuine philosophical problem (though, again, a poor formulation of it) but I don't see how the sort of intuition-checking Sharon asked us to perform provides any illumination.

oblomovitis said...

CPF, ironically I have found this exchange about whether talking about this weird case is helpful for phil of math distinctly helpful. Here's how: if I am right that there is an ambiguity between mathematical knowledge as in having a proof vs. mathematical knowledge as in having sufficiently strong evidence for a mathematical proposition (and an ambiguity between different ways of conceiving the question - Simon - or different contexts in which the question might be asked - Peter G - seems to be a common theme among respondents), then you might expect this ambiguity to bedevil attempts to account for mathematical knowledge.

For example, suppose I am trying to explain how we know some mathematical statement X. I might say that we know that X because we have proven it from such and such axioms using so and so methods of inference. You might then ask why possessing such a proof would give anyone reason to believe something: how do I know that the axioms are true, and that the inference procedures are truth preserving? And suppose (quite optimistically) that I can then give you a story about how we have strong inductive or empirical reason to believe that the axioms are true and all instances of the inference procedures are truth preserving. You might then complain (if you weren't distinguishing these two senses of knowledge) that such empirical considerations could not account for mathematical knowledge, since they merely constitute reason to believe the proposition, not a mathematical proof.

In contrast, if we do distinguish the two notions of what might be required for mathematical knowledge, then this kind of divided story looks much better. There is first the question of how we have 'mathematical knowledge' of a certain proposition, and this is answered by providing a proof (for very fundamental propositions which are usually used in proving things, this proof may be the trivial proof consisting of just writing down the proposition). Then there is the question of how we have knowledge in the ordinary sense of reason to believe P, which might be answered by showing how having mathematical knowledge (the right kind of proof) can give us ordinary knowledge. The theory alluded to above would (I think) answer both questions, but we can only see this if we keep them straight.

In short, a divide-and-conquer approach to mathematical knowledge is only possible once you have recognized the division, so that you can remember what part you have conquered and don't go around and around trying to re-conquer it like a dog chasing its tail.

[btw I have sort of come up with this divide-and-conquer thing off the cuff, so it likely doesn't work, but this is an example of how knowing that a certain concept is ambiguous would be relevant to answering substantive questions.

p.s. I thought up the example as part of trying to evaluate Philip Kitcher's argument that there is no a priori knowledge. And now that I think of it (and remember the whole *Kant scholar* thing), that would probably be an application which would interest you more, so I will post it below if the procrastination fairy strikes again ;)]