Saturday, March 17, 2007

When is ‘seeming to see’ enough?

There are a lot of different cases where people claim to have an experience which amounts to simply seeing that some P is the case. Chess players ‘see’ the weakness of a pawn structure, potters ‘see’ that a certain pot will crack when fired, people having religious experiences suddenly ‘see’ that god is real and cares about them, intro math students ‘see’ that you can’t put 4 puppies into 3 boxes without putting more than one puppy into some box, nearly all ordinary people can ‘see’ that you can’t pick a lock with a banana, and, of course, we can see that we have hands. This raises some natural questions: how much justification do these experiences-as-if-of-seeing-that-P provide? And are there natural divisions in the list of examples I just gave, or do they all have the same epistemic status?

At present I am torn between two opinions about the epistemic status of these ‘seeming to see’ experiences. The simple view would be to say that any such experience where it feels like you are sensing that P provides prima facie warrant for believing that P. The case of religious experience gives me some qualms, though, and one can cook up even more implausible cases. Some people claim that they can feel, via a sense of foreboding in their heart, that their twin or loved one is in trouble. Or imagine looking northwards at the clouds in the direction of Canada and ‘feeling a great disturbance in the force’, as it were, which seems to let you feel that something terrible is happening in a certain small town in Canada.

Now I hesitate to say, given this kind of example, that seeming to see that P provides prima facie justification. It’s pretty clear that in these circumstances a rational person should not immediately believe what their strange experience seems to show them but check (say, by making the appropriate phone calls) whether this experience really does reliably track how their twin is doing or what is going on in Canada. And a supporter of the ‘prima facie warrant’ idea can agree with this. But what about cases where there is no practical possibility of checking – suppose you have these experiences when you are out in the woods, or suppose God tells you as part of your religious experience that you will only get normal empirical evidence for his existence after he is dead [ed: after YOU are dead :)]? Here I am inclined to think that you shouldn’t believe what you seem to see at all, until you have checked the reliability of your experiences as if of seeing – and that once you do this the amount of evidence which your seeming to see provides is proportional to the evidence that you have now accumulated that your experiences of seeming to see are reliable.

But what about the case of sense perception? You can’t check the reliability of your senses against something else, but surely seeming to see that there’s a table in front of you does give you reason to believe it. This leads to the second more complicated theory of the epistemic status of seeming-to-see-that-P.

On this (slightly Peacockian theory) most such experiences give one no reason to believe anything on their own. You are only justified in believing what such experiences seem to tell you if you are also frequently exposed to evidence that confirms the reliability of this supposed perception. So the chess player who ‘seems to see’ that his queenside pawns are weak only has as much reason to believe that the pawns *are* weak as he has evidence that these experiences of his are reliable (so e.g. if he is a chess master he will have strong reason to believe it while if he is infamously bad at chess like myself he will have very little reason to believe this).

BUT (here’s the Peacockian part) in some cases the experience of seeming to see that P is central to, or indeed nearly all there is to, our practice of saying that P. In these cases the facts about when we ‘seem to see that P’ largely determine what we mean by P and hence what it would take for P to be true. Specifically, these facts determine the meaning of P in such a way that if we say P whenever we feel like we can ‘see that P’ we are quite likely to be right. So, for example, if what tends to give us the experience of ‘seeming to see that there’s a table’ is tables, then ‘there’s a table’ means there’s a table; if it is vat state T, then ‘there’s a table’ means the vat is in state T, and so on, a la Putnam on the BIV. Thus, in these very special cases, believing that P when you have the experience of seeming to see that P will be a reliable, and maybe even justified, method of forming belief.

This proposal has the advantage of giving a motivated way of dividing up the list of ‘seeming to see’ experiences I started with. We have other practices which give us an independent grip on what it would be for your twin sister to be in trouble or for disaster to be striking in Canada. Thus your feeling of conviction that P remains just that until you have evidence that this sixth sense of yours is reliable. But, on the other hand, in the perceptual case we don’t have this kind of independent grip on the stuff which our five senses seem to show us. Thus your experience of seeming-to-see that there is a table plays a role in determining *what it would mean for there to be* a table there which ensures that you are justified.

So how does this sound? Any takers on the simple proposal or the split (not to say…shudder…disjunctive ;) ) proposal? New proposals? Obvious points in the philosophy of perception which I am missing?


logicnazi said...

Talk about (belief) rationality presupposes a model of the individual as directly receiving some sorts of perceptual/experiential input and then somehow reasoning with them. This requires that we somehow generate a model of the individual where certain sorts of things we chalk up as input (perceptual experiences) and other things we leave in the realm of judgement.

This division is a choice we make as determined by context, and I think a simple example illustrates this point. Let's consider an eminent scientist who, reading an argumentatively flawed scientific paper, concludes that the conclusion is correct. Is this a rational belief for him to have? Well, if we consider his evidence to be merely the facts he read in the paper, then no: the argument is flawed and they don't establish the conclusion. On the other hand, if we allow the experience he has of judging this to be a convincing argument to count as part of his input, then (assuming the details of the flawed argument have passed from his memory) his general accuracy may justify believing that the result is valid.

In other words, we have a theoretical notion of whether a certain set of facts justifies (perhaps relative to some implied background knowledge/inference rules) some conclusion. When we assess whether it is rational for someone to believe something or not, we infer from context some set of data to be taken as the individual's given information. Then, using this model, we can answer whether it would be rational for them to believe the proposition.

Really of course the individual just has a series of experiences, some having content but the notion of rationality asks us to model them a certain way and certain vague rules of thumb and contextual clues tell us what choices we should make in constructing the model. This, by the way, is the brilliance of Carnap's understanding of philosophy and what so many people don't follow in his debates with Quine. Carnap understood that rather than answering real questions about the world when we ask whether someone is rational we are merely asking questions about a certain sort of mental model and the choices we make in constructing that model affect our answers.

One more example might be constructive. Suppose it turns out that from the entire sequence of events you could remember if asked (including childhood memories and the like) it would be possible to predict that, contrary to meteorological predictions, it will rain tomorrow. We still wouldn't say (in a normal context) that it would be rational for you to believe it will rain tomorrow. That's because we don't accept, in this context, the model where you get to reason with EVERY event you could possibly call to mind. On the other hand, it isn't what you actually remember that determines the information we include in the model. In truth, when someone reads even a fairly short scientific paper they will not at any one time remember all the details, but we don't let them get away with being rational in virtue of having the experience 'I remember that the paper seemed right.'

Perhaps to be a little more specific, I should say that when we ask if it is rational for someone to believe something in a given situation, we really are looking to see if there is any reasonable model of their information that lets them validly reach the conclusion. It is not that we exclude any particular piece of a huge long sequence of memories they might have; it is just that we don't find the model that takes all of those to be their information reasonable.

logicnazi said...

Ohh crap, I forgot to actually answer the question.

Ultimately the answer is just that it depends on context. When we ask whether I am justified in believing there is a table in the room, the background corpus lets us assume that the external world is not a masterful trick. When we ask about brain-in-the-vat cases, the context shifts and that is no longer part of the allowed corpus. Though in my opinion there ISN'T anything more to the belief in the external world than that almost everyone has experiences consistent with its existence (no experiences of being in an out-of-vat world).

Things like experiencing that god exists don't cut it as justification for god's existence because our normal context would demand that we model the info as 'I experienced the feeling that god existed', and the standard background corpus doesn't include rules that take us from feeling that such-and-such to such-and-such when it isn't a traditional perceptual experience.

Ultimately, though, I'm not convinced there is a real question here and that it isn't just terminology.

oblomovitis said...

wow this is definitely a seductive view, and I like your reading of carnap (way to get around the no-objective-way-to-determine-what-someone's-framework-propositions-are problems in the carnap vs. quine debate!)

but a) are you really saying that religious people are only being irrational *relative to our arbitrary conventions about how to represent them, what inferences to allow*? and
b) I worry a little that this kind of radical conventionalism/skepticism about rationality might be self-undermining.

p.s. did you hear that carnap went around doing drugs naked in the forest as a younger man? Or that quine made him cry by calling him a metaphysician? It seems like he was your kind of guy in a number of ways :)