Thursday, September 2, 2010

Carman on Merleau-Ponty on Perception

Hurley, Ross & Ladyman, and others, have objected to Adams and me for drawing attention to a distinction between causation and constitution that we take to be implicit, and sometimes explicit, in the EC literature. Moreover, I have sometimes been taken to task for misreading Noë on the role of action in perception.


Reading through a new text on Merleau-Ponty by Taylor Carman, however, reassures me that we have not simply imagined this distinction or Noë's view on the matter.
Merleau-Ponty maintains that perception is not an event or state in the mind or brain, but an organism’s entire bodily relation to its environment.  Perception is, as psychologist J. J. Gibson puts it in The Ecological Approach to Visual Perception, an “ecological” phenomenon.  The body consequently cannot be understood as a mere causal link in a chain of events that terminates in perceptual experience.  Instead, it is constitutive of perception, which is the most basic—and in the end, inescapable—horizon of what Merleau-Ponty, following Heidegger, calls our “being in the world” (être au monde).  Human existence thus differs profoundly from the existence of objects, for it consists not in our merely occurring among things, but in our actively and intelligently inhabiting an environment.  (Carman, 2008, p. 1)
Obvious caveats apply.  Perhaps Carman has gotten M-P wrong.  Perhaps Noë doesn't buy this part of M-P's view.  My point is that plausibly conscientious readers can come to the view that a causation-constitution distinction is in play in at least some segments of the literature.  (I think that Mike Wheeler, for one, has moved away from this way of thinking about EC.)  Moreover, plausibly conscientious readers can come to the conclusion that Noë, under M-P's influence, thinks (at least at times) that perception requires bodily action.

Carman, T. (2008). Merleau-Ponty.  Routledge.

13 comments:

  1. This post made me go catch up on this coupling-constitution issue you have. This comment therefore wanders a little, sorry, but this seems like as sensible a place to put it as anywhere :)

    As I understand it, you think that even if an external (non-brain) element plays a causal role in cognition, there's no reason to think that it constitutes a part of cognition? A couple of your earlier posts seem to settle on the idea that the notion of 'coupling, suitably defined' is where the problem lies: you don't buy any of the definitions. Is this about right? (My library doesn't have your book and it's nearly teaching time, so it may be a while before I can spend any time on the details, sorry.)

    There are numerous things that leap to mind for me.

    1. I'm quite happy with the possibility that something other than my brain can be a constitutive part of my cognition; you are right to insist, however, that not just any old thing counts. So I'm obviously not worried that a reading of Merleau-Ponty entails 'constitution'.

    2. I agree with you that many of the suggestions for the right kind of coupling are problematic. I think non-linearity is interesting and probably necessary, but I'm not entirely convinced it's sufficient, for instance.

    3. My first swings at thinking about how to do this have always been straightforward. Any task (cognitive or otherwise) entails that some work happens: things have to occur to get me from the beginning to the end of the task. Step 1 is therefore to do a task analysis and identify what is required for an organism to do whatever it is you are trying to explain. Step 2 is to identify what is performing that work. If any of the necessary work is being done by something other than the brain, then it's surely game on as far as embodied cognition is concerned.

    My problem with representational, computational cognitive psychology has always been that it rarely does a task analysis, and even when it does it simply assumes that, for instance, perception couldn't possibly handle that bit (an assumption of poverty of stimulus). If you can show that perception can handle that bit (which Gibson and the ecological approach try to do) then that work doesn't have to be done by a representation or piece of brain. To the extent that you succeed and that everyone agrees your task was a cognitive one, you've now got cognition occurring outside the brain (cheap move number 1 is to simply go 'oh, well then that's clearly not a cognitive task', which happens more often than I like). Obviously you can argue about how well we’ve succeeded, but that’s not an argument about a fallacy.

    So I guess the end result for me is that it’s weird to talk about this as a general fallacy (I think Chalmers was commenting to this effect on an early post); sure, you can make a mistake, but that’s not the same thing. I also don’t see what you achieve by pointing out the Merleau-Ponty connection, given that I see no in principle reason against extending cognition.

    One side note: I’ve seen this come up a few times on the blog. What is your problem with this idea that perception requires action (e.g. moving eyes, etc)?

  2. So, for this I don't need to reread ...

    Yes, just because there is a coupling sort of relation between cognitive processing and tool use does not suffice to establish that the tool use constitutes part of one's cognitive processing.

    Re 1: I too am happy to admit that it is *possible* for things outside the brain to realize cognitive processing. I just don't think it happens very often, if at all.

    Re 2: Glad we can at least agree that this coupling idea needs work.

    Re 3: "Step 2 is to identify what is performing that work. If any of the necessary work is being done by something other than the brain, then it's surely game on as far as embodied cognition is concerned."

    Ok. So, here is where the rub comes. Take this case. Suppose you have the task of computing the first thousand prime numbers, and you do this by pushing the return key on a computer that runs a program that prints out the first thousand prime numbers with a delay of 100 years. So, I would say that the computer uses non-cognitive processing to help you accomplish this task. The cognitive processing ends at about the time you finish pushing the return key. Doesn't it stop by the time you die, even though the computer does not complete its run until after you die? (Thanks for provoking me to develop this version of the objection. I like the twist about dying before the completion of the task. But, maybe that doesn't move you.)
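    (For concreteness, here is a minimal sketch of the sort of program the computer in this example would run; the function name and the trial-division method are just my own choices for illustration, and nothing in the objection depends on them.)

```python
# Toy program for the thought experiment above: once the return key is
# pressed, all of the remaining work is done by the machine.

def first_n_primes(n):
    """Return the first n primes by trial division against earlier primes."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):  # no earlier prime divides it
            primes.append(candidate)
        candidate += 1
    return primes

print(first_n_primes(1000)[-1])  # the thousandth prime: 7919
```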

    Re "My problem with representational, computational cognitive psychology has always been that it rarely does a task analysis"
    I thought that this was doing Marr's level 1, or whatever. I'm not sure what you mean here.

    re: "What is your problem with this idea that perception requires action (e.g. moving eyes, etc)?"
    People can perceive under complete neuromuscular blockade. They cannot perform actions (in the Merleau-Ponty sense of moving their bodies) while under neuromuscular blockade, but they can hear, taste, answer questions, etc.

    Check out, for example, Topulos, G. P., Lansing, R. W., & Banzett, R. B. (1993). The experience of complete neuromuscular blockade in awake humans. Journal of Clinical Anesthesia, 5, 369-374.

    I'd be interested in what M-P and ecological psychologists will say about this. I have not gotten much feedback on this. More of this appears in Chapter 9 of The Bounds of Cognition. It's also in a Journal of Philosophy paper from January of 2007.

    best wishes,
    Ken

  3. I'll have a look at the Topulos paper; it sounds interesting, although I'm happy to bet it isn't the slam dunk you might think :) Sounds like there'll be a blog post in it, it is certainly the kind of counter-example that needs addressing.

    Re: computing prime numbers.

    First of all, this isn't really a very good example of the kind of behaviour people get up to. For some reason it's the kind of example philosophers jump right to, and I've always found it frustrating. It's also not the kind of work I have in mind; the outfielder problem stuff I referred you to is still a good example, where there is indeed a computational solution (calculus) but people actually implement that solution by moving so as to create an optical state of affairs (one of the two strategies, OAC or LOT) that then leads to the same result. The 'work' wasn't 'computing derivatives', that was simply one way of solving the actual problem (getting yourself to the right place at the right time to intercept the ball).

    That is closer to what I mean about a proper task analysis. If you assume that people are implementing computations, then the task is to implement the correct computation (here, calculus). If you analyse the task more ecologically from the start, you see that the task is actually to intercept a ball, and that computation is merely one way to do so. Computation may yet be the solution; but it might not be, and you won't see this if you start your task analysis too far along, which is what Marr did. Gibson's '79 book is his attempt to take just that one critical step back to see if it matters, and then to lay out how it does.
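    Here, for what it's worth, is a toy version of the OAC idea (the simple drag-free projectile model and all the parameters are invented for illustration): a fielder who moves so that the tangent of the ball's elevation angle rises at a constant rate ends up at the landing point just as the ball comes down, without ever computing the trajectory.

```python
# Toy OAC demo: a fielder who keeps tan(elevation angle) growing at a
# constant rate C arrives where the ball lands. Simple projectile model,
# invented parameters.

G = 9.8                # gravity, m/s^2
VX, VZ = 20.0, 30.0    # ball launch velocity components, m/s
C = 0.3                # constant rate of increase of tan(elevation)

def ball(t):
    """Ball position (x, z) under drag-free projectile motion."""
    return VX * t, VZ * t - 0.5 * G * t * t

def fielder(t):
    """Fielder position that keeps tan(elevation) = C * t exactly,
    i.e. stands at the distance d from the ball where z / d == C * t."""
    x, z = ball(t)
    return x + z / (C * t)

flight_time = 2 * VZ / G        # when the ball returns to the ground
landing_x = VX * flight_time    # where it lands

# Just before the ball lands, the fielder is at the landing point:
print(abs(fielder(flight_time - 1e-6) - landing_x))
```

    (A feedback version, in which the fielder accelerates in proportion to the measured optical acceleration, rests on the same geometry; the closed form just makes the point with less code.)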

    Second of all, assuming it was a good example, if I die before the task is done then you've altered the organisation and composition of the system; even if there was a case to say it was an extended cognitive system to begin with, there's no reason not to acknowledge the change with my death; quite the contrary, it's a qualitative alteration in the system's composition (plus the return key won't be being pressed!)

    Your serve, I think :)

  4. I think the neuromuscular blockade (NB) stuff is pretty good. The weakness, as I see it, is that there are not that many experiments of this type. So, you know you can taste lidocaine under NB, but can you taste banana? There are these inductive limitations. This is (roughly) the kind of line Alva Noë and Rob Wilson seem to take.

    Yes, most philosophers do jump to wild thought experiments in the way that psychologists jump to experimental results I haven't read! =) But, seriously, the philosophical method here is simply trying to draw out a prediction of a theory, one that is so absurd one does not want to trouble to do an experiment to check it.

    I have not read the actual outfielder problem literature, only some secondary literature (by philosophers) on this. But, here's my weak question. Grant that the 'work' wasn't 'computing derivatives'. Why isn't it the case that the work was done by computing something else? That would seem to be the obvious computationalist reply, right? But, the stronger question would be to read the literature and ask, Why isn't it the case that the work was done by computing BLAH? Here BLAH would be a rival computational account. (I haven't read the primary literature, so I don't have a guess about what BLAH would be.)

    Finally, "assuming [the prime number example] was a good example, if I die before the task is done then you've altered the organisation and composition of the system; even if there was a case to say it was an extended cognitive system to begin with, there's no reason not to acknowledge the change with my death; quite the contrary, it's a qualitative alteration in the system's composition (plus the return key won't be being pressed!)"

    Ok. Compare this case with a philosopher's sort of brain damage case. Assume a simplistic left-brain-language / right-brain-emotion picture. So you have, say, a spark of emotion in the right brain that triggers (cf. hits the return key) the left brain to produce a sentence. So, the right brain emotes, then hits the key, then is destroyed (hence dead, of course) by being hit by a bullet. In this case, I would say that what is going on in the left brain while trying to produce the sentence is cognitive processing. So, I give different answers to the prime number case and the language case. Why? Simply: there is a difference in the kind of information processing going on in the left brain and in the computer.

  5. But, seriously, the philosophical method here is simply trying to draw out a prediction of a theory, one that is so absurd one does not want to trouble to do an experiment to check it.
    Oh I understand the principle, but for me, as a psychologist, embodied cognition should be a testable hypothesis about human behaviour, and any differences should be settled with actual data. Demonstrating one potentially problematic case is a useful part of the process, but the reason I'm not a philosopher is that I became increasingly unable to draw the line there. Even your extended example in your reply means assuming a meaningless and entirely incorrect description of the brain: my point is, what does that case tell me, given the way things actually are? Very little, I would argue, because the differences you've introduced are entirely non-trivial. I really think this matters, and it's the root of my basic dissatisfaction with this particular philosophical method - I'm never convinced the example works because it doesn't contain the actual system of interest.

    Grant that the 'work' wasn't 'computing derivatives'. Why isn't it the case that the work was done by computing something else? That would seem to be the obvious computationalist reply, right? But, the stronger question would be to read the literature and ask, Why isn't it the case that the work was done by computing BLAH? Here BLAH would be a rival computational account.
    The reason is simple: whatever solution you implement, to count as a solution it has to get you to the right place at the right time to catch the fly ball. So computationally you are already quite restricted; there's only so much appropriate maths. Fine, go ahead and test those, no problems. But when you do, you will find that people actually act as if they are detecting certain optical information and controlling their behaviour with respect to that.

    The Fink et al paper I linked you to is useful because a) they explicitly test some model predictions and b) experimentally manipulate the visual information to see when behaviour breaks down and how that relates to the model predictions (Bill Warren's VR lab is excellent).

  6. "Even your extended example in your reply means assuming a meaningless and entirely incorrect description of the brain: my point is, what does that case tell me, given the way things actually are? Very little, I would argue, because the differences you've introduced are entirely non-trivial. I really think this matters, and it's the root of my basic dissatisfaction with this particular philosophical method - I'm never convinced the example works because it doesn't contain the actual system of interest."

    Yes, I know that this is a simplistic rendering, but the idea is not to tell you anything about the brain, but to try to spell out a theory by way of hypothetical reasoning.

    I'm perfectly happy to concur that folks detect certain optical information and control their behavior with respect to that. I think that that is probably right. What I am sceptical about is the idea that this involves "direct perception" in the sense that no computation is involved at all.

  7. "The Fink et al paper I linked you". Sorry, Andrew. I seem not to be able to find this link or the ref. Could you resend when you have the time? Thanks.

  8. Fink, P.W., Foo, P.S., & Warren, W.H. (2009) Catching fly balls in virtual reality: A critical test of the outfielder problem. Journal of Vision, 9(13), 14:1-8.

  9. And yes, I know your example isn't supposed to tell me anything about the brain. But the point is that that sort of argument is no use, not really. Your hypothetical reasoning may reveal a problem; that problem, however, may only obtain for the hypothetical example, and it remains for you to show that the example reflects anything of interest given the way things actually are.

    As I mentioned, I think, it's this sort of thing that made me stick with science. I did a lot of philosophy courses during my degrees and I really do enjoy it, but I always end up unsatisfied and needing to test things :)

    Scepticism about the hypothesis of direct perception I can accept, though. Clearly it's up to us ecological types to show we're right. But that will come from hypothesis driven empirical research informed by theory, and it's that you philosophers need to contend with.

  10. FYI: this thread made me write this:

    http://psychsciencenotes.blogspot.com/2010/09/assume-cow-is-sphere.html

    Comments welcome :)

  11. Thanks for the ref. Fortunately, I can download it from JoV, which is great.

  12. I'll stop by your place soon.

    Maybe to wrap up these comments: I don't think it is reasonable to lump all philosophers together in their attitudes toward empirical research informed by theory, or maybe even to extrapolate from what's in a blog to what might go into print.

    So, the Rutgers PhDs who work in cognitive science typically know a fair amount about cognitive science. Those who graduate from Pittsburgh's History and Philosophy of Science department often have an MS in some science. For example, Carl Craver and Jackie Sullivan, who have completed their degrees in the past ten (or so) years, both have Masters degrees in neuroscience. There are tons more.

    Philosophers do not typically do experiments (set aside the experimental philosophers for a bit), because they are not scientists. They are doing philosophy.

    To me, science has a boring part and an interesting part. The boring part is actually running the subjects. (I once did this and had a subject fall asleep during the experiment. I was pretty angry, as this was an hour of my life I would not get back.) But, what is interesting to me is interpreting the results. That, I think, a well-informed philosopher might do.

  13. Some philosophers, it seems to me, do indeed not pay enough attention to the science they are working with. That happens. But other philosophers will get so carried away in presenting the scientific work that they seem to stray into what one might call "science journalism". One has, I think, to strike a balance between presenting enough scientific information to make one's case, but not so much as to distract from that case. In trying to strike this balance for a philosophical audience, I think that a philosopher will often provide less than what would be thought appropriate by a psychologist.
