Tuesday, December 14, 2010

Gibson's Rejection of the Retinal Image 4

It is not necessary to assume that anything whatever is transmitted along the optic nerve in the activity of perception. We need not believe that either an inverted picture or a set of messages is delivered to the brain. We can think of vision as a perceptual system, the brain being simply part of the system. The eye is also part of the system, since retinal inputs lead to ocular adjustments and then to altered retinal inputs, and so on. The process is circular, not a one-way transmission. The eye-head-brain-body system registers the invariants in the structure of ambient light. The eye is not a camera that forms and delivers an image, nor is the retina simply a keyboard that can be struck by fingers of light. (Gibson, 1979, p. 61).
"It is not necessary to assume that anything whatever is transmitted along the optic nerve in the activity of perception."  Really?!  What about action potentials? Maybe one could draw attention to the fact that Gibson throws in the qualifier "it is not necessary to assume", since that could make for a nice fig leaf.  But, the simple fact is that action potentials are transmitted down the optic nerve during perception.  There is at least a correlation between perceptual events and action potentials, right?
We can think of vision as a perceptual system, the brain being simply part of the system. The eye is also part of the system, since retinal inputs lead to ocular adjustments and then to altered retinal inputs, and so on. The process is circular, not a one-way transmission.
I'm ok with this.  I get this, but one does not have to deny that the eye is a camera, or that there is computation in the retina and optic nerve, to accept these conclusions.  Gibson is proposing a false dichotomy here.

23 comments:

  1. That sentence threw me too, but I think what Gibson is trying to do there is point out a coupling-constitution error. The establishment claim is that the eye and the brain are coupled and information must therefore be transmitted from the one thing to the other. If they are in fact constituents of a single (eye-head-brain-body) system, nothing needs to be 'transmitted' because the system (more broadly conceived) already has the information.

    It's an odd way to phrase it, I agree, but that's my reading - Gibson is treating 'transmission' as the word for 'communication between two separate things'.

  2. I agree that Gibson does think of transmission in terms of communication between two separate things. Indeed, he often seems to think of it as communication between two separate agents or people. (Somewhere he complains that nature is not a person that tries to send us messages, or something like that.)

    And, I think you may well be right that Gibson is thinking that if the information is already inside an eye-head-brain-body system, then it does not need to be transmitted anywhere. Yet, I think that this is a mistaken conception. Again, computers illustrate the point. There may be information on the hard drive that needs to be transmitted to the CPU in order to have certain operations take place. Or, one has to transmit information in a JPEG file to some video controller to get the file transformed into an image on one's monitor.
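    Just to make that concrete, here is a toy sketch of the sort of thing I have in mind (the components and names are invented for illustration, not any real architecture): the data never leaves the system, and yet it is still transmitted from one component of the system to another.

    # Toy sketch: even inside a single "computer system," data is transmitted
    # between components. All names here are invented for illustration.

    class Disk:
        def read(self, path):
            # Pretend these bytes are a stored image file.
            return b"\x00\x01\x02\x03"

    class Decoder:
        def decode(self, blob):
            # Pretend decoding: turn raw bytes into "pixel" values.
            return list(blob)

    class Display:
        def show(self, pixels):
            print(f"rendering {len(pixels)} pixels")

    def render_file(path):
        disk, decoder, display = Disk(), Decoder(), Display()
        blob = disk.read(path)          # data read off the Disk component...
        pixels = decoder.decode(blob)   # ...handed to the Decoder component...
        display.show(pixels)            # ...and the result handed to the Display

    render_file("photo.jpg")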

    Here is a case where thinking of computational theories of cognition as just warmed over Associationism or warmed over sense datum theory is to ignore some of the useful conceptual clarity that computers enable.

  3. If you think the brain is like a computer, then you might find clarity. For those of us who think that the brain isn't much like a computer at all, appeal to that metaphor does nothing but muddy the waters. Gibson is trying to articulate a way of talking about a system that has access to that information without using inappropriate analogies like hard drives and monitors.

    So where you gain clarity, we think you miss the point :) And I don't find your computer example even a little compelling, from this side of things; it just sounds like an odd way to talk about a perceiving/acting organism.

  4. But, one does not have to think that the brain is a computer to realize that within a computing system information is transmitted from one part of it to another. One just has to know how computers work to know that information is transmitted from one component to the next.

  5. And this, of course, is the analogy Gibson is denying (and me too), so we're not convinced it's relevant.

    I think it's good to remember that the only reason we think the brain is like a computer is that the computer is currently our cleverest technology; 100 years ago the brain was a telephone exchange.

  6. But, this gets the dialectic wrong. If Gibson is trying to provide a reason to reject computationalism, he's not doing it.

    Yeah, I've heard the latest technology line before...

    But, here's something that is kind of intriguing to me about the EP appeal to systems: there is a lot of it, and there seems (to me) to be an implicit assumption that systems do not have components. There is a lot of EC interest in saying that there is an organism-environment system. Ok. But that still leaves open the idea that the organism is one component of the system and the environment is another. So, not only do EP types want there to be an organism-environment system, they want it to be an unanalyzable system.

  7. Well this is of course why dynamical systems is so handy. It lets you talk about the composition and organisation of systems (so the parts and how they are connected) while still preserving the focus on the system as the main 'unit' of interest.

    I'm not appealing to Tony's non-decomposable argument, because I don't think it quite works. But dynamics is where it's at for a reason.

    I also don't think this section is about giving reasons against anything; that was the previous stuff. This section is clearly a segue into 'well, what is it if not sensation based?'

  8. But, you know, you don't have to use the example of a computer to talk about data transfer in a system. Think of your basic point and shoot digital camera with a memory card in it. That's a system, but data from the CCD is transferred to the card, and data from the card can be transferred directly to many printers without passing through a computer. Gibson was just wrong about the technology, and not just about computers.

  9. Digital cameras are just computers too :)

  10. Ok. How about a heater, where information from the thermostat has to be transmitted to the heating elements?
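    I have in mind nothing fancier than a toy loop like this (all numbers and names are invented for illustration); the on/off command is the "information" that has to get from the thermostat to the heating element:

    # Toy bang-bang thermostat: the thermostat's decision has to reach the
    # heating element for the system to do its job. Purely illustrative.

    def thermostat(temperature, setpoint=20.0):
        """Decide whether the heating element should be on."""
        return temperature < setpoint

    def heater(room_temp, heat_on):
        """Update room temperature: warm if commanded, otherwise cool slightly."""
        return room_temp + (0.5 if heat_on else -0.3)

    room_temp = 17.0
    for step in range(10):
        command = thermostat(room_temp)
        room_temp = heater(room_temp, command)  # the command is transmitted to the heater
        print(f"step {step}: temp={room_temp:.1f}, heater={'on' if command else 'off'}")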

  11. But is a heater + thermostat system a system, or are they pieces which are coupled? I'd veer towards the latter because it's not compulsory to have a thermostat.

  12. If I may intrude briefly to inject the perspective of a communication system engineer, this statement seems to me to capture the problem with this general line of thought:

    But is a heater + thermostat system a system, or are they pieces which are coupled?

    From the perspective of a system engineer, this seems the wrong type of question. System engineering typically starts with functional specifications: what is the proposed "system" supposed to do and how well must it do it? In this example, if the "system" is merely required to provide some output entity (eg, air or water) at some constant rate and/or temperature, no external thermostat is necessary. If it is required to maintain a constant temperature in some defined space, one is necessary. (And from a system POV, the "thermostat" could be a super in the basement responding to resident complaints - ie, any "design" that meets the system requirements is an acceptable candidate).

    This perspective is why I am bothered by questions like "does Searle's Chinese room speak Chinese?" Give me the system specification for "speaking Chinese" and a test plan for determining whether or not those specs are met and I can give you an answer. And less trivially, "Is that system a person or a PZ?" Same response.

    Also, from the strict system engineering perspective (which I infer was roughly the perspective that motivated Gibson's ill-worded comment about transmission along the optic nerve) internal flows are irrelevant - or as system engineers tend to say, "that's just a design detail". Which is not to say such considerations are unimportant for system analysis, just that they need not be made for the system viewed as a "black box".

  13. From the perspective of a system engineer, this seems the wrong type of question. System engineering typically starts with functional specifications: what is the proposed "system" supposed to do and how well must it do it?
    Actually that's a fair point.

    Your comment about 'irrelevance of internal flows' is interesting - it's basically what Gibson was trying to say here I think but, as in many cases, he lacked a formalism or technical language so he had to make one up. Is there a way in systems engineering to express that idea in a way that isn't just stating it, but that explains why it's effectively irrelevant? It might be a useful way to frame the point and avoid the confusion caused by 'transmission' etc.

  14. Hi, Charles,

    Thanks for stopping by and commenting.

    It seems to me that from the systems engineering perspective you describe, there is some indifference to how things get done. I take this to be the moral of your comment that the thermostat "could be a super in the basement responding to resident complaints - ie, any "design" that meets the system requirements is an acceptable candidate."

    So, from a systems engineering POV, you wouldn't care if a system involves thinking to get a job done or not, right?

    But, what about reverse engineering? Suppose that you were reverse engineering how humans catch a fly ball, for example. From a reverse engineering POV, you would want to know, for example, what is going on in, say, area V1, right?

  15. Andrew and Ken -

    Thanks for the replies.

    "Gibson ... lacked a formalism or technical language ,,,"

    So I inferred. FWIW, my translation of the lead-off quote in Ken's post would go something like this:
    ======
    We can think of vision as the product of a perceptual system which includes a sensory subsystem (the eye), a processing (in a very general sense) subsystem (including the brain), and a motor subsystem (that part of the musculature which implements ocular adjustments), all in a feedback loop. From a strictly system perspective, we need not address design details like what specifically flows along the optic nerve (eg, whether an inverted picture, a set of messages, or something else is transmitted from eye to brain).

    The system registers the invariants in the structure of ambient light. The eye is not a camera that forms and delivers an image, nor is the retina simply a keyboard that can be struck by fingers of light.
    ======

    Not that I agree with every assertion in my rewrite. In the second paragraph, I'm not sure what the first sentence means; it seems possibly to conflict with the fact that although the light input to the system may be invariant in some cases, the light impacting the retina is not invariant by design (as a result of the feedback). I agree that the output from the retina is not literally "an image", but presumably neither is the output of the sensor system in a digital camera (not knowing exactly how they work, I'm only guessing). The last sentence seems more poetry than technology.
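    If it helps, here is a toy rendering of the feedback loop in my rewrite (everything in it is invented for illustration; it is not meant as a model of the actual visual system):

    # Toy closed loop: sensory input -> processing -> motor adjustment ->
    # altered sensory input. Purely illustrative.

    def sense(target_position, gaze_direction):
        """Sensor output: where the target falls relative to the current gaze."""
        return target_position - gaze_direction

    def process(offset):
        """Processing: turn the sensed offset into a motor command (gain 0.5)."""
        return 0.5 * offset

    def move(gaze_direction, command):
        """Motor subsystem: adjust gaze, which alters the next sensory input."""
        return gaze_direction + command

    gaze, target = 0.0, 10.0
    for step in range(8):
        offset = sense(target, gaze)   # the input depends on the system's own state
        command = process(offset)
        gaze = move(gaze, command)
        print(f"step {step}: offset={offset:.2f}, gaze={gaze:.2f}")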

    "From a reverse engineering POV, you would want to know, for example, what is going on in, say, area V1, right?:

    Yes, and my sense is that failure to recognize the synergistic relationship between top level system analysis (SA) and lower-level reverse engineering (RE) can be an impediment. Being a naif in these matters, in order to learn I'm constantly seeking to engage people who actually know what they're talking about, but I often encounter two mindsets: on the one hand, those who say there is no longer a place for philosophers (arguably analogous to SAs) because the remaining work must be done by scientists (REs); on the other hand, would-be SAs who (IMO) take the "hard problem" label too seriously and declare RE hopeless without trying.

    In doing system engineering, I always found it necessary to understand to some extent a system's internal architecture and design a couple of levels below the top (ie, down where the "internal flows" and other design details are found). Consequently, I've tried to adopt both SA and RE perspectives as I learn about the area and have found doing so beneficial so far.

    Hence, my answer to:

    "from a systems engineering POV, you wouldn't care if a system involves thinking to get a job done or not, right?"

    is that in doing SA I might very well not care whether it is involved, but I would consider myself a poor system engineer if I didn't dig deep enough into the system design to find out.

    A couple of somewhat related questions:

    First, I've recently been reading Kevin O'Regan's papers on his - and one-time coauthor Noe's - sensorimotor approach, which seems related to this quote from Gibson and from my perspective seems promising. I'd be interested in knowing what you folks think of it.

    Second, it has seemed to me even from my first pass through Sellars' EPM essay - after which I had at most a vague idea what he was getting at - that it is a key piece of background for attacking the "hard problem". Yet I find people (admittedly, usually amateurs like me, but often fairly knowledgeable nonetheless) who have never heard of him, and Chalmers' "The Conscious Mind" dismisses him with a single end note. I'm clearly missing something - what is it?

  16. From a strictly system perspective, we need not address design details like what specifically flows along the optic nerve (eg, whether an inverted picture, a set of messages, or something else is transmitted from eye to brain).
    I guess I was looking for a 'why I'm allowed to do this' - how would you justify doing this? (Just curious, I like trying to triangulate on ways different disciplines talk about these things to help me try and address some of the communication problems people external to ecological psych have with Gibson's writing.)

    re: invariants - I will cheekily point you to my blog where what Gibson means by that comes up regularly :)

    I like talking to engineers about this stuff - we're stealing lots of things from dynamics, etc to try and explain our specific (perception-action) systems and I think it's our job to keep talking to the experts about whether we're using things correctly. A lot of dynamics stuff in psychology is fairly dreadful, and it's a real problem.

  17. Andrew -

    Actually, I appreciate the pointer to materials by/about Gibson, even if provided "cheekily"! His name keeps coming up, so I presumably need some level of familiarity with his work. Furthermore, your description of your area of study as "perceptual control of action" sounds directly relevant to the sensorimotor-like approach I'm currently pursuing in which action - the response part of my view of humans as essentially stimulus-response systems - plays a crucial role. So, I look forward to reading about your work.

    To repeat, my objective was merely to rewrite the Gibson paragraph in a way consistent with my view of a systems engineering approach, while neither concurring with nor disputing any specific assertion. In any event, here's an example of how the general concept of "not caring" about internal flows can arise.

    Suppose you are writing a system spec for a satellite system. There are typically three major subsystems: the satellite(s), ground terminal(s), and the command and control system. There are in principle two ways one might approach writing the spec. First, you could consider the system as a whole to be a "black box" and specify only its functional requirements - what it's supposed to do and how well - and the system's human interfaces at the ground subsystems. The detailed design of the subsystem interfaces would be left to the system contractor (for example, choices of frequency, modulation, coding, etc, for each ground-space communications link). The second way is to also specify requirements for those interfaces (the ultimate in "caring" about internal flows). The latter approach is necessary if the subsystems are contracted separately, in which case from each subsystem contractor's POV their piece is "the system" and the interface(s) between the top-level subsystems (space, ground, command and control) become external rather than internal.
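    In code-ish terms, the contrast might be sketched like this (all names are invented; this is just an illustration of where the interface requirements live, not any real spec):

    # Toy contrast between the two ways of writing the spec.
    from abc import ABC, abstractmethod

    # Way 1: black-box spec. Only the system's external function is specified;
    # the subsystem interfaces are left to the system contractor as design details.
    class SatelliteSystemSpec(ABC):
        @abstractmethod
        def deliver_message(self, message: bytes, destination: str) -> bool:
            """Functional requirement: deliver a message with some required
            reliability and latency (stated elsewhere). Internal flows unspecified."""

    # Way 2: the internal interfaces are themselves specified, because the
    # subsystems are contracted separately and each contractor treats its
    # own piece as "the system".
    class GroundToSpaceLinkSpec(ABC):
        @abstractmethod
        def uplink(self, frame: bytes) -> None:
            """Interface requirement: frequency, modulation, coding, and frame
            format would be pinned down here rather than left as design details."""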

    As Ken's question suggested, much of what goes on in this area isn't a priori system design but rather a posteriori reverse engineering (what I might call "system analysis"), in particular, looking for the elusive NCCs. Rightly or wrongly, my impression is that skepticism about this approach is on the rise. So, I've been trying to take more of a system design approach and ask "from an evolutionary perspective, what benefits accrue to improved perception, the addition of phenomenal experience, and the emergence of other capabilities that are typically associated with the word 'consciousness'?" So far, I have found that approach helpful.

    Returning to the situation described in the quoted Gibson paragraph, I can see one caring - or not - about internal flows depending on one's objectives. For example, I'm now toying with viewing phenomenal experience as a consequence of the operation of a feedback system (of which the perceiving organism is only a part) and don't yet see a need to know a lot of detail about that system's internal flows. But again, I would nonetheless want to know as much as possible about them since additional knowledge can only help even if not strictly necessary.

    Note: I realized after posting my last comment that O'Regan's sensorimotor view seems architecturally consistent with Gibson's description of what is essentially a continuous feedback loop: sensory input flows to processors which, among other things, generate a flow of motor commands back to the sensory organ.

  18. @Charles,
    Notice that there is an apparent discrepancy between what Gibson writes and how you parse it:

    Gibson
    "The eye is not a camera that forms and delivers an image".

    Wolverton
    "I agree that the output from the retina is not literally "an image""

    Now, as far as I can tell, all parties agree that the output from the retina is not an image (ultimately viewed by a homunculus). But, Gibson actually writes something that strikes me as just bizarre, namely, that the eye is not a camera. That's different, right? Read in a straightforward way that's just simply wrong, yes?

    So, maybe Gibson doesn't mean the straightforward thing. Maybe he means only that it's not important to EP that the eye is a camera. Or maybe he means that the output of the eye is not an image.

    My point here is not so much to point out a problem in your exegesis as to draw attention to an oddity in what Gibson writes.

  19. re: O'Regan, I have read only one (I think) of his solo papers, but I am more familiar with Noe's view. I am pretty sceptical about those for reasons I try to spell out here:

    Consciousness: Don't Give up on the Brain. (2010). In Basile, P., Kiverstein, J., & Phemister, P. (Eds.), The Metaphysics of Consciousness: Royal Institute of Philosophy Supplement, 67 (pp. 263-284).

    Understanding the Embodiment of Perception (2007) Journal of Philosophy, 104, 5-25.

    re: Sellars' EPM. That is a notoriously difficult paper, but I can't say that I am enough of an expert on consciousness or Sellars' take on it to comment helpfully on this.

  20. Ken -

    Your observation about eye-as-camera raises two issues that constantly bother me while trying to learn about this area. First is the often careless - and therefore confusing - use of language in discussing material that is difficult enough without adding imprecision. So, I try to avoid analogies, in particular considering the visual system as analogous to a camera, instead thinking of it as merely a sensor that converts input spectral density distributions into neural activity patterns. That seems to help me avoid inadvertently wandering into the Cartesian Theater, and more important, helps in separating functions that can be supported by the visual system without phenomenal experience (PE) - by which I mean the image part of the camera analogy - from those that can't. Eg, in order to detect and analyze the motion of medium size objects within the visual field, PE doesn't seem to be required, while it does seem necessary for describing one's visual experience to another person.

    The second issue is more general. Since vision is the richest - and in a sense the most familiar - of the senses, it seems likely to be harder to abstract than are the other senses. Or in Sellarsian terms, it may be harder to avoid mistakes such as thinking of knowledge gained by vision as "given" rather than learned. For example, in distinguishing between merely having a sensory experience and knowing what sensory experience is being had, Sellars uses the examples of "seeing red" and "hearing C#". Now, almost everyone will be inclined to think that "I just know red when I see it", while only those few with perfect pitch will have any illusion that "I just know C# when I hear it". My guess is that if we all thought about this stuff in terms of aural events - or maybe even tactile - we'd be less prone to that kind of mistake. (BTW, what is "EP"?)

    I've read "Out of our Heads", but am a bit more comfortable with O'Regan's take on the general idea as presented here:

    http://nivea.psycho.univ-paris5.fr/Manuscripts/PBR_50005_reducedsize.pdf

    His approach incorporates the idea of sensation as exercising a skill, which resonates with me since skills are learned and I suspect that learning plays a role in more mental phenomena than is generally assumed. O'Regan assumes only "weak enactivism" as opposed to Noe's (IMO, strange) emphasis on "strong enactivism".

    It turns out that I have also read "Don't Give up on the Brain", but had forgotten since I'm trying to absorb so much new material in this area that even things read as recently as August are part of ancient history for me. I commented on it here:

    http://www.consciousentities.com/?p=586#comment-162982

    and commented on the "paralysis problem" here:

    http://www.consciousentities.com/?p=634#comment-163425

    If one thinks of paralysis ala Bauby as somewhat analogous to solitary confinement, my speculation about mental atrophy seems not so far off, as suggested in various parts of this (very long) post about the incarceration of Pvt Manning (of wikileaks fame/infamy):

    http://www.salon.com/news/opinion/glenn_greenwald/2010/12/14/manning/index.html

    Yes, EPM is indeed a difficult paper. I've read through it twice so far, aided by Brandom's study guide and DeVries and Triplett's 200 page explication (of the 100 page essay!) and would describe my understanding as still tenuous at best.

  21. Hi, Charles,

    Sorry not to have picked up on your comments over at Conscious Entities. I'm not good at keeping up with discussions in the comments. But, I did post a bit regarding thought as a dispositional property.

    Regarding analogies, I take the eye to be pretty literally a camera. That's not an analogy. It's every bit as much a camera as is a camera obscura. I mean the eye, not the retina. The retina is a sensor, I would say.

    I would agree with you that vision is a more beguiling sense than is audition. (EP = ecological psychology, aka Gibsonianism.)

    Would you mind if I created a post here based on your comment on your interpretation of Noe? I think there is a fair amount to discuss about that.

  22. "Would you mind if I created a post here based on your comment on your interpretation of Noe?"

    I'd be delighted. And don't worry about any harsh criticism - I have no reputation to preserve!

  23. Hi, Charles,

    Thanks. It could be a few days before I get to this.
