This week I'll be leaving off with the Runeson posts, then turning briefly to some largely supportive (for me) comments on Krist Vaesen's "Knowledge without credit, exhibit 4: Extended Cognition". It is conveniently available for download here.
Next week I think I'll be turning to Sutton, Harris, Keil, and Barnier (2010), "The Psychology of Memory, Extended Cognition, and Socially Distributed Remembering". Mirko Farina and Gary Williams have chided me about this one. You can see the principal lines of my reply to Sutton, et al., over at Gary's blog.
After that, I'll have a few stray comments before, I think, returning to the TSRM paper. Billing to the contrary, I don't think that TSRM solve either Fodor & Pylyshyn's Trivialization Problem or the problem of illusions. The problems, indeed, interact.
Monday, November 15, 2010
Runeson on Sensory Deprivation Experiments
Inhibited practice
If a person is not allowed to use his planimeter for a number of years, he may be unable to use it afterwards. Either the planimeter has deteriorated through corrosion or the user has grown too old to relearn its use.
Moral. The problems encountered by blind people who get their vision back through operation could be of the above kind. The same might be true for the practical blindness exhibited by kittens who have been moved around passively for a long time. (Runeson, 1977, p. 177).

The moral Runeson draws actually reminds me of an issue that I've not yet broached. So, what, according to Ecological Psychologists (EPs), is the explanation of the visual deficits of those who have congenital cataracts removed? What do they say about the deficits of the kittens in the Hein and Held experiment? Are they going to say anything more than that something in the body has "deteriorated through corrosion"? (I know Noë's line on this and I don't think it works, but is there a "standard" EP line?)
Most importantly, what do they say about the deficits in experiments with monocular lid suture? These look to be cases in which the neurons of the visual system develop abnormally. But, EPs seem not to want to talk about the brain. I have not seen an outright prohibition or denunciation of research on the brain, but then again EPs don't seem to talk much about it anyway. Gibson, 1979, has no index entry for "brain".
Chemero apparently thinks Gibson rejects use of information about the brain, but Gary has rejected that rejection.
Friday, November 12, 2010
Runeson's Rejection of Assumptions
The argument above might suggest to some readers that perception is based on "assumptions" about the world. It would be a misleading idea for many reasons. For instance, to work by assuming something implies that one knows about or can imagine a more general situation which for the present purpose needs to be narrowed down. However, there is no reason why evolution (nor the resulting animal) should "know" anything about more general conditions. For the evolving animal its ecological niche is the universe. If a perceptual mechanism can pick up useful information in and about this universe, it is there to stay. The difficulties that we as scientists, trained in abstract geometry and theoretical physics, encounter in our attempts at understanding the preconditions for perception should not be ascribed to the perceptual system under study. (Runeson, 1977, p. 176).

Take the claim that "to work by assuming something implies that one knows about or can imagine a more general situation which for the present purpose needs to be narrowed down". I don't see this implication. One might well hold the view that natural selection simply exterminated the visual systems without the right "assumptions". So, the visual system just has those assumptions, but not the ability to entertain alternatives. Indeed, a standard cognitivist assumption is that the assumptions might well be merely "implicit".
Nor does this account involve saying that evolution "knows" anything about more general conditions. That is metaphorical at best.
The last sentence seems to me a little strong. I would say instead that such difficulties "should not necessarily be ascribed to the perceptual system under study".
Thursday, November 11, 2010
Runeson on Perceptual Tools
In analogy with the planimeter and its user, our perceptual systems will be considered as a set of smart instruments which are (more or less actively) used by our intellect to get information about the environment.
The study of perception would then be the study of the perceptual instruments. This may be subdivided into the search for the principles behind the function of the instruments, and the discovery of the physical realizations of these principles, i.e. how these instruments are actually built. The former would be the psychological part of the enterprise and the latter would be the physiological part.
The relation between perception and cognition is modelled by the relation between the planimeter and its user. However, it is only the non-perceptual functions of the user which are relevant to the model. Thus, our model does not contain a complete homunculus--only a cognitive, emotional, etc., homunculus. This should be a proper procedure when the focus of interest is on perception.
Sensory psychophysics
The study of relations between simple physical variables and experience is based on the implicit or explicit assumption that such relations are fundamental for the apprehension of "secondary" properties like causality and depth. Even when one finds the latter properties more interesting, one feels obliged to study the "primary" ones first. Mostly, such studies indicate that we are very bad at judging simple variables. This seems paradoxical when confronted with the delicate perceptual tasks we repeatedly perform in normal life.
I find the tool-plus-user story striking for its similarity to Fodor's later hypothesis of modules plus central system. This parallelism then invites the comparison between Runeson's tools and Fodor's modules. Right away, the principal difference appears to be that Fodor's modules begin with simple transductions, then perform computations to yield outputs that are "useful" to the organism. So, the vision module will provide something like an accounting of the 3D layout of objects in the world. It would be a rote mechanism, I assume. By contrast, Runeson's tools will (likely) be ones that provide direct pick up of things like the 3D layout of objects in the world. These would be "smart" mechanisms.
If we think about Fodor's account of modules, Runeson's "paradox" goes away. What the visual module does is take physical stimuli plus "assumptions" to generate hypotheses about the 3D layout of objects in the world. It is because of the nature of module output that we are not so good at describing the inputs. The modules don't simply regurgitate the inputs; they give us, instead, things like the 3D layout of objects in the world.
Note as well that Fodor's modules can respect some version of the Gibsonian idea that vision is not for detecting points, lines, etc.; instead it is for perceiving "meaningful" things in the environment. It's just that Fodor's module yields the meaningful by indirect perception.
And, indeed, this seems to me to be a common way for tools to operate. Often tools measure things we do not care about in order to give us information about things we do care about. So, in fMRI, we do not really care about changes in blood oxygenation levels; we care about changes in brain activity, and changes in blood oxygenation levels (at least putatively) just give us part of the means to infer brain activity. IR spectroscopy gives us a measure of the absorption of infrared light at different frequencies. We don't really care about that absorption spectrum; instead we care about what it enables us to infer about the chemical structure of the tested compound.
Wednesday, November 10, 2010
Runeson's Variables of High Informational Value
Now, if the theory of physics cannot be claimed to have monopoly on descriptions of "what is really there", there is no longer any reason to assume that the perceptual systems must necessarily begin by registering what is basic to physics. On the contrary, we should expect perceptual mechanisms which directly register variables of high informational value to the perceiver. (Runeson, 1977, p. 173).

Why should we expect perceptual mechanisms that directly register variables of high informational value to the perceiver? I'm not sure exactly what "variables of high informational value" are, but maybe they are like affordances, such as something like an apple. Maybe the reason that we do not directly register affordances is that evolutionary constraints have made this impossible. (This is a theme I've mentioned before.) The early visual system (think back to fish) might have been constrained to detect light. So, the best that we can do today is detect things like apples by detecting the light that they reflect. We can detect apples and other fruits in trees by detecting their color. The move from rejecting reductionism to physics to embracing mechanisms that directly register variables of high informational value to the perceiver seems to me hasty.
Tuesday, November 9, 2010
A second reason for thinking smart mechanisms likely?
For at least two reasons the likelihood of smart mechanisms in perception should be considerable. ... The second reason has to do with the principle that when designing something one does not normally make it more complex than necessary. The perceptual mechanisms were not designed by a human mind, however, and are therefore not subordinate to the same complexity scale(s) as man-made devices. Biological evolution might have arrived quite easily at solutions which require the utmost of capacity and sophistication of a human mind for their basic principles of operation to be understood. (Runeson, 1977, p. 174).

This second line of reasoning seems simplistic to me. Grant that when humans design things they often try to make them no more complex than necessary. But humans often design things starting from a "clean slate". By contrast, when natural selection designs things it must begin with what it's got. So, when natural selection began to design bipedal walkers, it had to begin with a spine that was originally designed for quadrupedal locomotion. That means that the ultimate design can end up being suboptimal. By parity of reasoning, for all we know, natural selection faced certain design constraints when it began to develop visual systems using things like light-sensitive cells. It seems to me that it is our ignorance of possible design constraints that makes this kind of evolutionary speculation fraught with risk. This is why the argument does not seem all that convincing to me.
And I am willing to grant the possibility of what biological evolution might have done, but recall that at this point Runeson is supposed to be arguing about what biological evolution has likely done, or at least why "the likelihood of smart mechanisms in perception should be considerable." (I'm assuming that "considerable" means something like high, but I guess it could mean non-negligible. Kind of unclear really, if you ask me.)
Monday, November 8, 2010
One reason for thinking smart mechanisms likely?
For at least two reasons the likelihood of smart mechanisms in perception should be considerable. One is that the basic tasks of perception, and the information available for them, are stable properties of the organisms and the environment, respectively. It therefore seems appropriate that they have been solved through "invention" (evolution) of smart mechanisms. Many of the tasks require more or less continuous operation, which also favors smart solutions. (Runeson, 1977, p. 174).

I don't get this. So, let's say that perception is for gathering information about the stable properties of organisms and the environment. (I think this is what Runeson means, but there appears to be some typo or something in the second sentence.) Why does that make smart mechanisms more likely? And let it be the case that many tasks require more or less continuous operation. Why does this make smart mechanisms more likely? I don't get the connections between these features of tasks and the nature of the mechanisms. It just seems like a non sequitur to me. Why are neural circuits any less probable a mechanism for handling this?
Friday, November 5, 2010
What's the "Target" of Runeson's Smart-Rote Distinction?
In posts on this topic a week or so ago, I was harping on what appears to me to be the messiness of this smart-rote distinction. It is not clear to me how particular cases are supposed to be classified. There seem to me to be cases that have both features of rote mechanisms and features of smart mechanisms. So, this seems to me to be an expository issue.
Rote instruments consist of large numbers of a few types of basic components, each of which performs a rather simple task. The accomplishment of complex tasks is possible through intricate interconnections (programming) between the components. The important principles of operation reside in the program, and by changing the program the instrument can be put to different uses. New problems can be approached in a straightforward, intellectual, bureaucratic, "systems", manner. The solutions will be elementaristic and often a bit clumsy.

But, I am also very unsure what Runeson takes to be the "targets" of his exposition. When I look at the anatomy and physiology of human beings, I see structures, namely, neural networks, that seem to me to consist of many parts that are "programmable". But, I don't see what structure Runeson takes to "consist of few but specialized components". This is not to say that there aren't or couldn't be such structures. It's that I am unsure what he is talking about. Is the retina one of the few but specialized components? Is the lateral geniculate nucleus another? And area V1 another? I don't know. This, too, is an expository issue.
Smart instruments are specialized on a particular (type of) task in a particular (type of) situation and capitalize on the peculiarities of the situation and the task, i.e. use shortcuts, etc. They consist of few but specialized components. For solving problems which are repeated very often, smart instruments, if they exist, are more efficient and more economical. They are also likely to be more reliable and durable. Solution of a new problem requires the invention of a new instrument. A straightforward and bureaucratic procedure is not likely to achieve that, since the task is creative and just as much intuitive as intellectual. (Runeson, 1977, pp. 173-4).
Tuesday, October 26, 2010
Runeson's EP and FAPs 3
The third thing I wonder regarding EP and FAPs concerns the idea that agents, such as fish, perceive affordances. But, it looks like what the fish perceives is a red spot, not a male stickleback. And red spots are not "meaningful" to the fish; male sticklebacks are "meaningful" to the fish.
On this topic, tomorrow I'll jump to an example (or maybe two) of sharks from Turvey, Shaw, Reed, & Mace, 1981 ....
Monday, October 25, 2010
Runeson's EP and FAPs 2
FAPs are fixed action patterns. The wikipedia gives this description (which is about all I know of them):
Another example of fixed action patterns is the red-bellied stickleback (fish). The male turns a bright red/blue colour during the breeding season. During this time they are also naturally aggressive towards other red-bellied sticklebacks, another FAP. However anything that is red, or has the appearance of being red, will bring about this FAP. The proximate response to this is that due to the stimuli, a nerve sends a signal to attack that red item. The ultimate cause of this behavior stems from the fact that the stickleback needs the area in which it is living for either habitat, food, mating with other sticklebacks, or other purposes. This is an inherited behavior, but it is has been found that this behavior may be more flexible than scientists thought at first. This interaction was studied by Niko Tinbergen. The threat display of male stickleback (fish) is also a fixed action pattern triggered by a stimulus.
Now, FAPs, to my mind, raise a couple of interesting questions vis a vis EP. Here's another.
Suppose that the sticklebacks use a "smart" mechanism in the sense of one that "capitalize[s] on the peculiarities of the situation and the task". That is, the sticklebacks rely on the fact that, by and large, the only things with red patches on them in the stereotypical stickleback environment are male sticklebacks.
Yet, this is what cognitivists, I think, will often describe as relying on an "assumption" about the environment. It might be what cognitivists call an implicit assumption, one that is not coded as a line in a program or as a data structure, but an assumption nonetheless.
Now, I don't have a text to cite here, but I think at least one thing EPists don't like about "assumptions" is having them construed as representations, or what cognitivists might call "explicit assumptions". But perhaps non-representational "implicit assumptions" are ok. Indeed, I take it that Runeson's account of the information a person uses in static Ames room viewing relies on what cognitivists might call "implicit assumptions".
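To make the explicit/implicit contrast vivid, here is a toy sketch (entirely my own construction, not anything from Runeson or the cognitivist literature; the function names and the stickleback regularity encoded in them are purely illustrative):

```python
# Toy sketch (hypothetical): two detectors exploit the same
# regularity -- "only male sticklebacks are red" -- but encode it
# differently.

# Explicit assumption: the regularity is stored as a data structure
# the system could, in principle, inspect or revise.
ASSUMPTIONS = {"red_patch_implies": "male_stickleback"}

def detect_explicit(stimulus):
    if stimulus.get("red_patch"):
        return ASSUMPTIONS["red_patch_implies"]
    return "unknown"

# Implicit assumption: no stored premise anywhere; the same
# regularity is simply baked into the wiring of the procedure.
def detect_implicit(stimulus):
    return "male_stickleback" if stimulus.get("red_patch") else "unknown"

print(detect_explicit({"red_patch": True}))  # male_stickleback
print(detect_implicit({"red_patch": True}))  # male_stickleback
```

The two detectors behave identically; the difference is only whether the regularity is available to the system as a manipulable piece of data or merely embodied in how the procedure is wired.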
Friday, October 22, 2010
Runeson's EP and FAPs 1
FAPs are fixed action patterns. The wikipedia gives this description (which is about all I know of them):
Another example of fixed action patterns is the red-bellied stickleback (fish). The male turns a bright red/blue colour during the breeding season. During this time they are also naturally aggressive towards other red-bellied sticklebacks, another FAP. However anything that is red, or has the appearance of being red, will bring about this FAP. The proximate response to this is that due to the stimuli, a nerve sends a signal to attack that red item. The ultimate cause of this behavior stems from the fact that the stickleback needs the area in which it is living for either habitat, food, mating with other sticklebacks, or other purposes. This is an inherited behavior, but it is has been found that this behavior may be more flexible than scientists thought at first. This interaction was studied by Niko Tinbergen. The threat display of male stickleback (fish) is also a fixed action pattern triggered by a stimulus.
Now, FAPs, to my mind, raise a couple of interesting questions vis a vis EP. Here's one.
Maybe FAPs are both rote and smart, i.e. "specialized". Does the red-bellied stickleback have a mechanism just for detecting males? Although the wikipedia doesn't mention a dedicated neural circuit, other accounts of other FAPs do:
Fig. 2. Female goose behavior of picking eggs up. When it sees an egg outside the nest (key stimulus), it begins a repeated movement of dragging the egg with its beak and neck. However, if the eggs slides off or if it is removed by the researcher, the goose continues to repeat the stereotypic movements even if the egg is absent, until it reaches the nest, when then it does it all over again. FAP seems to correspond to a fixed neural circuitry elicited by the overall trigger stimuli. (italics added)

So, FAPs could be examples of mechanisms that are both rote and smart.
Thursday, October 21, 2010
Runeson on Rote versus Smart Mechanisms 3
Rote instruments consist of large numbers of a few types of basic components, each of which performs a rather simple task. The accomplishment of complex tasks is possible through intricate interconnections (programming) between the components. The important principles of operation reside in the program, and by changing the program the instrument can be put to different uses. New problems can be approached in a straightforward, intellectual, bureaucratic, "systems", manner. The solutions will be elementaristic and often a bit clumsy.

What started me on my post of yesterday was the observation that it looks to me at least as though the mechanisms of lateral inhibition in the retina (setting aside whether they are information processors or analogue computing devices) have most of the characteristics of rote mechanisms. (The last two sentences probably do not, however, describe the retinal mechanisms of lateral inhibition.) And, perhaps they are "specialized" for just lateral inhibition.
Smart instruments are specialized on a particular (type of) task in a particular (type of) situation and capitalize on the peculiarities of the situation and the task, i.e. use shortcuts, etc. They consist of few but specialized components. For solving problems which are repeated very often, smart instruments, if they exist, are more efficient and more economical. They are also likely to be more reliable and durable. Solution of a new problem requires the invention of a new instrument. A straightforward and bureaucratic procedure is not likely to achieve that, since the task is creative and just as much intuitive as intellectual. (Runeson, 1977, pp. 173-4).
I haven't had time to look into these examples, but if I had the time I would look at:
1) The neural circuitry for the vestibulo-ocular reflex would be a rote mechanism that is also smart. Rote, but "specialized" for stabilizing images on the retina during head movements. The VOR might be an especially good example, as it might be adjustable to accommodate changes in head size during growth.
2) Spinal reflex circuitry. Perhaps that is rote, but "specialized".
3) Neural circuits in area V4. Perhaps they are rote mechanisms, but "specialized" for color processing. (Pick any of the regions of visual cortex for that matter.)
4) Regions of motor cortex. Perhaps rote, but specialized for initiating/controlling finger movements.
Andrew kind of invited this post when asking for an example of a wondrous sort of device that could be both rote and specialized, so I had to pick the computer example to avoid trampling over today's post.
But, again, all of this depends on what is meant by being "specialized".
Wednesday, October 20, 2010
Runeson on Rote versus Smart Mechanisms 2
Rote instruments consist of large numbers of a few types of basic components, each of which performs a rather simple task. The accomplishment of complex tasks is possible through intricate interconnections (programming) between the components. The important principles of operation reside in the program, and by changing the program the instrument can be put to different uses. New problems can be approached in a straightforward, intellectual, bureaucratic, "systems", manner. The solutions will be elementaristic and often a bit clumsy.

These categories are not that neat. So, for example, one could have a device that has a large number of simple components, each of which performs a rather simple task (hence looks to be to that degree a rote instrument), but which is also specialized on a particular (type of) task in a particular (type of) situation (hence looks to be to that degree a smart instrument).
Smart instruments are specialized on a particular (type of) task in a particular (type of) situation and capitalize on the peculiarities of the situation and the task, i.e. use shortcuts, etc. They consist of few but specialized components. For solving problems which are repeated very often, smart instruments, if they exist, are more efficient and more economical. They are also likely to be more reliable and durable. Solution of a new problem requires the invention of a new instrument. A straightforward and bureaucratic procedure is not likely to achieve that, since the task is creative and just as much intuitive as intellectual. (Runeson, 1977, pp. 173-4).
Maybe this, however, is not what is important about the distinction. Maybe it is the use of shortcuts. Using a shortcut is a smart thing to do, right?
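For what it's worth, the shortcut idea can be given a toy computational illustration (entirely my own example, not Runeson's): two ways of measuring the area of a polygon, one built from many repetitions of a simple, reprogrammable operation, one a specialized shortcut.

```python
import random

# Toy illustration (my example). "Rote": many repetitions of one
# simple, general-purpose operation (a point-in-region test); swap
# in a different test and the same machinery measures a different
# region.
def inside(p, poly):
    # standard ray-casting point-in-polygon test
    x, y = p
    hit = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def area_rote(poly, samples=100_000, seed=0):
    # Monte Carlo estimate: fraction of random bounding-box points
    # that land inside, times the bounding-box area.
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    x0, y0 = min(xs), min(ys)
    w, h = max(xs) - x0, max(ys) - y0
    hits = sum(
        inside((x0 + rng.random() * w, y0 + rng.random() * h), poly)
        for _ in range(samples)
    )
    return w * h * hits / samples

# "Smart": the shoelace formula, a shortcut specialized to
# polygons and useless for anything else.
def area_smart(poly):
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

triangle = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
print(area_smart(triangle))           # 2.0
print(round(area_rote(triangle), 1))  # approximately 2.0
```

The rote machinery is general but clumsy; the shoelace formula capitalizes on the peculiarities of the task, which is roughly the contrast Runeson is drawing.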
Tuesday, October 19, 2010
Runeson on Rote versus Smart Mechanisms
Rote instruments consist of large numbers of a few types of basic components, each of which performs a rather simple task. The accomplishment of complex tasks is possible through intricate interconnections (programming) between the components. The important principles of operation reside in the program, and by changing the program the instrument can be put to different uses. New problems can be approached in a straightforward, intellectual, bureaucratic, "systems", manner. The solutions will be elementaristic and often a bit clumsy.
Smart instruments are specialized on a particular (type of) task in a particular (type of) situation and capitalize on the peculiarities of the situation and the task, i.e. use shortcuts, etc. They consist of few but specialized components. For solving problems which are repeated very often, smart instruments, if they exist, are more efficient and more economical. They are also likely to be more reliable and durable. Solution of a new problem requires the invention of a new instrument. A straightforward and bureaucratic procedure is not likely to achieve that, since the task is creative and just as much intuitive as intellectual. (Runeson, 1977, pp. 173-4).
I don't much like this kind of speculative psychology. (I prefer my psychology experimental.) I also don't want to deny the possibility of smart mechanisms.
But, on the one hand, it seems to me that it would be pretty hard to eliminate an important role for rote mechanisms. It seems to me that neural circuits pretty neatly fit the description of rote mechanisms, provided one construes "programming" broadly enough to include changes in patterns of synaptic connection and synaptic efficacy. But, if that is the case, it's going to be hard to challenge the old-fashioned psychological theories that think that visual perception involves photoreceptors.
And, on the other hand, it would seem that the plausibility or pervasiveness (as opposed to the mere possibility) of smart mechanisms depends a lot on how specialized "specialized" is. I take it that it is implausible to suppose that the visual system is specialized for finding fruit or specialized for finding food. Of course, if one says that the visual system is specialized for vision or for seeing things, then I don't see that one really needs an evolutionary argument for that. That appears to be close to a tautology.
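The "programming broadly construed" point can be sketched with a toy example (mine, not Runeson's, and not a serious neural model): identical simple units whose function is fixed entirely by their connection weights, the analogue of patterns of synaptic connection and synaptic efficacy.

```python
# Toy sketch (my example): the same simple components compute
# different functions under different "programs" (weights).

def unit(inputs, weights, threshold=0.5):
    # one simple component: a thresholded weighted sum
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Identical components; only the pattern of "synaptic efficacy"
# differs.
and_weights = [0.4, 0.4]  # fires only when both inputs are on
or_weights = [0.6, 0.6]   # fires when either input is on

for a in (0, 1):
    for b in (0, 1):
        print(a, b, unit([a, b], and_weights), unit([a, b], or_weights))
```

The point is just that "changing the program" need not mean editing lines of code; changing the weights repurposes the same components, which is why neural circuits seem to fit the rote description.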
Monday, October 18, 2010
EP and Levels
This is just a question post.
So, cognitivists typically (actually, without exception to my knowledge) hold that there are something like "levels". So, for example, there is a psychological level that is realized by neuronal processes, which are realized by chemical processes, which are realized by quantum mechanical processes. Sometimes the levels are, following Marr, input-output, algorithmic, and implementational. Now, there are differences of opinion about what levels are, how many there are, and what relations there are between them, but do EPists have some version of this kind of picture of reality?
Runeson has something like this with his anti-reductionism and Gibson seems to insist on something like the psychological being a molar kind of enterprise distinct from physics, but I have never seen them talk about things like levels and their relations. (Nor is it something I've seen in introductions to Phenomenology.)
Runeson on "being better off"
In his "On the Possibility of "Smart" Perceptual Mechanisms" (to which Andrew drew my attention), Runeson comments
The above mentioned tennis-player would be much better off if his perceptual systems were smart enough to make use of this geometrical invariant. (Runeson, 1977, p. 176).

I think that is not such a promising line of argument for a smart perceptual mechanism. The problem is that even if we grant that an agent would be much better off performing a given task were she able to use a geometrical invariant, there are other considerations that might prevent the "smart" mechanism from being adopted. For one thing, solving this particular task may not have been evolutionarily significant. For another, there could be other selection pressures in play. For a third, there could be physical or biological constraints that inhibit the adoption of the smart mechanism. (To comment a bit on this last point, it might be smarter for human babies not to be born so immature, but being born immature seems to be a constraint imposed by having to get the baby's head out of the human birth canal before it gets too big to get out at all.) And the fact that we have relatively little knowledge of what those constraints might be too easily encourages the view that there are no such constraints.
So, I think that this line of thinking is, as I've said before, a little dicey. Maybe it would be better to have someone with more experience with evolutionary biology or philosophy of biology comment on this. Old fashioned experimental work seems to me more reliable here.
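For concreteness, the sort of geometrical invariant at issue for the tennis player is presumably something like time-to-contact (the tau idea is standard in the EP literature, though this numerical sketch, and its particular numbers, are my own construction):

```python
import math

# Toy numerical sketch (my own construction; the numbers are
# illustrative). A ball of radius r approaches at speed v from
# distance d; its optical angle is theta = 2*atan(r/d). The "rote"
# route recovers distance and speed and divides; the "smart" route
# takes the ratio of the optical angle to its rate of change and
# never recovers d or v at all.

r, v, d = 0.1, 10.0, 20.0  # radius (m), speed (m/s), distance (m)
dt = 1e-4                  # small interval for the optical rate

def optical_angle(dist):
    return 2 * math.atan(r / dist)

# "Rote" route: distance over speed.
ttc_rote = d / v

# "Smart" route: tau = theta / (d theta / dt), from optics alone.
theta_now = optical_angle(d)
theta_next = optical_angle(d - v * dt)
ttc_smart = theta_now / ((theta_next - theta_now) / dt)

print(ttc_rote)             # 2.0
print(round(ttc_smart, 2))  # 2.0 (agrees when r is small relative to d)
```

Granting that the smart route exists and would make the tennis player better off, the worry above still stands: "would be better off" does not by itself show that evolution could, or did, install it.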
Friday, October 15, 2010
Runeson's objection to starting with physics
Now, if the theory of physics cannot be claimed to have monopoly on descriptions of "what is really there", there is no longer any reason to assume that the perceptual systems must necessarily begin by registering what is basic to physics. On the contrary, we should expect perceptual mechanisms which directly register variables of high informational value to the perceiver. (Runeson, 1977, p. 173).

This seems to me a rash argument. Grant that physics does not have a monopoly on reality, that there are things, such as trees, that are not a part of physics narrowly construed. There still might be reason to believe (forget about "assuming") that perceptual systems begin by registering (set aside the "must necessarily") what is basic to physics (set aside quarks and think of photons). I mean, Runeson seems to assume that something like reductionism to physics is the only possible reason to think that vision science should begin with the entities of physics.
So, for example, why couldn't there be some experimental result in psychology or neuroscience that supports the view that perceptual systems begin by registering what is basic to physics? I take it that Runeson is not merely challenging the idea that vision science should begin with fundamental physical entities, such as quarks; I'm assuming that he would also not like a vision science that begins with, say, photons. Why couldn't there be an experiment showing that people respond to flashes of light or patterns of light? That really is the mainstream view of vision. EP folks know this is the mainstream view; they just disagree with it.
Wednesday, October 13, 2010
Future Posts 10/13/2010
Some weeks ago, Andrew Wilson sent me a link to Runeson (1977), "On the Possibility of 'Smart' Mechanisms," Scandinavian Journal of Psychology, 18, 172-9.
I'll be working through some comments on this starting on Friday. The paper is available here.
After that, I have a number of comments on Chapter 4 of Gibson 1979. Basically, I take exception to Gibson's objection to retinal images. I have other scattered posts on Gibson, but it just occurred to me that it might be good to contrast Gibson's case against retinal images with Alva Noë's comments/objections to retinal images in Action in Perception.
And to think that I worried that I would not have enough to say for a blog. It must be the brevity of the posts....
Thursday, September 23, 2010
Runeson's Polemic
Most of the time, Runeson seems to be pretty fair in describing what is going on in debates over perception, but not here:
At times it may seem that Gibson accepted static-view ambiguity (e.g., 1966, pp. 198-199) and even gave nodding recognition to the reasonableness of the invocation of assumptions (Gibson, 1979, p. 167). However, it would be wrong to take this as his definite position on static information. A circumspect reading reveals that Gibson's admissions of static-view ambiguity were of a temporary nature, made in the context of his all-out war against the dogma of universal equivocality in proximal patterns. Because, strictly speaking, the demonstration of a single counter instance would decide the basic issue in his favor, there is a premium in giving priority to nonstatic conditions, in which case specificity is less difficult to demonstrate. (Runeson, 1988, p. 298).

"Dogma of universal equivocality"? "The demonstration of a single counter instance would decide the basic issue in his favor"? It seems that Gibson and Runeson are near the other extreme, claiming that there is no ambiguity at all.
Personally, a more middle-of-the-road view, on which there is some ambiguity some of the time, seems pretty likely to me. And, if that is true, then it seems as though we would need a vision science framework that hypothesizes something like "presuppositions". That was what I was driving at in trying to find out what Gibsonians say about static viewing of the Ames Room.
This is so hard to interpret, it could be philosophy
Although I like Runeson's paper, there are times when it seems to me he can be equivocal. Here's a sample:
Not even for static monocular viewing conditions does the notion of equivalent configurations capture the relevant conditions for perception. It is therefore without necessary consequences for the nature of perceptual systems. Granted, the analysis of equivalent configurations can help in constructing and analyzing illusory demonstrations. In such cases, perception can yield outcomes that are erroneous in at least some respects. This is to be expected from the view of perception as information-based and functioning through inherent compatibility with environmental constraints. (Runeson, 1988, p. 302).

Now, in the first sentence, he could be implicitly limiting himself to "static monocular viewing conditions in the Ames Room", or not. It's clear he doesn't think that there are equivalent configurations in the Ames Room; natural physical constraints, he proposes, rule that out. But does he think this holds more generally? There are times when he suggests that we can't rule it out. He thinks it's hard to confidently conclude that "there exists no equivalence-breaking evidence"; that would be proving a negative existential.
In the next sentence, he suggests that equivalent configurations have no necessary consequences for the nature of perceptual systems, which sounds kind of dismissive of the enterprise of studying them.
But, then, in the third sentence, he suggests that there could be some use for the analysis of equivalent configurations. He then continues by suggesting that sometimes there could be equivalent configurations, so that there could be somewhat erroneous perceptions, and that, in fact, this is to be expected. This seems to me to run counter to Andrew's occasional hints that there is always univocal information.
It's tough.