Showing posts with label Adams. Show all posts

Tuesday, September 28, 2010

Wilson's Premise (e)

In an earlier post, I registered the kind of objection one should probably explore against Wilson's premise (e), namely, that external cognitive resources do not often play the same or similar functional roles in the detection and creation of meaning as do internal cognitive resources.  I also noted an apparent shift of position between Wilson's statement of the premise and the accompanying text that appears to be commentary.

Wilson adds this to his defense of (e) on the next page:
Third, even those happy to make both of these concessions might well think that the final premise, (e), is indefensible, since there will always remain a crucial asymmetry between internal and external cognitive resources. Roughly speaking, the latter only gain purchase on cognitive activity via the former, and so internal resources remain fundamental to cognition in a way that vitiates the inference to externalism. (Wilson, 2010, p. 176).
Here I think that Rupert, Adams, and Aizawa have been pretty consistent in admitting that extended cognition is possible, hence that we do not maintain that there will always be a crucial asymmetry between internal and external resources.  Maybe there are other critics of EC who have maintained this.  The Rupert, Adams, and Aizawa view, at any rate, is that there will typically be an asymmetry.  (But, really, Adams and I don't talk about this asymmetry stuff much anyway.  That's Rupert's, and others', spiel.)

Friday, August 20, 2010

"Defending the Bounds of Cognition" Revisited 6

Perhaps there are futuristic science fiction scenarios in which humans have sufficient access to brain states that this situation could change, but then maybe it will be the case that cognitive content can at times be socially controlled.  Maybe.  After all, can a mental image of Abraham Lincoln really mean George Washington? (Adams & Aizawa, 2010, p. 73).

This last sentence seems a little cryptic to me now, but the idea is this.  Suppose you have a mental representation of Abraham Lincoln, so that you are thinking of Abraham Lincoln.  Could it really be the case that we can set up a convention so that when you get this mental image you are really thinking about George Washington rather than Lincoln?  Maybe the convention could get the image to mean George Washington for lots of people who are party to the convention, but could the convention get the image to mean George Washington for you?  Recall our earlier discussion of Thompson and Dretske regarding what something means for the subject.  They have the idea that mental meanings have to be meanings for the subject.  But, how could a public convention get a mental image to mean what it does for the subject?

Thursday, August 19, 2010

"Defending the Bounds of Cognition" Revisited 5

We think that Jones wants to go to that restaurant in Philly because she said she wants to go to that restaurant and is looking up the address in the phone book.  Even when we know that Jones wants to go to that restaurant in Philly, we don’t know what specific syntactic item in the brain bears that content.  This is not how conventional meanings work.  (Adams & Aizawa, 2010, pp. 72-73).

Wednesday, August 18, 2010

"Defending the Bounds of Cognition" Revisited 4

DtBoC repeats what I now think is an insufficiently tight formulation of the non-derived content condition.  (I've posted on this before, but it is perhaps worth repeating.)  It has,
In Adams & Aizawa (2001), we proposed that “A first essential condition on the cognitive is that cognitive states must involve intrinsic, non-derived content” (Adams & Aizawa, 2010, p. 69).
Andy Clark has observed (somewhere) that on our view there are mental representations in the head that apparently have non-derived content, so that when Otto manipulates his notebook, that overarching notebook-manipulating process does involve intrinsic, non-derived content.

I think the better, and stronger, formulation requires that the vehicles of content must bear non-derived content.

Tuesday, August 17, 2010

"Defending the Bounds of Cognition" Revisited 3

Ok.  I must admit that there is a point where I now think I misinterpreted Clark's view.  In "Defending" we wrote,
If you are coupled to a rock in the sense of always having it readily available, use it a lot, trust it implicitly, and so forth, Clark infers that the rock constitutes a part of your memory store.  (Adams and Aizawa, 2010, p. 68).
Now, I still think this is kind of funny, but alas not correct.  Since a rock is typically not an information resource, by Clark's lights it's not something into which cognition can extend.  I've mentioned in a couple of posts already that on Clark's view cognition extends only into information resources.  I like using the recipe-versus-oven story in baking a cake to make the point.

Nevertheless, I think that the principal point we were driving at is still correct.  One cannot extend cognitive processing into an information resource just by being coupled to it in a "trust and glue" kind of way.

Monday, August 16, 2010

"Defending the Bounds of Cognition" Revisited 2

In two replies to "Defending the Bounds" Clark complains about the unintelligibility of "cognitive objects":
When Clark makes an object cognitive when it is connected to a cognitive agent, he is committing an instance of a "coupling-constitution fallacy." (Adams and Aizawa, this volume, p. 67; my emphasis)
But this talk of an object's being or failing to be "cognitive" seems to me almost unintelligible when applied to some putative part of a cognitive agent or of a cognitive system. What would it mean for the neuron or the pencil to be, as it were, brute factively "cognitive"? Nor, I think, is this merely an isolated stylistic infelicity on the part of Adams and Aizawa. For the same issue arose many times during personal exchanges concerning the vexed case of Otto and his notebook (the example used, with a great many riders and qualifications, in Clark and Chalmers 1998). And it arises again and again, as we shall later see, in the various parts of their recent challenge to engage the issue of "the mark of the cognitive." (Clark, 2010, p. 83)
But this talk of an object's being or failing to be "cognitive" seems almost unintelligible when applied to some putative part or aspect of a cognitive agent or of a cognitive system. What would it mean for the pencil or the neuron to be, as it were, brute factively "cognitive"? This is not, I think, merely an isolated stylistic infelicity on the part of Adams and Aizawa. For the same issue arose many times during personal exchanges concerning the vexing case of Otto and his notebook. And it arises again, as we shall later see, in various parts of their more recent challenges concerning "the mark of the cognitive." (Clark, 2008, p. 87)
Several things here.
1) I guess I don't have this intelligibility sensibility that Clark does, but had we known this, we would have avoided that way of developing the issue.

2) In our 2001 paper, "Bounds of Cognition," we didn't describe the issue in terms of "cognitive objects".  We wrote about cognitive processes:

To begin, we may observe that the mere causal coupling of some process with a broader environment does not, in general, thereby, extend that process into the broader environment. Consider the expansion of a bimetallic strip in a thermostat. This process is causally linked to a heater or air conditioner that regulates the temperature of the room the thermostat is in. Expansion does not, thereby, become a process that extends to the whole of the system. It is still restricted to the bimetallic strip in the thermostat. Take another example. The kidney filters impurities from the blood. In addition, this filtration is causally influenced by the heart’s pumping of the blood, the size of the blood vessels in the circulatory system, the one-way valves in the circulatory system, and so forth. The fact that these various parts of the circulatory system causally interact with the process of filtration in the kidneys does not make even a prima facie case for the view that filtration occurs throughout the circulatory system, rather than in the kidney alone. So, a process P may actively interact with its environment, but this does not mean that P extends into its environment. (Adams & Aizawa, 2001, p. 56).
3) All that is old hat, but I was surprised on rereading "Defending the Bounds" that we had done a reasonable job of not making object versus process an issue.  Here is the relevant text:
When Clark makes an object cognitive when it is connected to a cognitive agent, he is committing an instance of a coupling-constitution fallacy. This is the most common mistake that extended mind theorists make. The fallacious pattern is to draw attention to cases, real or imagined, in which some object or process is coupled in some fashion to some cognitive agent. From this, one slides to the conclusion that the object or process constitutes part of the agent's cognitive apparatus or cognitive processing. If you are coupled to your pocket notebook in the sense of always having it readily available, use it a lot, trust it implicitly, and so forth, then Clark infers that the pocket notebook constitutes a part of your memory store. If you are coupled to a rock in the sense of always having it readily available, use it a lot, trust it implicitly, and so forth, Clark infers that the rock constitutes a part of your memory store. Yet coupling relations are distinct from constitutive relations, and the fact that object or process X is coupled to object or process Y does not entail that X is part of Y. The neurons leading into a neuromuscular junction are coupled to the muscles they innervate, but the neurons are not a part of the muscles they innervate. The release of neurotransmitters at the neuromuscular junction is coupled to the process of muscular contraction, but the process of releasing neurotransmitters at the neuromuscular junction is not part of the process of muscular contraction.  (Adams and Aizawa, 2010, pp. 67-8)
4)
"But this talk of an object's being or failing to be "cognitive" seems to me almost unintelligible when applied to some putative part of a cognitive agent or of a cognitive system." 

Is it really unintelligible to think that the left hemisphere of the brain is cognitive?


Friday, August 13, 2010

"Defending the Bounds of Cognition" Revisited 1

Fred and I wrote this paper a long time ago now (September of 2003, I think), so it was interesting to reread it.  Although I think I still agree with all the principal points, there are a few tweaks worth noting in blog posts.

First, a boring side story.  The paper begins,
Question: Why did the pencil think that 2 + 2 = 4?
Clark's answer: Because it was coupled to the mathematician. 
Fred made me tone it down (I used to be a lot worse than I am now) and take out, among other things,
Question: Why did the pencil think that multinational corporations are the greatest threat to world democracy?
Answer: Because it was coupled to Noam Chomsky.
Much has been made of the idea that it is not the pencil alone that is supposed to be cognitive but instead the "person + pencil + paper" system.  Ok.  But, that really doesn't help.  The point is that cognitive processing does not extend just in virtue of causal coupling.

Wednesday, August 11, 2010

Discrimination on the Basis of Underlying Causal Processes

Menary quotes us on this:
Adams and Aizawa stipulate that "the cognitive must be discriminated on the basis of underlying causal processes" (Adams and Aizawa 2001, p. 52). 
But, I think this misrepresents what we do.  I don't think we stipulate anything.  We point out that other sciences have worked this way, so we might plausibly assume that cognitive science will go this way.  Here's the broader context (perhaps a little too much):
The second necessary condition is a condition on the nature of processing. This point bears much more elaboration than did the preceding. The old saw is that science tries to carve nature at its joints. Part of what this means is that, to a first approximation, science tries to get beneath observable phenomena to find the real causal processes underlying them; science tries to partition the phenomenal world into causally homogeneous states and processes. Thus, as sciences develop a greater understanding of reality, they develop better partitions of the phenomenological. A range of examples will point out what we are driving at.
     In the Novum Organum, Francis Bacon proposed a set of methods for determining the causes of things. According to one of these methods, to find the cause of X, one should list all the positive instances of things that are X, then find what is common to them all. As an example, Bacon applies this method to the “form of heat.” On his list of hot things, Bacon includes the rays of the sun, fiery meteors, burning thunderbolts, eruptions of flame from the cavities of mountains, all bodies rubbed violently, piles of damp hay, quicklime sprinkled with water, horse-dung, the internal portions of animals, strong vinegar which when placed on the skin produces a sensation of burning, and keen and intense cold that produces a sensation of burning. Bacon conjectured that what was common to these was a high degree of molecular vibration and that the intensity of heat of a thing is the intensity of molecular vibration. Bacon clearly intended to carve nature at its joints, but it simply turns out as a matter of contingent empirical fact that the things that appear hot, or produce the sensation of being hot, do not constitute a natural kind. The rays of the sun, meteors, friction due to heat, body heat, and so forth, simply do not have a common cause. There is no single scientific theory that encompasses them all; the phenomena are explained by distinct theories. Friction falls to physics. Decomposition falls to biology. Exothermic reactions to chemistry.
     As a second example, there are the late 19th century developments in the theory of evolution. By this time, Darwin’s biogeographical, morphological, taxonomic, and embryological arguments had carried the day for evolution and many biologists had come to accept the theory of evolution by common descent. Despite this, the majority of biologists were reluctant to accept Darwin’s hypothesis that evolution is caused primarily by natural selection. In this intellectual environment, biologists returned for a second look at Lamarckian theories of the inheritance of acquired characteristics. In support of their theory, neo-Lamarckians pointed to cases which, in retrospect, proved to be instances in which a mother would contract some disease, then pass this disease on to her offspring in utero. Phenomenologically, this looks like the inheritance of acquired characteristics, but, in truth, inheritance and infection involve distinct causal processes. Inheritance involves genetic material in sex cells of a parent being passed on to offspring; infection is the transmission of an alien organism, perhaps via the circulatory system in isolation from the sex cells. To a first approximation, inheritance is a process in the germ line of an organism, where infection is a process in the soma line of an organism. It is only after the true causal differences between inheritance and infection are made out that one can conclude that we have one less instance of the inheritance of acquired characteristics than we might at one time have thought. Throughout the episode, Lamarckians were aiming to carve nature at its joints, but in the absence of a true understanding of the nature of the processes underlying inheritance and infection, these distinct processes had to appear to be the same, both as instances of the inheritance of acquired characteristics.
     The cognitive may, therefore, be assumed to be like other natural domains, namely, the cognitive must be discriminated on the basis of underlying causal processes. The point we have been driving at here might be approached in another way, namely, we believe there is more to cognition than merely passing the Turing test. Some of the mechanisms that might be used to pass the Turing test will count as cognitive mechanisms for doing this, while other mechanisms that might suffice will not count as cognitive mechanisms. A computer program might pass the Turing test by having a listing of all possible sensible conversations stored in memory. Such a program, however, would not constitute a cognitive mechanism for passing the test. This is presumably because we have sufficient ground for saying that the look-up table process is not of a kind with the complex of processes that go into enabling a normal human to carry out the same sort of conversation. The look-up table may, for example, answer questions in a constant amount of time for each sentence. Computer chess provides another famous sort of case where behavior can be carried out by both a cognitive and a non-cognitive process. In chess, there is a combinatorial explosion in the number of possible moves, responses, counter responses, and so forth. As a result, it quickly proves to be impractical to examine all the logically possible moves and countermoves. The most powerful chess playing programs, therefore, use special techniques to minimize the number of possible moves and countermoves they have to consider. Nevertheless, there is pretty strong reason to believe that the chess-playing methods currently employed by digital computers are not the chess-playing methods that are employed by human brains. Based on observations of the eye movements of grandmasters during play, it appears that grandmasters actually mentally work through an extraordinarily limited set of possible moves and countermoves, far fewer than the millions or billions considered by the most powerful chess-playing computer programs. The point is not simply that the computer processes and the human processes are different; it is that, when examined in detail, the differences are so great that they can be seen not to form a cognitive kind. The processes that take place in current digital chess playing computers are not of a kind with human chess playing.  (Adams & Aizawa, 2001, pp. 51-52, italics added for emphasis).
I think this is a pretty important idea.  I think that rejecting it in favor of things like "cognitive behavior" leads to all kinds of bad consequences for, for example, Rupert and, sometimes, Clark.

Friday, August 6, 2010

The Point of Menary's Cognitive Integration

Menary writes,
This is also the point of Menary's cognitive integration: we need to understand how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior (see Menary 2006, 2007, this volume). (p. 13).

Now, I don't think this can be right about the point of Menary's cognitive integration.  Menary does not merely want to understand how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior.  He also wants to champion a particular way of understanding that coordination, namely, that the entirety of the processing is cognitive.  Right?  By contrast, Adams and Aizawa and Rupert encourage a different way of understanding it, namely, as a matter of cognitive processes in the brain interacting with non-cognitive processes outside the brain.