This is also the point of Menary's cognitive integration: we need to understand how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior (see Menary 2006, 2007, this volume). (p. 13).
Now, I don't think this can be right about the point of Menary's cognitive integration. Menary does not merely want to understand how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior. He also wants to champion a particular way of understanding how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior, namely, that the entirety of the processing is cognitive. Right? By contrast, Adams and Aizawa and Rupert encourage a different way of understanding how bodily processes and the manipulation of external vehicles are coordinated in such a way that they jointly cause further behavior, namely, that it's a matter of cognitive processes in the brain interacting with non-cognitive processes outside the brain.
Ken,
This seems rather like a matter of definition. For example, if we define cognition as that which aids in the achievement of goal-directed behavior, then anything which helps fulfill such behavior would be considered "cognitive". Under this definition, "external vehicles" would be a part of the cognitive system insofar as some goal-directed behaviors could not be achieved without them. But one could also take Clark's approach and say that cognitive systems are "brain centered" but not "brain bound". This approach would recognize that most of what allows us to achieve goal-directed behavior is dependent on genuine internal machinery alone, but it would also acknowledge the role that external vehicles play in aiding some kinds of behavior.
However, if we define cognition in terms of one (or several) types of internal brain computation alone, then we risk losing a sense of how some external vehicles offer new varieties of (perhaps linguistically mediated) computation, which open up new vistas of goal-directed behavior.
So I don't really think that extended mind theory is necessarily committed to the notion that "the entirety" of the use and manipulation of external vehicles is cognitive in the same way that working memory or executive control is cognitive. But under the above definition of cognition as that which aids goal-directed behavior, we can see that extended mind theory allows us to be pluralists with regard to how exactly goal-directed behavior is achieved. Sure, a large amount of behavior is grounded by internal cognitive machinery, but there is also a large amount of behavior made possible only through the use and manipulation of external vehicles. We should strive to have our cake and eat it too. Perhaps you have already addressed these points in your book (which I admit I have not read), but I would be interested in your thoughts.
Hi, Gary,
Thanks for stopping by. You raise many issues in your comment.
===
So, suppose "we define cognition as that which aids in the achievement of goal-directed behavior".
I've often written that articulating the view that cognition extends should include some explication of what one means by "cognition". So, it would be nice to have this kind of account up front.
But, it seems to me that there are plenty of problems for this theory of what cognition is.
I. It looks like plants and slime molds could have cognitive processes.
II. It looks like those who suffer from total locked-in syndrome will not be cognizing.
III. It looks like those under complete neuromuscular blockade will not be cognizing.
====
"most of what allows us to achieve goal-directed behavior is dependent on genuine internal machinery alone, but it would also acknowledge the role that external vehicles play in aiding some kinds of behavior."
I agree that tools help us get things done. Tools are great for that. That's common ground. The difference between me and lots of EC'ers is that they want the whole of the tool-using process to be cognitive, but I don't. They want more than the claim that tools are helpful.
===
So, I don't see this at all:
"However, if we define cognition in terms of one (or several) types of internal brain computation alone, then we risk losing a sense of how some external vehicles offer new varieties of (perhaps linguistically mediated) computation, which open up new vistas of goal-directed behavior."
I agree that some external vehicles offer new varieties of computation, but I don't think that cognitive processes extend into the vehicles. How/why is EC supposed to help me in recognizing what I already recognize?
===
"So I don't really think that extended mind theory is necessarily committed to the notion that "the entirety" of the use and manipulation of external vehicles is cognitive in the same way that working memory or executive control is cognitive."
I think that many EC'ers have backed off the strong claims of equivalence found in Clark & Chalmers, 1998, regarding Inga and Otto, in favor of some "coarse grained similarities" (whatever those are). But, I have written stuff about this as well. Maybe the shortest thing would be my reply to Fisher, which you can get in manuscript here:
http://www.centenary.edu/philosophy/aizawa/publications
"I. It looks like plants and slime molds could have cognitive processes.
II. It looks like those who suffer from total locked-in syndrome will not be cognizing.
III. It looks like those under complete neuromuscular blockade will not be cognizing."
I think it would be possible to define "behavior" and "goal-directed" in such a way as to avoid these results. I'm not sure exactly how to do this to avoid I (largely because I am unfamiliar with plant and slime mold physiology), but I do know how to do it to avoid II and III. Following autopoietic theory, it seems plausible to claim that simply maintaining one's homeostatic processes in equilibrium is a type of behavior, indeed, the most primordial of all behaviors. Accordingly, people who are locked-in or under neuromuscular blockade would still be "behaving" in at least one respect (albeit at a very minimal level). Moreover, it seems plausible that, ontologically speaking, neural activity is itself a kind of behavior, so as long as there is sustained neural activity, there is behavior in the system (even if the person is not "moving about" in the world).
"I agree that some external vehicles offer new varieties of computation, but I don't think that cognitive processes extend into the vehicles."
I'm actually fine with this since I think it is largely a matter of how we define "extension" and "cognition" in the first place (and I'd rather apply the principle of charity than assume someone doesn't know what they are talking about). But consider language as an external vehicle. When thinking about inner speech as a computational strategy for behavior regulation, where does the cognition end and the vehicle begin, particularly with respect to ontogenetic development? Moreover, cyborg technology illustrates the same principle: once a cognitive system has "taken up" a vehicle such that the success of its embodied mastery becomes dependent on the vehicle's integration with the system, the question of whether this is true extension or not becomes largely irrelevant in light of the behavioral facts, namely, that new possibilities of behavior are available by means of technology. Presumably there are differing levels of how much a vehicle can be "taken up" (with language being on one end of the continuum and hammers on the other). So EC theorists on the whole need to be more sensitive to the difference between using forks and knives and learning linguistic strategies for behavioral self-regulation in childhood.
"How/why is EC supposed to help me in recognizing what I already recognize?"
I think it's largely a matter of clarity in phenomenological description and ontological clarity in the definition of "cognition". If we examine the phylogenetic and ontogenetic history of our interaction with language use, EC becomes very useful insofar as it helps us clarify the way in which cultural entities (speech artifacts) can be taken up into the internal cognitive machinery (in virtue of the kind of developmental plasticity that Noe talks about). I take this emphasis on the social-linguistic construction of some cognitive processes to be nontrivial and significant in light of previous models of how language and cognition interact. For this reason, EC offers more than just restatements of what cognitive scientists already knew; it offers the possibility of understanding cognitive systems in a different (albeit highly complementary) way than orthodox theories afford.
Hi, Gary,
In my experience, autopoietic folks are completely happy with saying that plants and slime molds are cognitive systems. At that point, however, I think that they and I have to part company. They are not interested in what I am interested in. Clearly there are some differences between what I do and what plants do, and the difference has traditionally been described by saying that they do not think, whereas I do.
Regarding homeostasis and locked-in/blockade cases, the idea I have is that the cortical processes that normally aid in homeostasis are now "off-line" so to speak. Maybe thinking normally helps you acquire food, or whatever, but not so in cases of locked-in syndrome or neuromuscular blockade.
I've never seen the view that, say, action potentials are a type of behavior, but so be it. That still does not solve the problem of cortical processing being out of the homeostatic loop.
===
"I agree that some external vehicles offer new varieties of computation, but I don't think that cognitive processes extend into the vehicles."
"I'm actually fine with this since I think it is largely a matter of how we define "extension" and "cognition" in the first place (and I'd rather apply the principle of charity than assume someone doesn't know what they are talking about)."
Well, in my experience with Clark, Rowlands, Sprevak, and Wheeler, they don't want to make the move you do. They are trying to save the idea that this tool manipulation is really cognitive processing of the sort we find inside the brain by appeal to "coarse functional similarities". And they have to resort to this sort of move, which seems somewhat desperate to me, since they do not want to be forced to say that they are introducing a new concept/usage of "cognition" and maintaining that that new kind of cognition is what extends.
===
"But consider language as an external vehicle. When thinking about inner speech as a computational strategy for behavior regulation, where does the cognition end and the vehicle begin, particularly with respect to ontogenetic development?"
Inner speech is entirely cognitive and not like vocalized speech, just as visual imagination is not like visual perception. But that is just a statement of my view without defense.
===========
"So EC theorists on the whole need to be more sensitive to the difference between using forks and knives and learning linguistic strategies for behavioral self-regulation in childhood."
Well, I think that Clark is on to at least some of this when he maintains that cognition extends into information processing tools, but not non-information processing tools. I've tried to point to the distinction by appeal to the use of a recipe and an oven in baking a cake. Clark would say that, in the scenario I describe, use of the recipe would make for extended cognition, but not the use of the oven.