Showing posts with label MotC. Show all posts

Monday, February 28, 2011

Calvo on Plant Neurobiology

In preparation for the upcoming "Systematicity and the Post-Connectionist Era" workshop, I have been reading some papers by one of the organizers, Paco Calvo.  (Incidentally, I have seen a draft of the program and it looks great.)
Put bluntly, an information‑processing system counts as computational insofar as its state‑transitions can be accounted for in terms of manipulations on representations. The relation of representation refers to the standing in of internal states of a physical system for the content of other states. Cognitive activity is thus marked by the processing of representational states. We need nonetheless a more stringent definition of ‘representation’; a principled way to decide when a system manipulates representational states, beyond the somewhat trivial observation that one internal state ‘stands in’ for the content of another state. For present purposes, I propose to consider the following two principles. First, according to a principle of dissociation, for a physical state to become representational, the state must be able on occasions to stand for things or events that are temporarily unavailable. And second, according to a principle of reification, a system state can only count as representational if it can be detected and a parallel drawn between the state in question and the role it plays in the establishment of a connection between the system’s input and output states. That is, we must be able to identify specific physical states with the computational roles they are supposed to play.
     This framework can serve to assess the cognitive capacities of any information‑processing system whatsoever. Notice that it does not rely upon the existence of any specific brain tissue to perform computations. A physical state is contentful if it can be spatiotemporarily identified as causally efficacious in the connection of the system’s input and output states in such a way that the state in question ‘hangs in there’ while the input state it is tuned to decays or is no longer present. That’s all that is needed. No restrictions in terms of implementation, neuronal or what may, are imposed. I propose therefore to adopt these two principles, taken together, as a condition on the possession of a cognitive architecture, and consider plants as candidates for its satisfaction. (Garzon, 2007, pp. 209-10).
So, Garzon is an embodied cognitionist of a representationalist stripe.  That nicely muddies the waters about what embodied cognition people think.  I take it that there is a fair diversity of opinion among embodied cognitionists.

Now, I've long been keen to get on the table a "mark of the cognitive" for various reasons, but one is simply so we can at least get in the ballpark of what we are talking about.  And it seems to me that Paco has informed us what he is talking about.  So, given that, I can see how he can maintain that plants are cognitive systems.

But, I don't see that we are necessarily talking past one another.  It seems to me that we can have common ground in the view that the plant cognition he is talking about differs from the human cognition that I am talking about.

Wednesday, December 15, 2010

Ramsey on the A&A Criterion of the Mental

You know, the idea of a review of a review is kind of strange, but I take it that this blog is mostly a collection of philosophical snippets, typically things I would never publish.  So, here it goes.
To bolster their claim, Adams and Aizawa propose their own criterion for mentality: non-derived intentionality, which is lacking in external symbol systems like Otto's notebook.
Now, technically speaking, A&A do throw in the condition that not just any sort of use of non-derived representations counts as cognitive processing.  (That probably did not come out very clearly or explicitly in the papers in Menary's collection.)  Maybe the spiny lobster ganglia that Clark, 2005, describes have non-derived content, but A&A do not expect those representations to be manipulated in the way that representations in typical cognitive processes are manipulated.  This condition on manipulation also seems to me to separate the A&A view from, for example, Rowlands' view in "Extended Cognition and the Mark of the Cognitive".

Sunday, June 20, 2010

A non-species-specific, non-bio-chauvinistic definition of cognition?

In recent developments, the enactive perspective has started to advance on the intimate connection between the concept of autonomy and sense-making, the normative engagement of a system with its world (Varela 1991, 1997; Weber and Varela 2002; Di Paolo 2005; De Jaegher and Di Paolo 2007; Thompson 2007; Di Paolo et al. 2008). The latter is nothing less than a strong candidate for a widely applicable, non-species-specific, non-bio-chauvinist definition of cognition. (Di Paolo, 2009, pp. 11-12).
This last sentence raises a number of issues.

I. Why doesn't cognitivism fit the bill as a non-species-specific definition of cognition?  (Set aside worries about definitions, for the present.)  Cognitivists have regularly been interested in the cognitive capacities of non-human animals, e.g. chimpanzee abilities with natural language, animal capacities for self-concepts, animal capacities for tool use. Cognitivism seems to offer a non-species-specific "definition" of cognition and, in fact, includes this as a part of its active research program.

II. Why doesn't cognitivism fit the bill as a non-bio-chauvinistic definition of cognition?  After all, many cognitivists think that it is possible to produce computers that think and presumably these could be computers that are not autonomous (i.e., not robots).

III. And, why isn't it that some forms of enactivism are bio-chauvinistic?  Consider the versions of enactivism that claim that "life = cognition".  (Cf. Di Paolo, 2009, p. 12).  Or what of versions for which being cognitive entails being alive?

Wednesday, June 16, 2010

Di Paolo on Cognitive Systems

So, last time, it seemed that Di Paolo should say that a cognitive system is an autopoietic system that operates according to potential future states, but that he does not just say this.

Instead, Di Paolo offers a theory of what it is to operate according to potential future states. He writes,
Only of the subset of autopoietic systems that are not just robust but also adaptive can we say that they posses operational mechanisms to potentially distinguish the different virtual implications of otherwise equally viable paths of encounters with the environment. (Di Paolo, 2009, p. 14).
By "robust" he means:
can sustain a certain range of perturbations as well as a certain range of internal structural changes without losing their autopoiesis. These limits are defined by the organization and current state of the system (ibid.)
By "adaptive" he means:
a system’s capacity, in some circumstances, to regulate its states and its relation to the environment with the result that, if the states are sufficiently close to the limits of its viability,
1. tendencies are distinguished and acted upon depending on whether the states will approach or recede from these proximal limits and, as a consequence,
2. tendencies that approach these limits are moved closer to or transformed into tendencies that do not approach them and so future states are prevented from reaching these limits with an outward velocity.  (ibid.)

Tuesday, June 15, 2010

Di Paolo on Bare Autopoiesis

Like much of the work on autopoiesis, I find Di Paolo's exposition a bit difficult to follow.  I just don't have the sense of the problematic here.  But, here's my take on the upshot of section 4.1.

Nothing can be a cognitive system simply in virtue of being an autopoietic system.  Why?  Cognitive systems operate according to potential future states, but an autopoietic system does not necessarily operate this way.

Now, one might expect the solution to this problem would be to say that a cognitive system is an autopoietic system that operates according to potential future states.  But things do not appear to be that simple...

Monday, June 14, 2010

Di Paolo on Extended Mind 3

[The] organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behavior in the same sort of way that cognition usually does. If we remove the external component the system’s behavioral competence will drop, just as it would if we removed part of its brain. Our thesis is that this sort of coupled process counts equally well as a cognitive process, whether or not it is wholly in the head (Clark and Chalmers 1998, p. 7).

This seems to suggest a practical and operational way out of the problem. Perform a causal analysis of the coupled system and work out what processes contribute to cognitive performance. But of course, without a measure of relevance, a causal analysis will inevitably invite an unbounded spread of causes (e.g., isn’t oxygen obviously crucial for a human to solve math problems?). It is clear that what counts as cognitive (the second boundary) should be the measure that determines the relevance of the causal contribution of a given process. But this leads us again to the problem already stated. The only test of the cognitive offered by EM is whether we intuitively would call something cognitive were it to happen in the head. (Di Paolo, 2009, p. 10).
And, I have to agree that the way to separate what merely causally influences cognition and what constitutes cognition is to have a mark of the cognitive.  We need a theory of what distinguishes cognitive processes from non-cognitive processes.

Sunday, June 13, 2010

Di Paolo on Extended Mind 2

Before asking where it is we must first say what it is. This is the single major problem with the way EM theorists have approached the genuine question of whether extra-neural, extra-bodily material processes are a constitutive part of what we intuitively recognize as cognitive processes. Relying solely on those intuitions is the problem. (Di Paolo, 2009, p. 10).
I've got to agree with the need for a mark of the cognitive.

Saturday, April 10, 2010

Problems for Rowlands' MotC

1. Chatterbots seem to satisfy Rowlands' conditions but are not cognitive agents.
2. Pure look-up tables seem to satisfy Rowlands' conditions, but are not cognitive agents.
3. CD players seem to satisfy Rowlands' conditions, but are not cognitive agents.

So, it looks like Rowlands needs some additional restriction on what kinds of manipulations are performed on non-derived representations.  Of course, that would make his view even more similar to the Adams and Aizawa view.

Friday, April 9, 2010

Rowlands' MotC and Cognitive Science Practice

The idea underpinning the criterion is that if we want to understand what cognitive processes are, then we had better pay close attention to the sorts of things cognitive scientists regard as cognitive. That is not to say that we must restrict ourselves to the pronouncements or determinations of cognitive scientists, or that we should regard these as decisive, but merely that we had better be prepared to use these as our starting point. A significant part of the criterion I shall defend can be extracted from a careful examination of cognitive-scientific practice. When we examine such practice, I shall argue, what we find is an implicit mark of the cognitive ... (Rowlands, 2009, pp. 7-8).
Here we find an approach much like that in Adams and Aizawa, and similar in spirit to Chemero (who resists the idea of a MotC).

Thursday, April 8, 2010

Rowlands buys non-derived content

I earlier mentioned how Mark Rowlands embraces a "mark of the cognitive approach" to adjudicate matters regarding extended cognition.  In fact, here is his account:
A process P is a cognitive process if and only if:
(1) P involves information processing—the manipulation and transformation of information-bearing structures.
(2) This information processing has the proper function of making available either to the subject or to subsequent processing operations information that was (or would have been) prior to (or without) this processing, unavailable.
(3) This information is made available by way of the production, in the subject of P, of a representational state.
(4) P is a process that belongs to the subject of that representational state.  (Rowlands, 2009, p. 8)
Then, regarding (3) he writes,
I shall assume that the type of representational state invoked in (3) is one that possesses non-derived content. Derived content is content, possessed by a given state, that derives from the content of other representational states of a cognizing subject or from the social conventions that constitute that agent’s linguistic milieu. Non-derived content is content that does not so derive. A form of content being non-derived is not equivalent to its being sui generis: non-derived content can, for example, derive from, and be explained in terms of, the history or informational carrying profile of the state that has it. It is what content is derived from that is crucial. Non-derived content is content that is not derived from other content – it is not content that is irreducible or sui generis.  (pp. 9-10).
So, where Adams and Aizawa have been non-committal regarding what theory of non-derived content to invoke, Rowlands takes the plunge.

Tuesday, April 6, 2010

Refining "We need a MotC" 3

So, here is a more careful take on the idea that we need a MotC.  It is common ground between EC and its critics that there are causal processes that pass through the brain, body, and world.  So, to resolve this debate, what we apparently need is some way of typing these causal processes.  On the Machery strategy of individuating cognitive processes by appeal to visual processes, we have this.  (Provided we have a way of identifying visual processes.)  Further, on the Allen, Grau, and Meagher strategy of individuating cognitive processes by appeal to Classical conditioning, we have this.  So, the more careful statement of the idea that we need a MotC is that we need a way to type the processes that are under debate.  Saying that we need a MotC seems to be a minor simplification of this.  Right?  Wrong?

Monday, April 5, 2010

Mark Rowlands' "Extended Cognition and the Mark of the Cognitive"

I earlier mentioned how Mike Wheeler embraces what I would call a "mark of the cognitive approach" to adjudicate matters regarding extended cognition.

But, Mark Rowlands has embraced much of this approach as well.

Gotta like it.  There are at least some points where the advocates and critics of EC are not talking past each other.

Sunday, April 4, 2010

HEFC and HESC are only prima facie different

Maybe that's right.  (In correspondence, Clark suggests something like this.)  And maybe the difference between HESC and HEAMC is only apparent.


Saturday, April 3, 2010

Chemero on cognition

I take it that cognition is the ongoing, active maintenance of a robust animal-environment system, achieved by closely co-ordinated perception and action.  (Radical Embodied Cognitive Science, p. 212).
these brief remarks are not intended to supply a set of necessary and sufficient conditions, or criteria for what Adams and Aizawa call the "mark of the cognitive". (ibid.)
Much of my commentary on this footnote from Chemero addresses what appears to me to be methodological misdirections.  For example, in an earlier post, I noted that A&A don't take it that providing a mark of the cognitive is providing a definition.

But, note now that there seems to be a fine line between giving the account that Chemero gives and actually giving a set of necessary and sufficient conditions.  So, for example, on Chemero's account, it looks as though cognition must involve an animal.  It's a necessary condition on a cognitive process that it involve an animal.  It also looks to be necessary that cognition involves perception and action.

Moreover, on Chemero's account, the ongoing, active maintenance of a robust animal-environment system, achieved by closely co-ordinated perception and action would appear to be sufficient for cognition.

Now, maybe there is a sense in which Chemero's account does not amount to giving necessary and sufficient conditions.  Maybe Chemero can get off this hook.  But, then why can't Adams and Aizawa get off the hook in the same way?

Refining "We need a MotC" 2

Here's another way that "We need a MotC" is a bit too strong.  Allen, Grau, and Meagher (2009) argue that processes of classical conditioning are realized in the spinal cord.  On the assumption that classical conditioning is a type of cognitive process, they are able to give a plausible argument for cognition outside of the brain without a MotC.

Adams and Aizawa are ok with cognition in the spinal cord.  (Cf. Bounds, p. 18).

Both this case and Machery's "MotV" case apparently dodge the need for a MotC by way of appeal to more restrictive cases.

Still, these seem to be rather technical refinements.

Friday, April 2, 2010

Refining "We need a MotC" 1

Edouard Machery pointed out to me a sense in which "We need a MotC" is too strong.

Suppose you have a "mark of the visual", i.e. an account of the difference between a visual process and a non-visual process.  Then, if visual processes are cognitive processes and you have visual processes that extend, then you have a good case for extended cognition without a MotC. (Of course, one still has some work to do to show that visual processes extend ...)

So, a "technical" refinement of the idea seems to me to be in order.

Thursday, April 1, 2010

Correction to: "Cognitive processes involve representations with non-derived content"

Adams and Aizawa often write that what distinguishes cognitive processes from non-cognitive processes is that the former involve representations with non-derived content.

But as Clark points out somewhere (help wanted on this ref), this is probably too loose.  When Otto uses his notebook, there are presumably mental representations in his brain that have non-derived content, so that his use of the notebook in some sense involves representations with non-derived content.

Yet, an appropriate clarification of the target concept is ready to hand in terms of the idea of a vehicle of content.  (For its use in the context of the extended cognition debates, see, e.g., (Hurley, 1998).)  The idea is that cognitive vehicles of content must bear non-derived content, so that the vehicles of content in cognitive processes must bear non-derived content.  That seems to work to rule out Otto's use of his notebook involving representations in the relevant sense.

Wednesday, March 31, 2010

Maturana and Varela on Cognitive Systems

A cognitive system is a system whose organization defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain.  Living systems are cognitive systems, and living as a process is a process of cognition.  This statement is valid for all organisms, with and without a nervous system. 
(Maturana & Varela, 1980, p. 13).
I don't think M&V are speaking my language here.  Plants are cognitive systems?

(This also looks to be at odds with Chemero's account, which is apparently limited to animals and which invokes perception and action.  But, I've not read Chemero's account of perception and action in RECS.)

Tuesday, March 30, 2010

Oops. So, A&A did talk about a definition of the cognitive

Doing a word search over Bounds I found the following:
One way to think about our strategy for demarcating cognitive from non-cognitive processes is to begin with paradigms of cognitive processing, those involving normal humans.  We have been drawing on features of human cognition as a first step towards demarcating the cognitive from the non-cognitive.  But, surely the category of the cognitive encompasses more than this.  Surely a definition of the cognitive exclusively in terms of normal human cognition is too parochial. (Bounds, p. 70).
It would have been better to write "account" rather than "definition" in that last sentence.

Set against this, there is no use of "definition" in Adams and Aizawa, (2001), and there are the following disclaimers in Bounds.
In Chapters 3 and 4, we develop and defend in more detail our positive approach to the mark of the cognitive, namely, that cognitive processes differ from non-cognitive processes in terms of the kinds of mechanisms that operate on non-derived representations.  We offer this as part of a theory of the cognitive, rather than as (part of) a definition of the term “cognitive.”  We do not mean to stipulate that this is just what we mean by “cognition.”   (Adams & Aizawa, 2008, pp. 12-13) 

Here we think it is perfectly reasonable for us to stand by the view that these Martian representational states are not cognitive states.  We have a theory of what cognition involves.  The Martians in Clark’s thought experiment do not satisfy the conditions of that theory.  So we must either reject the hypothesis that the Martians have cognitive processing or the hypothesis that cognition involves non-derived representations.  Why can we not rationally choose to stand by our theory?  Our theory is an empirical conjecture about the nature of cognition, not a definition of cognition.  Thus, future scientific developments could undermine our theory and force revisions.  Or, our theory could turn out to be so successful and well-confirmed that we determine that Martians are not cognizers. (Adams & Aizawa, 2008, p. 49)

In this chapter we have offered an empirical hypothesis concerning what all cognitive processes have in common, namely, that they all involve non-derived representations.  We do not take this to be part of a definition of the cognitive.  Nor do we mean to stipulate what we shall mean by the word “cognitive.”  (Adams & Aizawa, 2008, p. 55)

Evidently, the dispute must be joined by a substantive theory of the cognitive.  This is why we offer the conjecture that cognitive processes involve non-derived representations that are embedded within (largely unknown) cognitive mechanisms.  This is not a definition of the cognitive, let alone a stipulative definition of the cognitive.  It is a theory that we think is implicitly at work in a lot of cognitive psychological research.  (Adams & Aizawa, 2008, p. 84)
So, there is a little infelicity.

Thursday, March 25, 2010

Is Rupert Doing Conceptual Analysis?

In his recent NDPR review of Rupert's Cognitive Systems and the Extended Mind, Wilson rejects the search for a "mark of the cognitive" roughly on the grounds that it is a bit of conceptual analysis, hence that the search is a dubious enterprise.  (Wilson's text below the fold.)

But, when Rupert cites such things as the "generation effect" in order to argue that Otto's notebook does not constitute memory, it does not look like Rupert is doing conceptual analysis.  That normal human memory displays a generation effect is an empirical discovery.