Saturday, May 8, 2010

Chemero vs Thompson & Stapleton

Thompson and Stapleton write,
Without autonomy (operational closure) there is no original meaning; there is only the derivative meaning attributed to certain processes by an outside observer.  (Thompson and Stapleton, 2010, p. 28).
So, T&S believe in non-derived content and Chemero has written that he is pursuing an enactivist theory of cognition of the sort developed by Thompson, Varela, and others.  Yet, one of Chemero's core ideas is anti-representationalism.  I'm not sure how that all works out, but it looks to me as though one should read Chemero as developing his own views, rather than pushing those of others.  (In fact, I have this vague sense from having read an earlier draft of his book manuscript that Chemero's "Gibsonianism" doesn't look 100% like Gibson's views.  But, that was a long time ago and I'm getting a little long in the tooth.)


  1. I haven't read Chemero's book yet, but I think this is going to come down to what you're willing to call representation. From what I understand, the closest thing to representation that the enactive approach requires is that the system can be perturbed by external forces. If the same force (say, some external object impinging on the system in an exactly similar manner) perturbs the system twice, the system needn't instantiate a correspondingly similar internal state from one time to the next. This is because the effects the perturbations have on the system are highly contingent on the system's initial state at the time of the perturbation. A perturbation can still have meaning for the system insofar as it has implications for the system's internally established organizational norms. If this is representation, then enactivism isn't anti-representationalist. However, I think that this notion of representation is so weak that it won't do much of the work that it is supposed to in the cognitive sciences. If that's right, then we may as well call this view anti-representationalist.

  2. I am somewhat reluctant to say what the enactive approach does or does not require regarding representations. It seems to me that enactivists can choose to be representationalists, or not.

    As I read Thompson (2007, pp. 51-54), Thompson is claiming that we need to distinguish between what is meaningful to the organism and what is meaningful only as an attribution of an outside observer. Yet this sort of distinction (derived versus non-derived) is perfectly standard and widely accepted in the information-semantics literature; it is a perfectly standard assumption in old-fashioned cognitivism. So, it looks like Thompson is a representationalist, only he has a different sort of theory of the basis of non-derived content.

    And Thompson is also committed to information being processed in a context-dependent way. But, that is not enough to make him an anti-representationalist.

    Maybe ultimately he is, but he writes things that at least suggest he is some sort of representationalist.

  3. I guess I'm uncomfortable with calling Thompson a representationalist because I'm not certain that we can equate Thompson's use of 'meaning' with the cognitivist's use of 'content'. In particular, I think someone like Thompson is more likely to throw out the whole notion of operations over symbols (representations) as generally unworkable than to try to fix it with an account of non-derived content. But I'm just spitballing here. Perhaps I'm uncharitably attributing something like an overly simplistic language of thought hypothesis to all cognitivists, and taking representations to be (too narrowly) symbols in the language of thought. Thoughts?

  4. Well, one can surely draw a distinction between "meaning" and "content", but they are often used interchangeably by cognitivists. So, if Thompson wants to avail himself of such a distinction to resist cognitivism or representationalism, he should put that distinction on the table. Maybe he does. I have not read everything he has written.

    And, I would agree that Thompson is likely to want to don the label "anti-representationalist", but one should wonder what that amounts to. If all he means is that we should not confuse assigned and unassigned meaning, then that's hardly what I would have expected from an "anti-representationalist" thesis.

    What is an overly simplistic LOT? I have a pretty simple view of that.

    But, finally, one has to distinguish being anti-representationalist and being anti-LOT. LOT is a species of representationalism. Representationalists believe there are mental representations, but LOT adds to this that these representations have a combinatorial syntax and semantics. (One sees this in the Fodor & Pylyshyn, 1988, paper, where they take Connectionists to task for being representationalists, but not supporters of LOT, aka Classicism.)

  5. So, look at the first two paragraphs on p. 52 of Thompson's Mind in Life. He gives a perfectly decent explication of representation in cognitivism and connectionism, then raises the following objection:

    This objectivist notion of information presupposes a heteronomy perspective in which an observer or designer stand outside the system and states what is to count as information (and hence what is to count as error or success in representation).

    So, the objection is that cognitivists mistake assigned content for unassigned content. But, that simply ignores the pretty well-established tradition of naturalized semantics within cognitivist research: Fodor's asymmetric-dependency theory of meaning, Dretske's teleoinformational theory, Millikan's functional semantics, conceptual role semantics. All of these theories implicitly assume that we want an account of unassigned meaning, rather than assigned meaning.

    So, it looks to me as though Thompson is attacking a straw man.

  6. cont...

    This is not to deny that the system has an environment (I've heard some people call enactivism idealism, and this can't be right), only that what counts as information for the system does not require a specification of a relation between features of an objective environment and features of the system as apprehended by a third-person perspective (however, see Thompson on p.56-57 on the heuristic value of such a perspective). The system's environment causes many of the sensory perturbations, but the system's reaction to those perturbations is not to build up an accurate model of an objective environment. Rather, some of those perturbations will threaten or assist in the continued organizational coherence of the system, and it is on this basis that the system looks out onto a world of meaning. Meaning is constituted by relations internal to the system and not between elements of the system and elements of an objective environment (this can't be quite right if we are supposed to include extended sensorimotor dynamics, but it is fine as a first pass to emphasize the distinction). The maintenance of the relations of processes within the system that constitute a high-order organizational constant (this is a reference to Varela and Maturana's theory of autopoiesis) constructs a world for the system. There's no mystery here about how the organism 'represents' the world, or how these representations get their meaning, or how, even with a semantic story in place, manipulations of representations can lead to mindedness and understanding. Rather, the world constructed by the system on an ongoing and on-demand basis is intrinsically meaningful.

    I fear that this may have made enactivism sound decidedly internalist. This is a point I'm wondering about myself, but I think the usual line is supposed to be that certain of the processes that go on within the system (such as sensorimotor processes) cannot be explanatorily separated from their extended dynamic that reaches out beyond the self-maintained boundaries of the system. Hurley has something like this going on in her “Varieties of Externalism”. So Thompson writes:

    “Therefore, if we wish to continue using the term *representation*, then we need to be aware what sense this term can have for the enactive approach. Representational 'vehicles' (the structures or processes that embody meaning) are temporally extended patterns of activity that can crisscross the brain-body-world boundaries, and the meanings or contents they embody are brought forth or enacted in the context of the system's structural coupling with its environment... Instead of internally representing an external world in some Cartesian sense, [autonomous systems] enact an environment inseparable from their own structure and actions.” p.59

    So this permissive use of the term 'representation', largely understood to be heuristic, may be why it is difficult to pin Thompson down on the matter. But to reiterate a point I made earlier, I suspect that using 'representation' in this way is not easy to reconcile with the work representations are supposed to be doing for cognitivist theories of cognition. I take it that, on the standard cognitivist view (or at least on the LOT view), it is the syntactic properties of the vehicles that are operated on in cognition. I can't see how Thompson's vehicles would have syntactic properties, or at least syntactic properties that would allow for a standard computationalist story... although I've been meaning to delve a bit more into some of Rowlands' stuff on the matter, as he seems to be trying to walk some sort of middle path.

  7. I don't think that the distinction that Thompson makes between the objectivist notion and the enactivist/dynamicist notion of information is a distinction between derived and non-derived content respectively. I read Thompson as criticizing the naturalized semantics tradition, and therefore not attacking a straw man at all.

    It seems to me that the 'heteronomy perspective' characterizes the methodology of all of the naturalized semantic theories you list. Either they are causal theories of some sort, where an outside observer or theorist identifies (at least in principle) the event or object that causes the tokening of a representation and the representation tokened (and, importantly, the relation between them), or they are theories where an observer or theorist identifies the role that a representation plays within a system from a third-personal perspective such that the theorist makes reference to the relation between the roles and the behaviors of the system (as observed from the outside) or objects and events in the system's environment. So even those cognitivist theories that give an account of non-derived content can only specify what information is supposed to be by reference to the relation between (some element of) the system and specific elements of the environment.

    What's the problem with this? I think it's supposed to be something like this: While these naturalized semantic theories may give an account of non-derived or natural meaning, they do not give an account of how the meaning is meaning *for* the system. The cognitivist might wonder if there is a legitimate distinction to be made here. So long as the operations on the representations lead to behaviors appropriate to environmental or problem-solving demands, this is all the meaning we need. I am probably not doing the motivations for enactivism any justice at all, but two considerations come to mind as to why this distinction may be legitimate: first, concerns such as those raised by Searle's Chinese room (do stories such as those offered by naturalized semantics really add anything to the defense against Searle's contention that a symbol-manipulation device does not really have a mind/understanding?); and second (and probably more to the point), concerns raised from within the phenomenological tradition.

    What's the alternative? On p.53, Thompson is talking about the so-called neural representations (e.g., features that the brain seems to encode such as “edges, lines and moving spots”). He says:

    “From an autonomy perspective, it is crucial to distinguish between information about stimuli as they are defined by an observer and information in the sense of what meanings the stimuli have for the animal. Only the latter play a significant role in the brain's operation. The notion of an object 'feature' is defined by an observer who stands outside the system, has independent access to the environment, and establishes correlations between environmental features and neuronal responses. The animal's brain has no access to features in this sense (and a fortiori has no access to any mapping from features to neuronal responses)... assemblies of neurons make sense of stimulation by constructing meaning, and this meaning arises as a function of how the brain's endogenous and nonlinear activity compensates for sensory perturbations.”

  8. Several points:

    I. I do think that Thompson is on about the derived/non-derived distinction. He pretty clearly is in this passage, right?

    Without autonomy (operational closure) there is no original meaning; there is only the derivative meaning attributed to certain processes by an outside observer. (Thompson and Stapleton, 2010, p. 28).

    II. Regarding:
    Either they are causal theories of some sort, where an outside observer or theorist identifies (at least in principle) the event or object that causes the tokening of a representation and the representation tokened (and, importantly, the relation between them), or they are theories where an observer or theorist identifies the role that a representation plays within a system from a third-personal perspective such that the theorist makes reference to the relation between the roles and the behaviors of the system (as observed from the outside) or objects and events in the system's environment.

    Consider the first condition of the Fodorian theory: "X" means X if (1) it is a law that Xs cause "X"s.

    The condition is not: "X" means X if some outside observer says that it is a law that Xs cause "X"s. The latter would defeat the whole purpose of the naturalized-semantics project, since it would rely on the meanings of the observer to specify the meanings in the thing using "X".

    III. Fodor's theory just is that if, say, some neurons in a brain satisfy the conditions of his theory, then there are meanings for that brain. One can think that Fodor's theory is wrong (which I do), but that is his theory.

    IV. One can also adopt a Millikanian theory of naturalized information semantics where something might get to be a meaning for a system based on how a consumer uses the representation.

  9. V. Now, it is true that it often happens that folks like, say, Hubel and Wiesel see that certain neurons fire in response to oriented lines, then infer that the firing means a line of a particular orientation.

    But, the naturalized semantics tradition does not say that it is in virtue of Hubel and Wiesel's postulation that those neurons mean what they do. Instead, the tradition says that "X" means X if it is a law that Xs cause "X"s, ..., or whatever.

  10. Ok. So, I found an online-accessible paper from Fred Dretske, "Misrepresentation", wherein he says he wants a theory of what a thing means to a system. Check out the middle paragraph on p. 303.

  11. Or maybe the first paragraph of p. 68 here:

  12. Hi Ken,

    I am not sure this ties in with the discussion so far, but it does tie in with the original post.

    Claiming that there's original meaning (or non-derived content) in virtue of an autonomous organism doesn't commit you to the claim that there are distinct subpersonal vehicles of content which are usefully referred to as mental representations (and you might deny the latter because, say, the internals of the organism are just a big undecomposable mess of spaghetti). I haven't read Chemero's book, but I assume that he wants to avoid relying on discrete subpersonal representations, not avoid the claim that organisms have "global" states with content.

    Cheers olle