Thursday, February 24, 2011

Wheeler 2005 on representation and computation 2

The representational theory of mind and the computational theory of cognitive processing are empirical hypotheses. However, they are empirical hypotheses whose truth has been pretty much assumed by just about everyone in cognitive science.  (Wheeler, 2005, p. 8). 
Now, I suppose that if your aim is to undermine opposing views, it's easiest just to say that they are simply assuming something.  But, when you have major, explicitly acknowledged empirical hypotheses--such as that cognition involves rules and representations--then it seems unlikely that such hypotheses will be mere assumptions.  So, if you want to understand the major tenets of an opposing view, you should probably dig around and find out why they hold them.

So, I don't know why Gibsonians are so interested in the direct perception of affordances, but I figure there must be some experimental result or something that drives this.  This is not just something they assume.  And, I assume that there is some reason that Maturana and Varela (and it seems Evan Thompson following them) think that life and mind are very intimately related.  I have no idea what that is, but I'm not going to go out on a limb and say they are just assuming that there is a connection.

Now, of course, finding out why some group holds a view takes a lot of time.  I've been rooting around trying to find out what drives EP.  I don't think reading things like Gibson, 1979, or TSRM is really doing it for me.  I think I need to go back to some of the earlier experimental work.  I've read some Maturana, Varela, and most of Mind in Life, but I still don't get it.
Wheeler, M. (2005) Reconstructing the Cognitive World.  Cambridge, MA: MIT Press.

17 comments:

  1. The enactivist mind-life continuity thesis has, from what I can ascertain, two strands that have been explicitly stated in the literature: 1) that the concepts required to understand life are the same concepts required to understand mind, and 2) that life is sufficient for mind. 1 and 2 are interrelated, but can be treated separately. Here's Clark on 1, quoted in Thompson, p.128-9: "the thesis of strong continuity would be true if, for example, the basic concepts needed to understand the organization of life turned out to be self-organization, collective dynamics, circular causal processes, autopoiesis, etc., and if those very same concepts and constructs turned out to be central to a proper scientific understanding of mind."

    For 2, the sufficiency claim, see Thompson p.126-7. The story in broad strokes is that life is autopoiesis, and autopoiesis entails a kind of control over coupling with the environment that is cognition in its minimal form.

    While I don't recall seeing any arguments to this end, I believe there is a claim that some enactivists may want to make (although against this, see Di Paolo on habits in robots), and that is that life is necessary for mind. The latter claim could be true if life is necessary for original/immanent/intrinsic teleology. It's not clear on the enactive view if this is the case, but it's possible that Varela may have thought so once he looked again at Kant and read Hans Jonas.

  2. But, anon, it is not so much that I don't know what Thompson and M&V claim. My perplexity is why they think this, or why I should think this.

    So, for example, once I read that life is sufficient for mind, my first conclusion is that this is just wrong. Slime molds don't have minds. Sponges don't have minds. But, then, my next thought is that, well, they just must not mean by "minds" and "cognition" what I mean by "minds" and "cognition". And, sure enough, reading through the bibliography of Thompson's book, one does not find a lot of references to articles in cognitive psychology journals. So, explicating what is going on in cognitive psychology journals does not seem to be the primary target for Thompson. (Same, it seems, for M&V.)

    But, then, at the bottom of p. 126 of Thompson's book, he notes that "Cognition is behavior or conduct in relation to meaning and norms that the system itself enacts or brings forth on the basis of its autonomy." Now, I can see how if this is what one thinks cognition is, then it's not a far trip to thinking that slime molds and sponges have it. But, this just invites the further question of why one would think that this is what cognition is. When I say I don't get it (which is a pretty vague claim), I mean that I don't see why one would have these views. I don't presume that Thompson is just assuming this, so what is the underlying rationale? I don't get it.

    By way of contrast, it might be worth noting that cognitivists don't think that cognition is behavior of any sort. Instead, cognitivists take cognitive processes to be among the factors that produce behavior. So, there is a fundamental issue here regarding how we are to understand cognition.

  3. Me again.
    For what it's worth, I disagree with Thompson's characterization of cognition. Where it might work is if we take cognition to be something that is only ascribable at the personal level. But then identifying cognition with sense-making (which is defined in terms of subpersonal processes) is some kind of category mistake. For enactivism, cognition may be evidenced in behavior, but cognition cannot be a species of behavior. I think enactivism ultimately has to go along with cognitivism to a point: Sense-making as the maintenance of autonomy provides 1) norms for the system, and 2) intrinsic purposiveness (teleology). This can ground intentional content, but does not exhaust it. This means that cognition, even for enactivists, must be part of the explanation of behavior (insofar as behavior is individuated by its intentional content), but cannot itself *be* behavior.

    As for why you should believe the claims of enactivism, I have to admit: While I find enactivism very attractive, I find it hard to provide a satisfying argument against cognitivism, and for enactivism. However, your point is correct that I don't just *assume* the premises of the view. So far as this was your argument, I agree absolutely.

    However, in the spirit of a friendly examination of the reasons we hold the views that we do, I'll throw out a couple of considerations. If you see fit to show how these are not considerations against cognitivism, I welcome the challenge. Of course, you are always free to reject these considerations, some of which are no more than mere intuitions. However, the intuitions don't arise ex nihilo, but from years of trying to jingle around all of the various bits of evidence until they form some kind of coherent picture. If any enactivists are reading this, I would appreciate assistance.

    You say that slime molds don't have minds. I obviously understand what you mean by this, but I find it hard to imagine a clear dividing line in the phylogenetic tree, before which there is no mind, and after which there is mind fully formed. I think rather that there is great variation in mindedness, and that even in these lowly little creatures we see the precursors to the kinds of minds that we have. I think that this is all that enactivism claims. What are those precursors? Immanent teleology, interiority, autonomy, agency, unity, adaptivity, etc..

    Correlatively, it seems plausible on phenomenological and empirical evidence, that what we call "mind" is a cluster of processes, abilities and phenomena (none of which belong to non-minded things). Traversing the phylogenetic tree we may see a variation in 1) the type of processes, abilities and phenomena, and 2) degrees of each of these processes, abilities and phenomena. Some of these may be well described in computational terms, some may not be. Enactivism attempts to be inclusive in this respect.

    Cognitivism is often associated with the idea that thought is a process of well-defined operations over well-defined representations. Those with enactive intuitions I think would disagree: Both phenomenological and phylogenetic considerations speak against a generally logical basis for thought. Reason may be a spectacular ability predicated on having a complex mind, but it is not essential to mind. Emotionality, on the other hand, seems to be essential to rationality and therefore more mentally primitive than rationality. Enactivism holds that emotionality is partly constitutive of mindedness all the way down the tree. There are basically two points here: 1) the dissatisfaction with the idea of thought as essentially linguistic/logical and 2) the role of emotions. Cognitivists would likely have an easier time defending against 1.

  4. cont...
    Enactivists, I think, also have the intuition that the structure of consciousness (as discovered through phenomenological investigation) ought to have some analogue in the structure or processes of the mechanisms that support it. Such considerations are built into enactivism and form the basis for Varela's neurophenomenology. Phenomenal structure is different than phenomenal content, insofar (at least) as the latter is specifiable in public language.

    Enactivists generally hold that there is a very important difference between mental content as ascribed with public language, and the cognitive mechanisms that are responsible for mindedness. A lovely quote from the new book Enaction (p.41): "The very blurring of distinctions between levels that the enactive approach criticizes of cognitivism has allowed the latter paradigm to connect personal and subpersonal levels with indiscriminate ease. The properties of higher levels are thus explained in terms of lower-level ones, because they are already magically present there." One way to look at this is in terms of propositional attitudes: A cognitivist might hold that S believes that p if p is represented in the brain in some way. An enactivist would probably hold that the truth conditions (if p is indeed truth evaluable) for p will extend beyond the brain/body and into the social/contextual sphere. There are Wittgensteinian considerations here, alongside considerations about the actual use of folk psychology, and a dissatisfaction with attempts at naturalized semantics (and the aversion to thought-as-language mentioned earlier).

    There are also phenomenological considerations about the constitution of the world in consciousness as opposed to the representation of the world in consciousness. Enactivists wouldn't want to deny that we sometimes represent things, but this may be done in memory, imagination, or in drawing a picture, etc.. Perception is not about representation, but about the constitution of a world based on what environmental perturbations mean to the viability of the autonomous system. There is no denial of materialism here, just a denial that we have direct access to the mind-independent world (though we might have indirect access, e.g., through physics). The point is that whether or not there are honest-to-goodness objects in the external world has nothing to do with the fact that we perceive objects. Our task as cognizers is not to represent the world as it is, but to survive in a non-random environment. From an observer's point of view, this "surviving" may be done because every time we encounter an x, we do y. However, this relationship between an object constituted in the observer's phenomenal world and the behavior that observer sees in us has no operative role in the mechanisms that enable our own survival. As far as arguments against cognitivism go, this last point can be seen in Maturana and Varela again and again.

    Finally, there seems to be an intuition on the part of enactivists that cognitive mechanisms are better described in terms of complex systems than in terms of computation. My own ignorance prevents me from pointing to specific empirical evidence that might buttress this claim, but I imagine it's not hard to find. There may be a level of description in which computational concepts are appropriate, but the intuition seems to be that this will be an abstract enough level that it will fail to capture all of the relevant effects of the cognitive mechanisms.

    I expect that there are better ways to put these points, and maybe even fashion some of them into arguments. But hopefully they'll suffice for now in the spirit of a friendly examination of why enactivists might believe what they do.

  5. Edit: "An enactivist would probably hold that the truth conditions (if p is indeed truth evaluable) for p will extend beyond the brain/body and into the social/contextual sphere."
    Should read: "An enactivist would probably hold that the truth conditions (if it is indeed truth evaluable) for "S believes that p" will extend beyond the brain/body of S and into the social/contextual sphere."

  6. Hi, Anon,

    Yes, in the first instance, I was taking Mike to task for what I perceive to be some bad history of science. The hypotheses of rules and representations didn't just appear on the scene by assumption. Nor were they simply an unquestioned inheritance from Descartes. Behaviorism was pretty important just prior to the emergence of cognitive science, and it was anti-representational (in some sense).

    And, next, I wanted to indicate that I think that good history of science should exercise caution in claiming that some group of scientists just assume some central empirical hypotheses.

    So, I take it that enactivism, for example, has some empirical motivations somewhere, even though I find them elusive.

    So, what of the empirical motivations for enactivism? The first you mention is that there is no clear dividing line between the minded and the non-minded organisms within the phylogenetic tree. That's here:
    I find it hard to imagine a clear dividing line in the phylogenetic tree, before which there is no mind, and after which there is mind fully formed. I think rather that there is great variation in mindedness, and that even in these lowly little creatures we see the precursors to the kinds of minds that we have. I think that this is all that enactivism claims.

    But, M&V, 1980, p. 13 disagree. They say that all living things have minds. There is this guy John Stewart who apparently claims life = mind. And, in the Preface to Mind in Life Thompson suggests, but immediately backs off, the idea that where there is life there is mind. So, at least some enactivists at least some times appear to put forth a pretty sharp dividing line, namely, it is at the bottom of the phylogenetic tree.

    Now, maybe this is not your view, but it's a view that's out there. Why would one put the dividing line between the minded and the non-minded at the bottom of the tree?

  7. Correlatively, it seems plausible on phenomenological and empirical evidence, that what we call "mind" is a cluster of processes, abilities and phenomena (none of which belong to non-minded things). Traversing the phylogenetic tree we may see a variation in 1) the type of processes, abilities and phenomena, and 2) degrees of each of these processes, abilities and phenomena. Some of these may be well described in computational terms, some may not be. Enactivism attempts to be inclusive in this respect.

    But, here again, cognitivists believe that there are differences among animals in cognitive capacities. Indeed, claims about the differences between chimps and humans regarding linguistic capabilities have been a staple of cognitivism from nearly its inception. And, there has been a lot of empirical research on this within the scope of cognitivism.

    This does not, however, give us reason to embrace (one strand?) of enactivism that appears to maintain that life is sufficient for mind.

  8. But, M&V, 1980, p. 13 disagree. They say that all living things have minds. There is this guy John Stewart who apparently claims life = mind. And, in the Preface to Mind in Life Thompson suggests, but immediately backs off, the idea that where there is life there is mind. So, at least some enactivists at least some times appear to put forth a pretty sharp dividing line, namely, it is at the bottom of the phylogenetic tree.

    Right. I didn't mean to distance myself from that view. The idea was just that minds come in different flavors, some quite different than our own. There are certain properties (e.g., immanent teleology) that only minds have, and that these can be found in the simplest life forms. When I said I couldn't imagine a sharp dividing line, I should have been more specific: I couldn't imagine it somewhere mid-tree, but I'm on board with the idea that the line is at the tree's base. So when I say that in the lowly little creatures we see the precursors to the kinds of minds we have, I don't mean to deny them minds: I just mean that their minds have some, but not all, of the properties that our minds have.

    The claim that life is identical with mind seems absurd to me. Less absurd (plausible, in fact) is the idea that the class of living things is co-extensive with the class of minded things. The life/mind identity claim would have to do more violence to the concepts of life and mind than even most enactivists I've read are prone to :-)
    Do you recall where Stewart makes this claim? I've only recently found his work (introduced to him by the new book on Enaction).

    But, here again, cognitivists believe that there are differences among animals in cognitive capacities. Indeed, claims about the differences between chimps and humans regarding linguistic capabilities have been a staple of cognitivism from nearly its inception. And, there has been a lot of empirical research on this within the scope of cognitivism.

    This does not, however, give us reason to embrace (one strand?) of enactivism that appears to maintain that life is sufficient for mind.


    Right. This is just meant to buttress the claims that 1) there are different flavors of minds (so when we traverse the phylogenetic tree, we shouldn't be so hasty to say that simpler life forms have no mind on the basis that they are different from us), and 2) that plausibly at least some of the capacities that constitute mentation may be more profitably described in terms of chaotic systems than computational systems. But this may be little more than an intuition pump: I don't expect the observation to support much weight. As you say, cognitivists don't deny it!


    Another edit for the Feb 25th post at 11:33am: "Phenomenal structure is different than phenomenal content, insofar (at least) as the latter is specifiable in public language."
    Should read:
    "Phenomenal structure is different than phenomenal content, insofar (at least) as the latter is exhaustively specifiable in public language."
    Even with the edit the point is still contentious.

  9. Stewart J (1995) Cognition = life: implications for higher-level cognition. Behav Processes 35(1–3):311–326

    (I got this from Di Paolo, 2009, Extended Life. Topoi (2009) 28:9–21)

  10. But, return to this
    I find it hard to imagine a clear dividing line in the phylogenetic tree, before which there is no mind, and after which there is mind fully formed. I think rather that there is great variation in mindedness, and that even in these lowly little creatures we see the precursors to the kinds of minds that we have. I think that this is all that enactivism claims.

    As philosophers are wont to say, what's hard for you to imagine isn't very much evidence. Second, it is one thing to say that, say, prokaryotes have precursors to the kinds of minds we have. (That could just mean they have biochemical precursors.) It's another thing to say that the living and the cognizing are co-extensive. (And, at least some enactivists do say this.)

    I'm going to set aside arguments from your imaginative capacities to focus on this last bit about precursors.

    Grant, for the sake of argument, the claim that, say, prokaryotes have the precursors for minds. How does that help you with the conclusion that you seem to hold, namely, that prokaryotes actually have minds?

  11. The argument doesn't look like this:
    Prokaryotes have the precursors of minds.
    ...
    Prokaryotes have minds.

    My claim is not that they have the precursors of minds simpliciter, but rather that their minds are the precursors to the kinds of minds we have. But this claim is not a premise from which you can derive the claim that prokaryotes have minds. Rather, the order is reversed. It must first be shown that prokaryotes have minds, and then we can hypothesize that their minds are the precursors to the kinds of minds we have. A simplified argument for the former might go something like this:
    1. Non-derived meaning <-> minds
    2. Prokaryotes->non-derived meaning
    3. Prokaryotes->minds

    The premises would have to be argued for independently, but enactivism seems to do most of its work for #2 (normativity and purposiveness, arising out of autopoiesis, as the basis of sense-making and original meaning).

    Then we could say that our minds exhibit non-derived meaning, and a whole lot else besides, and this, along with some suitable evolutionary story, would justify the claim that the minds of prokaryotes are precursors to the kinds of minds that we have. And if "precursor" isn't an acceptable word, we can find one that better captures the idea. I don't mean precursor to indicate material preconditions or historical contingencies, but rather something like this: There is some x in the set S of properties that our minds have that is, itself, sufficient for mindedness. Any set of mental properties that includes some subset of S containing x may be considered a precursor for the kinds of minds we have.
    I just spun this definition off the top of my head, so I don't expect it to be airtight, but rather to give the gist.

  12. P.s. Thanks for the reference. I'll check that out.

  13. Hi, Anon, sorry to be slow. My day job calls.

    Ok. This is new and at least looks like a plausible argument:

    1. Non-derived meaning <-> minds
    2. Prokaryotes->non-derived meaning
    3. Prokaryotes->minds


    I am on board with part of premise 1., at least read a particular way, namely, if something is a cognitive process, then it is the manipulation of non-derived representations.

    But, why think that 1. is a biconditional?

    More subtly, in reading the literature, I'm not sure that what the enactivists mean by non-derived meaning is what I mean by non-derived meaning. So, for example, it appears to me that enactivists will say that, say, NaCl has non-derived meaning for humans, but that does not mean the same thing as there being some state in the brain-body-world that has non-derived meaning. "Meaning" sometimes means something like "has value to" versus something like "bears semantic content".

  14. And, the distinction matters, since it is much more likely that prokaryotes have non-derived meaning construed as "has value to" than it is that prokaryotes have (much) non-derived meaning in the sense of "bears semantic content".

    But, Dretske did take a stab at single-cell organisms in "Misrepresentation".

  15. You're right about the distinction, so let's consider non-derived meaning here to be non-derived-value-meaning in the sense of "has value to". I take it that if we accept this interpretation, you find 2 plausible, but then find 1 implausible.

    You're right that the biconditional is stronger than the argument requires. The most plausible claim is that if x has a mind, then something y in x's environment will have non-derived-value-meaning (NDM) for x. However, the argument requires the reverse, prima facie less plausible claim: If there is some y with NDM, then there is some x with a mind.

    The argument then becomes:
    1. NDM -> mind
    2. Prok -> NDM
    3(C). Prok -> mind

    The most plausible way I can think of (at the moment) to argue for #1 is this:
    i. NDM -> agent
    ii. agent -> intentional content (IC)
    iii. IC -> mind
    iv(C). NDM -> mind

    This means that (i) if y has non-derived-value-meaning, there is an agent x for which y has non-derived-value; (ii) all agents can have intentional content truly attributed to them; (iii) if intentional content can be truly attributed to x, then x has a mind; (iv) therefore, if y has non-derived-value-meaning, then there is some x with a mind.

    I take it that in this argument, i and ii require the most additional support. I think I may have some ideas about how to support these.
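    For what it's worth, the validity of i-iv (setting aside the truth of the premises) is just chained implication. A Lean sketch, treating NDM, Agent, IC, and Mind as bare propositional variables with no commitment to their content:

    ```lean
    -- i-iii yield iv by composing the implications;
    -- nothing about the premises themselves is assumed here.
    theorem ndm_to_mind (NDM Agent IC Mind : Prop)
        (i : NDM → Agent) (ii : Agent → IC) (iii : IC → Mind) :
        NDM → Mind :=
      fun h => iii (ii (i h))
    ```

    So the real work, as I say, is in defending i and ii, not in the derivation itself.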

  16. An enactivist might take a shortcut, leaving aside non-derived meaning entirely:
    a. Prok -> Agent
    b. Agent -> IC
    c. IC -> Mind
    d. Prok -> Mind

    Barandiaran, Di Paolo and Rohde have a paper Defining Agency: individuality, normativity, asymmetry and spatio-temporality in action (Adaptive Behavior, January 2009), that would go some way toward supporting a.

    We'd still need a story about intentional content that would buttress b. If you accepted a, you might not accept b if you thought that intentional content requires proposition-like representations on the part of that entity for which the intentional content can be truly attributed.

  17. I have to make a correction to the logic above.
    I didn't want to imply that agents were sufficient for intentional content. As is evidenced by my explanation, agents PLUS attribution practices by an observer community are both necessary for intentional content. I think the details can be glossed over if we adjust every claim that says
    Agent -> IC
    to read
    (Agent & O) -> IC
    Where O is the condition about the observer community.
    NDM does not imply O.

    Note also that where I use non-derived meaning (NDM) in the last comment, it should be understood as non-derived-value-meaning only. I understand that the standard understanding of intentional content is that it is non-derived. Here, NDM would be necessary for IC, but not sufficient. IC would inherit its non-derivedness from NDM.

    Hopefully the logic works now and is in line with my intentions.
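    In Lean-style notation (again, bare propositional variables only), the adjusted shortcut argument goes through, but only with O as a standing extra premise; Agent alone no longer gets you IC:

    ```lean
    -- With b weakened to (Agent ∧ O) → IC, deriving Prok → Mind
    -- requires O as an additional hypothesis.
    theorem prok_to_mind (Prok Agent O IC Mind : Prop)
        (a : Prok → Agent) (b : Agent ∧ O → IC) (c : IC → Mind)
        (hO : O) : Prok → Mind :=
      fun hp => c (b ⟨a hp, hO⟩)
    ```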
