Monday, November 8, 2010

One reason for thinking smart mechanisms likely?

For at least two reasons the likelihood of smart mechanisms in perception should be considerable. One is that the basic tasks of perception, and the information available for them, are stable properties of the organisms and the environment, respectively. It therefore seems appropriate that they have been solved through "invention" (evolution) of smart mechanisms. Many of the tasks require more or less continuous operation, which also favors smart solutions. (Runeson, 1977, p. 174).
I don't get this. So, let's say that perception is for gathering information about the stable properties of organisms and the environment. (I think this is what Runeson means, but there appears to be some typo or something in the second sentence.) Why does that make smart mechanisms more likely? And let it be the case that many tasks require more or less continuous operation. Why does this make smart mechanisms more likely? I don't get the connections between these features of tasks and the nature of the mechanisms. It just seems like a non sequitur to me. Why are neural circuits any less probable a mechanism for handling this?

Runeson, S. (1977). "On the Possibility of 'Smart' Perceptual Mechanisms." Scandinavian Journal of Psychology, 18, 172-179.

21 comments:

  1. So, let's say that perception is for gathering information about the stable properties of organisms and the environment.
    Not quite right; he's saying that 'the information available' is a stable property of the environment. What's the typo?

    And yes, the point is under-justified. But I think the suggestion is that smart devices solve reliable problems more efficiently than rote devices; the problems of perception are reliable; evolution favours efficiency; evolution should favour smart perceptual devices.

    There's plenty of work required to justify all this in the particular case of perception, of course, but nothing in there leaps out as flat out false. The planimeter is an efficient solution compared to the rote alternative, for instance.
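
    To make the contrast concrete, here is a rough toy sketch (my own example, not from the paper): the "smart" route reads a polygon's area straight off its boundary, the way a planimeter does, while the "rote" route grinds through a huge number of simple point-by-point queries.

    ```python
    # "Smart": exploit the structure of the task; the shoelace formula gets the
    # exact area of a polygon from its boundary alone.
    def shoelace_area(vertices):
        n = len(vertices)
        return abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                       - vertices[(i + 1) % n][0] * vertices[i][1]
                       for i in range(n))) / 2

    # "Rote": sample the plane on a fine grid of simple inside/outside tests and count hits.
    def grid_area(inside, xmax, ymax, step=0.01):
        hits = sum(inside(i * step, j * step)
                   for i in range(int(xmax / step))
                   for j in range(int(ymax / step)))
        return hits * step * step

    triangle = [(0, 0), (4, 0), (0, 3)]
    print(shoelace_area(triangle))                             # exactly 6.0, from 3 vertices
    print(grid_area(lambda x, y: 3 * x + 4 * y <= 12, 4, 3))   # ~6, from 120,000 point tests
    ```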

  2. "One is that the basic tasks of perception ... are stable properties"

    or is it

    "the basic tasks ... and information available for them .. are stable properties"

    The first is ungrammatical and the second doesn't make much sense. A basic task is a stable property? I would have thought the information would be a stable property.

    This doesn't strike me so much as false as it does simplistic. Runeson seems never to entertain the possibility of there being trade offs in evolution. He always seems to focus on efficiency in one task. So, it might well be beneficial to be good in one task, but specialization for that task might be detrimental to the performance of another task.

    So, for example, humans might not be optimally designed for walking, because of a trade off. The pelvis might have to be designed, not just for efficiency in walking and/or running, but for enabling children's bodies to pass through the birth canal.

    So, here you have an easy reply. Point out where Runeson entertains the possibility of these kinds of trade offs.

  3. Ok. The first is grammatical. I don't know what I was thinking. Still doesn't make much sense.

  4. It's not the best sentence, no :) The 'respectively' is key, but I had to read it a couple of times too :)

    Trade-offs are a good point. However, we're talking about perceptual systems, not skeletal structure, here; perceptual systems are much more dynamic and shapeable by experience, even within the lifetime of the individual. Local minima might be escapable with, say, training, because that training can be a sufficiently stable and strong pull (and actually the ecological learning literature goes along with that idea, I think).

    (So, to be more accurate, what I should have said is that evolution should favour the capacity to form smart devices.)

    I'm not disagreeing that the paper is thin; I just think that as an example-of-concept paper designed to get people thinking along a particular line, it does the business.

  5. Ok. So, we can agree that the sentence is less than perfect. (And, that was not really the main issue for me.)

    I'm glad we can agree that the matter of trade offs is an issue.

    I'm also glad that you accept the skeletal structure example.

    So, let me offer an example of what appear to be trade offs in the human visual system. Mainstreamers often say that the human visual system is duplex. It involves one set of components for photopic, high-resolution color vision and one set of components for scotopic, low-resolution monochromatic vision. There appears to be a trade off here, since to get high resolution one needs densely packed cells with small receptive fields having little fan in to the next layer. By contrast, to get low light vision, one needs lots of fan in. There are many facets of this duplex system. So, even with vision we have to be careful about trade offs.

    (I can't think of examples in other areas of perception because I don't know much about vision and I know a lot less about other areas.)
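
    To put toy numbers on the fan-in trade off above (a back-of-envelope sketch with made-up values, not real retinal physiology): pooling many noisy receptors into one output averages the noise away, which helps with a faint uniform light, but it averages a fine light/dark pattern away too.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                          # receptors converging on one output
    noise_sd = 1.0

    def responses(stimulus):
        return stimulus + rng.normal(0.0, noise_sd, n)

    dim_light = np.full(n, 0.3)                      # faint, spatially uniform stimulus
    grating = np.where(np.arange(n) % 2, 1.0, -1.0)  # fine alternating light/dark pattern

    # High fan-in (pool everything): noise shrinks by sqrt(n), but so does the pattern.
    print(responses(dim_light).mean())   # ~0.3 +/- 0.1 -> the faint light is detectable
    print(responses(grating).mean())     # ~0.0 +/- 0.1 -> the pattern has vanished
    # Low fan-in (each receptor reports alone): pattern preserved, faint light lost in noise.
    print(responses(dim_light)[:3])      # individual values swamped by +/- 1.0 noise
    ```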

  6. That's a fair example of a trade off, and let's face it, anatomy and physiology are riddled with them.

    But it's not where I was heading; I was thinking of visual perceptual learning (changes in the ability to discriminate information). It's a fairly straightforward empirical fact that this sort of change is ubiquitous and driven by experience. While there is certainly a global 'duplex' structure to the retina, functionally the broadly conceived visual perceptual system (of which the retina is just a part) is dynamic and flexible. Trade offs in one place can be offset in another; if I learned anything useful from studying neural networks, it was that the best way to get out of a local minimum defined in an n dimensional system was to move to an n+1 dimensional system. The perception/action system is in principle high dimensional; these dimensions get temporarily frozen out to perform specific tasks, making the problem solvable, but the system has a lot of wiggle room.
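
    Just to make that local-minimum point concrete, here is a toy sketch (my own made-up landscape, nothing to do with real perceptual systems): a function with a local minimum that plain gradient descent gets stuck in when restricted to one dimension, but which becomes a saddle point, and hence escapable, once the same problem is lifted into two dimensions.

    ```python
    # f(x) = (x^2 - 1)^2 + 0.3x has a local minimum near x = +0.96 and a global
    # minimum near x = -1.03. Lifted to F(x, y) = (x^2 + y^2 - 1)^2 + 0.3x, the
    # local minimum becomes a saddle, so descent can slide around it.
    import numpy as np

    def grad_1d(x):
        return 4 * x * (x**2 - 1) + 0.3

    def grad_2d(p):
        x, y = p
        r = x**2 + y**2 - 1
        return np.array([4 * x * r + 0.3, 4 * y * r])

    def descend(grad, p, steps=20000, lr=0.01):
        for _ in range(steps):
            p = p - lr * grad(p)
        return p

    print(descend(grad_1d, 0.9))                    # stuck near x = 0.96 in 1D
    print(descend(grad_2d, np.array([0.9, 1e-3])))  # escapes in 2D, ends near (-1.03, 0)
    ```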

  7. Ok. So, more agreements regarding trade offs. Good. Let me try to push you a bit further.

    Neural networks seem to me to be like rote systems, if you allow the units to be "programmable". So, Runeson's distinction and the case for it seem to be breaking down a bit.

    "Rote instruments consist of large numbers of a few types of basic components, each of which performs a rather simple task. The accomplishment of complex tasks is possible through intricate interconnections (programming) between the components. The important principles of operation reside in the program, and by changing the program the instrument can be put to different uses. New problems can be approached in a straightforward, intellectual, bureaucratic, "systems", manner.

  8. By neural networks, do you mean biological ones or the toys psychologists like to play with?

    I think these are mostly rote. You might lock a formed network down so those neurons/units can't be recruited into something else, and thus you might end up with a smart device that does one thing and nothing else. But you get that if you remove the programmability of the units; while that remains, they do indeed seem rote.
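
    To pin down what I mean by "lock", here is a minimal toy (a single logistic unit I just made up, not anyone's model of real neurons): while it is programmable, the same simple components can be re-trained to do different jobs; once the locked flag is set, further training does nothing and the device keeps doing the one thing it was shaped to do.

    ```python
    import numpy as np

    class ToyNet:
        def __init__(self, n_inputs):
            self.w = np.zeros(n_inputs)
            self.b = 0.0
            self.locked = False

        def predict(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

        def train(self, X, y, lr=0.5, epochs=5000):
            if self.locked:              # a locked net ignores any further programming
                return
            for _ in range(epochs):
                err = self.predict(X) - y
                self.w -= lr * (X.T @ err) / len(y)
                self.b -= lr * err.mean()

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    net = ToyNet(2)
    net.train(X, np.array([0, 0, 0, 1]))   # programmed to compute AND
    net.locked = True
    net.train(X, np.array([0, 1, 1, 1]))   # attempted re-programming as OR: ignored
    print(net.predict(X).round())          # still AND: [0. 0. 0. 1.]
    ```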

  9. Actually, I think real neural circuits are "programmable" in a metaphorical sense at least. And the typical toys are programmable, though as you say they can be "locked".

    But, even the locked toy networks still show how messy the Runeson smart/rote dichotomy is, since even these have features of both. They could be non-programmable and "specialized" on one task, but "consist of large numbers of a few types of basic components each of which performs a rather simple task".

    Zhu and Bingham look to me to have made the smart move of making "smart" a simpler, cleaner concept.

  10. But the suggestion is that once locked down, they are no longer programmable and thus you should evaluate their function at the level of the device, not the individual parts. The device has ceased to be rote and has become smart.

    Being one thing at one time, and then being changed into another thing at another time, doesn't mean you're still the first thing. Essentialism isn't trendy any more, I don't think.

  11. "But the suggestion is that once locked down, they are no longer programmable"

    Agreed.

    "and thus you should evaluate their function at the level of the device, not the individual parts."

    I don't see that this follows.

    "The device has ceased to be rote and has become smart." Well, that might be something of an overstatement. It has become more "smart-like", having lost one property of rote mechanisms. One still has to have a locked neural network that is "specialized". I think my basic point still stands. The smart/rote distinction, having many separable features, is not clean.

    The only reason to lock a network's parameters is to make it specialise in the one thing enabled by those parameter settings, and to allow it to stay specialised. There's then no point in analysing it in terms of its rote components, because their individual properties no longer contribute to the system's behaviour.

  13. "The only reason to lock a network's parameters is to make it specialise in the one thing enabled by those parameter settings, and to allow it to stay specialised."

    I don't see this. Locking connection weights, or whatever, only stops it from changing. It doesn't specialize it. So, if the net is doing something very general before locking, it's going to do something very general after locking, right? Of course, if it is specialized before locking, it will be specialized after locking.

    So, locking versus specialization seem to me to be orthogonal properties.

  14. If you can only do one thing, you're specialised for that one thing. Specialised doesn't entail being any good.

    I'd need an example of doing something general that you could lock down before I thought that was much of an actual counter example.

  15. I guess this goes back to what it means to be specialized. I would never have supposed that the human hand is specialized. Maybe if we could only perform a precision grip, rather than a power grip, then I could see it.

    You might say that the hand is specialized for grasping, but heck one can use the hand for many other things. One can flip someone the bird, one can snap one's fingers, one can scratch one's back, one can strum a musical instrument, play the bongos, use them in swimming,....

    What is this human hand specialization?

    There are two aspects here: what the hand evolved to do, and what besides that primary function the evolved form affords. So the human hand did not evolve to allow bird flipping, but it does afford this by virtue of design features specialised for prehension. In fact, it's important to remember flipping the bird and playing the guitar came second: they are the way they are because the hand is the way it is, not the other way round.

  17. So, is the hand specialized for prehension?

    And is the idea that X is specialized for Y means that X is adapted to do Y?

  18. Yes the hand is specialised for prehension. I think that means it has successfully adapted to do prehension; that seems unproblematic although I'm sure there's some classic counter-example floating in the wings.

  19. Ok. So, this is some progress, if "specialized for" means something like adapted for. But, I will want to read through the Zhu and Bingham stuff to see whether this pans out.

    This reading papers to find things out is a lot of work. Every one you read invites two others to read.

  20. Is there some account of what vision is specialized for?

  21. This reading papers to find things out is a lot of work. Every one you read invites two others to read.
    Yes, I know, sorry :) There's a bunch I want to blog in some detail but it all takes time to do properly.

    Vision is, of course, complicated. Hands are simpler because they are much more obviously specialised hardware solutions to the various problems entailed by having to control something this flexible. Ecologically, a starting point might be that vision is specialised for the differentiation of information about affordances from light; but that's obviously contentious. Maybe a more neutral version is 'extraction of information about the world from light'? Then the ecological one is a specific implementation of that specialisation.

    I can't think of any specific accounts of what vision is specialised for. But of course there are different ways of conceiving vision: is it the transduction of light? The active exploration of the optic array? And so on.
