Wednesday, November 04, 2009

Sequences: linear and differentiating

A sequence of instructions of the kind typical in, say, a computer "howto" allows a few values to be set and a few choices to be made, but it is essentially one long instruction with parameters:

1. do this
2. do that
3. name this
4. choose between this and that
...

Let's call this a "linear sequence".
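
In code, a linear sequence is just a script: a fixed series of steps whose output has about the same shape and size as its description. A minimal sketch in Python (the step names and parameters are invented for illustration):

    # A "linear sequence": one long instruction with parameters.
    # Each step runs once, and the output mirrors the script,
    # step for step.
    def install(name, use_defaults=True):
        steps = []
        steps.append("unpack " + name)           # 1. do this
        steps.append("configure " + name)        # 2. do that
        steps.append("label as " + repr(name))   # 3. name this
        if use_defaults:                         # 4. choose between this and that
            steps.append("apply default settings")
        else:
            steps.append("apply custom settings")
        return steps

    print(install("widget"))  # four steps in, four results out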

Let's contrast this with the sequence that a living organism undergoes during its growth and development: i.e. morphogenesis.

Each step in a sequence of this sort sets the stage for multiple parallel results that follow, and for each of these steps, the number of effects multiplies. This allows the cells or parts of an organism to differentiate further at each step, along a morphogenetic gradient that increases the complexity of the structure, and moves it closer to a specific purpose in the general scheme.
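
Here is a minimal sketch of that multiplication, assuming a toy model in which each "cell" splits into two more specialized descendants at every step (the names are invented for illustration):

    # A "differentiating sequence": each step applies to everything
    # produced by the previous step, so effects multiply each round.
    def differentiate(cell, steps):
        if steps == 0:
            return [cell]                    # fully specialized part
        # one division step: two more specialized descendants
        return (differentiate(cell + ".a", steps - 1) +
                differentiate(cell + ".b", steps - 1))

    print(differentiate("zygote", 3))
    # 1 starting cell, one small rule applied 3 times, 8 results:
    # ['zygote.a.a.a', 'zygote.a.a.b', ..., 'zygote.b.b.b']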

Note that each step's description is small, at any stage, relative to the size of its effects. This disproportion, while not infinite in the case of a human body, is still vast, and reminiscent of human language, which generates infinite variety from finite means. The same is true of the variety of life generated by a small amount of DNA.

I assume that Chomsky is right that the small but powerful definitions of recursive enumeration account for this disproportion in all three cases (morphogenesis, diversity, language).
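
A minimal sketch of what recursive enumeration buys, using a toy grammar invented for illustration: a few short rules, one of which refers back to the start symbol, are enough to enumerate sentences without bound:

    import random

    # A tiny grammar: finite rules, unbounded sentences. The rule
    # "S -> S and S" refers back to S, so the definition is recursive.
    # (The recursive option is one choice among three, so random
    # generation terminates quickly.)
    GRAMMAR = {
        "S":  [["NP", "VP"], ["NP", "VP"], ["S", "and", "S"]],
        "NP": [["cells"], ["organs"], ["words"]],
        "VP": [["grow"], ["divide"], ["combine"]],
    }

    def generate(symbol="S"):
        if symbol not in GRAMMAR:
            return [symbol]                  # terminal word
        expansion = random.choice(GRAMMAR[symbol])
        return [word for part in expansion for word in generate(part)]

    print(" ".join(generate()))  # e.g. "cells divide and words combine"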

The question for computing is: if this is so clearly effective, why do we do anything else? Why do we use "linear sequences" and simple instructions when "differentiating sequences" and recursion are massively more effective? And, it seems, more natural.

Part of the problem is that, perhaps through indoctrination, we don't automatically see recursion as natural, at least as it is generally represented in notation. Yet I think that deep down we think recursively; it is possible that this is because we are products of recursion, and use it physically, unconsciously, to perform mental and linguistic tasks.

We can often see patterns where recursion would apply, but our initial reaction (unless we do a lot of work with LISP or YACC) is to approach the problem as a linear path. This gets very complex for hard problems, so we lean on solutions provided by others, who have often found the patterns and reduced them to recursive procedures in toolkits and frameworks. But we still don't actually approach a problem as a graded, differentiating sequence of recursive, grammar-driven resolutions.
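
As a small illustration of the contrast, consider listing the titles in a nested document, sections within sections (the data shape here is invented for illustration). The linear reflex handles only the depth its author anticipated; the recursive habit states the pattern once:

    # A document: (title, [children]), nested to arbitrary depth.
    doc = ("book", [("ch1", [("sec1.1", []), ("sec1.2", [])]),
                    ("ch2", [])])

    # The linear reflex: one loop per level of nesting we anticipated.
    def titles_linear(doc):
        names = [doc[0]]
        for chapter in doc[1]:
            names.append(chapter[0])
            for section in chapter[1]:
                names.append(section[0])
        return names        # silently ignores anything deeper

    # The recursive pattern: stated once, any depth takes care of itself.
    def titles_recursive(node):
        name, children = node
        names = [name]
        for child in children:
            names.extend(titles_recursive(child))
        return names

    print(titles_linear(doc) == titles_recursive(doc))  # True, at this depth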

If this is right, the solution would be to create an artificial language whose recursive representations are closer to the way we think innately, and then to work on problems in that language, so that we learn a mindset and sensibility in which we automatically spot the recursive principles and switching parameters required to cast our imagination into a software product.

I don't think this is lambda calculus (i.e. LISP), or attribute grammars (i.e. YACC), or anything very much like formal logic in its current state. In some sense, when Frege made his point -- that effective problem-solving can be done without concern for the way we think -- he set us up for a century of cognitive pain. That said, it's hard to see how he could have done anything else, because discovering the "way we think" is still a woefully distant scientific goal today.

A method that helps can be found in Alexander's work: when we worked together, he always asked me to use feeling, explicitly, as a shortcut to good results in the realm of computing. This, perhaps not strangely, also works when approaching the structure of innate cognition. In my work designing artificial languages, starting in the early 1980s, I listened carefully to engineers expressing their thoughts within the problem domain, and created computer-recognizable notations that required only limited transformation effort on their part: they could write what they thought, and what they wanted the machine to do, within that limited domain.

Finding examples of cognitively natural patterns of recursive enumeration in computing, patterns that feel good, is what I would like from people. I'm finding some of my own, but we need many examples before we can tell what this new, more human language of computation should look like.
