Saturday, January 12, 2013

The Science of Common Sense

How do 'things' really work? When people begin to answer that question, on various topics, for themselves, it should be uncontroversial to say that they are using their minds. Let us, for the purpose of this discussion, sidestep the common-sense term mind (more on that later), and say that we're discussing the brain, part of our biological endowment. Since people are not conscious of most of what goes on in their bodies, they are similarly not conscious of most of what goes on in their minds. So when they begin to think "How does the mind really work?" most of the actual work that goes into forming and answering this question is work they are doing naturally, work they are unaware of -- it is unconscious work. In the same way that you are not conscious of the mechanism that allowed you to digest breakfast, you are not conscious of how you learn, how you think, how you speak, how you communicate. These things are inaccessible to introspection (thinking that they are is an ancient mistake, mostly associated with John Locke). This is not because you're being insensitive. It is because these are obscure mechanisms, and uncovering them is a hard biological problem, harder than understanding, say, how we digest breakfast, which is itself only roughly understood. This conversion of something we think we know into real questions about how things work, the struggle to know the answers, and even to consider what form the answers may take: this is science. It is obviously very far from common sense, on any topic. Science turns everyday, commonsense things into mysteries.

So, we may think that, say, we communicate by "agreeing on the meaning of terms", because that's a high-level description of something commonsense that we do every day. But if you try to apply this to the actual mechanism with which we communicate, our minds, it is simply not true. We can communicate only because we are born with essentially identical mechanisms that see the world and assign meaning to it in nearly identical ways. The assignment of a term to these meanings in the head happens naturally, not by agreement, and the method of assignment is so highly determined that the network uncovered by fMRIs of our brain, for the meaning of an isolated idea or word, is essentially the same across languages and individuals. We did not learn our language by "agreeing on the meaning of terms", sensible as this sounds, but instead by the natural biological development of our language faculty as we grew up. The "assigned terms" are almost never consciously "assigned", and those which we do assign are the most trivial part of what we are doing when we are thinking or speaking in a language. Meaning is, instead, highly determined by our biology. Not only does every language have noun phrases, verb phrases, nouns and verbs, etc., but when we say screwdriver, in any language, the same network in our brain lights up, which means that we really can only understand a screwdriver in one way. Our interaction with the world is highly species-specific (the conceptual overlap with other animals is still unknown), and this is what allows us to communicate.

It is also what allows us to learn. So, we do not learn by "repeating the same thing over and over". That's "common sense" because this is something we do to force ourselves to remember something under pretty unusual circumstances. But actual learning and memory do not operate in this way. There are parts of our brain which are biologically structured to unconsciously learn certain things in particular ways, and that's most of what we do. When we learn something about an "object", our brain has somehow already divided the world into "objects" and "predicates" so that we could learn anything at all. And that's just the tip of the iceberg.

While these ideas on cognition have recently started to stare us in the face, thanks to current neural imaging technology (for example, the work of Marcel Just), they first resurfaced in the linguistic work of Noam Chomsky and the cognitive science revolution of the late 1950s, which was inspired partly by all the talk in the '40s and '50s of the computer as a "giant brain" ... which went hand-in-hand with discussion of the brain as a "giant computer". From this perspective, "common sense" was about as useless in helping us determine what "memory" is as it is in helping us figure out what an "atom" is.

These issues have been discussed for thousands of years -- Plato knew there was innate knowledge (see the dialogue known as "Meno"); Galen knew that the brain was the seat of consciousness, and Nemesius that different parts of the brain provided different functions; Proclus knew that ideas had some kind of fundamental structure; Basil of Caesarea knew that the meanings of words were notions in the mind rather than the structure of things in the outside world.

These views of the mind as highly structured are often pushed aside for "common sense" or "pragmatic" views of the mind as a Tabula Rasa: a blank slate, or a network of associations among identical elements, which mysteriously takes impressions from the outside world. Nothing like this has ever been found in a living creature, and yet the idea recurs many times, sometimes for engineering reasons (see the current state of computer engineering and its machine learning sub-field) and sometimes because the theories of obscure qualities underneath common sense were often badly invented and not too useful. Celsus, reviewing the state of medical theories at about the time of Tiberius, says that while you need to think hard about what you're doing, and talk to as many other doctors as possible about underlying causes, experience seems to trump the theoretical side of medicine. In the 18th century, the great Scottish Enlightenment physician William Cullen humbly proclaimed that, while we must search for the causes of disease, we must also realize that we are very badly equipped to do so, and that our imagination can often leap well beyond any grounding in reality. In physics, this was well known: Newton "framed no hypothesis" because he had reached the limit of what could be reasonably said about gravity. He had a theory, of course: he had modeled what he could. But anything else would be invention.

Unfortunately, Tabula Rasa is a theory too. And a fanciful one at that. It has been demonstrated, repeatedly, to be useful only as an engineering tool, not as a biological model. It may be common sense, but, as we've seen, that doesn't make it science.

Beyond being discarded as a tool for understanding, though, there may be another role for common sense in science.

The fact that we, and other creatures, have a common sense is critically important. The structure of common sense, while complex, seems to have patterns, which we can begin to examine empirically. In the same way that modern linguistics used natural evaluation ("is this sentence grammatical?") as the basis for experiment, we can perform non-invasive tests upon ourselves to find the structure and contour of the faculty of common sense. This can be backed up by fMRI studies on the structure of common sense, so that we don't get lost in fanciful theoretical discussions (including those dead-end Tabula Rasa ones ... fMRI could have stopped behaviorism at a very early stage).

We could, in this way, also study what we think of as mind, in the common sense notion, including not just the structure in our brain that represents this word 'mind', but also the way we think about our abilities, methodologies, et cetera.

Beyond that, of course, is one of the central concerns of this blog, namely: what part of common sense is closely tied to parts of the laws of nature? Although on a simple level the answer is "everything", what I mean is: which part is genetic, which part is epigenetic, which part was shaped by selective pressures, which by biophysics, and which part of our understanding, feeling, common sense and other natural endowment is related to structure in nature that we are just beginning to study (for example, in the Evo-devo movement in biology)? Although we know something complex is behind all this, and that we don't understand much of it, we can't really approach these questions in an organized way until we start to understand the structure of common sense in our mind, along with our other faculties such as imagination, memory, feeling, etc.

Christopher Alexander's work represents a pragmatic waypoint in this scientific project: a point where we can acknowledge that something is going on (i.e., the evident structure of things with feeling is different from that of things which lack it), that we can use it (use feeling and a good unfolding sequence to create something with feeling), and that we need to outline a scientific program to, someday, actually uncover what is going on.

Sunday, December 23, 2012

Mechanical Philosophy and The Nature of Order

Noam Chomsky often writes that the mechanical philosophy of the 17th century, i.e. "the universe as clock", i.e. "contact mechanics" (obviously influenced by clocks, automata and strides in industrial technology), inspired late Renaissance science, but was destroyed around the end of the 17th century by Newton, who could not find a mechanical explanation for the force of gravity; we have not bettered him to this day. Beyond describing the mathematical laws -- which gravity, for some reason, obeys -- Newton said he would "frame no hypothesis". He didn't want to invent anything or idly speculate. This means that mechanical explanations, perhaps among the most natural for humans, are insufficient to explain the underpinnings of the world. Any first-year physics student learns this today. Our modern models are like Newton's laws of gravity: mathematical and abstract, because we cannot even begin to model the underlying mechanism.

Christopher Alexander has gone to great effort to uncover a geometry, a morphology, a developmental pattern of life (a technical term he introduces) -- this is explained, to his current level of research, in the book The Nature of Order. This book does not use a mechanical worldview, nor a mechanical criterion for intelligibility and explanation. It is 'natural', but not with the naturalness of the human mechanical view, which is innate and easy to understand, but ultimately wrong. Alexander presents a natural criterion for the quality of life that is harder to grasp at first, but one on which, with training, it is possible to get agreement in brand-new cases ... which means that it is objective to humans, and so taps something innate within us. I say "innate" and not "genetically determined" because it's unclear what the role of genetics is in Alexander's perception of this quality of life. Chomsky would say that these qualities are just as likely to come from a "third factor", i.e. laws of nature, possibly unknown laws. That is, the coherence that Alexander describes might be something that natural law, rather than genetics, determines for the organism.

But of course, we don't know where human 'life-recognition' abilities lie. They may be partly or mostly epigenetic; they may not be. This is a scientific question, once Alexander's qualities of life are well enough understood to pose specific, partial experiments to determine where in the human organism these perceptions might originate. It is a very difficult project, well beyond the current frontiers of research. But it will be important to consider these qualities in human biological and cognitive simulations in the future, and I have a feeling that, if one considers them to be born of 'third factors', they could change the structure of human simulations, of the hypothesized computational abstraction of the human machine, so that Alexander's unfolding mechanism, and his structure-preserving operations, may be central.

Sunday, March 21, 2010


After many less interesting attempts to make practical engineering use of Chomsky's rationality and Alexander's results, in late 2009 I stumbled upon a real winner. The accompanying posts can be found at the grogix blog.

Monday, March 15, 2010

Depth and Innate Judgements

I moved this essay here from the Grogix Blog.

Which mental faculties are involved in programming? If we knew more about them, could we invent programming languages that were both easier to use and more powerful?

This is not a well-studied area, and very few aspects of programming languages, old or new, are evaluated formally for the quality of their "human factors" or "user experience". For example, among the features of artificial languages, there has been no attempt to separate experimentally those features that are cultural artifacts, from those that reflect our innate, genetically-endowed mental capacities.

Clearly, much of the mental functionality required to program is shared with our faculty of language, a statement that appears to be true also of our capacity for symbolic manipulation in mathematics and music. To find out more than this, however, we need to construct some investigative theory.

Linguists, in order to investigate a native speaker's unconscious knowledge of language, will construct experiments that use the speaker's unconscious judgment. For example, a speaker (who could be the linguist) is presented one sentence, and asked whether or not it is grammatical. The subject doesn't usually know why, but only knows that it either is grammatical, or isn't, to some degree. In another case, the speaker may be presented two sentences and asked if they mean the same thing.

Throughout our daily lives, we make judgments about quality. Sometimes we judge something "by itself" for "some quality", perhaps relative to our experience, perhaps with our innate judgment. Sometimes we compare things to each other, as in visual preference surveys, to determine whether one has "more" of "some quality" than another.

We also rank things, and I'd like to differentiate these temporarily into two additional categories, because this points to an abstraction:

1) A rank relative to some quality
2) A rank relative to a sense of order, or form

One can imagine a linguistic study of the construction of a line in a poem, where two words are compared with each other to see which is "more profound", say, or "more juicy". This substantive judgment is of type (1). One can also imagine a question like: "is it better to put the first part of the sentence in front of the second, or to reverse them?" This is an appeal to an innate judgment about order, and so would seem to fall into category (2); however, it is also a comparison between two sentences, or possibly among multiple lists of items, and so really falls into category (1), using "good order" or "good form" as the judgment quality. This judgment is suspiciously grammatical in nature, even though it could easily apply to order outside the realm of human language, say, to the order of the steps in building a table -- a point we'll return to later.

The action in (2) gives us an inductive procedure to build something larger, in pairs, lists, forms, etc. It sounds like Aristotle's use of a hierarchy of forms, or formal cause, to resolve the question of whether the "Ship of Theseus", whose boards are all gradually replaced, is still the same ship. Aristotle's answer was that the form is all that is actually important ... although we have a constitutional habit of stopping the subdivision of form at certain 'substances'. If you believe that you are the same person you were ten years ago, then you agree with Aristotle, because your body's materials are completely replaced in that time.

For reasons I won't go into right here, (1) is also really (2). All judgment is a judgment of form. Let's accept this Aristotelian position for a moment. It allows Christopher Alexander to come into our discussion of evaluation procedures, which we'll need in order to discover better forms of programming languages.

In A Pattern Language, which inspired the Design Patterns movement among programmers, Alexander puts patterns into a natural order, and also marks each pattern's quality, according to its level of profundity. In the first book published in the same series, The Oregon Experiment, he also made maps of the University of Oregon campus to determine which areas worked best. At the time, he was simply using intuition, feeling and group agreement to determine these qualities. But he was pursuing more.

He was pursuing a procedure of innate judgment, an experimental procedure, and a creative tool, that would allow someone to pursue greater depth of quality in form.

Published over 20 years after A Pattern Language, Alexander explained his findings at length in the series The Nature of Order.

He describes 'depth of quality' in many ways, and, humans being linguistic communicators, these descriptions are themselves part of teaching the method of finding increasing depth. To boil down his descriptions: things that are truly good have a living quality, which they possess because they unfolded through an ordered, stepwise differentiation, producing a field of strong centers (itself a center) that are ordered and mutually reinforcing.

Clearly, biological organisms fit this description. And human artifacts can too, if people follow a natural process to create something complex, coherent and full of life.

The use of feeling as an evaluation procedure has the effect, Alexander finds, of eliciting profound work. Most human creative work of a high caliber already uses this procedure. He believes it produces certain identifiable properties in the results (see The 15 Properties in the first volume).

In the fourth volume, called The Luminous Ground, he goes even further to describe levels of identification with centers, which act as evaluation procedures that will produce increasing depth of quality:

1) I find it pleasing.
2) I feel related to it.
3) I feel this relationship in me.
4) It touches the core of me.
5) I begin to identify with it.
6) Its spirit and mine are intertwined.
7) Its spirit and mine are of one substance.
8) I am in it, and it is in me.
9) I am it, and it is me.
10) We are, together, part of the universal I.

(Note that these are my quick summaries of Alexander's work, but he and I worked together for a few years, so I don't think I'm too far off. Still, I highly recommend studying the original books, whose nuance is quite important.)

The point here is to obtain some descriptive adequacy of the phenomena of "comparative quality" used by people. I have not begun to explain it, and so there is no explicit theory here with any explanatory adequacy. One does not need to, for example, "believe" in the "universal I" to understand that there are innate human capacities revealed by these procedures.

Alexander uses these evaluation procedures not just for good design, but to create good tools for good design. His primary tool in the past few decades is the sequence, a set of ordered steps which one follows in order to design something good -- if the sequence is good. Each step applies to everything in the design that is appropriate, and so the sequences tend to elicit critical boundaries and large structures first, followed by steps that create increasingly small detail.

Each step takes a lot of work to get right: it needs to make sense in itself, it needs to relate to the steps around it in the sequence, and it needs to be in the right place in the sequence. It must be evocative, inspiring, but not overwhelming. It must be declarative, rather than instructive: "There are beautiful, alternating spaces in the fence" rather than "put alternating spaces in the fence". Most importantly, the sequences need to be tried and debugged, until they produce wonderful results.

In this paper, I tried to apply these evaluation procedures to the smooth unfolding of a single program, working on it over and over, until I was consistently satisfied that the unfolding of the sequence, for this one program, was good.

But this doesn't easily or immediately help the actual writing of programs. It is instructive to build a program-unfolding sequence, and I built a tool that makes these sequences easier to write, test and edit. It's a useful collaboration and research exercise, similar to analysis by design patterns. Unfortunately, it is one step removed from the necessary engineering. One writes a program to unfold a program, rather than my preference: to simply program using some kind of sequence that actually defines a system, re-writing it until one's evaluation procedures are satisfied and the system works properly.

I needed a new kind of programming language for that.

Now, programming languages are human artifacts. We make them. Some are better in some respects than others. For some qualitative comparisons among them, there is near universal agreement. Since we already make qualitative comparisons about artificial languages, and since our use of artificial languages is only possible because of our use of unexamined human faculties, I believe there may be some advantage to using Alexander's procedures of quality evaluation to create a better artificial language.

Earlier, I said that my goal is to bridge the gap between the expressive power of the genetic code, and the expressive power of the best modern programming languages. The chasm is vast. And it is not clear that it can be bridged: there is no reason to accept an a priori dogma that such a powerful notation "must" exist. We are not ribosomes ... and ribosomes are not human. The 3 billion ordered base pairs (essentially bits) that become a human being are not necessarily structured in a way that will be useful to our comprehension of complex systems. But, it's a little early to give up. After all, the problem has barely been recognized.

Also, the gap between the expressive power of genetic and man-made formalisms is best characterized by the gap in the resulting complexity and coherence. These are exactly the qualities that characterized the challenge Alexander faced in the design of cities and living spaces: complex problems that I believe he has tackled with increasing success.

When I loosen my mind and open my heart, in order to use Alexander's evaluation procedures, I begin to whittle down my basic questions about programming languages, further and further. I first discard all of my thoughts about axiomatic reduction, logical consistency, derivation and denotation. I simply look for the natural geometry that underlies all good programming, something that keeps it all coherent, keeps it all together, and gives it life. At the end of my whittling, I have one question: what organization of symbols, in any system, gives me the most expressive power?

The symbolic system needs self-reference: that is, recursion or feedback. It needs to reflect the ordered unfolding of living organisms. It needs to stay coherent no matter how complex it becomes. It needs unity, and wholeness.

The answer that came to me seemed strange at first: a formal grammar. My internal critic pointed out that grammars are great for pattern-matching and transformation, but that's a kind of parsing, really, and not exactly fully expressive of all computation. Right? Well, my internal response was that we really don't know what grammars are ... sets of pattern-transform-action rules are only the major use of grammars in computer science today. There may be others. In Noam Chomsky's half-century of study, syntactic structures are in the middle, mediating all expression of thought, giving form to the connections among memory, innate ideas, and the sensorimotor interfaces. Aristotle also says that form is everything. So, maybe my result from the use of the evaluation procedure is correct. Anyone going through the same process should be able to corroborate this, if the evaluation procedure itself is effective. My conclusion is that any good program must be a kind of grammar. And like any program, the 'meaning' of these grammar-like structures would then be identical with their mechanical function, as we make use of them in the outside world.

Given the idea of a grammar-program, my follow-up question in October of 2009 was: how do I do this? L-systems of productions create a static structure, and I'd used this idea here to generate code, but I didn't really know how to make a grammar do anything besides parse, re-write, and map actions to patterns ...

Then I looked at the original point of the generator I'd built before, for a kind of wiki, and thought, wait, there's a hierarchically unfolding structure in the function of this program. It does very simple things to start, and then the increasingly complex things it does are built of the earlier simpler things it does. It's basically a grammar augmented by a secondary state machine, that is, it does real-time evaluation of productions, which are selected depending upon the state.

The formal grammar acts as a kind of "pump". The system flows through the core grammar. You could think of the productions as being like functions, but a group of functions rarely conveys a sense of the whole in the way that a single grammar does.

Outside of the grammar are interfaces to the world. In the case of this first working program, the grammar passes HTML pages to the interface, which passes them to the outside world, which in turn passes responses back to the interface and the grammar. Parts of the grammar are used at different times based on states and needs, but they are part of one grammar representing the whole system.
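The shape of such a system can be sketched in a few lines of Python. This is only my schematic reconstruction, with invented names (PAGE, BODY, and so on), not the actual program described above: productions are evaluated on demand, and the production chosen for a nonterminal can depend on the current state.

```python
# A schematic "grammar pump": nonterminals expand on demand, and the
# production chosen for a nonterminal may depend on the current state.
# All names here are invented for illustration.

def expand(symbol, grammar, state):
    """Recursively expand a symbol into a string. A nonterminal maps to a
    production (a list of symbols), or to a function of the state that
    returns one; anything not in the grammar is a terminal."""
    rules = grammar.get(symbol)
    if rules is None:                      # terminal: emit as-is
        return symbol
    body = rules(state) if callable(rules) else rules
    return "".join(expand(s, grammar, state) for s in body)

grammar = {
    "PAGE": ["<html>", "BODY", "</html>"],
    "BODY": lambda st: ["GREETING"] if st["visits"] == 0 else ["WELCOME_BACK"],
    "GREETING": ["<p>Hello, first-time visitor.</p>"],
    "WELCOME_BACK": ["<p>Welcome back.</p>"],
}

print(expand("PAGE", grammar, {"visits": 0}))  # <html><p>Hello, first-time visitor.</p></html>
print(expand("PAGE", grammar, {"visits": 1}))  # <html><p>Welcome back.</p></html>
```

Here the grammar remains one whole, while the state decides which of its parts are used at a given moment.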

Since they helped me to uncover a good primary organizing principle for 'blooming logic', I should use these Alexandrian evaluation procedures to develop good secondary features in the language. I'm not completely satisfied with the ones I developed rapidly in order to make the primary idea work. But I'll use the procedures to repair and improve the situation, and to create simpler interfaces, so that the power of the grammar is increasingly clear to whichever human faculties are involved in expressing thought, and expressing desired functionality, with this notation.

Wednesday, November 04, 2009

Sequences: linear and differentiating

A sequence of instructions typical in, say, a computer "howto", allows for a few values to be set, and choices to be made, but is essentially one long instruction with parameters:

1. do this
2. do that
3. name this
4. choose between this and that

Let's call this a "linear sequence".

Let's contrast this with the sequence that a living organism undergoes during its growth and development: i.e. morphogenesis.

Each step in a sequence of this sort sets the stage for multiple parallel results that follow, and for each of these steps, the number of effects multiplies. This allows the cells or parts of an organism to differentiate further at each step, along a morphogenetic gradient that increases the complexity of the structure, and moves it closer to a specific purpose in the general scheme.

Note that each step's description is small, at any stage, relative to the size of its effects. This effect, while not infinite in the case of a human body, is still vastly disproportionate, and reminiscent of human language, which generates infinite variety from finite means. The same is true for the variety of life generated by a small set of DNA.
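The disproportion is easy to see in miniature. Lindenmayer's original L-system (his model of algae growth) uses just two rewriting rules, yet the structure it generates grows without bound; the sketch below is my own minimal Python rendering of it.

```python
# Lindenmayer's original L-system: two tiny rules, A -> AB and B -> A,
# applied in parallel to every symbol at each step. The description stays
# constant in size while the generated structure keeps growing.

def lsystem(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        # rewrite every symbol at once; symbols without a rule are kept
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"A": "AB", "B": "A"}
for n in range(6):
    print(n, lsystem("A", rules, n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
# 5 ABAABABAABAAB
```

The lengths grow as the Fibonacci numbers: a finite description, unbounded variety.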

I assume that Chomsky is right that the small but powerful definitions of recursive enumeration account for this disproportion in all three cases (morphogenesis, biological diversity, language).

The question for computing is: if this is so clearly effective, why do we do anything else? Why do we use "linear sequences" and simple instructions when "differentiating sequences" and recursion are massively more effective? And, it seems, more natural.

Part of the problem is that, perhaps through indoctrination, we don't automatically see recursion as natural, at least as it is generally represented notationally. Yet we are the products of recursion, and we use it physically, unconsciously, to perform mental and linguistic tasks; I think that deep down, we think recursively. We can often see patterns where recursion would apply, but our initial reaction (unless we do a lot of work with LISP or YACC) is to approach the problem as a linear path. This gets very complex for hard problems, and so we lean on solutions provided by others, who have often found the patterns and reduced them to recursive procedures in toolkits and frameworks. But we still don't actually approach a problem as a gradient, differentiating sequence of recursive, grammar-driven resolutions.
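To make the contrast concrete, here is a toy example of my own in Python, not tied to any particular formalism: the recursive version states the pattern directly, while the "linear path" version must manage its own bookkeeping.

```python
# Totaling a nested structure, approached two ways. The recursive version
# states the pattern directly; the linear version must manage an explicit
# stack of pending work by hand.

def total_recursive(item):
    # An item is either a number or a list of items: the definition
    # recurses, and so does the code.
    if isinstance(item, list):
        return sum(total_recursive(x) for x in item)
    return item

def total_linear(item):
    # The same task as a linear path: explicit stack, explicit accumulator.
    stack, acc = [item], 0
    while stack:
        x = stack.pop()
        if isinstance(x, list):
            stack.extend(x)
        else:
            acc += x
    return acc

nested = [1, [2, [3, 4]], 5]
print(total_recursive(nested))  # 15
print(total_linear(nested))     # 15
```

Both work, but only the first reads like the structure it describes.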

If this is right, the solution would be to create an artificial language whose recursive representations are closer to the way we think innately. And then to work on problems with that language, so we learn to approach a problem in such a mindset and sensibility, that we begin to automatically spot the recursive principles and switching parameters required to cast our imagination into a software product.

I don't think this is lambda calculus i.e. LISP, or attribute grammars i.e. YACC, or anything very much like formal logic in its current state. In some sense, when Frege made his point -- that effective problem-solving can be done without concern for the way we think -- he set us up for a century of cognitive pain. That said, it's hard to see how he could have done anything else, because discovering the "way we think" is still a woefully distant scientific goal today.

A method that helps can be found in Alexander's work -- when we worked together he always asked me to use feeling, explicitly, as a shortcut to good results in the realm of computing. This also works, perhaps not strangely, when approaching the structure of innate cognition. In my work designing artificial languages, starting in the early 1980's, I listened carefully to engineers expressing their thoughts within a problem domain, and created computer-recognizable notations that required only limited translation effort on their part: they could write what they wanted the machine to do, within this limited domain.

What I would like from people is examples of cognitively natural patterns of recursive enumeration in computing: patterns that feel good. I'm finding some of my own, but we need lots of examples before we can tell what this new, more human language of computation should look like.

Sunday, September 20, 2009

A Synthesis of Grammar and Life

In 1996, Christopher Alexander and I were sitting in his Berkeley kitchen, talking about "latent centers", those ripe places in the world from which certain things will naturally emerge. At some point, I said something like "well, latent centers are like non-terminals in a formal grammar ... ".

We'd been talking about creating computer tools that would guide people through generative sequences, that is, tools which would let people generate physical objects in a natural way, like the unfolding of a living organism. In that context, I said something like "... if latent centers are non-terminals, organic unfolding can be reasonably represented by a formal grammar: productions, terminals and non-terminals". In fact, a rather primitive form of this analogy was already famous: L-Systems or Lindenmayer re-writing systems ... which admittedly only make rather dead-looking approximations to organic forms.

I said something like "Chomsky intended these grammars to be generative, but we rarely use them this way in computing ... we use them for the recognition of artificial languages. Anyway, let's create a Sequence Description Language -- I can write a parser for it very quickly, using tools that were themselves inspired by his linguistic work ..."

Chris paused and said, pretty gravely, "That's not the right direction." I should say, this was in the context of creating an application, a design tool, intended to inspire people to create geometry with more life. In that context, I think he was right.

We only revisited the topic once, later in the year, sitting in a train in London, where I mentioned that the description languages of digital geography might be useful to us. Chris said something like "Yes, we should look into that. It might have the geometric depth we need." I remember thinking this was a way to pull Chris back into generative grammars, which, with the appropriate attribute passing, up and down, really can describe anything about natural geometry that humans can understand cognitively.

The difficulty that Alexander had with Chomsky reflected quite a deep history for him. In fact, A Pattern Language (APL) is named for, and clearly structured by, an analogy between generative grammar and the generation of living structure ... in APL, Alexander had already jumped to the analogy of the morphogenetic sequence, describing APL as a "base map". He spelled out a grammar-like use of APL to generate ontogenetic sequences as if they were sentences:

"... when we use the network of a language, we always use it as a sequence, going through the patterns, moving always from the larger patterns to the smaller, always from the ones which create structures, to the ones which embellish those structures, and then to those which embellish those embellishments ..."

Given this history, and the emerging synthesis regarding language and biology, what problem could Alexander possibly have with Chomsky?

Well, for Alexander, the most important thing is an appreciation for the fullness and depth of life's geometry, and a devotion to understanding the richness of the real world.

We can describe a living thing logically, and yet, without the intervention of a person with artistic sensibilities, the resulting geometry from any automatic generation using this logical description is woefully and obviously incorrect. The "dead-like" appearance of L-systems, and consequently much Hollywood CGI, is evidence that grammar and automatic generation are simply not enough.

That's a fair critique: it takes people, or living organisms, to make a full, rich tapestry. And Christopher Alexander wants to know why. It clearly has something to do with geometry, with physics, with proportion, and with some fundamentally unknown thing. But Chomsky doesn't answer that question for Alexander. It took Alexander four large volumes of The Nature of Order to explain his current thinking on these qualities of life and feeling.

This is a bit unfair to both of them, but I'm going to quote from an interview with Christopher Alexander in Stephen Grabow's biography:

"Chomsky's work on generative grammar will soon be considered very limited ... it does not deal with the interesting structure of language because the real structure of language lies in the relationship between words -- the semantic connections. The semantic network -- which connects the word "fire" with "burn", "red" and "passion" -- is the real stuff of language. Chomsky makes no attempt to deal with that ... in that sense, pattern languages are not like generative grammars. What they are like is semantic structure, the really interesting part of language and which only a few people have begun to study ... is much more like the structure which connects patterns in a pattern language ... the real heart of language which has hardly been described yet."

So, to paraphrase Alexander: What is missing from the mathematics of analytic biolinguistics, is life.

Well, Chomsky is interested in life's richness ... in fact, most of his work has been aimed at thinking clearly about what we know about our own creative powers. And Chomsky's opinion is that we know "almost nothing", and that some of it may not be knowable, in the same way that we don't know where the forces of gravity, electromagnetism, chemical attraction, or free will actually come from. I'm sure these two would not disagree about that. And, obviously, we all share a desire to understand more about these fundamental mysteries.

In Chomsky's "Cartesian Linguistics", he emphasizes that Descartes sees human language as free from the control of stimulus, providing a faculty for the expression of thought. Language is not just a mechanical survival tool. From Descartes, Chomsky traces a thread through 18th century philosophy (most of which was lost until the late 20th century) that lands squarely on Christopher Alexander's doorstep -- quoting Goethe, Wilhelm von Humboldt, A.W. Schlegel and, below from 1818, S. T. Coleridge on natural form:

"The form is mechanic, when on any given material we impress a predetermined form, not necessarily arising out of the properties of the material ... the organic form, on the other hand, is innate; it shapes, as it develops, itself from within, and the fullness of its development is one and the same with the perfection of its outward form. Such as the life is, such is the form."

The paragraph above could be a summary of Christopher Alexander's The Nature of Order, published almost 200 years later.

Completely independently, The Nature of Order opens with praise of Descartes, and a berating of the misinterpretation of his work by subsequent generations:

"The mechanistic idea of order can be traced to Descartes, around 1640. His idea was: if you want to know how something works, you can find out by pretending that it is a machine. You completely isolate the thing you are interested in -- the rolling of a ball, the falling of an apple, the flowing of the blood in the human body -- from everything else, and you invent a mechanical model, a mental toy, which obeys certain rules, and which will then replicate the behavior of the thing. It was because of this kind of Cartesian thought that one was able to find out how things work in the modern sense."

"However, the crucial thing which Descartes understood very well, but which we most often forget, is that this process is only a method. This business of isolating things, breaking them into fragments, and of making machinelike pictures (or models) of how things work, is not how reality actually is. It is a convenient mental exercise, something we do to reality, in order to understand it."

"Descartes himself clearly understood his procedure as a mental trick. He was a religious person who would have been horrified to find out that people in the 20th century began to think that reality itself was actually like this. But in the years since Descartes lived, as his idea gathered momentum, and people found out that you really could understand how the bloodstream works, or how the stars are born, by seeing them as machines -- and after people had used the idea to find out almost everything mechanical about the world from the 17th to 20th centuries -- then, sometime in the 20th century, people shifted into a new mental state that began treating reality as if this mechanical picture really were the nature of things, as if everything really were a machine."

In other writings, Chomsky observed that educated people today typically think the mechanical model of the universe was laid to rest by quantum mechanics in the 20th century -- when, for working scientists, Newton had done this explicitly centuries earlier. That said, the new mechanistic thinking Alexander alludes to, propagating the life-draining technocratic organization of society, is clearly on the rise, something we should all be fighting against. Chomsky and Alexander are clearly in agreement about this.

So, since in my modeling of their opinions I can find no remaining disagreement between them, I've embarked on the road Alexander and I did not take in 1996. The cognitively understood portions of the real world can be completely described by a generative grammar, and the emotionally understood portions can be described by the same grammars when they are written by people -- especially with the addition of a notation for morphogenetic sequences, which describes gradients of emerging structure across a whole developing system.

I call this project "Blooming Logic" for obvious reasons. It's expressed first as an artificial language for expressing the human-driven process of "growing" a bit of engineering or software. The generator, and tool, I call "Grogix". I'm hoping that this can help us to tap nature's successful and coherent developmental methodology, and bring holistic thinking explicitly into computing.
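As a rough sketch of the generative direction -- not Grogix's actual notation, and with entirely hypothetical rule names -- a grammar can "unfold" a structure from a start symbol, with non-terminals playing the role of latent centers:

```python
import random

# A toy generative grammar. Non-terminals (the dictionary keys) are the
# "latent centers"; each maps to a list of alternative productions.
# The names here are invented for illustration.
GRAMMAR = {
    "HOUSE":    [["ENTRANCE", "ROOMS", "GARDEN"]],
    "ENTRANCE": [["porch"], ["gate", "path", "porch"]],
    "ROOMS":    [["room"], ["room", "ROOMS"]],
    "GARDEN":   [["courtyard"], ["lawn"]],
}

def unfold(symbol, rng):
    """Expand a symbol recursively until only terminals remain."""
    if symbol not in GRAMMAR:          # terminal: nothing latent remains
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    out = []
    for s in production:
        out.extend(unfold(s, rng))
    return out

print(unfold("HOUSE", random.Random(0)))
```

Running the generator with different seeds yields different but always well-formed unfoldings -- the grammar guarantees the structure, while the choices (here random, in a real tool made by a person) supply the rest.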

Monday, December 20, 2004

The stack

Noam Chomsky, talking about the social responsibility of the intellectual, said that most facts about US foreign policy, even when they are in official records, are not open for discussion among scholars in, for example, political science.

By contrast, he mentions some papers that he presented to computer people in the late 1950's. He wasn't in their field at all. But they listened to his findings, analyzed them, and used what they could.

Well, these findings revolutionized the creation of computer languages. Chomsky's most influential result was the one-to-one correspondence between the capabilities of pushdown automata -- what we call a "stack" -- and the languages describable by context-free grammar productions. He also noted that finite state machines correspond to regular expressions.
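The correspondence can be felt in a few lines: balanced parentheses form a context-free language that no finite state machine can recognize, yet a bare stack handles it directly. The recognizer below is my own minimal illustration:

```python
# The context-free language of balanced parentheses, S -> (S)S | empty,
# recognized with nothing more than a stack. Tracking arbitrarily deep
# nesting requires unbounded memory, which is exactly what a pushdown
# automaton has and a finite state machine lacks.
def balanced(s):
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:        # a ')' with no matching '('
                return False
            stack.pop()
        else:
            return False         # reject symbols outside the alphabet
    return not stack             # every '(' must have been closed

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```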

Within twenty years, all significant computer languages, and much data processing, were implemented using these tools. Human-readable computer languages were recognized by stack-based parsers, generated automatically from descriptions shaped by context-free productions. The compilers were, and still are, functionally compartmentalized into the regular-expression and grammar components.

The effect this had on language design & development is incalculable. At the very least, it led to compiler-compilers such as Lex & Yacc, developed with Unix at Bell Labs, along with the movement towards software tools, and the endless applications and extensions of regular expressions. At most, one might say that it crystallized context-switching, the basis for all multi-process computing, bringing on the heyday of the stack, whose use was now easily defined thanks to Chomsky's work.
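That compartmentalization into regular-expression and grammar components can be sketched in miniature: a lexer built from a regular expression feeding a recursive-descent parser for a small context-free expression grammar. This is my own illustration, not Lex/Yacc output:

```python
import re

# The classic compiler split: a regular-expression lexer feeding a
# stack-based (recursive-descent) parser for the context-free grammar
#   Expr -> Term (('+' | '-') Term)*
#   Term -> NUMBER | '(' Expr ')'
def tokenize(text):
    # The regular-expression half: numbers, parentheses, and operators.
    return re.findall(r"\d+|[()+\-]", text)

def parse_expr(tokens):
    value = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op = tokens.pop(0)
        rhs = parse_term(tokens)
        value = value + rhs if op == "+" else value - rhs
    return value

def parse_term(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        value = parse_expr(tokens)
        tokens.pop(0)            # consume the matching ')'
        return value
    return int(tok)

print(parse_expr(tokenize("(1 + 2) - 3 + 40")))  # 40
```

The call stack of `parse_expr` and `parse_term` is the pushdown automaton; the grammar productions shape the code directly, which is precisely what Yacc automates.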

Note that this is the way it appears to me, having entered the field in 1974. I'll amend this speculation, as people who were there provide me with more material.

The Approach

An exhaustive listing of the influences of two influential people would be incredibly boring, for you and for me. And I simply don't want to ask anyone "how have you been affected by Noam Chomsky or Christopher Alexander?" Nonetheless, when I come across a good story, of course I'll tell it.

Chomsky & Alexander

The computer was the invention of curious people. Not computer scientists.

It's your choice where to start the history of computing. Use any point along the evolution of counting or mathematics. Or start with Euclid's logic or Blaise Pascal's calculator. You can begin with Descartes's methodology, Charles Babbage's difference engine, or Alan Turing's models of computation.

But note: these weren't "computer people". They were just people.

Fast forward, past the explosion of business & government use of systems of advanced calculation, past philosophical & practical contributions from the generation of Norbert Wiener & John von Neumann. There are now millions of contributors to computing, who are computing people.

But there are two very odd professors who are not. They are not particularly interested in computers. But their profound influence on computing is an amazing story.