From:  Bret Victor <>
Subject:  Re: [cdg] computational thinking, school math, NY Times article
Date:  March 1, 2016 at 10:29:54 AM PST
To:  
Cc:  

On the other hand, the 7±2 limit on working memory says that things, once really understood, need to be used as "things" -- so this is the "understanding"-cum-"performance" double learning activity. Marvin Minsky used to say that "The trouble with New Math is you have to understand it every time you use it!" (and that is too much to ask).

One of my favorite articulations of this is John Mason's paper "When is a Symbol Symbolic?"

He reinterprets Bruner's enactive-iconic-symbolic not as inherent properties of a representation, but as a person's relationship to the representation, and describes a kind of spiral where "symbols" eventually become "enactive", so they can be used as components in some larger symbolic expression.

A couple of more complex examples that I like to think about are a wave equation and a low-pass filter:
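(For concreteness, a sketch in my notation -- the particular forms matter less than the gestalt -- say the 1-D wave equation and the transfer function of a first-order RC low-pass:

    \frac{\partial^2 u}{\partial t^2} = c^2 \, \frac{\partial^2 u}{\partial x^2}

    H(s) = \frac{1}{1 + sRC}

where u is the displacement of the wave and RC sets the filter's cutoff.)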

You need to be able to recognize and think about these things as "things" with a higher-level meaning; you need to be able to see a wave or a filter, not just a collection of low-level parts.  At the same time, the representation is not a black box -- while thinking about the filter as a filter, you simultaneously see the op-amp and the capacitor, etc., and "feel" the roles that these components play.

(This is like looking at a word in an alphabetic language; you recognize it as a meaningful word while simultaneously seeing individual letters and understanding their roles, which allows you to vary words or invent new ones.)

When looking at a more complex circuit, say, the implementation of an op-amp:

an experienced person's eye will automatically carve out "words" -- high-level meaningful blocks -- these components act as a differential pair, these act as a current mirror, etc.  (I guess whoever drew the picture above already did that for you with colored boxes!)  

Once you've recognized these things, you can run on your prior understanding -- you already know how current mirrors work, you don't have to derive that from scratch every time, you just recognize them as old friends and trust them.  (They are the red boxes above.)  At the same time, again, they are not black-boxed, so if some assumptions are being violated, that can jump out at you too -- "wait, I don't think this mirror is going to work, it's not being biased properly".
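(A sketch of that trusted understanding, for the simplest two-transistor BJT mirror: the diode-connected transistor turns a reference current into a V_BE, and a matched transistor turns that V_BE back into the same current --

    I_{ref} = \frac{V_{CC} - V_{BE}}{R} \qquad I_{out} \approx I_{ref}

-- ignoring base currents and the Early effect. The "not biased properly" alarm is exactly about when that approximation breaks: mismatched devices, or too little headroom to keep the output transistor in its active region.)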

I think that it may also provide a path to learning, in that, as a beginner, you can copy a current mirror out of a book and use it as a "macro" without fully understanding it, but as time goes on and you become more fluent, you start to see "into" it and understand what it "means".  (Especially as you run into situations where you need a variation on it, or where it fails because its assumptions don't hold.)  I see this as the same process that Mason described with the numerals 1, 2, 3 going from symbolic to enactive.
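(Here's a software sketch of that same spiral -- my own hypothetical example, not Mason's -- with a little low_pass() helper playing the role of the copied-from-a-book mirror. A beginner can call it as an opaque macro; fluency is when the one-line update inside starts to read as "the output chases the input":

    def low_pass(xs, alpha):
        """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
        y, out = 0.0, []
        for x in xs:
            y += alpha * (x - y)   # the "capacitor" charging toward the input
            out.append(y)
        return out

    smoothed = low_pass([0, 1, 1, 1, 1], alpha=0.5)   # usable long before it's understood

And the variation-and-failure cases are the same, too: you eventually need to see "into" it to pick alpha for your signal, or to recognize why it lags a fast-moving input.)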


This sort of simultaneous-part-and-whole doesn't show up as much in software, I think, because in our current languages, the components are so weak.  The wave equation can describe such rich behavior in such a compact form because the language of partial differential equations describes a relationship between things instead of a procedure.  (Likewise with electronics; you can think of electronic components just as terms in an ODE.)  In both PDEs and electronics, a small number of components can pack a big punch.  So you can have a compact representation where you recognize both the high-level punch and the low-level components simultaneously.
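(That ODE claim in miniature, as a sketch: the first-order RC low-pass is just

    RC \, \frac{dv_{out}}{dt} + v_{out} = v_{in}

-- the capacitor contributes the derivative term, the resistor the coupling, and the whole "filter" is one stated relationship rather than a procedure.)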

In most programming languages, you need a huge pile of terms in order to do anything interesting, so it's hard to see a meaningful "whole".  So you sweep the pile into a procedure definition, which you can invoke with one line, and now you can't see the "parts".
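(In the low_pass() sketch above, it's the difference between the body and the call site:

    smoothed = low_pass(samples, alpha=0.1)   # samples: some signal; the "parts" are now hidden behind the name

-- one invocable line, with the chasing-the-input parts swept out of view.)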