References for "The Future of Programming"

Bret Victor / July 30, 2013

I gave a talk at the DBX conference called "The Future of Programming". Below are links to and quotes from some primary sources I used, as well as links to Wikipedia and elsewhere where you can learn more.

Introduction

Much of the overall message and style of the talk was inspired by Alan Kay.

For more talks with a similar message, I highly recommend:

For a broader overview of some of the systems and ideas in this talk, see:

Moore's law

IBM 650

Reactions to SOAP and Fortran
Richard Hamming -- The Art of Doing Science and Engineering, p25 (pdf book)

In the beginning we programmed in absolute binary... Finally, a Symbolic Assembly Program was devised -- after more years than you are apt to believe during which most programmers continued their heroic absolute binary programming. At the time [the assembler] first appeared I would guess about 1% of the older programmers were interested in it -- using [assembly] was "sissy stuff", and a real programmer would not stoop to wasting machine capacity to do the assembly.

Yes! Programmers wanted no part of it, though when pressed they had to admit their old methods used more machine time in locating and fixing up errors than the [assembler] ever used. One of the main complaints was when using a symbolic system you do not know where anything was in storage -- though in the early days we supplied a mapping of symbolic to actual storage, and believe it or not they later lovingly pored over such sheets rather than realize they did not need to know that information if they stuck to operating within the system -- no! When correcting errors they preferred to do it in absolute binary.

FORTRAN was proposed by Backus and friends, and again was opposed by almost all programmers. First, it was said it could not be done. Second, if it could be done, it would be too wasteful of machine time and capacity. Third, even if it did work, no respectable programmer would use it -- it was only for sissies!

John von Neumann's reaction to assembly language and Fortran
John A.N. Lee, Virginia Polytechnic Institute

John von Neumann, when he first heard about FORTRAN in 1954, was unimpressed and asked "why would you want more than machine language?" One of von Neumann's students at Princeton recalled that graduate students were being used to hand assemble programs into binary for their early machine. This student took time out to build an assembler, but when von Neumann found out about it he was very angry, saying that it was a waste of a valuable scientific computing instrument to use it to do clerical work.

coding -> direct manipulation of data

Sketchpad (Ivan Sutherland)

procedures -> goals and constraints

PLANNER (Carl Hewitt)

Prolog (Alain Colmerauer, et al)

SNOBOL (Ralph Griswold, et al)

regular expressions (Ken Thompson)

ARPANET

Communicating with aliens

see also

text dump -> spatial representations

NLS (Doug Engelbart, SRI)

GRAIL (T.O. Ellis et al, RAND Corporation)

Smalltalk (Alan Kay et al, Xerox PARC)

PLATO (Don Bitzer et al, University of Illinois)

sequential -> concurrent

von Neumann computer architecture

von Neumann bottleneck
John Backus (1978) -- Can Programming Be Liberated from the von Neumann Style?

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.
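
Backus's proposed alternative was a functional style built from program-combining forms, and his lecture contrasts the two styles using an inner-product program. The sketch below is my own rough Python rendering of that contrast, not code from the paper: a word-at-a-time loop next to a whole-value version composed from map and reduce.

    from functools import reduce
    from operator import add, mul

    # Word-at-a-time style: an index variable and a named accumulator shuttle
    # one word at a time through the "bottleneck"; much of the program is
    # about where data lives rather than what is being computed.
    def inner_product_loop(a, b):
        total = 0
        for i in range(len(a)):
            total = total + a[i] * b[i]
        return total

    # Whole-value style: a composition of combining forms (map, reduce)
    # applied to entire sequences; no storage cells or indices are named.
    def inner_product_fp(a, b):
        return reduce(add, map(mul, a, b), 0)

    assert inner_product_loop([1, 2, 3], [4, 5, 6]) == 32
    assert inner_product_fp([1, 2, 3], [4, 5, 6]) == 32

The second version says nothing about where data is stored; that is Backus's point about thinking in larger conceptual units rather than planning the traffic of individual words.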

semiconductor integrated circuit

Intel 4004 microprocessor

semiconductor memory

massively parallel processor array

see also

Actor model (Carl Hewitt)

referenced in passing

Closing

"We don't know what programming is."

Gerry Sussman -- We Really Don't Know How To Compute! (video)

[intro] I think we're in real trouble. I think we haven't the foggiest idea how to compute real well... I think that most of the things we've been talking about, even here [at this conference], are obsolete.

[40:30] I'm only pushing this idea, not because I think it's the right answer. I'm trying to twist us, so we say, "This is a different way to think." We have to think fifty-two different ways to fix this problem. I don't know how to make a machine that builds a person out of a cell. But I think the problem is that we've been stuck for too long diddling with our details. We've been sitting here worrying about our type system, when we should be worrying about how to get flexible machines and flexible programming.

[1:01:30] We have to throw away our current ways of thinking if we ever expect to solve these problems.

See also Alan Kay -- The Computer Revolution Hasn't Happened Yet (video)

"The most dangerous thought you can have as a creative person is to think you know what you're doing."

Richard Hamming -- The Art of Doing Science and Engineering, p5 (pdf book)

In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
Of course, you seldom, if ever, see either pure state.

Danny Hillis -- Richard Feynman and The Connection Machine

In retrospect I realize that in almost everything that we [Hillis and Feynman] worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress.

"Why did all these ideas happen during this particular time period?"

There may be a number of reasons.

The story I told in the talk -- "they didn't know what they were doing, so they tried everything" -- was essentially that programming at the time was in the "pre-paradigm phase", as defined by Thomas Kuhn in The Structure of Scientific Revolutions. This is the period of time before researchers reach consensus on what problems they're actually trying to solve. The establishment of distinct programming paradigms (e.g., functional, logic, etc.) led into Kuhn's "normal science" phase (or as Sussman put it, "diddling with details") where the foundations of the subject tend to be taken for granted.

But there's another story, which has to do with funding models. Much fundamental research at the time, including Engelbart's NLS and the Internet, was funded by ARPA, an agency of the US Defense Department which had been given significant resources due to the Cold War.

Stefanie Chiou, et al -- The Founding of the MIT AI Lab

ARPA ushered in an era of abundant funding for university projects, offering far more in terms of funding than any other research funds at the time. Where institutions such as the National Science Foundation and the Three Services Program provided funding to research programs at the level of tens of thousands of dollars, ARPA was willing to throw millions into the creation and support of promising research efforts.

Part of what made ARPA funding so successful was that its directors (such as J.C.R. Licklider and Bob Taylor) were free to aggressively seek out and fund the most promising individuals with "no strings attached".

Ibid.

The funding model used within ARPA at the time was that of giving large chunks of money to individual laboratories to be divided up at the discretion of the laboratory director.

This situation changed around 1973, when ARPA became DARPA. (The D is for Defense.)

Wikipedia

The Mansfield Amendment of 1973 expressly limited appropriations for defense research (through ARPA/DARPA) to projects with direct military application. Some contend that the amendment devastated American science, since ARPA/DARPA was a major funding source for basic science projects of the time; the National Science Foundation never made up the difference as expected.

The resulting "brain drain" is also credited with boosting the development of the fledgling personal computer industry. Many young computer scientists fled from the universities to startups and private research labs like Xerox PARC.

One way of interpreting this is that the Mansfield Amendment killed research, but "induced labor" on an industry. The industrial mindset -- short-term, results-driven, immediately-applicable -- is generally hostile to long-term, exploratory, foundational research. (The canonical counterexamples, Bell Labs and Xerox PARC, were anomalies for various reasons. See The Idea Factory and Dealers of Lightning, respectively.)

The National Science Foundation continued to exist as a basic-science funding agency. But unlike ARPA, the NSF funds projects, not people, and project proposals must be accepted by a peer review board. Any sufficiently revolutionary project, especially at the early stages, will sound too crazy for a board to accept. Worse, requiring a detailed project proposal means that the NSF simply can't fund truly exploratory research, where the goal is not to solve a problem, but to discover and understand the problem in the first place.

A third story to explain why so many ideas happened during this time period was that everyone was on drugs. See What the Dormouse Said.

A clarification about "not knowing what you're doing"

"The most dangerous thought you can have as a creative person is to think you know what you're doing."

It's possible to misinterpret what I'm saying here. When I talk about not knowing what you're doing, I'm arguing against "expertise", a feeling of mastery that traps you in a particular way of thinking.

But I want to be clear -- I am not advocating ignorance. Instead, I'm suggesting a kind of informed skepticism, a kind of humility.

Ignorance is remaining willfully unaware of the existing base of knowledge in a field, proudly jumping in and stumbling around. This approach is fashionable in certain hacker/maker circles today, and it's poison.

Knowledge is essential. Past ideas are essential. Knowledge and ideas that have coalesced into theory are among the most beautiful creations of the human species. Without Maxwell's equations, you can spend a lifetime fiddling with radio equipment and never invent radar. Without dynamic programming, you can code for days and not even build a sudoku solver.

It's good to learn how to do something. It's better to learn many ways of doing something. But it's best to learn all these ways as suggestions or hints. Not truth.

Learn tools, and use tools, but don't accept tools. Always distrust them; always be alert for alternative ways of thinking. This is what I mean by avoiding the conviction that you "know what you're doing".

* * *

This point is perhaps best made by David Hestenes. Here, he's writing about mathematical tools for physics, but his observation applies identically to any sort of tool or way of thinking:

David Hestenes -- Reforming the Mathematical Language of Physics

Mathematics is taken for granted in the physics curriculum -- a body of immutable truths to be assimilated and applied. The profound influence of mathematics on our conceptions of the physical world is never analyzed. The possibility that mathematical tools used today were invented to solve problems in the past and might not be well suited for current problems is never considered...

One does not have to go very deeply into the history of physics to discover the profound influence of mathematical invention. Two famous examples will suffice to make the point: The invention of analytic geometry and calculus was essential to Newton’s creation of classical mechanics. The invention of tensor analysis was essential to Einstein’s creation of the General Theory of Relativity.

The point I wish to make by citing these two examples is that without essential mathematical concepts the two theories would have been literally inconceivable. The mathematical modeling tools we employ at once extend and limit our ability to conceive the world. Limitations of mathematics are evident in the fact that the analytic geometry that provides the foundation for classical mechanics is insufficient for General Relativity. This should alert one to the possibility of other conceptual limits in the mathematics used by physicists.

Lastly, here's some advice Alan Kay gave me (as I was going through a small personal crisis as a result of reading Jerome Bruner's "Toward a Theory of Instruction"):

I think the trick with knowledge is to "acquire it, and forget all except the perfume" -- because it is noisy and sometimes drowns out one's own "brain voices". The perfume part is important because it will help find the knowledge again to help get to the destinations the inner urges pick.