This is a set of readings seeded by
There are a lot of directions you can take this, and I ended up with a scattered cloud of things that felt resonant in some way. Most are short blog posts, or just excerpts that I've pasted into this email.
A couple more from Viznut:
The magic we need more in today's technological world is of the latter kind. We should strive to increase deepness rather than outward complexity, human virtuosity rather than consumerism, flexibility rather than effortlessness. The mysteries should invite attempts at understanding and exploitation rather than blind reliance or worship; this is also the key difference between esoterica and superstition.
This is Viznut's followup to the resource leak essay:
How to solve the problem? Discussion tends to bipolarize into quarrels between techno-utopians ("technological progress will automatically solve all the problems") and neo-luddites ("the problems are inherent in technology so we should avoid it altogether").
That particular passage reminded me of this chapter in Roszak's "From Satori to Silicon Valley":
For the Reversionaries, who trace back to John Ruskin, William Morris, Prince Kropotkin, and the Romantic artists generally, industrialism is the extreme state of a cultural disease that must be cured before it kills us. It is a stage of pathological overdevelopment in the history of human economy from which a healthy technology -- usually seen as some form of communitarian handicrafts -- will have to be salvaged once the industrial system has reached the point of terminal inhumanity. ....
Over against this stratagem of radical withdrawal and reversion, we have the technophiliac vision of our industrial destiny, a modern current of thought that flows back to Saint-Simon, Robert Owen, and H. G. Wells. For these utopian industrialists, as for Buckminster Fuller after them, the cure for our industrial ills will not be found in things past, but in Things To Come. Indeed, it will be found at the climax of the industrial process. What is required, therefore, is not squeamish reversion, but brave perseverance. We must adapt resourcefully to industrialism as a necessary stage of social evolution, monitoring the process with a cunning eye for its life-saving potentialities.
Moving back to software systems:
An example is USB, the Universal Serial Bus. It’s a success that people must cope with. It’s also a disaster. The universal protocols it dictates require a huge amount of software to support. To develop such software requires reading thousands of pages of specs. Which are neither complete nor accurate. So what happens? Someone develops an interface chip that encapsulates the complexity. And then you must learn to use that chip, which is at least as complex.
Complexity is the problem. Moving it from hardware to software, or vice versa, doesn’t help. Simplicity is the only answer. There was a product many years ago called the Canon Cat. It was a simple, dedicated word processor; done very nicely in Forth. Didn’t succeed commercially. But then, most products don’t.
I despair. Technology, and our very civilization, will get more and more complex until it collapses. There is no opposing pressure to limit this growth. No environmental group saying: Count the parts in a hybrid car to judge its efficiency or reliability or maintainability.
All I can do is provide existence proofs: Forth is a simple language; OKAD is a simple design tool; GreenArrays offers simple computer chips. No one is paying any attention.
[1percent] For example, my VLSI tools take a chip from conception through testing. Perhaps 500 lines of source code. Cadence, Mentor Graphics do the same, more or less. With how much source/object code? They use schematic capture, I don't. I compute transistor temperature, they don't.
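As an aside on why Moore can make claims like "500 lines of source code": in a Forth-style language, a program is just a sequence of words, each of which transforms a data stack, so the core interpreter is almost nothing. Here is a toy sketch of that idea in Python (my own illustration, not Moore's Forth; real Forth adds a dictionary of user-defined words, a return stack, and compilation):

```python
# Toy illustration of Forth-style simplicity: the whole "language" is a loop
# over whitespace-separated tokens, each either a number literal or a word
# that manipulates a shared data stack.

def interpret(program, words):
    stack = []
    for token in program.split():
        if token in words:
            words[token](stack)          # execute a defined word
        else:
            stack.append(int(token))     # anything else is a number literal
    return stack

# A handful of primitive words, each a function from stack to stack.
WORDS = {
    "+":    lambda s: s.append(s.pop() + s.pop()),
    "*":    lambda s: s.append(s.pop() * s.pop()),
    "dup":  lambda s: s.append(s[-1]),
    "drop": lambda s: s.pop(),
    "swap": lambda s: s.append(s.pop(-2)),
}

print(interpret("3 4 + dup *", WORDS))   # (3+4) squared -> [49]
```

The point is not that this toy is useful, but that the entire execution model fits in a dozen lines; everything else in a Forth system is built out of more words.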
The excerpts below are from Rob Pike's "Systems Software Research Is Irrelevant" slides; they pair well with Sussman's explanation of why MIT switched from Scheme to Python:
To be a viable computer system, one must honor a huge list of large, and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode, POSIX, NFS, SMB, MIME, POP, IMAP, X, ...
A huge amount of work, but if you don’t honor the standards you’re marginalized.
Estimate that 90-95% of the work in Plan 9 was directly or indirectly to honor externally imposed standards.
At another level, instruction architectures, buses, etc. have the same influence.
With so much externally imposed structure, there’s little slop left for novelty.
Plus, commercial companies that ‘own’ standards, e.g. Microsoft, Cisco, deliberately make standards hard to comply with, to frustrate competition. Academia is a casualty.
Today’s graduating PhDs use Unix, X, Emacs, and TeX. That’s their world. It’s often the only computing world they’ve ever used for technical work.
Twenty years ago, a student would have been exposed to a wide variety of operating systems, all with good and bad points.
New employees in our lab now bring their world with them, or expect it to be there when they arrive. That’s reasonable, but there was a time when joining a new lab was a chance to explore new ways of working.
Narrowness of experience leads to narrowness of imagination.
The situation with languages is a little better (many curricula include exposure to functional languages, etc.), but there is also a language orthodoxy: C++ and Java.
In science, we reserve our highest honors for those who prove we were wrong. But in computer science...
I really want to put some Loper OS in here. Not sure where best to start, but here are a couple of seeds:
The fundamental problem with approaching computer systems as biological systems is that it means giving up on the idea of actually understanding the systems we build. We can’t make our software dependable if we don’t understand it. And as our society becomes ever more dependent on computer software, that software must be dependable. ...
When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. Since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.
We should not accept this... If we don’t, then the future of computing will belong to biology, not logic. We will continue having to use computer programs that we don’t understand, and trying to coax them to do what we want. Instead of a sensible world of computing, we will live in a world of homeopathy and faith healing.
Finally, the STEPS reports, at least the intros.
Many of the readings above took off from the original post along the complexity and knowability angles, but a couple other directions might lead to McCullough's Abstracting Craft or Postman's Technopoly.