Quotes


Alan Kay: The Power of the Context

My interest in children's education came from a talk by Marvin Minsky, then a visit to Seymour Papert's early classroom experiments with LOGO. Adding in McLuhan led to an analogy to the history of printed books, and the idea of a Dynabook metamedium: a notebook-sized wireless-networked "personal computer for children of all ages". The real printing revolution was a qualitative change in thought and argument that lagged the hardware inventions by almost two centuries. The special quality of computers is their ability to rapidly simulate arbitrary descriptions, and the real computer revolution won't happen until children can learn to read, write, argue and think in this powerful new way. We should all try to make this happen much sooner than 200 or even 20 more years! This got me started designing computer languages and authoring environments for children, and I've been at it ever since.

Looking back on these experiences, I’m struck that my lifelong processes of loving ideas and reacting to them didn’t bear really interesting fruit until I encountered “The ARPA Dream” in grad school at the University of Utah. A fish on land still waves its fins, but the results are qualitatively different when the fish is put in its most suitable watery environment.

This is what I call "The power of the context" or "Point of view is worth 80 IQ points". Science and engineering themselves are famous examples, but there are even more striking processes within these large disciplines. One of the greatest works of art from that fruitful period of ARPA/PARC research in the 60s and 70s was the almost invisible context and community that catalysed so many researchers to be incredibly better dreamers and thinkers. That it was a great work of art is confirmed by the world-changing results that appeared so swiftly, and almost easily. That it was almost invisible, in spite of its tremendous success, is revealed by the disheartening fact today that, as far as I'm aware, no governments and no companies do edge-of-the-art research using these principles. Of course I would like to be shown that I'm wrong on this last point.

Just as it is difficult to pin down all the processes that gave rise to the miracle of the United States Constitution, catching the key principles that made ARPA/PARC special has proven elusive. We know that the designers of the Constitution were brilliant and well educated, but, as Ben Franklin pointed out at the culmination of the design, there was still much diversity of opinion and, in the end, it was the good will of the participants that allowed the whole to happen. Subsequent history has shown many times that it is the good will and belief of Americans in the Constitution that has allowed it to be such a power for good—no scrap of paper full of ideas, however great, is sufficient.

Similarly, when I think of ARPA/PARC, I think first of good will, even before brilliant people. Dave Evans, my advisor, mentor, and friend, was simply amazing in his ability to act as though his graduate students were incredible thinkers. Only fools ever let him find out otherwise! I really do owe my career to Dave, and learned from him most of what I think is important. On a first visit to the Lincoln Labs ARPA project, we students were greeted by the PI Bert Sutherland, who couldn't have been happier to see us or more interested in showing us around. Not too many years later Bert was my lab manager at Xerox PARC. At UCLA, young professor Len Kleinrock became a lifelong friend from the first instant. A visit to CMU in those days would find Bill Wulf, a terrific systems designer and a guy who loved not just his students but students from elsewhere as well. If one made a pilgrimage to Doug Engelbart’s diggings in Menlo Park, Bill English, the co-inventor of the mouse, would drop what he was doing to show everything to the visiting junior researchers. Later at PARC, Bill went completely out of his way to help me set up my own research group. Nicholas Negroponte visited Utah and we’ve been co-conspirators ever since. Bob Taylor, the director of ARPA-IPTO at that time, set up a yearly ARPA grad student conference to further embed us in the larger research processes and collegial relationships. When I was a postdoc, Larry Roberts got me to head a committee for an ARPAnet AI supercomputer where considerably more senior people such as Marvin Minsky and Gordon Bell were theoretically supposed to be guided by me. They were amazingly graceful in how they dealt with this weird arrangement. Good will and great interest in graduate students as "world-class researchers who didn't have PhDs yet" was the general rule across the ARPA community.

What made all this work were a few simple principles articulated and administered with considerable purity. For example, it is no exaggeration to say that ARPA/PARC had "visions rather than goals" and "funded people, not projects". The vision was "interactive computing as a complementary intellectual partner for people pervasively networked world-wide". By not trying to derive specific goals from this on the funding side, ARPA/PARC was able to fund rather different and sometimes opposing points of view. For example, Engelbart and McCarthy had extremely different ways of thinking of the ARPA dream, but ideas from both of their research projects are important parts of today's interactive computing and networked world.

Giving a professional illustrator a goal for a poster usually results in what was desired. If one tries this with an artist, one will get what the artist needed to create that day. Sometimes we make in order to have; sometimes, to know and express. The pursuit of Art always sets off plans and goals, but plans and goals don't always give rise to Art. If "visions not goals" opens the heavens, it is important to find artistic people to conceive the projects.

Thus the "people not projects" principle was the other cornerstone of ARPA/PARC’s success. Because of the normal distribution of talents and drive in the world, a depressingly large percentage of organizational processes have been designed to deal with people of moderate ability, motivation, and trust. We can easily see this in most walks of life today, but also astoundingly in corporate, university, and government research. ARPA/PARC had two main thresholds: self-motivation and ability. They cultivated people who "had to do, paid or not" and "whose doings were likely to be highly interesting and important". Thus conventional oversight was not only not needed, but was not really possible. "Peer review" wasn't easily done even with actual peers. The situation was "out of control", yet extremely productive and not at all anarchic.

"Out of control" because artists have to do what they have to do. "Extremely productive" because a great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn't understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs, some of which we are celebrating today.

Our game is more like art and sports than accounting, in that high percentages of failure are quite OK as long as enough larger processes succeed. Ty Cobb's lifetime batting average was "only" .368, which means that he failed almost two-thirds of the time. But the critical question is: what happened in the one-third in which he was succeeding? If the answer is "great things" then this is all the justification that should be needed. Unless I'm badly mistaken, in most processes today—and sadly in most important areas of technology research—the administrators seem to prefer being completely in control of mediocre processes to being "out of control" with superproductive processes. They are trying to "avoid failure" rather than trying to "capture the heavens".

What if you have something cosmic you really want to accomplish and aren't smart and knowledgeable enough, and don't have enough people to do it? Before PARC, some of us had gone through a few bitter experiences in which large straight-ahead efforts to create working artifacts turned out to be fragile and less than successful. It seems a bit of a stretch to characterize PARC's group of supremely confident technologists as "humble", but the attitude from the beginning combined big ideas and projects with a large amount of respect for how complexity can grow faster than IQs. I remember Butler, in his first few weeks at PARC, arguing as only he could that he was tired of bubble-gummed !@#$%^&* fragile research systems that could barely be demoed by their creators. He called for two general principles: that we should not make anything that was not engineered for 100 users, and that we should all have to use our creations as our main computing systems (later called Living Lab). Naturally we fought him for a short while, thinking that the extra engineering would really slow things down, but we finally gave in to his brilliance and will. The scare of 100 users and having to use our own stuff got everyone to put a lot more thought early on before starting to cobble together a demo. The result was almost miraculous. Many of the most important projects got to a stable, usable, and user-testable place a year or more earlier than our optimistic estimates.

Respect for complexity, lack of knowledge, the small number of researchers and modest budgets at PARC led to a finessing style of design. Instead of trying to build the complex artifacts from scratch—like trying to build living things cell by cell—many of the most important projects built a kernel that could grow the artifact as new knowledge was gained—that is: get one cell’s DNA in good shape and let it help grow the whole system.

Sydney Brenner: interview

ED: I asked him what inspired them to maintain their faith and pursue these revolutionary ideas in the face of such doubt and opposition.

SB: Once you saw the light you were just certain that you had to be right, that it was the right way to do it and the right answer. And of course our faith, if you like, has been borne out. 

I think it would have been difficult to keep going without the strong support we had from the Medical Research Council. I think they took a big gamble when they founded that little unit in the Cavendish. I think all the early people they had were amazing. There were amazing personalities amongst them.

This was not your usual university department, but a rather flamboyant and very exceptional group that was meant to get together. An important thing for us was that with the changes in America then, from the late fifties almost to the present day, there was an enormous stream of talent and American postdoctoral fellows that came to our lab to work with us. But the important thing was that they went back. Many of them are now leaders of American molecular biology, who are alumni of the old MRC.

ED: The 1950s to 1960s at the LMB was a renaissance of biological discovery, when a group of young, intrepid scientists made fundamental advances that overturned conventional thinking. The atmosphere and camaraderie reminded me of another esteemed group of friends at King’s College – the Bloomsbury Group, whose members included Virginia Woolf, John Maynard Keynes, E.M. Forster, and many others. Coming from diverse intellectual backgrounds, these friends shared ideas and attitudes, which inspired their writing and research. Perhaps there was something about the nature of the Cambridge college systems that allowed for such revolutionary creativity?

SB: In most places in the world, you live your social life and your ordinary life in the lab. You don’t know anybody else. Sometimes you don’t even know other people in the same building, these things become so large.

The wonderful thing about the college system is that it’s broken up again into a whole different unit. And in these, you can meet and talk to, and be influenced by and influence people, not only from other scientific disciplines, but from other disciplines. So for me, and I think for many others as well, that was a really important part of intellectual life. That’s why I think people in the college have to work to keep that going.

Cambridge is still unique in that you can get a PhD in a field in which you have no undergraduate training. So I think that structure in Cambridge really needs to be retained, although I see so often that rules are being invented all the time. In America you’ve got to have credits from a large number of courses before you can do a PhD. That’s very good for training a very good average scientific work professional.  But that training doesn’t allow people the kind of room to expand their own creativity. But expanding your own creativity doesn’t suit everybody. For the exceptional students, the ones who can and probably will make a mark, they will still need institutions free from regulation.

ED: I was excited to hear that we had a mutual appreciation of the college system, and its ability to inspire interdisciplinary work and research. Brenner himself was a biochemist also trained in medicine, and Sanger was a chemist who was more interested in chemistry than biology.

SB: I’m not sure whether Fred was really interested in the biological problems, but I think the methods he developed, he was interested in achieving the possibility of finding out the chemistry of all these important molecules from the very earliest.

ED: Professor Brenner noted that these scientific discoveries required a new way of approaching old problems, which resist traditional disciplinary thinking.

SB: The thing is to have no discipline at all. Biology got its main success by the importation of physicists that came into the field not knowing any biology and I think today that’s very important.

I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant.  Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.

ED: But he felt that young people today face immense challenges as well, which hinder their ability to creatively innovate.

SB: Today the Americans have developed a new culture in science based on the slavery of graduate students. Now the graduate student at an American institution is afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow. 

There’s no exploration any more except in a very few places. You know like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.

I think I’ve often divided people into two classes: Catholics and Methodists. Catholics are people who sit on committees and devise huge schemes in order to try to change things, but nothing’s happened. Nothing happens because the committee is a regression to the mean, and the mean is mediocre. Now what you’ve got to do is good works in your own parish. That’s a Methodist. 

ED: His faith in young, naïve (in the most positive sense) scientists is so strong that he has dedicated his later career to fostering their talent against these negative forces.

SB: I am fortunate enough to be able to do this because in Singapore I actually have started two labs and am about to start a third, which are only for young people. These are young Singaporeans who have all been sent abroad to get their PhDs at places like Cambridge, Stanford, and Berkeley. They return back and rather than work five years as a post-doc for some other person, I’ve got a lab where they can work for themselves. They’re not working for me and I’ve told them that.

But what is interesting is that very few accept that challenge, providing what I think is a good standard deviation from the mean. Exceptional people, the ones who have the initiative, have gone out and got their own funding. I think these are clearly going to be the winners. The eldest is thirty-two. 

They can have some money, and of course they’ve got to accept the responsibility of execution. I help them in the sense that I oblige them and help them find things, and I can also guide them and so on. We discuss things a lot because I’ve never believed in these group meetings, which seem to be the bane of American life; the head of the lab trying to find out what’s going on in his lab. Instead, I work with people one on one, like the Cambridge tutorial. Now we just have seminars and group meetings and so on.

So I think you’ve got to try to do something like that for the young people and if you can then I think you will create. That’s the way to change the future. Because if these people are successful then they will be running science in twenty years’ time.

ED: Our discussion made me think about what we consider markers of success today. It reminded me of a paragraph in Professor Brenner’s tribute to Professor Sanger in Science:

A Fred Sanger [born 1918] would not survive today’s world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labelled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.

I found this particularly striking given that another recent Nobel prize winner, Peter Higgs, who identified the particle that bears his name, the Higgs boson, similarly remarked in an interview with the Guardian that “he doubts a similar breakthrough could be achieved in today’s academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: ‘it’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964.’”

It is alarming that so many Nobel Prize recipients have lamented that they would never have survived this current academic environment. What are the implications of this for the discovery of future scientific paradigm shifts and scientific inquiry in general? I asked Professor Brenner to elaborate.

SB: He wouldn’t have survived. Even God wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments (creating the universe), but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently?  And a third would say, to top it all, he published it all in an un-refereed journal (The Bible).

So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. Singapore is full of them too. Everybody has what are called key performance indicators. But everybody has them. You have to justify them. 

I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people that are engaged in very long-term and, to the funders, extremely risky work.

I have sometimes given a lecture in America called “The Casino Fund”. In the Casino Fund, every organisation that gives money to science gives 1% of that to the Casino Fund and writes it off. So now who runs the Casino Fund? You give it to me. You give it to people like me, to successful gamblers. People who have done all this who can have different ideas about projects and people, and you let us allocate it. 

You should hear the uproar. No sooner did I sit down than all the business people stood up and said, how can we ensure payback on our investment? My answer was, okay make it 0.1%. But nobody wants to accept the risk. Of course we would love it if we were to put it to work. We’d love it for nothing. They won’t even allow 1%. And of course all the academics say we’ve got to have peer review. But I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean.

I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard this from many committees, that say they won’t consider people’s publications in low impact factor journals.

Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it. 

ED: Subscriptions to academic journals typically cost a British university between £4 million and £6 million a year. In this time of austerity where university staff face deep salary cuts and redundancies, and adjunct faculty are forced to live on food stamps, do we have the resources to pour millions of dollars into the coffers of publishing giants? Shouldn’t these public monies be put to better use, funding important research and paying researchers liveable wages? To add insult to injury, many academics are forced to relinquish ownership of their work to publishers.

SB: I think there was a time, and I’m trying to trace the history, when the rights to publish, the copyright, were owned jointly by the authors and the journal. Somehow that’s why the journals insist they will not publish your paper unless you sign that copyright over. It is never stated in the invitation, but that’s what you sell in order to publish. And everybody works for these journals for nothing. There’s no compensation. There’s nothing. They get everything free. They just have to employ a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.

If you send a PDF of your own paper to a friend, then you are committing an infringement. Of course they can’t police it, and many of my colleagues just slap all their papers online. I think you’re only allowed to make a few copies for your own purposes. It seems to me to be absolutely criminal. When I write for these papers, I don’t give them the copyright. I keep it myself. That’s another point of publishing, don’t sign any copyright agreement. That’s my advice. I think it’s now become such a giant operation. I think it is impossible to try to get control over it back again.

ED: It does seem nearly impossible to institute change to such powerful institutions. But academics have enthusiastically coordinated to strike in support of decent wages. Why not capitalise on this collective action and target the publication industry, a root cause of these financial woes? One can draw inspiration from efforts such as that of the entire editorial board of the journal Topology, who resigned in 2006 due to pricing policies of their publisher, Elsevier.

Professor Tim Gowers, a Cambridge mathematician and recipient of the Fields Medal, announced in 2012 that he would not be submitting publications to nor peer reviewing for Elsevier, which publishes some of the world’s top journals in an array of fields including Cell and The Lancet. Thousands of other researchers have followed suit, pledging that they would not support Elsevier via an online initiative, the Cost of Knowledge. This “Academic Spring” is gathering force, with open access publishing as its flagship call.

SB: Recently there has been an open access movement and it’s beginning to change. I think that even Nature, Science and Cell are going to have to begin to bow. I mean in America we’ve got old George Bush who made an executive order that everybody in America is entitled to read anything printed with federal funds, tax payers’ money, so they have to allow access to this. But they don’t allow you access to the published paper. They allow you I think what looks like a proof, which you can then display.

ED: On board is the Wellcome Trust, one of the world’s largest funders of science, who announced last year that they would soon require that researchers ensure that their publications are freely available to the public within six months of publication. There have also been proposals to make grant renewals contingent upon open access publishing, as well as penalties on future grant applications for researchers who do not comply.

It is admirable that the Wellcome Trust has taken this stance, but can these sanctions be enforced without harming their researchers’ academic careers? Currently, only 55% of Wellcome funded researchers comply with open access publishing, a testament to the fact that there are stronger motivators at play that trump this moral high ground. For this to be successful, funders and universities will have to demonstrate collective leadership and commitment by judging research quality not by publication counts, but on individual merit.

Promotion and grant committees would need to clearly commit both on paper and in practice to these new standards. This is of course not easy. I suspect the reason impact factors and publication counts are the currency of academic achievement is because they are a quick and easy metric. Reading through papers and judging research by its merit would be a much more time and energy intensive process, something I anticipate would be incompatible with a busy academic’s schedule. But a failure to change the system has its consequences. Professor Brenner reflected on the disillusioning impact this reality has on young scientists’ goals and dreams.

Gerald Jay Sussman: Robust Design through Diversity

Computer Science is in deep trouble. Structured design is a failure. Systems, as currently engineered, are brittle and fragile. They cannot be easily adapted to new situations. Small changes in requirements entail large changes in the structure and configuration. Small errors in the programs that prescribe the behavior of the system can lead to large errors in the desired behavior. Indeed, current computational systems are unreasonably dependent on the correctness of the implementation, and they cannot be easily modified to account for errors in the design, errors in the specifications, or the inevitable evolution of the requirements for which the design was commissioned. (Just imagine what happens if you cut a random wire in your computer!) This problem is structural. This is not a complexity problem. It will not be solved by some form of modularity. We need new ideas. We need a new set of engineering principles that can be applied to effectively build flexible, robust, evolvable, and efficient systems.

In the design of any significant system there are many implementation plans proposed for every component at every level of detail. However, in the system that is finally delivered this diversity of plans is lost and usually only one unified plan is adopted and implemented. As in an ecological system, the loss of diversity in the traditional engineering process has serious consequences for robustness.

This fragility and inflexibility must not be allowed to continue. The systems of the future must be both flexible and reliable. They must be tolerant of bugs and must be adaptable to new conditions. To advance beyond the existing problems we must change, in a fundamental way, the nature of the language we use to describe computational systems. We must develop languages that prescribe the computational system as cooperating combinations of redundant processes.

From biology we learn that multiple strategies may be implemented in a single organism to achieve a greater collective effectiveness than any single approach. For example, cells maintain multiple metabolic pathways for the synthesis of essential metabolites or for the support of essential processes. For example, both aerobic and anaerobic pathways are maintained for the extraction of energy from sugar. The same cell may use either pathway, or both, depending on the availability of oxygen in its environment.

Suppose we have several independently implemented systems all designed to solve the same (imprecisely specified) general class of problems. Assume for the moment that each design is reasonably competent and actually works correctly for most of the problems that might be encountered in actual operation. We know that we can make a more robust and reliable system by combining the given systems into a larger system that redundantly uses each of the given systems and compares their results, choosing the best answer on every problem. If the combination system has independent ways of determining which answers are acceptable we are in very good shape. But even if we are reduced to voting, we get a system that can reliably cover a larger space of solutions. Furthermore, if such a system can automatically log all cases where one of the designs fails, the operational feedback can be used to improve the performance of the system that failed.
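As a minimal sketch of this combination idea (not from Sussman's text; the wrapper, the function names, and the integer-square-root example are illustrative assumptions), one might run several independently written implementations behind a single interface that prefers any answer passing an independent acceptability check, falls back to voting, and logs failures and disagreements as operational feedback:

```python
from collections import Counter

def combine_redundant(implementations, acceptable=None):
    """Build a solver from several independently written implementations:
    run them all, log any that fail, prefer an answer that passes an
    independent acceptability check, and otherwise fall back to a vote."""
    def solve(problem):
        answers = {}
        for name, solver in implementations.items():
            try:
                answers[name] = solver(problem)
            except Exception as exc:
                # operational feedback: record which design failed
                print(f"log: {name} failed on {problem!r}: {exc!r}")
        if not answers:
            raise RuntimeError(f"every design failed on {problem!r}")
        if acceptable is not None:
            for answer in answers.values():
                if acceptable(problem, answer):
                    return answer
        # reduced to voting: the most common answer wins
        winner, _ = Counter(answers.values()).most_common(1)[0]
        for name, answer in answers.items():
            if answer != winner:
                print(f"log: {name} disagreed on {problem!r}")
        return winner
    return solve

# Usage: three deliberately different integer-square-root routines.
def isqrt_newton(n):
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

def isqrt_search(n):
    x = 0
    while (x + 1) * (x + 1) <= n:
        x += 1
    return x

def isqrt_float(n):
    return int(n ** 0.5)   # fast, but can be off by one for very large n

isqrt = combine_redundant(
    {"newton": isqrt_newton, "search": isqrt_search, "float": isqrt_float},
    acceptable=lambda n, r: r * r <= n < (r + 1) * (r + 1),
)
print(isqrt(10**6))   # 1000
```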

This redundant design strategy can be used at every level of detail. Every component of each subsystem can itself be so redundantly designed and the implementation can be structured to use the redundant designs. If the component pools are themselves shared among the subsystems, we get a controlled redundancy that is quite powerful. However, we can do even better. We can provide a mechanism for consistency checking of the intermediate results of the independently designed subsystems, even when no particular value in one subsystem exactly corresponds to a particular value in another subsystem. Thus the interaction between systems appears as a set of constraints that capture the nature of the interactions between the parts of the system.

For a simple example, suppose we have two subsystems that are intended to deliver the same result, but computed in completely different ways. Suppose that the designers agree that at some stage in one of the designs, the product of two of the variables in that design must be the same as the sum of two of the variables in the other design. There is no reason why this predicate should not be computed as soon as all of the four values it depends upon become available, thus providing consistency checking at runtime and powerful debugging information to the designers.
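A hedged sketch of how such a runtime cross-check might look (the monitor, the variable names a, b, c, d, and the reporting style are illustrative assumptions, not part of the original proposal): each subsystem reports its intermediate values to a shared monitor, and the agreed predicate is evaluated as soon as all four values it depends on become available:

```python
class ConsistencyMonitor:
    """Collects named intermediate values reported by independently
    designed subsystems and fires each registered predicate as soon
    as every value it depends on has become available."""
    def __init__(self):
        self.values = {}
        self.checks = []   # (names the check needs, predicate, description)

    def add_check(self, names, predicate, description):
        self.checks.append((tuple(names), predicate, description))

    def report(self, name, value):
        self.values[name] = value
        for names, predicate, description in self.checks:
            if all(n in self.values for n in names):
                args = [self.values[n] for n in names]
                if not predicate(*args):
                    print(f"inconsistency: {description}: {dict(zip(names, args))}")

# The designers agree that a*b in one subsystem must equal c+d in the other.
monitor = ConsistencyMonitor()
monitor.add_check(
    ["a", "b", "c", "d"],
    lambda a, b, c, d: a * b == c + d,
    "subsystem 1 product should equal subsystem 2 sum",
)

# Each subsystem reports intermediates in whatever order they are computed;
# the check runs at runtime as soon as all four values are in hand.
monitor.report("c", 7)
monitor.report("a", 3)
monitor.report("d", 5)
monitor.report("b", 4)   # 3*4 == 7+5, so nothing is flagged
```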

The ARPAnet is one of the few engineered systems in use that is robust in the way we desire. This robustness derives partly from the fact that the network supports a diversity of mechanisms, and partly from the late binding of the packet-routing strategy. The details of routing are determined locally, by the ambient conditions of the network: there is no central control.

When we design a computational system for some range of tasks we are trying to specify a behavior. We specify the behavior by a finite description---the program. We can think of the program as a set of rules that describes how the state of the system is to evolve given the inputs, and what outputs are to be produced at each step. Of course, programming languages provide support for thinking about only a part of the state at a time: they allow us to separate control flow from data, and they allow us to take actions based on only a small part of the data at each step.

A computational system is very much a dynamical system, with a very complicated state space, and a program is very much like a system of (differential or difference) dynamical equations, describing the incremental evolution of the state. One thing we have learned about dynamical systems over the past hundred years is that only limited insights can be gleaned by manipulation of the dynamical equations. We have learned that it is powerful to examine the geometry of the set of all possible trajectories, the phase portrait, and to understand how the phase portrait changes with variations of the parameters of the dynamical equations. This picture is not brittle: the knowledge we obtain is structurally stable.

Supporting this focus on the development of interacting subsystems with multiply-redundant design requires the development of languages that allow description of the function of, and the relationships between, the different parts of the overall system. These descriptions "let go" of the specific logic of individual processes to capture the interactions that are necessary for the redundancy and robustness of multiple processes. When stated in this way, we see that it is the description of constraints between functional units of the system that forms the essential part of the collective description of the system. We propose to develop languages that focus on describing these relationships/constraints among otherwise independent processes. We also propose to evaluate the relationship of such constraints to the robustness and flexibility of the system behavior as a whole.

Leigh Alexander: The Unearthing

Bob Phelan won’t make my travel arrangements. It’s always Elena. No matter what man is signing my paychecks it’s always one of these finance women to whom I have to send all the emails asking about the paychecks, the reimbursements, the flights. The Roxannes, the Christies, the Elenas. They always have horrible email style, terse grammar. It always takes a few more rounds of correspondence than it should. The worst person you could possibly put in charge of a writer’s money is someone who sucks at writing emails.

Of course, I wonder what Elena thinks of me: A full-grown adult she has to book airfare and a cab and a hotel for. Who really needs someone to speak for them and transact for them this way except a child? For all I know that’s why the billy-club linebreaks, the unwieldy syntax, the clipped sign-off. I’m a baby and she’s a nanny.

I land at the airport with a desert in my mouth, fearing that I stink of whiskey. There is a man holding a sign with my name on it, and in a few minutes I’m in the back of a town car, watching nondescript southwestern suburbia whip past. At times like this I like to run phrases around the inside of my mind: I’m on site and I’m traveling for work and I’m on the scene and I’m doing up-close journalism.

My iPhone decides it has service, rattles to life in my hand, an aging model wreathed in rubber safety bumpers. It buzzes to let me know that at some time between my takeoff in New York and now, Bob Phelan has left me a voicemail. I don’t listen to it. I won’t listen to it. So, just kinda checking in to make sure you got on site okay, and uh, just kinda figuring out what your timeline is.

Flat land shimmers past to either side of me. I try to look much more hot and tired and put-upon than I am so that the driver won’t try to talk to me. The hotel is suddenly present, a flat complex of adobe-colored guest houses with a geometric swimming pool slapped in. Didn’t bring a swimsuit.

The check-in lobby seems like a bad use of space, bad design. Needless plain of beige square tiles, thinly-populated island of tourism pamphlets, and a desk way, way at the back. Immediately I notice there are video game journalists hanging around it, like, people who write only about video games for a job.

Here are some ways you can spot a game journalist: Out of fashion, out of shape. I don’t really mean physically — I mean you distinctly realize you are looking at people who definitely don’t fit into the world easy. There is often a hesitant, performative body language. You’ll be struck by the contrast between the apparent age of the person’s face, or their thinning hair, and the fact they’re wearing sloppy, brightly-colored sneakers. Someone always is wearing a tight plaid shirt. Someone is always wearing a fucking Zelda t-shirt. The longer I do this kind of thing the more unsettling I find it, the huge gulf between a game person’s apparent age and the child-like way in which they carry themselves.

The growling of my suitcase’s wheels across the tile catches their attention: A gangly white dude with dreadlocks and his utterly nondescript companion: Middling height, forgettable face, more beard rash than beard, and a backpack with cartoon pins all over it. They stare at me and it doesn’t look like a friendly expression, but I don’t really know what it does look like.

Can’t tell if I’ve met these guys before or they just all blend together, these Bens, these Jeffs and Petes. Can’t tell if they know who I am or not. Can’t tell if they hate me or not. One of them is a woman who is doing a good job of playing along: short pink hair, owlish glasses, demure look. I can already tell I won’t see this woman anywhere around this trip without these geeks hovering possessively around her. She gives me a milky half-smile that makes me resent her instantly.

While I’m in the elevator I already realize I feel tense. I didn’t know those guys, but somewhere here is probably some guys I do vaguely know, and I’m probably going to have to have dinner and drinks with all of them. I taste a taut dread.


Ian is a game design teacher and a professional skeptic. People call him a “curmudgeon”, but they don’t really understand how much love, how much actual faith, that kind of skepticism takes. On a pretty regular basis one of us will IM the other something like “help” or “fuck” or “people are terrible”.

Only when you fully believe in how wonderful something is supposed to be does every little daily indignity start to feel like some claw of malaise. At least, that’s how I explain Ian to other people.

I’m not sure if the same is true for me. I’m not sure if it’s so super wonderful I’ve left home and come all this way to watch some nostalgia fetishists digging up novelty game garbage from the year I was born.

“Help,” I type to Ian. “There are game journalists here.”

“Everything is terrible,” he replies.

“Are you doing the, uhm,” he types, “the thing, the Atari thing.”

I go, “ya”.

Ian has been quietly infuriated by The Atari Thing for the past few weeks. The way the “thousands of unsold E.T. games buried in the desert” thing has been billed by every publication as an “urban legend” (it’s been verified many times over, he’s pointed out). The way it’s not just the E.T. game, but several other pallets of unmovable product.

Everyone seems to have forgotten about that part and made this a story about E.T., the unbelievably bad film tie-in that cost so much to make and was so terrible that history would name it the single great harbinger of the games market crash of 1982 (the economic factors were incredibly complicated and certainly not influenced by one single game, Ian has pointed out).

Ian also says the junked cartridges were probably ironed beyond recognition by a steamroller, and probably also they poured concrete over them. There is probably nothing to dig up. And even then, what’s the point. They gawk and take pictures. They invent and then devour yet another “cultural event”, these fans of a medium where culture goes to die.

“This is bad,” Ian types. It’s one of his truisms.

In another tab I’m watching a muted YouTube Let’s Play of the infamous E.T. game. The titular alien, a drain trap of pixels stuttering around a green screen, is trying to assemble pieces of a phone that will get him home. The phone and the home are the only parts remotely related to the premise of the 1982 movie, I think. E.T. ticks and clicks across the screen, following a trail of pale blips that are supposedly candies, occasionally ducking into unremarkable bushes, passing from one identical screen to the next.

It really is unspeakably awful, impenetrable nonsense. To call it surreal insults surrealism, I think, and then I write it down: To call it surreal insults surrealism. Ugh.

“Why the fuck am I writing about this,” I start to tweet, and I delete the tweet. My feed is full of writers who are here, writers who say they are “stoked”, who are making plans to go to a “hilarious” margarita bar some permachild from some website or another has found. This place is literally a desert, I quickly realize, idly Google Mapsing, watching blurry strip malls in Street View splay out alongside parched highways.

“I need a fucking real job,” I type to Ian, but he’s offline already.

My room is hot in a way that the dry chill of the A/C can’t seem to salve. I can almost taste freon, I imagine; I imagine my skin cracking. I imagine desiccating here, and I switch it off. I got a brick of Jack Daniels in JFK Airport, and there is still enough left.

Three hours fall into a hole in front of Twitter, YouTube, Facebook, Spotify, Wikipedia, Gmail, as I drink in my underwear, and the stifling sunlight turns blood red and spills across my room. I feel inexplicably irritable, like I’ll have to defend this time to some invisible interrogator.

There is an email from R. Phelan that says “can u call me plz” and it makes me close Gmail. I can’t shut off the din outside: voices and footsteps in the hall. People are hanging out. They are laughing much too loud, singing tunelessly. They sound like they don’t know how to behave.

At one point I open my door on its chain for a minute and squint down the hallway. I think I spot the pink-haired girl and her increasing posse, adult men in flip flops and wacky ballcaps.

David Graeber: interview

And when did this expectation finally start dying out?

By the ‘60s, most people thought that robot factories, and ultimately, the elimination of all manual labor, was probably just a generation or two away. Everyone from the Situationists to the Yippies was saying “let the machines do all the work!” and objecting to the very principle of 9-to-5 labor. In the ‘70s, there was actually a series of now-forgotten wildcat strikes by auto workers and others, in Detroit, I think Turin, and other places, basically saying, “we’re just tired of working so much.”

This sort of thing threw a lot of people in positions of power into a kind of moral panic. There were think-tanks set up to examine what to do—basically, how to maintain social control—in a society where more and more traditional forms of labor would soon be obsolete. A lot of the complaints you see in Alvin Toffler and similar figures in the early ‘70s—that rapid technological advance was throwing the social order into chaos—had to do with those anxieties: too much leisure had created the counter-culture and youth movements, what was going to happen when things got even more relaxed? It’s probably no coincidence that it was around that time that things began to turn around, both in the direction of technological research, away from automation and into information, medical, and military technologies (basically, technologies of social control), and also in the direction of market reforms that would send us back towards less secure employment, longer hours, greater work discipline.

Today productivity continues to increase, but Americans work more hours per week than they used to, not fewer. Also, more than workers in other countries. Correct?

The U.S., even under the New Deal, was always a lot stingier than most wealthy countries when it comes to time off: whether it’s maternity or paternity leave, or vacations and the like. But since the ‘70s, things have definitely been getting worse.

Do economists have an explanation for this combination of greater productivity with increased work hours? What is it and what do you think of it?

Curiously, economists don’t tend to find much interest in such questions—really fundamental things about values, for instance, or broader political or social questions about what people’s lives are actually like. They rarely have much to say about them if left to their own devices. It’s only when some non-economist begins proposing social or political explanations for the rise of apparently meaningless administrative and managerial positions, that they jump in and say “No, no, we could have explained that perfectly well in economic terms,” and make something up.

After my piece came out, for instance, The Economist rushed out a response just a day or two later. It was an incredibly weakly argued piece, full of obvious logical fallacies. But the main thrust of it was: well, there might be far less people involved in producing, transporting, and maintaining products than there used to be, but it makes sense that we have three times as many administrators because globalization has meant that the process of doing so is now much more complicated. You have computers where the circuitry is designed in California, produced in China, assembled in Saipan, put in boxes in some prison in Nevada, shipped through Amazon overnight to God-knows-where… It sounds convincing enough until you really think about it. But then you realize: If that’s so, why has the same thing happened in universities? Because you have exactly the same endless accretion of layer on layer of administrative jobs there, too. Has the process of teaching become three times more complicated than it was in the 1930s? And if not, why did the same thing happen? So most of the economic explanations make no sense.

All true, and very correct about the universities, but there’s got to be an official–if not economic–explanation for why we didn’t get this Truly Great Thing that everyone was expecting not all that long ago. Like: Keynes was all wet, or such a system just wouldn’t work, or workers aren’t educated enough to deserve that much vacation, or the things we make today are just so much better than the things they made in Keynes’ day that they are worth more and take more work-hours to earn. There must be something.

Well, the casual explanation is always consumerism. The idea is always that given the choice between four-hour days, and nine or ten-hour days with SUVs, iPhones and eight varieties of designer sushi, we all collectively decided free time wasn’t really worth it. This also ties into the “service economy” argument, that nobody wants to cook or clean or fix or even brew their own coffee any more, so all the new employment is in maintaining an infrastructure for people to just pop over to the food court, or Starbucks, on their way to or from work. So, sure, a lot of this is just taken as common sense if you do raise the issue to someone who doesn’t think about it very much. But it’s also obviously not much of an explanation.

First of all, only a very small proportion of the new jobs have anything to do with actually making consumer toys, and most of the ones that do are overseas. Yet even there, the total number of people involved in industrial production has declined. Second of all, even in the richest countries, it’s not clear if the number of service jobs has really increased as dramatically as we like to think.  If you look at the numbers between 1930 and 2000, well, there used to be huge numbers of domestic servants. Those numbers have collapsed. Third, you also see that’s what’s grown is not service jobs per se, but “service, administrative, and clerical” jobs, which have gone from roughly a quarter of all jobs in the ‘30s to maybe as much as three quarters today. But how do you explain an explosion of middle managers and paper-pushers by a desire for sushi and iPhones?

And then, finally, there’s the obvious question of cause and effect. Are people working so many hours because we’ve just somehow independently conceived this desire for lattes and Panini and dog-walkers and the like, or is it that people are grabbing food and coffee on the go and hiring people to walk their dogs because they’re all working so much?

Maybe part of the answer is that people forgot about the expectation of more leisure time, and there’s no political agency to demand it anymore, and hence no need to explain what happened to it. I mean, there’s no wildcat strikes anymore.

Well, we can talk about the decline of the union movement, but it runs deeper. In the late 19th and early 20th centuries, one of the great divisions between anarcho-syndicalist unions, and socialist unions, was that the latter were always asking for higher wages, and the anarchists were asking for less hours. That’s why the anarchists were so entangled in struggles for the eight-hour day. It’s as if the socialists were essentially buying into the notion that work is a virtue, and consumerism is good, but it should all be managed democratically, while the anarchists were saying, no, the whole deal—that we work more and more for more and more stuff—is rotten from the get-go.

I’ve said this before, but I think one of the greatest ironies of history is how this all panned out when workers’ movements did manage to seize power. It was generally the classic anarchist constituencies—recently proletarianized peasants and craftsmen—who rose up and made the great revolutions, whether in Russia or China or for that matter Algeria or Spain—but they always ended up with regimes run by socialists who accepted that labor was a virtue in itself and the purpose of labor was to create a consumer utopia. Of course they were completely incapable of providing such a consumer utopia. But what social benefit did they actually provide? Well, the biggest one, the one no one talks about, was guaranteed employment and job security—the “iron rice bowl”, they called it in China, but it went by many names.

[In socialist regimes], you couldn’t really get fired from your job. As a result you didn’t really have to work very hard... I have a lot of friends who grew up in the USSR, or Yugoslavia, who describe what it was like. You get up. You buy the paper. You go to work. You read the paper. Then maybe a little work, and a long lunch, including a visit to the public bath. If you think about it in that light, it makes the achievements of the socialist bloc seem pretty impressive: a country like Russia managed to go from a backwater to a major world power with everyone working maybe on average four or five hours a day. But the problem is they couldn’t take credit for it. They had to pretend it was a problem, “the problem of absenteeism,” or whatever, because of course work was considered the ultimate moral virtue. They couldn’t take credit for the great social benefit they actually provided. Which is, incidentally, the reason that workers in socialist countries had no idea what they were getting into when they accepted the idea of introducing capitalist-style work discipline. “What, we have to ask permission to go to the bathroom?” It seemed just as totalitarian to them as accepting a Soviet-style police state would have been to us.

That ambivalence in the heart of the worker’s movement remains... On the one hand, there’s this ideological imperative to validate work as virtue in itself. Which is constantly being reinforced by the larger society. On the other hand, there’s the reality that most work is obviously stupid, degrading, unnecessary, and the feeling that it is best avoided whenever possible. But it makes it very difficult to organize, as workers, against work.

Let’s talk about “bullshit jobs.” What do you mean by this phrase?

When I talk about bullshit jobs, I mean, the kind of jobs that even those who work them feel do not really need to exist. A lot of them are made-up middle management, you know, I’m the “East Coast strategic vision coordinator” for some big firm, which basically means you spend all your time at meetings or forming teams that then send reports to one another. Or someone who works in an industry that they feel doesn’t need to exist, like most of the corporate lawyers I know, or telemarketers, or lobbyists…. Just think of when you walk into a hospital, how half the employees never seem to do anything for sick people, but are just filling out insurance forms and sending information to each other. Some of that work obviously does need to be done, but for the most part, everyone working there knows what really needs to get done and that the remaining 90 percent of what they do is bullshit. And then think about the ancillary workers that support people doing the bullshit jobs: here’s an office where people basically translate German formatted paperwork into British formatted paperwork or some such, and there has to be a whole infrastructure of receptionists, janitors, security guards, computer maintenance people, which are kind of second-order bullshit jobs, they’re actually doing something, but they’re doing it to support people who are doing nothing.

When I published the piece, there was a huge outpouring of confessionals from people in meaningless positions in private corporations or public service of one sort or another. The interesting thing was there was almost no difference between what they reported in the public, and in the private sector. Here’s one guy whose only duty is to maintain a spreadsheet showing when certain technical publications were out of date and send emails to the authors to remind them it needed updating. Somehow he had to turn this into an eight-hour-a-day job. Another one who had to survey policies and procedures inside the corporation and write vision statements describing alternative ways they might do them, reports that just got passed around to give other people in similar jobs a chance to go to meetings and coordinate data to write further reports, none of which were ever implemented. Another person whose job was to create ads and conduct interviews for positions in a firm that were invariably filled by internal promotion anyway. Lots of people who said their basic function was to create tasks for other people.

The concept of bullshit jobs seems very convincing and even obvious to me–I used to work as a temp, I saw this stuff first-hand–but others might pull market populism on you and say, who are you to declare someone’s else’s job to be bullshit, Mr. Graeber? You must think you’re better than the rest of us or something.

Well, I keep emphasizing: I’m not here to tell anybody who thinks their job is valuable that they’re deluded. I’m just saying if people secretly believe their job doesn’t need to exist, they’re probably right. The arrogant ones are the ones who think they know better, who believe that there are workers out there so stupid they don’t understand the true meaning of what they do every day, don’t realize it really isn’t necessary, or think that workers who believe they’re in bullshit jobs have such an exaggerated sense of self-importance that they think they should be doing something else and therefore dismiss the importance of their own work as not good enough. I hear a lot of that. Those people are the arrogant ones.

Is the problem of bullshit jobs more apparent to us now because of the financial crisis, the Wall Street bailouts, and the now-well-known fact that people who do almost nothing that’s productive reap so much of our society’s rewards? I mean, we always knew there were pointless jobs out there, but the absurdity of it all never seemed so stark before, say, 2008.

I think the spotlight on the financial sector did make apparent just how bizarrely skewed our economy is in terms of who gets rewarded and for what. There was this pall of mystification cast over everything pertaining to that sector—we were told, this is all so very complicated, you couldn’t possibly understand, it’s really very advanced science, you know, they are coming up with trading programs so complicated only astrophysicists can understand them, that sort of thing. We just had to take their word that, somehow, this was creating value in ways our simple little heads couldn’t possibly get around. Then after the crash we realized a lot of this stuff was not just scams, but pretty simple-minded scams, like taking bets you couldn’t possibly pay if you lost and just figuring the government would bail you out if you did. These guys weren’t creating value of any kind. They were making the world worse and getting paid insane amounts of money for it.

Suddenly it became possible to see that if there’s a rule, it’s that the more obviously your work benefits others, the less you’re paid for it. CEOs and financial consultants who were actually making other people’s lives worse were paid millions, useless paper-pushers got handsomely compensated, and people fulfilling obviously useful functions like taking care of the sick or teaching children or repairing broken heating systems or picking vegetables were the least rewarded.

But another curious thing that happened after the crash is that people came to see these arrangements as basically justified. You started hearing people say, “well, of course I deserve to be paid more, because I do miserable and alienating work” – by which they meant not that they were forced to go into the sewers or package fish, but exactly the opposite—that they didn’t get to do work that had some obvious social benefit. I’m not sure exactly how it happened. But it’s becoming something of a trend. I saw a very interesting blog by someone named Geoff Shullenberger recently that pointed out that in many companies, there’s now an assumption that if there’s work that anyone might want to do for any reason other than the money, any work that is seen as having intrinsic merit in itself, they assume they shouldn’t have to pay for it. He gave the example of translation work. But it extends to the logic of internships and the like so thoroughly exposed by authors like Sarah Kendzior and Astra Taylor. At the same time, these companies are willing to shell out huge amounts of money to paper-pushers coming up with strategic vision statements who they know perfectly well are doing absolutely nothing.

You know, you’re describing what’s happened to journalism. Because people want to do it, it now pays very little. Same with college teaching. 

What happened? Well, I think part of it is a hypertrophy of this drive to validate work as a thing in itself. It used to be that Americans mostly subscribed to a rough-and-ready version of the labor theory of value. Everything we see around us that we consider beautiful, useful, or important was made that way by people who sank their physical and mental efforts into creating and maintaining it. Work is valuable insofar as it creates these things that people like and need. Since the beginning of the 20th century, there has been an enormous effort on the part of the people running this country to turn that around: to convince everyone that value really comes from the minds and visions of entrepreneurs, and that ordinary working people are just mindless robots who bring those visions to reality.

But at the same time, they’ve had to validate work on some level, so they’ve simultaneously been telling us: work is a value in itself. It creates discipline, maturity, or some such, and anyone who doesn’t work most of the time at something they don’t enjoy is a bad person, lazy, dangerous, parasitical. So work is valuable whether or not it produces anything of value. So we have this peculiar switch. As anyone who’s ever had a 9-to-5 job knows, the thing everyone hates the most is having to look busy, even once you’ve finished a job, just to make the boss happy, because it’s “his time” and you have no business lounging around even if there’s nothing you really need to be doing. Now it’s almost as if that kind of busyness is the most valued form of work, because it’s pure work, work unpolluted by any possible sort of gratification, even that gratification that comes out of knowing you’re actually doing something. And every time there’s some kind of crisis, it intensifies. We’re told, oh no! We’re all going to have to work harder. And since the amount of things that actually need doing remains about the same, there’s an additional hypertrophy of bullshit.

David Graeber: What’s the Point If We Can’t Have Fun?

Survival of the Misfits

The tendency in popular thought to view the biological world in economic terms was present at the nineteenth-century beginnings of Darwinian science. Charles Darwin, after all, borrowed the term “survival of the fittest” from the sociologist Herbert Spencer, that darling of robber barons. Spencer, in turn, was struck by how much the forces driving natural selection in On the Origin of Species jibed with his own laissez-faire economic theories. Competition over resources, rational calculation of advantage, and the gradual extinction of the weak were taken to be the prime directives of the universe.

The stakes of this new view of nature as the theater for a brutal struggle for existence were high, and objections registered very early on. An alternative school of Darwinism emerged in Russia emphasizing cooperation, not competition, as the driver of evolutionary change. In 1902 this approach found a voice in a popular book, Mutual Aid: A Factor of Evolution, by naturalist and revolutionary anarchist pamphleteer Peter Kropotkin. In an explicit riposte to social Darwinists, Kropotkin argued that the entire theoretical basis for Social Darwinism was wrong: those species that cooperate most effectively tend to be the most competitive in the long run. Kropotkin, born a prince (he renounced his title as a young man), spent many years in Siberia as a naturalist and explorer before being imprisoned for revolutionary agitation, escaping, and fleeing to London. Mutual Aid grew from a series of essays written in response to Thomas Henry Huxley, a well-known Social Darwinist, and summarized the Russian understanding of the day, which was that while competition was undoubtedly one factor driving both natural and social evolution, the role of cooperation was ultimately decisive.

The Russian challenge was taken quite seriously in twentieth-century biology—particularly among the emerging subdiscipline of evolutionary psychology—even if it was rarely mentioned by name. It came, instead, to be subsumed under the broader “problem of altruism”—another phrase borrowed from the economists, and one that spills over into arguments among “rational choice” theorists in the social sciences. This was the question that already troubled Darwin: Why should animals ever sacrifice their individual advantage for others? Because no one can deny that they sometimes do. Why should a herd animal draw potentially lethal attention to himself by alerting his fellows a predator is coming? Why should worker bees kill themselves to protect their hive? If to advance a scientific explanation of any behavior means to attribute rational, maximizing motives, then what, precisely, was a kamikaze bee trying to maximize?

We all know the eventual answer, which the discovery of genes made possible. Animals were simply trying to maximize the propagation of their own genetic codes. Curiously, this view—which eventually came to be referred to as neo-Darwinian—was developed largely by figures who considered themselves radicals of one sort or another. Jack Haldane, a Marxist biologist, was already trying to annoy moralists in the 1930s by quipping that, like any biological entity, he’d be happy to sacrifice his life for “two brothers or eight cousins.” The epitome of this line of thought came with militant atheist Richard Dawkins’s book The Selfish Gene—a work that insisted all biological entities were best conceived of as “lumbering robots,” programmed by genetic codes that, for some reason no one could quite explain, acted like “successful Chicago gangsters,” ruthlessly expanding their territory in an endless desire to propagate themselves. Such descriptions were typically qualified by remarks like, “Of course, this is just a metaphor, genes don’t really want or do anything.” But in reality, the neo-Darwinists were practically driven to their conclusions by their initial assumption: that science demands a rational explanation, that this means attributing rational motives to all behavior, and that a truly rational motivation can only be one that, if observed in humans, would normally be described as selfishness or greed. As a result, the neo-Darwinists went even further than the Victorian variety. If old-school Social Darwinists like Herbert Spencer viewed nature as a marketplace, albeit an unusually cutthroat one, the new version was outright capitalist. The neo-Darwinists assumed not just a struggle for survival, but a universe of rational calculation driven by an apparently irrational imperative to unlimited growth.

This, anyway, is how the Russian challenge was understood. Kropotkin’s actual argument is far more interesting. Much of it, for instance, is concerned with how animal cooperation often has nothing to do with survival or reproduction, but is a form of pleasure in itself. “To take flight in flocks merely for pleasure is quite common among all sorts of birds,” he writes. Kropotkin multiplies examples of social play: pairs of vultures wheeling about for their own entertainment, hares so keen to box with other species that they occasionally (and unwisely) approach foxes, flocks of birds performing military-style maneuvers, bands of squirrels coming together for wrestling and similar games:

We know at the present time that all animals, beginning with the ants, going on to the birds, and ending with the highest mammals, are fond of plays, wrestling, running after each other, trying to capture each other, teasing each other, and so on. And while many plays are, so to speak, a school for the proper behavior of the young in mature life, there are others which, apart from their utilitarian purposes, are, together with dancing and singing, mere manifestations of an excess of forces—“the joy of life,” and a desire to communicate in some way or another with other individuals of the same or of other species—in short, a manifestation of sociability proper, which is a distinctive feature of all the animal world.

To exercise one’s capacities to their fullest extent is to take pleasure in one’s own existence, and with sociable creatures, such pleasures are proportionally magnified when performed in company. From the Russian perspective, this does not need to be explained. It is simply what life is. We don’t have to explain why creatures desire to be alive. Life is an end in itself. And if what being alive actually consists of is having powers—to run, jump, fight, fly through the air—then surely the exercise of such powers as an end in itself does not have to be explained either. It’s just an extension of the same principle.

Friedrich Schiller had already argued in 1795 that it was precisely in play that we find the origins of self-consciousness, and hence freedom, and hence morality. “Man plays only when he is in the full sense of the word a man,” Schiller wrote in his On the Aesthetic Education of Man, “and he is only wholly a Man when he is playing.” If so, and if Kropotkin was right, then glimmers of freedom, or even of moral life, begin to appear everywhere around us.

It’s hardly surprising, then, that this aspect of Kropotkin’s argument was ignored by the neo-Darwinists. Unlike “the problem of altruism,” cooperation for pleasure, as an end in itself, simply could not be recuperated for ideological purposes. In fact, the version of the struggle for existence that emerged over the twentieth century had even less room for play than the older Victorian one. Herbert Spencer himself had no problem with the idea of animal play as purposeless, a mere enjoyment of surplus energy. Just as a successful industrialist or salesman could go home and play a nice game of cribbage or polo, why should those animals that succeeded in the struggle for existence not also have a bit of fun? But in the new full-blown capitalist version of evolution, where the drive for accumulation had no limits, life was no longer an end in itself, but a mere instrument for the propagation of DNA sequences—and so the very existence of play was something of a scandal.

Why Me?

It’s not just that scientists are reluctant to set out on a path that might lead them to see play—and therefore the seeds of self-consciousness, freedom, and moral life—among animals. Many are finding it increasingly difficult to come up with justifications for ascribing any of these things even to human beings. Once you reduce all living beings to the equivalent of market actors, rational calculating machines trying to propagate their genetic code, you accept that not only the cells that make up our bodies, but whatever beings are our immediate ancestors, lacked anything even remotely like self-consciousness, freedom, or moral life—which makes it hard to understand how or why consciousness (a mind, a soul) could ever have evolved in the first place.

American philosopher Daniel Dennett frames the problem quite lucidly. Take lobsters, he argues—they’re just robots. Lobsters can get by with no sense of self at all. You can’t ask what it’s like to be a lobster. It’s not like anything. They have nothing that even resembles consciousness; they’re machines. But if this is so, Dennett argues, then the same must be assumed all the way up the evolutionary scale of complexity, from the living cells that make up our bodies to such elaborate creatures as monkeys and elephants, who, for all their apparently human-like qualities, cannot be proved to think about what they do. That is, until suddenly, Dennett gets to humans, who—while they are certainly gliding around on autopilot at least 95 percent of the time—nonetheless do appear to have this “me,” this conscious self grafted on top of them, that occasionally shows up to take supervisory notice, intervening to tell the system to look for a new job, quit smoking, or write an academic paper about the origins of consciousness. In Dennett’s formulation,

Yes, we have a soul. But it’s made of lots of tiny robots. Somehow, the trillions of robotic (and unconscious) cells that compose our bodies organize themselves into interacting systems that sustain the activities traditionally allocated to the soul, the ego or self. But since we have already granted that simple robots are unconscious (if toasters and thermostats and telephones are unconscious), why couldn’t teams of such robots do their fancier projects without having to compose me? If the immune system has a mind of its own, and the hand–eye coordination circuit that picks berries has a mind of its own, why bother making a super-mind to supervise all this?

Dennett’s own answer is not particularly convincing: he suggests we develop consciousness so we can lie, which gives us an evolutionary advantage. (If so, wouldn’t foxes also be conscious?) But the question grows more difficult by an order of magnitude when you ask how it happens—the “hard problem of consciousness,” as David Chalmers calls it. How do apparently robotic cells and systems combine in such a way as to have qualitative experiences: to feel dampness, savor wine, adore cumbia but be indifferent to salsa? Some scientists are honest enough to admit they don’t have the slightest idea how to account for experiences like these, and suspect they never will.