Additional Notes on "Drawing Dynamic Visualizations"

Bret Victor / May 21, 2013

Last week, I released a talk called Drawing Dynamic Visualizations, which shows a tool for creating data-driven graphics through direct-manipulation drawing.

I expect to write a full research report at some point (at which point I'll make the research prototype available as well). In the meantime, here is a quick and informal note about some aspects of the tool which were not addressed in the talk.

This note assumes you've watched the talk!

Contents

Snapping
Sub-pictures
Control flow
Export
Challenges
FAQ

Snapping

Unambiguous actions

Much of the effort in designing this tool went into designing a set of drawing actions ("primitive operations") that allows the artist to precisely and unambiguously express intent. The editor is intentionally not magical, and does not "guess" what the artist is intending to do.

By contrast, many drawing programs will guess how the artist is intending to align or position an object, and snap it accordingly. Here's OmniGraffle:

This magic is very helpful when drawing a static picture, but would be troublesome for the tool here, where the drawing actions form a procedure which must work correctly in the general case.

For example, when moving an object, the tool does not allow the artist to simply "drag the object". Instead, the artist must start dragging from a specific point on the object (shown as small blue dots). In the following example, the artist starts dragging from the top-right corner of the rect, so the rect snaps its top-right corner to the target point:
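
(To make this concrete, here is a rough conventional-code sketch -- all names hypothetical, not the tool's actual representation -- of what such an unambiguous step must record: a specific source point on the object and a specific target point, rather than a raw pixel offset for the tool to interpret.)

    // Hypothetical sketch of an unambiguous "move" step.
    // Both endpoints are named, semantic snap points -- nothing is guessed.
    var moveStep = {
      action: "translate",
      object: "rect1",
      from: { object: "rect1",  point: "topRight" },  // where the drag began
      to:   { object: "canvas", point: "center" }     // the snapped target
    };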

This is just one example of how the set of drawing actions has been carefully designed to allow (and force) the artist to express their intent unambiguously. (See also the note at the end about "programming by example".)

Snap point disambiguation

One place where some ambiguity was unavoidable was when multiple snap points overlap. For example, when dragging to the point below, the artist might intend to snap to the endpoint of the line, or the center of the circle, or the center of the canvas. These points happen to overlap at drawing time, but they might not always overlap in the general case (say, with different data), so it's crucial that the artist can express specifically which point is intended.

The tool heuristically chooses one of the points to snap to initially. The artist can then hit the tab key to cycle through the possible snaps. The selected snap is clearly shown by highlighting the target object in yellow, and through the caption at the top of the canvas:

The situation is the same for selecting a snap point to start from. The tool heuristically selects a likely one, and the artist can hit the tab key to correct it.
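
(Conceptually, the disambiguation amounts to no more than cycling through the list of overlapping candidates; a minimal sketch, with hypothetical names:)

    // Hypothetical sketch of cycling through overlapping snap candidates.
    var candidates = ["line1.endpoint", "circle1.center", "canvas.center"];
    var selected = 0;                                  // the heuristic picks an initial candidate
    function onTabKey() {
      selected = (selected + 1) % candidates.length;   // tab advances to the next candidate
    }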

Intersections

Every object provides a number of snap points (shown as yellow dots):

In addition, the artist can snap to the intersection of any two objects:

Intersections are essential to classical geometric constructions, and the idea is that this tool should allow computation to be expressed through geometric construction, when appropriate, instead of the algebraic expressions used in conventional programming. (See the "if" example, below.)
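
(For a sense of what such a construction replaces, here is the algebraic equivalent for one simple case -- where a sloped line crosses a horizontal line -- as a sketch in conventional code, with hypothetical names:)

    // Hypothetical sketch: the algebra that a single intersection snap can stand in for.
    // Where does the segment from (x0,y0) to (x1,y1) cross the horizontal line y = h?
    // (Assumes the segment is not horizontal, i.e. y1 !== y0.)
    function crossingX(x0, y0, x1, y1, h) {
      var t = (h - y0) / (y1 - y0);   // parameter along the segment where y equals h
      return x0 + t * (x1 - x0);      // x coordinate of the intersection point
    }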

So far, I haven't actually made much use of intersections, but it's likely that I'm not yet "thinking geometrically" enough.

(For example, here is a geometric construction of a Fourier series, using intersections. This is from a book written in 1929, and I suspect that a lot of such knowledge has been lost over the last century.)

Glomp

In addition to provided snap points, the artist can hold down the "glomp" modifier key to snap to an arbitrary position along an object. The position can then be parameterized, if desired.

This allows for programmatically traversing paths. For example, here is a dotted border:
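
(A conventional-code analogue of that dotted border, as a rough sketch: step a parameter along the path and place a dot at each position. The pointAt method is a hypothetical stand-in for whatever yields a position at fraction t along a path.)

    // Hypothetical sketch of traversing a path by parameter, as glomping allows.
    // Assumes count >= 2 and that path.pointAt(t) returns the position at fraction t.
    function dotsAlong(path, count) {
      var dots = [];
      for (var i = 0; i < count; i++) {
        var t = i / (count - 1);     // 0 at the start of the path, 1 at the end
        dots.push(path.pointAt(t));  // position at parameter t
      }
      return dots;
    }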

Sub-pictures

The tool allows a picture to be placed inside another picture. Such a "sub-picture" serves the purpose of a subroutine -- it encapsulates and abstracts some computation.

When we have an encapsulation mechanism, we can ask how information gets in and how it gets out. A conventional "function", for example, imports information through arguments, and exports information through its return value. (Aside from global variables and side effects.)

In the tool here, there are two kinds of information -- algebraic information (variables created through algebraic expressions in the data panel) and geometric information (positions and objects on the canvas).

Data panel

A sub-picture imports algebraic information through data variables.

The definition of a variable can be overridden by an expression, which is evaluated when the sub-picture is created. This expression can make use of dynamic ("runtime") information, such as the measurements of objects on the canvas at that time.

Measurements

A sub-picture exports algebraic information through measurement variables.

A measurement variable is defined just like a data variable, except that its definition is evaluated after the procedure has completed, and can make use of any information thus computed, such as the measurements of objects on the canvas.

The sub-picture's measurements are visible to the outer picture, and can be used in expressions in the outer picture:
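
(In conventional-code terms, the algebraic contract of a sub-picture is roughly function-like; a minimal sketch, with hypothetical names:)

    // Hypothetical sketch of a sub-picture's algebraic contract, in function terms.
    function barSubPicture(data) {       // data variables imported, e.g. { value: 30 }
      var barHeight = data.value * 4;    // ...drawing steps compute geometry...
      // measurement variables exported, evaluated after the procedure completes:
      return { barHeight: barHeight };
    }

    var m = barSubPicture({ value: 30 }); // m.barHeight is usable in the outer picture's expressions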

Magnets

A sub-picture exports geometric information through magnets.

From the perspective of the outer picture, everything inside the sub-picture is encapsulated and inaccessible. All the outer picture can do is snap to points on the bounding box:

The sub-picture can export a snap point by placing a magnet at that point. "Drawing" a magnet works exactly like drawing any object, such as a line or a circle. (A magnet is always a guide, meaning that it doesn't appear in the final picture.) The outer picture can then snap to this point:

There currently is no way for a sub-picture to import geometric information. One would want some way for a sub-picture to "import" a snap point from the outer picture, complementary to the way in which a magnet exports a point to the outer picture. I have a few ideas for this, but more exploration is needed.

Recursion

Yes, you can draw a picture inside itself:

There's currently an arbitrary depth limit. I haven't found recursion to be very useful, but again, this may be due to not being good at "thinking recursive-geometrically" yet.

Hat tip to Toby Schachman's Recursive Drawing project.

Control flow

There are two control-flow mechanisms, looping (equivalent to "for") and conditionals (equivalent to "if").

Looping

Selecting a range of steps and hitting the loop key makes a loop. The bounds of the loop are expressions.

The way that iteration works is unusual, and a little questionable. The loop loops over columns of data. On each iteration, a particular column is selected. The token of an array variable refers to the array's value at the currently-selected column.

That is, arrays are dereferenced implicitly, and the loop index is not typically needed. When it is needed, the hidden "column" variable refers to the index of the selected column.
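
(In conventional-code terms, the selection model behaves roughly like this sketch, with hypothetical names:)

    // Hypothetical sketch of the column-selection model in conventional terms.
    // "heights" is an array variable; inside the loop, its token refers to heights[column].
    var heights = [30, 80, 45, 60];
    for (var column = 0; column < heights.length; column++) {
      var height = heights[column];   // the token dereferences the array implicitly
      // ...the drawing steps for this column use "height" directly...
    }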

I went with this scheme because it was the simplest thing that seemed to work. It normally works well. Some downsides of the selection model are that it's limited to flat, spreadsheet-like data structures (no trees), and it limits the usefulness of nested loops (because only one column is selected at once, an inner loop cannot refer to the selected data in the outer loop).

Using the "all items" token in the reductions palette, it is possible to obtain the entire array and index it "manually" (as well as map it, filter it, etc.), although this facility is probably too much of a back-door to the underlying implementation, and should be rethought.)

Conditionals

The artist can make a range of steps conditional by selecting them and hitting the "if" key. The condition expression can then be edited:

As an example, consider a bar chart with numbers inside the bars:

When the bar gets too short, we would like the text to flip to the top of the bar, to avoid the situation here:

Here is how we do that. We draw a guide line from the text to the bottom of the canvas, and measure that line. If the line goes up instead of down (indicating that the text is below the bottom of the canvas), we move the text above the bar:
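
(In conventional-code terms, the conditional amounts to something like this sketch -- all names and coordinate conventions are assumptions, with y increasing upward and the canvas bottom at y = 0:)

    // Hypothetical sketch of the "flip the label" conditional in conventional terms.
    function labelY(barHeight, inset) {
      var y = barHeight - inset;   // label drawn a fixed inset below the top of the bar
      var guide = 0 - y;           // guide line from the label down to the canvas bottom
      if (guide > 0) {             // the line "goes up": the label is below the canvas bottom
        y = barHeight + inset;     // so place the label above the bar instead
      }
      return y;
    }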

Scope

The system uses the equivalent of conventional lexical scoping. Objects created within a loop or conditional are scoped to that block, and cannot be referenced (mutated or snapped to) by a step outside that scope. (They do get rendered into the final picture, unless they are guides.)

Usually, this is the right thing. Occasionally, it's awkward. For example, an artist commonly wants to connect objects with a line:

If each of these dots is created on a different iteration of the loop, then only one is ever in scope at a time, so it's not possible to draw a line from one to another. One solution is to create an object outside of the loop to "remember" the position from the previous iteration. (This same problem, and this solution, are very common in code as well.)
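
(The "remember the previous position" workaround looks just like its code counterpart; a minimal sketch, with drawLine standing in for whatever draws a segment between two points:)

    // Hypothetical sketch of the "remember the previous position" workaround.
    function drawLine(a, b) { /* stand-in for drawing a segment from a to b */ }

    var points = [[10, 40], [60, 25], [110, 70]];
    var prev = null;                  // lives outside the loop, so it persists across iterations
    for (var i = 0; i < points.length; i++) {
      var dot = points[i];            // the dot created on this iteration
      if (prev !== null) {
        drawLine(prev, dot);          // connect it to the previous iteration's dot
      }
      prev = dot;                     // remember this position for the next iteration
    }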

Another solution is to add points to a path object, which inherently accumulates, although paths have their own troublesome scoping issues.

There are often objects visible on the canvas that are not in scope. Because the objects are visible, the artist is tempted to make use of them, but that is not allowed -- they cannot be selected or snapped to. This can be frustrating, and precludes certain "obvious" constructions. Allowing references to such objects opens up a number of ambiguity issues.

I can't go into all of the issues with scoping in this brief note, but it is a subtle and challenging area.

Export

In the implementation of the current prototype, the editor is separate from the "runtime" which interprets and draws the picture.

The intent is that a picture drawn in the editor can be exported as a standalone JavaScript file, which can then be dropped into a webpage. Through the picture's JavaScript API, the author can set the picture's data, render it into a canvas, and query its measurements.
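
(As a rough sketch of the intended usage -- every name here is hypothetical, purely to illustrate the shape of such an API:)

    // Hypothetical sketch of how an exported picture might be used on a page.
    var picture = new ExportedPicture();                    // hypothetical constructor
    picture.setData({ values: [30, 80, 45, 60] });          // set the picture's data
    picture.renderInto(document.getElementById("chart"));   // render it into a canvas element
    var height = picture.measurement("totalHeight");        // query one of its measurements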

(Export is currently not implemented; the challenges have to do with preventing library conflicts and other JavaScript nonsense.)

Challenges

A few major conceptual challenges to be addressed in the work going forward:

Scoping, as mentioned above. Paths are particularly hairy, since the points of a path can be created in different scopes from each other, and in a different scope from the path object itself. When the artist is snapping to a path point, it's not always clear what the intent is.

Iteration over geometric objects. It's not currently possible to iterate over geometry. For example, to rotate every object on the canvas, or to rotate every rect on the canvas. One would like to iterate over individual points of an object as well. I have some ideas for how the interface for this might work (perhaps resembling "selection"), but more exploration is required.

Abstraction over geometric transformations. Sub-pictures are currently the only abstraction mechanism, but a sub-picture starts from an empty canvas. One would like a way to package up a procedure that takes in geometric objects and modifies them. For example, a procedure that can be applied to an arbitrary object to turn it upside down, or stroke it with a dotted border. This is related to the comment in the Magnets section that there is no way to "import" geometric information.

FAQ

Is this "programming"?

Sure. The artist is certainly creating a computer program.

(I would deny that it's "coding", however, since the artist is not directly manipulating a symbolic representation, and what else could the word "code" mean? (I've heard people claim that it must be "coding" because code is being "created under the hood". This seems equivalent to the claim that drawing in Illustrator is "coding in PostScript". (It's not.)))

More to the point, the artist is indeed required to think procedurally. (But note that the act of drawing, like a procedure, is a step-by-step process.) The artist must also think abstractly (when designing loops and sub-pictures and such). Learning abstraction is no doubt a challenge for anyone, although the tool is designed to help make that challenge approachable. (See Learnable Programming for a dissection of many of the techniques used.)

When considering "who" would or could use a tool like this, realize that we have a cultural gap of sorts. We've grown a set of people who are good at thinking in procedural abstractions; they've been trained to manipulate code, and they call themselves "programmers". And we've grown a set of people who are good at thinking in pictures; they've been trained to directly manipulate those pictures, and they call themselves "designers" or "illustrators". Because we haven't had a means of directly manipulating abstract procedural pictures, we don't yet have a set of people who think that way. I expect that it will take time for such people to grow, and I'm excited about what they will create once they're here.

Thirty years ago, digital 3D modeling as an established art discipline did not exist. This was because direct-manipulation tools -- Maya and its ilk -- did not exist. Today, we've grown an incredible culture of 3D artists who think in the new way.

So, to me, the question of whether today's programmers will "learn direct manipulation", or today's artists will "learn abstract procedural thinking", is uninteresting. I'm more interested in growing new people, free from old biases.

Is this "visual programming"?

No. The term "visual programming" has had a well-established definition for several decades, and this tool is not that.

A "visual programming language" provides graphical representation and spatial manipulation of the program structure. (That is, static "elements" or "operators", analogous to the "code" in a conventional programming language, or schematic components in a circuit).

This is not the case with the tool here, which actually represents program structure as a fairly conventional list of textual instructions. The only direct manipulation of the instructions is in selecting them for looping, deleting, etc.

Instead, the tool provides graphical representation and spatial manipulation of the program data. (Dynamic or "runtime" information, analogous to the values of data structures in a conventional programming language, or voltages in a circuit. In this case, the data is a picture.) The program structure is built up implicitly by directly manipulating the data.

Taxonomically, the tool is more closely related to the family of "programming by example" systems, although it's not that either. They both involve direct manipulation of the data, and in particular, providing a concrete instance of the desired program output, which is then generalized.

However, a typical "programming by example" system generalizes via inference. This is appropriate when the user is "in the loop" to verify the generalization. (For example, the user might rename a list of files by explicitly renaming a few of them to establish the pattern, letting the system rename the rest, and then verifying that the system did so correctly.) This form of inference is less appropriate when the user is creating a component that must be shipped and work reliably and deterministically in an unknown environment.

Instead of inference, a user of the tool here generalizes the example by explicitly parameterizing the steps. (For example, performing a "scale by 50%", and then changing the "50%" to some expression that depends on variables.)
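
(A sketch of that kind of generalization in conventional terms -- shape, value, and maxValue are hypothetical names; the point is only that the recorded constant becomes an expression:)

    // Hypothetical sketch: a recorded step, before and after parameterization.
    function scaleStep(shape, value, maxValue) {
      // as recorded:   shape.scale(0.5);  -- a concrete "scale by 50%"
      // after editing: the constant is replaced by an expression over variables
      shape.scale(value / maxValue);
    }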

The tool here is also related to the family of "constraint-based drawing" tools, although once again, it's not one of them. The prototypical example is Ivan Sutherland's "Sketchpad", and the concepts were explored further in (for example) David Kurlander's "Chimera" and Michael Gleicher's "Briar". (It seems likely that I got the idea for semantic snapping from "Briar".)

All of these tools produce static pictures, not dynamic pictures, in the sense that I've been using those terms. That is, the pictures they produce do not take input data. They cannot be used as components that turn different data into different pictures.

The programming style of the tool here is perhaps most closely related to the humble "macro recorder". When recording a macro in Emacs, for example, the user's actions build up a Lisp procedure which can then be edited and generalized. The most useful actions to record tend to be semantically-rich ones that generalize well (such as navigating via incremental regexp search instead of arrow keys). This is more-or-less similar to the programming style embodied in the tool here.

* * *

While on this subject, I might as well mention my own past work on this problem. The tool here is my third run at the problem of drawing dynamic pictures. See Dynamic Pictures for an overview and philosophy.

My first attempt was a concept described in the Designing a Design Tool section of Magic Ink, where an artist draws static snapshots corresponding to sets of example parameters, with the tool inferring the mapping between parameters and picture elements. The second attempt was Substroke, a language based around parameterized transformations, where the artist's drawing history could be parameterized at each stage. The present work was inspired by both, but grew directly out of the animation tool shown in Stop Drawing Dead Fish.