What is computer music (or does it matter)?

by Dave Maier

As everybody knows, with the proper encouragement computers can make bleeps and bloops, and so: computer music! That's been true for many years, and there are plenty of histories of computer music which will tell you all about the Telharmonium, the Synket, and the RCA Mark II Sound Synthesizer (pictured here). This thing, which was once the state of the art, is the size of several refrigerators and was decidedly not a real-time sound production device. Nowadays, on the other hand, everyone who has a laptop, or even an iPad (or iPhone!), and access to the Internet, can download, often for free, sound generation and manipulation programs which make even the most powerful tools of the previous century look like TinkerToys. Yet our understanding of the significance and meaning of “computer music” remains mired in the compositional and ontological assumptions of the distant past.

This is unfortunate but entirely understandable. As plenty of wise guys have pointed out over the years, we rarely understand change as it happens and only get it, if at all, in retrospect. Still, we should try to keep up; so let's see what we can do. What is “computer music”, and why should we care?

We naturally distinguish between a tool used for making art, on the one hand, and the art made with it on the other. Technical innovations have often led to artistic advances – think of the spectacular paintings made possible by the development of oil paint – but they don't constitute artistic advances in themselves. We don't put tubes of oil paint in museums (at least not art museums).

On the other hand, technical innovations don't have to be as clearly extra-artistic as paint and chisels are. In fact, plenty of art is valuable precisely because of its technical innovation. It's just that technical innovation is generally not sufficient for artistic greatness. In classical music, for example, we tend to say that Chopin and Beethoven use their innovations for particular artistic purposes, not for their own sake, while composers like Paganini or Liszt are regarded as flashier but less substantial (that's the conventional wisdom anyway; each of these latter has had recent scholarly advocates).

For computer music, we may naturally ask: what does it matter that we listen to music, whereas we can understand a computer program only by, as we might say, looking under the hood to see how it works? Do I have to be a coder to appreciate it, or, on the other hand, are such considerations (programming elegance, say) strictly irrelevant to evaluating computer music as art?

The problem in answering this question is that computers are used in so many different ways in making art of this sort, whatever we decide to call it. “Computer music,” after all, is a bit of a prejudicial term in this sense. A few examples will show how daunting this investigation can be, and we may decide simply to give it up as hopeless. However, even if so, if that null result forces us to abandon some natural assumptions about what music or art is, then it may be very significant indeed.

Eventually. For right now, let's just dig into the horrifically confusing details. Even if we do distinguish between computer programs, on the one hand, and the artistic uses to which they are put, again, these uses are themselves uncomfortably diverse for our purposes. Is the result of using these programs best thought of as (attempts at) works of art, or instead as computer programs themselves? Or are there further possibilities still?

Take a program like Nodal. Let's see what its developers say about it.

Nodal is generative software for composing music, interactive real-time improvisation, and a musical tool for experimentation and fun. Nodal uses a new method for creating and exploring musical patterns, probably unlike anything you've used before. You can play sounds using Nodal's built-in synthesiser or any MIDI compatible hardware or software instrument.

Nodal is based around the concept of a user-defined network. The network consists of nodes (musical events) and edges (connections between events). You interactively define the network, which is then automatically traversed by any number of virtual players. Players play their instruments according to the notes specified in each node. The time taken to travel from one node to another is based on the length of the edges that connect the nodes. Nodal allows you to create complex, changing sequences using just a few simple elements. Its unique visual representation allows you to edit and interact with the music generating system as the composition plays.

(If what this program does is not clear, check out the embedded YouTube videos at that site.) What should we make of this?
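And for the coders among us, the idea is easy enough to sketch. Here's a toy version in Python – my own cartoon of the concept, not Nodal's actual code, with notes and edge lengths simply made up: nodes hold notes, edges have lengths, and a virtual player wanders the graph, taking longer to traverse longer edges.

```python
import random

# A toy network in the spirit of Nodal (not its real implementation):
# each node is a musical event (a MIDI note number), and each edge's
# length is the travel time to the next event, in beats.
network = {
    "A": {"note": 60, "edges": [("B", 1.0), ("C", 2.0)]},  # 60 = middle C
    "B": {"note": 64, "edges": [("A", 1.0)]},
    "C": {"note": 67, "edges": [("A", 0.5), ("B", 1.5)]},
}

def play(start, beats=8.0):
    """Walk the network from `start`, printing (time, note) events."""
    time, node = 0.0, start
    while time < beats:
        print(f"t={time:4.1f}  note={network[node]['note']}")
        next_node, length = random.choice(network[node]["edges"])
        time += length  # the edge's length sets the time to the next note
        node = next_node

play("A")
```

Run it twice and you get two different note sequences from the same network – a point we'll come back to in a moment.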

Hmm. First, we should note that the generative issue can be detached from the computer issue. We don't need computers to make generative music, as we've had the instructions-on-paper equivalent for some time – and that's even if we accept the traditional distinction between composition and performance.

Take Terry Riley's breakthrough work In C. The score instructs performers to play a series of phrases, playing each one for as long as they see fit before going on to the next. At first the performers are all together, but as their decisions take effect the various phrases go farther and farther out of phase with each other. Of course they've been composed for this purpose; but in any case different performances of this work can sound very different indeed. In fact they are so different that we might easily think of them as different works – if we didn't know, that is, how they were produced. In fact if Riley had used this process to write out different versions of the work, instead of allowing the performers to make this decision, we might have a tough time justifying a distinction between different works and different “versions” of the same work.
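The mechanism is simple enough to simulate, in fact. Here's a rough sketch – placeholder phrases standing in for Riley's 53, and coin-flip repeat counts standing in for musicianly judgment – of how independent decisions pull the players out of phase:

```python
import random

# Stand-ins for Riley's 53 notated phrases.
phrases = ["phrase 1", "phrase 2", "phrase 3", "phrase 4"]

def performance(seed):
    """One performer's path through the piece: repeat each phrase
    some number of times ("as long as they see fit"), then move on."""
    rng = random.Random(seed)
    path = []
    for p in phrases:
        path.extend([p] * rng.randint(1, 4))
    return path

for player in range(3):
    print(performance(player))  # three players, three different paths
```

The "players" all line up at the start and drift apart thereafter, just as in performance.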

In Nodal, we have much the same thing going on, albeit a bit differently. Nodal can be programmed to deliver a different set of sounds each time one starts it up. Naturally computers can't decide to do things as they see fit, so any differences between “performances” of a Nodal … patch, I guess we should call it, can only be the result of random chance (as programmed, that is, by the composer). That might look like a salient difference, but of course random operations are old news in pencil-and-paper music as well, even the kind which is not particularly conceptual as opposed to sensory (e.g. Morton Feldman).

So no help yet for our question. And of course not all “computer music” is like this. But let's stick with Nodal for a moment. Even if I do think of Nodal, as the developers suggest, as a “generative” program in this sense, I need not use it to make “generative” music. That is, even if I do use it to generate pitch patterns, or even sounds, for use in a composition, that doesn't mean those patterns or sounds constitute the composition as a whole. Nothing requires that I use only Nodal for a piece; and even if I do, my composition is determined by how I arrange the sounds, not how they got onto my palette. Maybe I use Nodal only to generate a sequence which hums along in the background while the important things happen up front. In that case, whether the sequence is generated or, as we might say, “through-composed,” is surely of little compositional importance.

Here's another example. Like many other programs, Max/MSP can be used simply to generate sounds according to the composer's wishes. Like Nodal, it can also be used to make generative music. However, it is often used not to make music directly, but instead to make sound processors, known as plug-ins (like ValhallaRoom, a reverb plug-in). As the name suggests, you can plug these processors into the sound path of your DAW (digital audio workstation) to modify the sound. Some plug-ins are sound generators rather than processors (although this distinction is pretty fuzzy itself, given how synthesizers work).
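Schematically – and this is a cartoon, not how ValhallaRoom or any real plug-in is implemented – a plug-in is just a function from audio buffers to audio buffers, plus some user-facing parameters. A crude feedback echo, say:

```python
# A cartoon of a plug-in: a stateful function from sample buffers to
# sample buffers. This one is a crude feedback echo, standing in for
# something grander like a reverb.
def make_echo(delay_samples=4, feedback=0.5):
    buffer = [0.0] * delay_samples  # circular delay line
    pos = 0
    def process(samples):
        nonlocal pos
        out = []
        for s in samples:
            wet = buffer[pos]                  # sound from the past
            buffer[pos] = s + wet * feedback   # feed some of it back in
            pos = (pos + 1) % delay_samples
            out.append(s + wet)                # dry + wet signal
        return out
    return process

echo = make_echo()
print(echo([1.0] + [0.0] * 9))  # an impulse, echoing every 4 samples
```

A DAW does essentially this at scale: it hands each plug-in a stream of buffers and passes the results down the chain.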

Here's one such program. This one's a “standalone” app, not a plug-in, but the principle is the same. Its developers describe Ambient as

A unique ambient soundscape generator. AMBIENT is capable of producing a vast array of ambient textures, from the bizarre to the beautiful. AMBIENT processes any sound you care to load into it. The possibilities are endless.

So it's a “soundscape generator”, not an instrument; but it works not by generating its own sound, as synthesizers do, but by processing sounds which the user feeds into it.
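The blurb doesn't say how, but – as comes up again below – Ambient works by granular synthesis: it chops the input into tiny overlapping "grains" and rescatters them. A toy version of that general technique (a guess at the shape of the thing, certainly not Ambient's actual code) might look like this:

```python
import numpy as np

def granulate(signal, n_grains=200, grain_len=2048, out_len=88200):
    """Toy granular processing: pull short windowed 'grains' from
    random spots in the input and scatter them across the output."""
    env = np.hanning(grain_len)  # fade each grain in and out
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = np.random.randint(0, len(signal) - grain_len)
        dst = np.random.randint(0, out_len - grain_len)
        out[dst:dst + grain_len] += signal[src:src + grain_len] * env
    return out

# One second of input becomes two seconds of smeared texture.
texture = granulate(np.random.randn(44100))
```

Feed it a field recording instead of noise and you get exactly the sort of indistinct, shimmering texture the genre trades in.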

So here's another question for us. The Ambient webpage directs us to a release by one Christopher Hipgrave, and tells us that the tracks there (streamable – check it out) were “made with Ambient.” But we don't know exactly what this means. What did the composer do? Did he simply load various sounds into the program and record the result? Or was his involvement more hands-on than that?

The releasing label doesn't see fit to tell us this, or even mention Ambient at all. The description there is wonderfully ambiguous on the matter:

Hipgrave’s sound is one of gentle manipulation, shimmering melodies and an exquisite sense of indistinct fragility. Whilst multiple layers of texture and tone are the main focal point of this album, background hisses, pops and crackles can also be heard which all seem to emanate from some strange electronic machine.

It’s this attention to detail that provides [‘Slow, With Pages of Fluttering Interference’] with such warmth and humanity, creating a perfect balance against the structure of the music. Each piece is formed using a bare minimum of sound sources, which are then explored in depth, allowing the tracks to breathe, morph or run until their natural conclusion, resulting in an album of sublime intricate beauty.

The composer has shown “attention to detail” (so he didn't just sit back and let the computer do all the work), and he has “explored [the various sound sources] in depth,” but on the other hand he has shown restraint in not interfering with the “natural” course of the processing algorithm.

Still, one might wonder how the credit should be split between programmer and composer (or “arranger,” should we decide that it is the former person who deserves both credits). In this case, however, it turns out that the same person plays both roles anyway, as Hipgrave himself is the creator of Ambient as well as the composer of this work. So he gets all the credit (although of course we might still wonder about the sounds that he loaded into the program, which might very well have been generated by someone else for all we know). Yet surely our questions remain, even as we wonder whether their answers really matter. It is tempting simply to say that if something sounds good, its origin is immaterial; but on the other hand knowing its origin may very well affect how we hear it, and thus whether it sounds good in the first place.

One more example. Paulstretch is a program which, like Ambient, processes whatever you load into it, making it sound very different indeed. However, unlike Ambient, which chops the sound up into little pieces first (“granular synthesis”), Paulstretch simply stretches it out to many times its original length (although there are other parameters you can tweak) without changing the pitch. In this sense, the original sound is in a way much more present, even as it is equally unrecognizable.
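For the curious, the standard trick here – and this is a crude sketch of the general idea, not Paul Nasca's actual Paulstretch code – is to take overlapping windows of the sound, keep each window's frequency content but randomize its phases (which preserves pitch while smearing time), and overlap-add the windows back together at a slower rate than they were read:

```python
import numpy as np

def rough_stretch(signal, factor, win=1024):
    """A crude Paulstretch-like stretch: window, keep FFT magnitudes,
    randomize phases, overlap-add more slowly than we read."""
    window = np.hanning(win)
    in_hop = max(1, int(win // 2 / factor))  # read the input quickly...
    out_hop = win // 2                       # ...write the output slowly
    n_frames = (len(signal) - win) // in_hop
    out = np.zeros(n_frames * out_hop + win)
    for i in range(n_frames):
        frame = signal[i * in_hop : i * in_hop + win] * window
        mags = np.abs(np.fft.rfft(frame))               # keep the "what"
        phases = np.exp(2j * np.pi * np.random.rand(len(mags)))
        frame = np.fft.irfft(mags * phases) * window    # scramble the "when"
        out[i * out_hop : i * out_hop + win] += frame   # overlap-add
    return out

# One second of input becomes roughly eight seconds of output.
stretched = rough_stretch(np.random.randn(44100), factor=8)
```

Note that the original phases are thrown away for good – remember that in a moment.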

In fact, this issue has already come up. As I understand the story, someone posted on the Internet a Justin Bieber song which had been stretched out in this way, the result being a very ambient soundscape indeed. No doubt a similar result could have been obtained by feeding it into Ambient instead. However, in this case, the track had to be taken down because, it was argued, the original song could be reconstructed from the stretched version by compressing it back to its original length. (This seems most implausible – if the stretch works anything like the sketch above, the phase information is gone for good, and no amount of re-compression would bring the song back – and I'm not sure I'm even getting the claims right, but for our purposes that doesn't matter.)

Even if we put the legal issues to one side, along with the compositional issue of the rampant sonic borrowing of “plunderphonics,” we are again forced to consider what exactly one is doing, compositionally, in using such tools. Having used the program a bit, I can predict what will sound good when stretched out, but I have no real idea *exactly* what the result will be. I also wonder about the difference between getting a sound which can be used for something given further tweaking and processing, and something which sounds perfectly wonderful just as is. How much compositional credit do I deserve for doing something which took five minutes, even if I suspected that such a procedure would work beautifully? Does it matter if my source material was a) sound I myself generated with another program; b) found sound (field recordings) I myself recorded; c) sound recorded by someone else; d) sound recorded by someone else and released commercially (I took it off the CD); or e) someone else's music?

Phew! As with so many ultimately useful philosophical investigations (as well as quite a few useless ones), all we've done so far is make trouble. But these questions aren't going away, and next time we'll try to make some constructive progress too. Stay tuned!