The word “computer” shows up often in the titles and tracks of early electronic artists like Kraftwerk and on early Detroit techno records – artifacts of culture created right on the cusp of the microcomputer revolution. They were objects that were venerated, celebrated, coveted, sometimes feared.
Today, computers are so dominant in our lives that they are reshaping our bodies and moral systems thousands of years in the making – yet music hardly mentions the machines, even though they now play an indispensable role in making it.
Even the term “computer music” feels archaic – not least because everything is computer music now. The rare new piece that wasn’t composed with the assistance of a computer has in all probability been processed, reproduced, distributed and played back by one. Short of music performed live on acoustic instruments, everything qualifies. The step after ubiquity is invisibility: computer music has become like electricity, industrial pollution and WiFi, noted only in its absence.
Yet things could easily have gone a different way. Consider the ANS synthesizer, created in the Soviet Union after technology exchanges between East and West had ceased and research had begun to branch off in fascinating ways. The ANS was an astonishingly inventive device whose interface was not a typewriter but glass plates and photocells: light passing through the plates generated signals that were sent on to amplifiers.
Max Mathews was among the first to see beyond the limitations of early Cold War technology to a future day, he said, “when a plumber can come home from work and instead of watching television turn on his home computer and make music with it.” That sounds like a caricature of our world, in which hundreds of thousands – perhaps millions – make music with computers, but it was prophecy. Mathews shared that vision not in the last decade but in 1965, when there were no home computers, and no plumbers to make “techno bangers” on them.
Max Mathews made his computer sing one day in 1957. He wasn’t the first to accomplish the feat, but the event was for electronic music what the Trinity test was for nuclear weapons: its Anno Domini, the date at which all timelines begin.
Mathews was likely born to be an academic. He was raised in rural Nebraska, where both of his parents were instructors at a teaching college – teachers of the teachers, as it were. A stint in the Navy exposed him to radio technology; after discharge he earned a bachelor’s degree in electrical engineering at Caltech and a doctorate from MIT in 1954.
A year later he was hired at Bell Laboratories – the famed “idea factory” and self-contained (and wholly owned) Silicon Valley of the era before Silicon Valley existed. Nine Nobel Prizes were awarded to Bell employees over the years. Among the breakthroughs made at the Lab were the cosmic microwave background radiation (the echo of the Big Bang), UNIX and the transistor.
Bell Laboratories was owned by AT&T, and as such its first priority was the telephone system and the electronic transmission of signals and human speech. (Among the earlier Bell Labs inventions, some two decades before Mathews arrived, was the vocoder.) Mathews worked in acoustics and behavioral research, developing technology to get sound into a computer and out of it again.
“It was immediately apparent,” he later remembered, “that once we could get sound out of a computer, we could write programs to play music on the computer. That interested me a great deal. The computer was an unlimited instrument, and every sound that could be heard could be made this way. And the other thing was that I liked music. I had played the violin for a long time…”
MUSIC I (as in “one”), the program that Max Mathews wrote, was not the first software for playing music, but it was the first that made it possible to synthesize sound on a computer and play it back at the user’s leisure. Earlier experiments with computer music had resulted in a tune which was played once and then had to be reconstructed all over again.
Getting a computer to do this sort of thing was not easy in 1957. The first computer music machine wouldn’t fit on your desk; it might not have fit in your house. The IBM 704 was a gigantic mainframe, yet far too slow to process musical sounds in real time: the first performance lasted just a few seconds but would have taken an hour of processing time to compute, so Mathews had the output transferred to tape and sped up to account for this.
“Computer performance of music was born in 1957,” Mathews recalled 40 years later, “when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating.”
That first performance was a composition called “The Silver Scale.” The series of beeps and tuneless tones can still be heard much as it was heard for the first time in Bell Labs. Mathews was under no illusion about the art of what they had made. “It was terrible,” he said. His mentor, John R. Pierce (the man who had coined the word “transistor”) thought it sounded awful. MUSIC I had only “one waveform,” Mathews recalled, “a triangular wave, no attack, no decay, and the only expressive parameters you could control were pitch, loudness and duration.”
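MUSIC I’s entire expressive range – one triangle wave, controlled only by pitch, loudness and duration – is simple enough to sketch in a few lines of Python. (This is an illustration of the idea, not the original code; the sample rate and note values are assumptions.)

```python
def triangle(phase):
    # One period per unit of phase; output swings linearly between -1 and 1.
    return 2.0 * abs(2.0 * (phase % 1.0) - 1.0) - 1.0

def render_note(freq_hz, loudness, dur_s, sample_rate=8000):
    # Pitch, loudness, duration: the only three controls MUSIC I offered.
    n = int(dur_s * sample_rate)
    return [loudness * triangle(freq_hz * i / sample_rate) for i in range(n)]

# A short run of pitches in the spirit of "The Silver Scale" (values illustrative).
samples = []
for freq, amp, dur in [(261.6, 0.5, 0.25), (293.7, 0.5, 0.25), (329.6, 0.5, 0.25)]:
    samples.extend(render_note(freq, amp, dur))  # no attack, no decay: notes simply abut
```

With no envelopes, each note starts and stops instantly – part of why the result sounded so lifeless.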
Max would later call MUSIC I simply “a beginning.” He was being humble: it was in fact a tremendous proof of concept for what had been a purely theoretical possibility until then. Getting any sound at all out of a mainframe meant, by the sampling theorem of fellow Bell Labs alumnus Claude Shannon, that any sound could eventually be synthesized. And stored. And altered. And played back. Or, as Mathews put it in an influential 1963 article in Science, “There are no theoretical limits to the performance of the computer as a source of musical sounds.”
MUSIC I would evolve through five iterations to MUSIC V, all by Mathews’ hand. MUSIC II, released the following year, added three more voices and the wavetable oscillator. By 1960, Mathews had rebuilt MUSIC around “modularity” – self-contained units for generating and processing sound, working together in the manner of a band.
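The wavetable oscillator MUSIC II introduced is still a workhorse of software synthesis: compute one cycle of a waveform once, store it in a table, then read the table at different speeds to get different pitches. A minimal sketch of the idea (the table size, sine fill and sample rate here are illustrative assumptions, not MUSIC II’s actual values):

```python
import math

TABLE_SIZE = 512
# One stored cycle of the waveform; the composer could fill a table with any shape.
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(table, freq_hz, dur_s, sample_rate=8000):
    # Step through the stored cycle at a rate proportional to the desired pitch:
    # the same table yields any frequency just by changing the step size.
    step = len(table) * freq_hz / sample_rate
    out, phase = [], 0.0
    for _ in range(int(dur_s * sample_rate)):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

tone = wavetable_osc(sine_table, 440.0, 0.5)  # half a second of A440
```

The payoff is economy: the expensive waveform computation happens once, and playback is reduced to table lookups – exactly what a machine too slow for real-time synthesis needed.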
“Music I led me to Music II through V,” Mathews said in 1997. “A host of others wrote Music 10, Music 360, Music 15, Csound, Cmix, and SuperCollider.”
Mathews continued to write music for computers and encouraged others to do the same. Compositions like “Numerology” sound like the sound effects and chiptunes of Donkey Kong or some other arcade game. (Stranger still, the primitive chirps of “Numerology” show up as a sample in J. Cole’s “Motiv8.”)
One composition, however, was not an original – and it probably became the most famous cover song in the history of computer music.
Ever since technology granted humans the God-like power to build computers, we have been imagining the multitude of ways our creations could destroy us. The notion of the mechanical servant rebelling against its masters was best captured in culture when the HAL 9000 began its homicidal rampage in Stanley Kubrick’s 1968 film 2001: A Space Odyssey. After astronaut Dave Bowman returns to the ship and begins deactivating the computer, HAL reverts to its earliest programming, giving a childlike rendition of a song taught to it by its beloved creator – a song called “Daisy Bell.”
What follows really happened – it’s not an urban legend – and the anecdote is probably better known than any of Max Mathews’ scientific accomplishments.
In the early 1960s, science fiction author Arthur C. Clarke visited Bell Labs and heard a demo designed to illustrate speech synthesis: an IBM 7094 mainframe “singing” an old standard, “Daisy Bell,” originally written by Harry Dacre in 1892. The vocal was programmed by John L. Kelly and Carol Lochbaum, with the musical accompaniment programmed by Max Mathews.
The demo (part of formal tours of Bell Labs for important visitors for many years) and the computer’s childlike performance must have made a striking impression on Clarke. Perhaps he saw it as a hinge of history – the computer evolving from a helpless toy being “taught” to recite nursery rhymes into something that might one day become a neurotic brain capable of murder.
Bell Labs’ demo of “Daisy Bell” appeared in a remarkable short film, The Incredible Machine, released in 1968. Mathews appears in it leaning over a device called the Graphic 1 – a machine that let users draw directly on a CRT computer monitor with a light-pen stylus.
You don’t have to squint too hard to see this device as the precursor to the modern DAW, in which sound is presented as shapes that can be moved, manipulated, cut and pasted.
“The Graphic 1,” Mathews wrote, “allows a person to insert pictures and graphs directly into a computer memory by the very act of drawing these objects… Moreover the power of the computer is available to modify, erase, duplicate and remember these drawings.” It says something about the spirit of a place like Bell Labs that a device shown earlier in the film being used to model circuits would be appropriated by the art freaks to simplify programming music.
Max Mathews went on to develop the GROOVE system in 1970, focused on computer-aided synthesis for live performance. He also invented a peculiar device called the Radio Baton – two rods that look like timpani mallets but are wielded like conductor’s batons, using gestures to control every facet of a piece of recorded music in three dimensions. The Radio Baton became a kind of trademark of Mathews’ wild experimentation, much as Buckminster Fuller said many interesting things but is mostly associated with geodesic domes.
After retiring from Bell Labs, Mathews became a professor emeritus of music at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”), where he worked until his death on April 21, 2011 at age 84. In an interview with WIRED published three months before his death, Mathews expressed some disappointment that the massive amount of nearly free computing power available today hadn’t been taken full advantage of, and laid out his own vision of the computer’s place in modern music. He saw it not just as a brain, nor as a synthetic replacement for musicians or instruments, but as an instrument in its own right – “something he believes has not happened yet.”
“What we have to learn is what the human brain and ear thinks is beautiful,” Mathews said. “What do we love about music? What about the acoustic sounds, rhythms and harmony do we love? When we find that out it will be easy to make music with a computer.”