© NY Times Aug. 28, 2003 By ANNE EISENBERG
Sony has a hit product in Aibo, its robotic pooch that can ride a skateboard, respond to voice commands, take snapshots with its digital camera and play with a ball.
Now, for those who want a more refined electronic sidekick, a French researcher at Sony's Computer Science Laboratory in Paris has come up with the makings of a musical companion rather than a canine one.
François Pachet, a skilled pianist and jazz guitarist who is also a scientist at Sony,
has designed an electronic accompanist that can riff with a musician, seamlessly extending and improvising on the human player's musical output.
This talented accompanist does not come in a winsome plastic skin like Aibo's; at this point it is a software prototype with no more personality than a player piano. But it makes convincing music, its admirers say, by rapidly analyzing what a musician is playing and then joining in with music of a similar style.
The process of creating this new music is entirely digital. Dr. Pachet's software program dissects the music, which must be played on an instrument capable of sending a stream of notes to the processor in MIDI, a digital standard for devices like synthesizers that control the emission of music. When the performer pauses, the computer program instantly begins generating new music with a MIDI-equipped synthesizer.
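The pause-detection step the article describes can be sketched in a few lines. This is an illustrative reconstruction, not Sony's actual code: it treats the incoming MIDI stream as a list of timestamped notes and closes a phrase whenever the silence between onsets exceeds a threshold (the 0.25-second value is an assumption for the example).

```python
# Illustrative sketch (not the Continuator's actual code): segment an
# incoming MIDI note stream into phrases by detecting a silent gap,
# the cue that triggers the system's response.

PAUSE_THRESHOLD = 0.25  # seconds of silence treated as a phrase boundary (assumed value)

def split_into_phrases(notes, threshold=PAUSE_THRESHOLD):
    """notes: list of (onset_time_in_seconds, midi_pitch) tuples, sorted by time.
    Returns a list of phrases, each itself a list of such tuples."""
    phrases = []
    current = []
    last_onset = None
    for onset, pitch in notes:
        if last_onset is not None and onset - last_onset > threshold:
            phrases.append(current)  # the pause closes the current phrase
            current = []
        current.append((onset, pitch))
        last_onset = onset
    if current:
        phrases.append(current)
    return phrases

# Example: three quick notes, a long pause, then two more notes
stream = [(0.0, 60), (0.1, 62), (0.2, 64), (1.0, 65), (1.1, 67)]
print(split_into_phrases(stream))  # two phrases: pitches [60, 62, 64] and [65, 67]
```

In a real system the threshold would adapt to the player's tempo rather than sit at a fixed value.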
The computer-generated music picks up so quickly and smoothly that many people who have heard demonstrations say it is impossible to tell when the human musician stops and the program begins.
Programs like Dr. Pachet's, which is called the Continuator, may some day be part of the array of skills offered by entertainment robots that people buy for companionship.
"Robots like Aibo are designed to be interactive in an entertaining way," Dr. Pachet said. "And part of the fun comes from their being able to learn." The Continuator learns, he said, because the more data it has, the more convincingly it analyzes and extends a particular musical approach.
Andrew Schloss, a professor of computer music at the University of Victoria, heard Dr. Pachet demonstrate his system on a Yamaha Disklavier, a keyboard that can play automatically by reading electronic files - the keys move up and down like those on a player piano, without benefit of a human being. Because the keyboard is also acoustic, it can be played the standard way by a musician. Dr. Pachet used both modes in his demonstration, playing on his own, then letting the computer pick up, then returning as the mood took him.
"It was the best improvising program I've ever heard," Dr. Schloss said of the demonstration. He distinguished no telltale break between the two performers that would signal that one was real, the other virtual.
Dr. Schloss suggested that the system could be particularly useful for musicians who want to analyze their own styles. "You can learn a tremendous amount from using it for an hour or two," he said. "You are listening to yourself mixed up, turned around and modified. It's one hundred percent based on what you've played."
Automatically generating music in a certain style is not a new idea. In earlier days, people cast dice repeatedly and used the numbers to pick musical sequences that together formed a dance tune, thus creating music without relying on the rules of composition. "Musikalisches Würfelspiel," or "A Musical Dice Game" (1787), for instance, attributed to Mozart, involves composing a minuet by rolling dice to combine prewritten measures of music.
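The dice-game idea is mechanical enough to sketch directly. The table below is a hypothetical stand-in (the entries are just labels, not Mozart's actual measures): for each of a minuet's 16 bars, the sum of two dice selects one prewritten measure from a lookup table.

```python
# Toy version of the "Musikalisches Würfelspiel" idea: each of 16 minuet
# bars is chosen from a table indexed by the total of two dice (2..12).
# The measure ids here are placeholder strings, not real music.

import random

# table[bar][dice_total - 2] -> id of a prewritten measure for that bar
table = [[f"m{bar}-{total}" for total in range(2, 13)] for bar in range(16)]

def roll_minuet(rng):
    minuet = []
    for bar in range(16):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # roll two dice
        minuet.append(table[bar][total - 2])
    return minuet

rng = random.Random(0)  # seeded so the example is repeatable
print(roll_minuet(rng))
```

With 11 choices per bar over 16 bars, even this crude scheme yields an astronomical number of possible minuets, which is what made the parlor game work.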
Nowadays music can be generated automatically with a mathematical model, for example, one that calculates the probability that certain notes will follow a particular input. Relying on this model, Dr. Pachet wrote programs that continuously segment the musician's note stream into phrases. Each phrase is sent to the analyzer and processed, and as the musician plays, the system generates a continuation from its growing database. The program can also keep up with changes in rhythm and chords, so that it can, say, produce not only a continuation in the same style as the guitarist but also harmonize with the pianist.
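The probabilistic model the article alludes to can be illustrated with a simplified first-order Markov chain, a much cruder scheme than Dr. Pachet's actual algorithm (which works on variable-length patterns): count which pitch tends to follow which in the player's phrases, then extend a phrase from that table. For a deterministic example, the generator always takes the most frequent successor; a real system would sample from the distribution to stay varied.

```python
# Simplified Markov-chain sketch of learning and continuation
# (not the Continuator's actual variable-length algorithm).

from collections import Counter, defaultdict

def learn(transitions, phrase):
    """Record pitch-to-pitch transitions from one played phrase."""
    for a, b in zip(phrase, phrase[1:]):
        transitions[a][b] += 1

def continue_phrase(transitions, last_note, length):
    """Extend a phrase by repeatedly taking the most frequent successor."""
    out = []
    note = last_note
    for _ in range(length):
        if not transitions[note]:
            break  # no data for this pitch yet
        note = transitions[note].most_common(1)[0][0]
        out.append(note)
    return out

transitions = defaultdict(Counter)
learn(transitions, [60, 62, 64, 62, 60])  # C D E D C
learn(transitions, [60, 62, 64, 62, 64])  # C D E D E

print(continue_phrase(transitions, 60, 4))
```

The "learning" Dr. Pachet describes falls out naturally here: every phrase the musician plays adds counts to the table, so the continuations track that player's habits more and more closely.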
One of Dr. Pachet's chief concerns was the delay between the end of a musician's phrase and the system's continuation. It had to be virtually undetectable for performances to be convincing. To decide on the interval, he listened to many pieces, among them music played by the jazz guitarist John McLaughlin, who has a reputation for speed and who, it turned out, plays a note every 60 milliseconds on average. Using that and other times as a basis, he and his team programmed the Continuator to learn and produce sequences in less than 30 milliseconds. Using a Java prototype of the Continuator running on a Pentium III laptop, they have since gotten it to produce continuations in less than 5 milliseconds, Dr. Pachet said.
But could people distinguish between the human player and the virtual one in this musical version of a Turing Test? The answer in most cases is no. So far, Dr. Pachet said, very few people can tell whether a human or a computer is at work, especially when rapid jazz is played. "We can tune the continuation so that it is virtually indistinguishable from the human input," he said.
Last month the Continuator made an appearance at the annual Siggraph conference in San Diego, sponsored by the Association for Computing Machinery. Mary Farbood, a classically trained pianist and graduate student at the M.I.T. Media Lab, tried it out there. Her research is in a related area: she helped design a computer program, Hyperscore, that enables children to compose music with colors and graphic elements. She found the Continuator innovative. "It takes a novel approach to computer-human improvisation," she said, adding that it is flexible and intuitive to work with.
However elegant the Continuator may sound, Dr. Pachet said, its performance comes down to design rather than musicality.
"There is no magic in it," he said. "Mainly it is designing software that will be able to do all these computations efficiently."