Self-organised Sound with Autonomous Instruments:
Aesthetics and experiments

Thesis submitted for the degree of PhD
at the Department of Musicology,
University of Oslo
February 2012

Abstract
Autonomous instruments are computer programmes that generate music algorithmically and without realtime interaction, from the waveform level up to the
large-scale form. This thesis addresses questions of aesthetics and the role of the composer in music made with more or less autonomous instruments. Furthermore, a particular class of autonomous instruments, called feature-feedback systems, is developed. These instruments use feature extractors in a feedback loop, where features of the audio output modulate the synthesis parameters.
Methods adopted mainly from chaos theory are used in experimental investigations of several feature-feedback systems. Design principles are also introduced for controlling limited aspects of these instruments. These experimental methods and design strategies are not widely used in current research on synthesis models, but may be useful to anyone who wishes to build similar instruments.
Whereas Varèse preferred to designate music as "organised sound", autonomous instruments may be said to engender self-organised sound, in the sense that the result is not specified in detail by the composer; in fact, the result may not even have been expected. Thus, there is a trade-off between deliberate sound-shaping by the composer on the one hand, and truly autonomous instruments on the other.
The idiomatic way of operating an autonomous instrument is experimentation followed by serendipitous discovery.
Preface

Two broad topics interested me at the time when I conceived of the directions for this thesis, and they still do. One of them is nonlinear feedback systems and chaos, the other is the application of feature extractors in the analysis and synthesis of sounds. I thought they could be combined somehow, so I just had to invent feature-feedback systems.
During the ﬁrst three or so years of this project (begun in August 2008), I did not know what to call the systems that I wanted to investigate. In the autumn of 2008, it crossed my mind that there must be some people out there who experiment with similar things, so I sent out a request on the cec-conference discussion forum for practitioners of “adaptive synthesis”, which was the term used at that time. This is more or less how I
then described it:
Essentially, adaptive synthesis consists of a sound generator (a synthesis technique, either digital or analogue), a signal analysis unit which performs feature extraction of the signal produced by the generator, and ﬁnally there is a mapping from the analysed attributes back to the control parameters of the sound generator.
Several people responded to my request with useful suggestions. It is no coincidence that many of them independently pointed to the work of Agostino Di Scipio, whose seminars, music and writings have had a noticeable inﬂuence on a current generation of composers and musicians. In particular, I would like to thank Di Scipio himself for his helpful reply, Owen Green for sharing some of his music and other information, and Nick Collins for pointing me in useful directions and for helping me out with installing his Autocousmatic programme.
The common denominator of all the practitioners of "adaptive synthesis" was that they worked with live-electronics. However, as I developed my own feature-feedback systems, another aspect came to the fore, namely that of autonomy, or self-regulating processes. I have always preferred to work with fixed media rather than live-electronics and realtime processing in my own electroacoustic music making. Therefore, my engagement with feature-feedback systems is restricted to offline processes and computer programming.
Since there do not appear to be any previous studies of autonomous feature-feedback systems, I did not often get the feeling of plodding along well-trodden paths of research. Nonetheless, many people have contributed to this project in various ways.
First of all, I would like to thank my main supervisor Rolf Inge Godøy for his longstanding, generous support and encouragement. Sverre Holm entered as my second supervisor halfway into this project. Eystein Sandvik and Steven Feld read and commented on an early version of Chapter 5. Many others at the Department of Musicology have also contributed with their general encouragement. Asbjørn Flø lent me a recording of Gendy3, and has otherwise been a great support through his persistent curiosity.
In an extended e-mail correspondence, Scott Nordlund has introduced me to several fascinating examples of what we think might be autonomous instruments, ranging from analogue neural nets to no-input mixers. In particular, I would like to thank him for sharing his own recordings and for making me reconsider the idea of autonomous instruments once more.
Maury Sasslaﬀ did the copyediting on most of Chapters 1, 2, 4 and 5; then Melinda Hill took over and did the copyediting of Chapter 8. Any remaining stylistic inconsistencies within or across the chapters are my sole responsibility. I would also like to thank the members of the committee, Eduardo Reck Miranda, Stefania Seraﬁn, and Alexander Refsum Jensenius for their advice.
In December 2010, I followed a very inspiring week-long PhD course at Aalborg University. Small traces of ideas from that course (especially due to Dave Meredith and Bob Sturm) seem to have made their way into this thesis. Thanks go to the tea-drinking contingent of fellow PhD students for a pleasurable time, and in particular to Iballa Burunat for her response to the test version of the Autonomous Instrument Song Contest (the one with the shrill sounds), and of course to all those who answered it.
Introduction

The motivation behind the present thesis is mainly a curiosity about a feedback system that could be used as a musical instrument. The question was, what would happen if one were to put a synthesis technique into a feedback loop and, so to speak, let it listen to its own output whilst modifying its synthesis parameters in response to the sound it was currently producing? There are already some examples of similar feedback systems, in which a musician may typically interact with the system. In contrast, my research interest soon narrowed down to systems that took no realtime input. Such systems will here be referred to as autonomous instruments.
There are not many well-known exemplars of music made with strictly autonomous instruments. Some plausible reasons for this will be discussed in this thesis. Nonetheless, there are several examples of digital music instruments that allow for interaction, although the purpose is not to make the performer directly responsible for every nuance of the sound, as a violinist would be, but rather to engage the musician in a dialogue. This kind of instrument will be called semi-autonomous, because the instrument is able to respond in the musical dialogue with output that the performer did not directly call for.
Topics such as self-organisation and emergence are recurrent in writings on more or less autonomous instruments. Moreover, there appear to be some shared aesthetic views among the practitioners of music made with autonomous instruments. Related to these aesthetics are dichotomies such as the natural versus the artificial, and the self-organised as opposed to the deliberately designed. In this thesis, notions of self-organisation will be analysed and related to musical practice. One may think that music resulting from a self-organising process cannot have been deliberately organised by the composer. Whether this is true or a common misunderstanding is another question that will be addressed.
While the discussion of the aesthetics of music made with more or less autonomous instruments is an important part of this thesis, the most original contribution is the introduction of a novel class of autonomous instruments that we call feature-feedback systems. These systems consist of three components: a signal generator produces the audio output, a feature extractor analyses it, and a mapping function translates the analysed features into synthesis parameters (see Figure 1.1). Feature-feedback systems as used here are not interactive in realtime, and most of them are deterministic systems in the sense that they always produce the same output if the initial conditions are the same.
Already some preliminary experimentation with feature-feedback systems revealed
that their behaviour would not be easily understood. Therefore, one of the main purposes of this thesis is to develop a theory, as well as practical know-how, related to the operation of feature-feedback systems. A closer study of feature-feedback systems leads to several other questions to be investigated. First, we need to understand the relationship between synthesis parameters and feature extractors, and in turn, how the sounds are perceived and how this relates to the feature extractors. Then, most importantly, a closer look at dynamic systems and chaos theory will be necessary to set up the proper theoretical framework for these feedback systems.
Figure 1.1: Schematic of a basic feature-feedback system.
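To make the loop of Figure 1.1 concrete, the three components can be sketched in a few lines of Python. Everything here is a hypothetical toy, not one of the systems studied in this thesis: a sine oscillator stands in for the signal generator, the RMS of each output block is the extracted feature, and a linear map feeds that feature back as the frequency of the next block.

```python
import math

SR = 44100    # sample rate in Hz (illustrative constant)
BLOCK = 512   # analysis block size in samples

def generate_block(freq, phase, amp=0.5):
    """Signal generator: one block of a sine wave at the given frequency."""
    block = []
    inc = 2 * math.pi * freq / SR
    for _ in range(BLOCK):
        block.append(amp * math.sin(phase))
        phase = (phase + inc) % (2 * math.pi)
    return block, phase

def rms(block):
    """Feature extractor: root-mean-square amplitude of the block."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def mapping(feature, lo=110.0, hi=880.0):
    """Mapping function: scale the feature (at most 0.5 for a
    0.5-amplitude sine) onto a frequency range for the next block."""
    return lo + (hi - lo) * min(feature / 0.5, 1.0)

def run(n_blocks, freq0=220.0):
    """Run the closed loop offline, with no realtime input.
    Deterministic: the same initial conditions give the same output."""
    freq, phase, out = freq0, 0.0, []
    for _ in range(n_blocks):
        block, phase = generate_block(freq, phase)
        out.extend(block)
        freq = mapping(rms(block))  # feedback: feature modulates synthesis
    return out

audio = run(100)  # 100 blocks of self-modulating output
```

Because the amplitude is held constant here, the RMS is (nearly) the same for every block and the loop settles onto a fixed point after a single step. Even this trivial case hints at why the vocabulary of dynamic systems theory (fixed points, attractors, chaos) is the natural framework for describing such loops.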
Experimental investigations of feature-feedback systems (as well as other dynamic systems and synthesis techniques) form a prominent part of this thesis. In effect, one of the major contributions of this thesis is to show how a range of experimental techniques drawn from dynamic systems theory can be applied to any synthesis technique, and to feature-feedback systems in particular. So far, it has not been common to study novel synthesis techniques as though they were physical systems with unknown properties, but this is exactly the approach taken here.
The new ﬁndings about feature-feedback systems, and the broad outlook on the musical scene related to autonomous instruments that are presented in this thesis should be of interest to composers, musicians and musicologists in the ﬁeld of computer music.
Due to the interdisciplinary nature of this thesis, others may find it fascinating as well. Many new feature-feedback systems are presented in full detail, but more general design principles are also provided that can be used as recipes by anyone who wishes to experiment with these techniques for musical purposes or just out of curiosity. The primary motivation behind this research project has actually not been the making of music, nor even the crafting of useful musical instruments, so much as a theoretical understanding of their manner of operation and of efficacious design principles. This point will be clarified below and related to the emerging field of research in the arts.
In the rest of this chapter, some related work will be reviewed, then diﬀerent approaches to synthesis models are contrasted. The ﬁnal two sections of this chapter address the musical and aesthetic setting in which autonomous instruments are situated.
1.1 Previous and related work

Machine listening is often a crucial component in semi-autonomous instruments. Feature extraction is then used on the incoming audio signal to extract perceptually salient descriptors of the sound, but feature extractors have many other uses, in techniques such as adaptive effects processing and adaptive synthesis. Many of these techniques have served as an inspiration for the present work on feature-feedback systems and will therefore be briefly reviewed here.
Feedback in various forms is another topic that will be important throughout this thesis. Indeed, feedback loops of various kinds are ubiquitous and, in eﬀect, indispensable for music making as will be exempliﬁed in Section 1.1.3. But ﬁrst, we shall clarify the aims of this research project by comparing it to artistic research.
1.1.1 Concerning research in the arts

Immediately, it may appear that the goal of developing a new kind of musical instrument would imply an affiliation with so-called artistic research, or research in the arts. This is research where the production or performance of works of art is an inseparable part of the research itself, although the emphasis may lie more on the working process or on the final result. The present thesis does not try to document the traces of a process that led to musical compositions, nor was the composition of music a part of the research. However, the studies of feature-feedback systems and other dynamic systems, as well as of feature extractors, have resulted in knowledge that can be useful to anyone who might want to make music with feature-feedback systems. Indeed, being a composer myself, I have made some attempts to compose music using feature-feedback systems, but I do not consider this to be the right place to document my experiences as a composer; doing so would be more appropriate in the context of research in the arts. Nonetheless, the questions that motivated this research could perhaps not have been posed by anyone other than a composer. In particular, I will use my background knowledge of how some composers think, gained from numerous conversations with composer colleagues and, needless to say, from my own experience; this will perhaps become most evident in the final chapter.
In order to substantiate the claim that this is not research in the arts, let us summarise the kinds of research that engage with the arts to varying degrees, following Henk Borgdorff (2006).
Borgdorff actually draws a distinction between three different approaches to art-related research. First, there is traditional academic research on the arts, including musicology and other disciplines of the humanities. Historically, musicology has mainly been concerned with the analysis or interpretation of existing music from a theoretical distance. There is a separation between the researcher and the object of research; the musicologist is usually not directly involved in producing the music that is the subject of research.