Reflection on the process, and during the process

 

Part of the documentation for the project

“New creative possibilities through improvisational use of compositional techniques,

- a new computer instrument for the performing musician”

 

Øyvind Brandtsegg, research fellow.

NTNU, Trondheim.

Program for Research Fellowships in the Arts

 

Reflection on the process, and during the process

Introduction

Reflections on algorithms for composition (10.2004)

Thoughts on algorithmic composition (10.2004)

About generating musical events in real time (12.2004)

A sketch of the modules for the new instrument (12.2004)

Some programming issues (02.2005)

About exploring the various dimensions of expression (06.2005)

On the aesthetic value of formalism (07.2005)

On determining the aesthetic direction of the scholarship work (08.2005)

Reflections related to my own work (03.2006)

Reflections on playing with Motorpsycho (03.2006)

Reflection on the Markov melody generator (06.2006)

On the relationship between composing and improvising music (08.2006)

About generating musical events in real time (03.2007)

Reflection on hearing a process (03.2007)

Thoughts on what is considered the composition (the work) when working with software (04.2007)

Reflection on writing artistic software (04.2007)

Reflection relating to ImproSculpt Classic (04.2007)

Reflection on programming, when updating the PartikkelCloudDesigner application (05.2007)

Reflections relating to playing with feedback (05.2007)

Reflection on programming (05.2007)

Are we in control? (06.2007)

Reflection on free improvisation (06.2007)

Reflection on singing for a baby (09.2007)

Reflection on the interaction between programming and performance activities (10.2007)

Reflection on the setting of a structured frame for improvisation (10.2007)

Reflection after a practice session with Stian Westerhus October 15th 2007 (10.2007)

Reflection on the choice of algorithms (11.2007)

Reflections related to the work with the software (11.2007)

Reflection after the final artistic presentation concert, December 1st 2007 (12.2007)

Reflection on my own competence as an improviser (12.2007)

Reflection on the current state of the ImproSculpt instrument (12.2007)

Conclusive remarks (01.2008)

Further work

References

 

 

 

Introduction

The reflections in this document represent a selected collection of my experiences during the work process. Each paragraph is marked with the date of writing, and has been left unedited as far as possible. Some of the paragraphs were originally written just in the form of short notes; these have been written out in more complete sentences to convey the meaning more clearly to the reader. Most of the paragraphs were originally written in Norwegian, and have been translated. Care has been taken not to change the original content during translation and formatting, so that the text should represent as true an account as possible of my reflections at the time of writing.

The different writing styles used for reflections written during different parts of the process can reflect my search for a working method during the project. As the scholarship program is fairly new and can be said to still be under development, I see this search for a working method as a natural part of being in such a program.

 

The reflections are a collection of my thoughts and experiences during the work process. They are written not primarily because my insights are genuinely new or because no one has thought about these issues before, but because they represent a collection of knowledge that in many cases remains tacit. Many musicians (and other artists) will recognize the insights, possibly also say they are obvious, but I still think it might be valuable to write down these thoughts. I am fully aware of, and want to explicitly point out, that the written reflection in itself does not represent the (tacit) knowledge, but it points at and tries to circle in on the tacit knowledge. In this manner, a core might become visible in the reader's understanding of what this reflection is all about.


 

Reflections on algorithms for composition (10.2004)

Lindenmayer[1] systems combined with cellular automata[2] contain properties that intuitively (for me) seem attractive for music composition. The most obvious application seems to be to use L-systems for structuring and form, and to use cellular automata for more local development characteristics within a progression. However, I ought to look at various ways of mapping the algorithms to sound control parameters. It is easy to be inspired by how these algorithms can be represented visually, but it is not certain that the same methods of mapping are the best when the algorithms are to be converted into musical structures.

 

L-systems may obviously be applied in that they can generate a horizontal structure (like a tree that grows, with the image turned 90 degrees so that it grows from left to right), which may be scanned by means of a time pointer moving from left to right across the image. The structure's (the tree's) individual branches/elements correspond to a part, a motif or another compositional element. These are played back when the time pointer is positioned at the same place in the structure as the object/element (similar to a “piano roll” display in a traditional sequencer program). In this context, a horizontal approach means that the parameters for pitch/transposition are mapped along the y axis (vertically), whereas time runs along the x axis (horizontally).
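
A minimal sketch of this mapping (my own simplified illustration in Python, not code from the project): a toy L-system string is rewritten for a few generations, and the resulting symbols are scanned from left to right by a time pointer, with '+' and '-' moving the pitch along the y axis and 'F' producing an event at the current position.

rules = {"F": "F+F-F"}   # a toy rewriting rule
axiom = "F"

def expand(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

def to_events(symbols, base_pitch=60, step=0.25):
    """Map the symbol string to (start_time, pitch) pairs, piano roll style."""
    events, time, pitch = [], 0.0, base_pitch
    for symbol in symbols:
        if symbol == "+":
            pitch += 2
        elif symbol == "-":
            pitch -= 2
        elif symbol == "F":
            events.append((time, pitch))
            time += step            # the time pointer moves from left to right
    return events

print(to_events(expand(axiom, rules, 2)))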

 

Cellular automata contain properties that make it natural to translate the visual representation of the algorithm in another way. If we imagine that each cell is represented by a sound particle (duration < 100 milliseconds), this type of algorithm may create the basis for dynamic timbral development. As a simple approach, it seems natural to let the x axis represent spatialization of the sound image, for example right/left in a stereo picture. Pitch/transposition may still be represented on the y axis. In this context, the time axis is represented by the algorithm's development over time, i.e. analogous to how each iteration of the cellular automaton alters the resulting cells.
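
A small sketch of this mapping (again my own illustration; the elementary rule 90 automaton is chosen arbitrarily): every live cell in a generation becomes a grain, the cell's x position is mapped to stereo pan, and each new generation advances time.

def step(cells):
    """One generation of rule 90: a cell is on if exactly one of its neighbours is on."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def ca_to_grains(width=16, generations=8, grain_dur=0.05):
    cells = [0] * width
    cells[width // 2] = 1                  # a single seed cell
    grains = []
    for gen in range(generations):
        for x, alive in enumerate(cells):
            if alive:
                pan = x / (width - 1)      # x axis mapped to left/right position (0..1)
                grains.append({"time": gen * grain_dur, "pan": pan, "dur": grain_dur})
        cells = step(cells)
    return grains

print(len(ca_to_grains()), "grains generated")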

 

These two methods of mapping do not seem to be compatible, as the x axis in the two cases represents two different parameters of the composition. It is not productive to get too involved in visual representations, as the desired result is to be represented as sound, but the difference in mapping of the various axes nevertheless seems to be a conceptual weakness. Also, there are many ways of imagining visual representations as a sort of score in the traditional sense of the word, irrespective of how the various axes are used for the conversion into sound. Time does not have to move from left to right in a score.

 

In addition, I have tried to think of methods for generic control of the system, i.e. a sort of algorithm or model that constitutes an intuitive and organic meta-level. This may be done, for example, analogous to a physical model of a water surface, in which the various control parameters are assigned to points on the surface and the user/composer controls the movements of the surface by means of exterior influence. An image of this: when you throw a stone into the water, you get concentric circles on the surface. This constitutes a simple and organic pattern that may gain complexity through interaction with similar simple patterns (i.e. throwing two stones into the water at different locations).

Other types of algorithms for unified control may be based on network science, i.e. how nodes in a network are related to and influence each other (Barabási 2002). Various control parameters may thus be combined, with a view to achieving organic “interaction” between them. Network theory describes various ways of connecting nodes in a network, from random connections, via “circular with random cross connections”, to networks of central hubs that have a lot of connections. How this may be applied to sound control remains to be looked into, but it seems logical to have some sort of interaction between various parameters. This is necessary so that the various layers (or parts) of the composition can be mutually related to each other. Maybe network theory can be used to link various parts together, or to link various modules/generators in the instrument, in that central parameters in one module have the opportunity to influence central parameters in another module. This type of influence may readily be mutual, so that the modules are coordinated (“balance of power”), or it may be that some modules ought to be superior and have greater influence.
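
The sound mapping is left open above, but as a purely speculative sketch (my own illustration) of what mutual influence between central module parameters could look like: each module exposes one parameter, and weighted connections pull the connected values towards each other on every update.

class Module:
    def __init__(self, name, value):
        self.name, self.value = name, value
        self.connections = []              # list of (other_module, weight)

    def connect(self, other, weight, mutual=False):
        self.connections.append((other, weight))
        if mutual:
            other.connections.append((self, weight))   # "balance of power"

    def update(self):
        """Nudge this parameter towards the values of the connected parameters."""
        for other, weight in self.connections:
            self.value += weight * (other.value - self.value)

melody_density = Module("melody_density", 0.8)
texture_density = Module("texture_density", 0.2)
melody_density.connect(texture_density, weight=0.1, mutual=True)

for _ in range(5):
    melody_density.update()
    texture_density.update()
print(round(melody_density.value, 3), round(texture_density.value, 3))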

 

Analysis of input sound

If one were simply to create a real time composition system with the above (or other) algorithms, any sort of sound material could, in principle, be used for realizing the compositions. In order to relate the system to fellow musicians playing acoustic instruments (technically speaking: input sound data), the input sound ought to be analyzed for significant features (e.g. pitch/amplitude/phrase form/rhythm). The data from the analysis may create the basis (the “DNA code”, axiom or similar) for the composition algorithms. In addition, the analysis of the input sound may create the basis for a classification of the sound material, and this classification can be used for distributing the sounds/phrases in the musical structure and for assigning sounds with certain characteristics to the various sound generators in the instrument. The analysis may also create the basis for rhythmic and melodic creation in the composition algorithms, either by means of direct use of motif material, or by means of statistical distribution (the frequency of occurrence of a certain type of interval, etc.).


 

Thoughts on algorithmic composition (10.2004)

Music composed solely by use of algorithms may come close to the composer's musical intention, or close to the performing practice the algorithms were designed to simulate. On the other hand, an algorithm lacks the intuition, willpower, spontaneity, and musical consciousness that are commonly found in a human musician or composer.

How should one create algorithms that enable some sort of realtime creative control input, so that the performer's or composer's musical intuition may affect the music output? Randomness or even fuzzy weighting in an algorithm cannot recreate a creative consciousness in the same manner as “an inspired moment”. Even if the algorithm is refined to comply with a certain musical style, it does not have “taste”. I think that taste and style must be added through performer control of the algorithm. I am aware that this issue may lead to a philosophical discussion on questions like “what is the self” and “what is creativity”. I will leave this interesting discussion in favor of going back to making some music with the algorithms.


 

About generating musical events in real time (12.2004)

If the duration is determined at the start of the event, you have already planned into the future, and the continuation (or stopping) of the event does not depend on other realtime events. The advantage in this case is obvious: you can make a timbral change that is evenly distributed all over the duration of the event, plan a soft fade out envelope for the event, etc. This way of planning the duration of the event is similar to a Csound score, where the p3 parameter determines the duration of each event.

However, some musical scenarios require that the duration of an event is not fixed, for example when several voices in a polyphonic texture converge on a chord as could happen when ending a musical section. If the duration of the event is not planned beforehand, problems with the automation of timbral variations over the duration of the event will occur.

This way of handling events is typical for real time composition, and the problem is similar for “musician generated events”, for example those initiated via midi input.

The artistic reason for doing things in this manner is simple: “Let me hear what it sounds like before I determine when to stop."

 

Possible solutions:

Timbral changes that are to be distributed over the duration of the event may still be carried out, but the timbral modulation envelope is planned at the start of the event. For a soft amplitude fade out, the event can be extended for a small amount of time after being turned off; this is a common technique in all midi controlled instruments. During the event, the composer/performer (or an algorithm) must decide whether the timbral change is to continue in the same direction (in a similar manner), stop, or be reversed. In turn, this means that the composer must focus his attention on details during the performance, as each individual event requires a decision about its future progress.

Another way of solving the problem is to create slowly varying modulations that do not have a clear direction by using combinations of several low frequency oscillators.

The timbral change may be distributed over time, planned at the start of the event during which a gradual change from one state to another takes place. Afterwards, the modulation could move from “gradual change” to “oscillating variations”. It is also possible to let timbral changes be controlled manually, where all the parameters are controlled by the composer in real time. This method of working leads to a particular focus on details in the moment of performance.

One may also envisage that timbral changes are controlled manually by the composer, but by means of a sort of “meta modulation preset”. Such a preset may contain a complete set of parameters for a timbral modulation state, and gradual transitions between the states are facilitated. The partikkel cloud generator for the Flyndre installation could serve as an example of this approach. This makes for a partial focus on details during the performance, but also a substantial degree of automation. Some flexibility is lost, because it is not practically feasible to write presets for all imaginable parameter combinations. An additional layer of metaparameter control can help regain some flexibility, where metaparameters act on and modify the groups of parameters in a preset. Variations of this technique, with gradual transitions between preset states, are known from earlier works by Ali Momeini[3], Tim Blackwell, Tim Place and others.
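
A minimal sketch of such a gradual transition (my own illustration, not the Flyndre or ImproSculpt code), assuming presets stored as dictionaries of parameter values and a single metaparameter that moves the whole set between two stored states:

def interpolate_presets(preset_a, preset_b, position):
    """Linear interpolation between two presets; position 0.0 gives preset_a, 1.0 gives preset_b."""
    return {name: (1.0 - position) * preset_a[name] + position * preset_b[name]
            for name in preset_a}

# two hypothetical timbral states for a grain generator
cloud_dense = {"grainrate": 200.0, "grainsize": 0.02, "pitch_spread": 0.1}
cloud_sparse = {"grainrate": 15.0, "grainsize": 0.4, "pitch_spread": 1.2}

# a single fader (metaparameter) moves the whole parameter set between the states
for pos in (0.0, 0.25, 0.5, 1.0):
    print(pos, interpolate_presets(cloud_dense, cloud_sparse, pos))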

Another possibility is that timbral changes are generated through algorithmic progresses in the same way as instrumental events are generated from algorithms. This provides a unified approach to a way of thinking about the method of composition.

 

(See also the entry Reflection on the current state of the ImproSculpt instrument regarding parameter mapping for the partikkel generator, for an example of how this was solved later on).


 

A sketch of the modules for the new instrument (12.2004)

Event generator is a general module. Special cases (manual midi control, existing generators in ImproSculpt Classic) may need their own modules. Maybe I can set it up so that midi control and any old ImproSculpt generators can call the general event generator module. In that case, the special cases will be handled at the algorithmic or the mapping level.

The data stream from Algorithmic process should have as general a format as possible, but may vary with different algorithms. The Mapping module should translate to a strict general format to be sent to the Event generator.
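
As a rough sketch of this layering (module and field names are my own, not from the project code): an algorithmic process emits loosely formatted data, the mapping step translates it into one strict event format, and the event generator only ever sees that strict format.

from dataclasses import dataclass

@dataclass
class Event:                      # the strict, general format sent to the Event generator
    start: float
    duration: float
    pitch: float
    amplitude: float

def mapping(raw, default_amp=0.7):
    """Translate one algorithm's loose output (here assumed to be a dict) to the strict format."""
    return Event(start=raw.get("t", 0.0),
                 duration=raw.get("dur", 1.0),
                 pitch=raw.get("pitch", 60),
                 amplitude=raw.get("amp", default_amp))

def event_generator(events):
    for ev in events:
        print("play", ev)         # in the real system this would trigger synthesis

event_generator([mapping({"t": 0.0, "pitch": 62}),
                 mapping({"t": 0.5, "pitch": 65, "amp": 0.5})])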


 

Some programming issues (02.2005)

(This paragraph is the unedited contents of my log/diary for February 21st 2005)

 

During the last week, I have worked with modularization of Csound code, and tested different methods for sending events and parameters from Python to Csound. I think the Csound code is well structured now, and I plan to write an article on this modular setup. Large chunks of code are still missing, like midi control, insert effects, more aux effects etc. Still, the main issue is that the basic structure is set (hopefully), and that it works together with Python.

 

I’ve also tried to modularize the Python code, but I don’t understand enough about Python to do this yet. There seem to be some issues related to re-using code and giving it a unique variable name that I need to figure out. I have to ask someone how this works.

Sigurd Saue helped me implement a periodic call in Python, so I can send continuously changing parameter values from Python to Csound. By continuous, I really mean discrete values with a high update rate (obviously, as we're in the digital domain). The current implementation can send parameter changes at a rate of 100 per second. This should be sufficient in most cases.
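
A minimal sketch of such a periodic caller (my own illustration, not the actual implementation; the callback that writes values to Csound is only indicated):

import threading, time

class PeriodicCaller:
    def __init__(self, callback, rate_hz=100):
        self.callback = callback
        self.interval = 1.0 / rate_hz
        self._running = False

    def start(self):
        self._running = True
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._running = False

    def _run(self):
        next_time = time.time()
        while self._running:
            self.callback()               # e.g. write the current parameter values to Csound
            next_time += self.interval    # schedule against absolute time to avoid drift
            time.sleep(max(0.0, next_time - time.time()))

# usage sketch: PeriodicCaller(lambda: print("send parameters"), rate_hz=100).start()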

 

Today I also implemented an “always on” instrument for midi handling in Csound. This will allocate a Voice Group and generate Csound instrument events triggered by midi note on and off. The next step will be to write a “midi controller to zak channel write routine”, so that midi controller values can be accessed by any instrument in Csound.


 

About exploring the various dimensions of expression (06.2005)

For me, it is easiest to turn to the purely timbral variations as examples, but the discussion may be valid within aesthetic axes and stylistic/genre typical axes.

In the work with the sound installation Flyndre, I want to obtain gradual transitions from one sound image to another. In this context, gradual means a totally linear transition, completely without breaks and without identifiable “transitional stations”. I hope this will be feasible by using the new Csound plug-in for granular synthesis that Torgeir and Thom have made for me. I believe that the exploration of gradual transitions will help me become more familiar with the sound's various dimensions and the relationship between these dimensions, or axes. If you compare with three dimensional space, each dimension is seen as perpendicular to the two others and distance is conceived as linearly distributed along each axis. This is not to say that the same is valid for timbral dimensions. They may be nonlinear with reference to distance, so that a given step size does not correspond to the same distance if transposed to another place on the axis. Of previous work that has been done to explore this type of “timbral transposition”, I may mention Fred Lerdahl's article “Timbral Hierarchies” (Lerdahl 1987). In order to obtain gradual transitions, these nonlinearities must be found and reworked to create a perceptually linear axis. At the same time, it is not certain that the axes can actually be perceived as perpendicular, so that the space they constitute is a “warped space” (in the same way as nonlinearities in the axes also add elements of warped space). This means that in order to locate and move multi-dimensionally in this space, the orientation of the axes relative to each other must be determined; otherwise one will not be able to see the space as it really is. When the space has been explored in its full extent, linearly in all dimensions, it will be possible to exert artistic control over discontinuities in this space. It may be debatable whether it is possible to arrange all timbral parameters within such a defined space; nevertheless it might be possible to categorize which parameters are found within the same “universe”, and which parameters/dimensions clearly fall outside. If possible, I should try to map where these “universes” exist relative to each other.

Here, I have used timbral dimensions as an example, but there is no reason to believe that this will behave differently with reference to other parameters (aesthetic, stylistic, etc.). For example, the composition methods based on pitch class sets include the concept of distance from one pitch class set to another; this creates a multidimensional space somewhat similar to what I try to describe in relation to timbral transformations.


 

On the aesthetic value of formalism (07.2005)

Xenakis says in the preface to “Formalized Music” (Xenakis 1992, p.9) that art based on scientific formalism is more serious, more worthy and less perishable than art based on the inspiration of the moment. Later in the same book (Xenakis 1992, p.11), I believe I can read that the above statement is somewhat modified. Here, he says that his exploration has required a synthesis, filled with conflicts, based on previous theories, and further that the aesthetic criteria may only be determined by the artist himself, including the artist's aesthetic choices and the value of the results he thus obtains.

 

I tend to believe that one may not construct “correct” music based on mathematical or other scientific algorithms. Sound as an acoustic phenomenon is subject to physical laws, but it does not follow that music is physics. Music carries in it a potential for communicating feelings and ideas, and the way these are communicated is dependent on the cultural code of the era in which the music is experienced. This is valid both when the music is experienced in its own time, and in another way when it is performed and experienced in a different cultural era than the one it was created in. It thus follows that music can neither be considered “correct” nor “incorrect” in a scientifically verifiable manner. On the other hand, it may be interesting as an artistic idea to pursue certain analogies between the sciences and music, for example as Xenakis does through his formalisms. However, Xenakis takes the objectivity of music rather far. In this context, the concepts “beautiful” or “ugly” become meaningless (Xenakis 1992, p.9). This is a well known turn of phrase from the aesthetic disciplines that apparently entails a necessary objectivity. At the same time, these are aesthetic criteria that we nevertheless relate to on a more or less conscious level, and that influence our conception of the aesthetic object. To believe anything else would be meaningless, in my view.


 

On determining the aesthetic direction of the scholarship work (08.2005)

In a report from May 2005, I refer to some criteria for assessment of the artistic result of the scholarship work. It was pointed out that such criteria are dependent on the musical genre and the aesthetic framework. This includes criteria such as integrity, interior logic, and unified aesthetic performance. These concepts may again be said to be paraphrases of familiar notions that are used analytically, but at the same time they are not analytical concepts. This again demonstrates the inadequacy of language to describe these elements precisely. Nevertheless, these concepts are used orally (for example at the jazz department at NTNU), and no one seems to have problems understanding what they mean. The linguistic construction is treated as an abstract tool for maintaining and communicating a familiar phenomenon, and the linguistic subtleties with reference to the dictionary's definition of the concept are left untouched with good conscience.

 

As an attempt at describing the aesthetic framework for the scholarship work, I will discuss a few axes or dimensions that may be of assistance in order to define this space. I am not at all sure that any presentation in writing may clarify the picture, or whether it, on the contrary, serves to draw a distorted picture. However, it may be worth an attempt.

Pointing out the aesthetic direction of the scholarship work may be done along several axes. These may be described by means of their extremes. Among several possible aesthetic parameters, or axes, I may mention the conscious/unconscious, the elitist/popular, the open/closed, and the complex/minimal.

 

One of the axes represents the classical dilemma, in which the unconscious is attributed the property of “the pure”, whereas the conscious must go through the whole spectrum of layers of consciousness in order to get back to “the pure”. This antagonism may be found a long way back in history; an example that has often been used is the relationship between Goethe and Schiller, in which Goethe is ascribed the role of the intuitive, whereas Schiller appears as the reflective, conscious poet. The scholarship work must necessarily move along this axis in order to obtain new insight through reflection. However, it seems imperative to try to internalize the new insights and make familiarities out of them. This is very much to the point as far as improvisation is concerned.

Thomas Mann formulated this antagonism through his character Adrian Leverkühn in “Doktor Faustus” (p.353):

 “ “Basically, there is just one problem in the world, and it reads like this: How do you reach out? How do you get out into the free? How do you burst the cocoon and become a butterfly? The whole of our situation is governed by this question. Even here”, he said and picked at the red insert ribbon in Kleist’s writings that were lying on the table, “even here the breakthrough is discussed, namely in the brilliant article about the marionettes, and there, it is actually described as “the last chapter of the history of the world”. However, it is only a question of the aesthetical, about charm, about the free grace, that really is reserved for the doll man and God, that is the unconscious or an infinite consciousness, while any reflection that is found between zero and infinity kills grace. Consciousness must, this author believes, must have gone through infinity in order that grace again is to present itself, and Adam must again eat from the tree in order to again fall back into the state of innocence.” ”

 

Another axis is drawn along the degree of elitism, i.e. the expected knowledge in the receiver/viewer. Is the expression more important, more refined and more valuable if it entails a higher degree of elitism? May communicative aspects be ascribed aesthetic value? My objective is to move dynamically along this axis, by opening up an entry point into the work while at the same time maintaining the progressive elements of the expression on other levels in the work. Thus, it might be considered a non-pure form of art, by its blend of communicative and exclusive elements. According to Adorno, these were incompatible antagonisms.

 

The open/closed may be understood in several ways. From an aesthetic, professional point of view, one would say that “the open work of art” contains unsolved riddles, so that the reader of the work adds something to the work on one or several levels. Closed works, on the other hand, are complete, and lack nothing. Improvisation is by its very nature open, for several reasons: firstly, because it is not the same from one performance to the next; further, because the openness consists in an element of interaction between many parties. Often, this interaction is understood as the interaction between several performers, but it is also found in the reciprocal influence between the performer and the instrument, between the performer and the music as it unfolds, between the performer and the audience, and between the audience and the music as it unfolds. At the same time, one may interpret openness in an artistic expression as inviting, and this axis therefore gradually blends with the degree of communicative elements.

 

The degree of complexity will necessarily vary, and at times possibly touch on the minimal. At the same time, the purpose of the scholarship project is to obtain a possibility for complexity, striving for flexible control of musical processes not accessible by use of traditional instruments.

 

It may possibly be appropriate to mention some of my sonic preferences in order to create a basis of references for the musical content aimed at in the scholarship work. Timbral references (or preferences) from art music are found in Stravinsky, Stockhausen and Olga Neuwirth.  Just as important are sound production influences from popular music, for example Tricky, Björk, Beck, Nine Inch Nails, Aphex Twin and Kings X. Further, I could mention representatives from popular music who have tried to break the narrow confines for composition within the genre, for example King Crimson, Yes, Pink Floyd, and obviously Frank Zappa.

 

I try to unite techniques and ideas from art music with a timbral aesthetic more in line with pop music. This approach entails a desire to communicate beyond the elitists, while at the same time keeping the artistic integrity of the work. I aim at a variable degree of elitism, i.e. that some parts or compositions are more demanding on the listener than others. This may show in the form of exterior and interior dimensions in one and the same work, and it is a conscious way of trying to communicate on several levels. It may be conceived as a missionary approach, but it is my intention to try to open up my artistic production.


 

Reflections related to my own work (03.2006)

- written after a research fellowship seminar earlier this week, and also related to several art seminars in Trondheim lately, during which the subject of relational art[4] has been debated.

 

I feel that these issues have little or nothing to do with my artistic work, and that my work comprises quite different issues and themes.

I believe that my work has more of the characteristics of natural science, including an application of technology and mathematics. The technology is exploited for artistic purposes in the process of creating music, and in this respect there is an artistic intention behind the work. Development of concepts and ideas lies just as much in developing a technology as in communicating concrete artistic ideas. I think that music in many ways represents a classical understanding of art, in that we work with "the beautiful" and the less beautiful within an abstract language/expression. Within my music, I rarely seek to express concrete social criticism or philosophical ideas, but rather to express something within the music itself. It could be said that this constitutes a sort of “hermit art form” in the sense that it pleads an intrinsic value to such an extent that it demands to be placed outside everything else that takes place in the world.

On a similar note, my work deals with the use of new technology and exploiting it to its full extent. If there is a philosophical basis for this, I believe it must be found in man's relationship with technology. In this respect, my work embeds an argument for its own intrinsic value, just because it shows, in a practical sense, man's mastery of technology. Whether this is still an interesting issue can of course be debated.

 

I notice that my thoughts during the process to a great extent deal with conceptual technical solutions that provide a basis for flexibility in the moment the technology is to be used in practice. I feel that this has an artistic value as it creates potential for expressive freedom. I also notice that this is all pretty distant from the issues the contemporary discourse within the “fine arts” concerns itself with.


 

Reflections on playing with Motorpsycho (03.2006)

(Motorpsycho is a Norwegian rock band; they work in a diversity of musical styles. Their musical influences can be traced to groups like Motorhead, The Who, Led Zeppelin, The Grateful Dead, Beatles, Sonic Youth, Sun Ra, John Coltrane, Kiss, King Crimson and others.)

 

Playing with Motorpsycho I experience as a rather “restricted” musical situation, at the same time as there is a lot of improvisation and freedom to create my own role. A trained composer might say that rock music is so simply constructed that it is no wonder it is experienced as restricted or rigid. But it is not the restricted selection of resources I think of as a “restricted” musical context. In some ways the restrictions may be compared with the act of performing strictly notated music, but the feeling of restriction in this sense is related to the performative aspect and the energy flow of the music. The music has an almost frightening forward thrust, and the restriction is related to maintaining the momentum: not to hesitate, and not to use measures that would hinder the forward thrust. The same element is found, to a varying extent, in all rhythmically based music.

 

Generally, in my relationship to improvisation, I notice that I can always twist and turn the music, adapt it to my own purposes and let it justify itself on its own terms. Sometimes this self-justification can turn in on itself and become a convenient excuse for downright sloppiness. The fact that I have tools (improvisation techniques, composition techniques) that can create contexts where no context existed may at times erase the feeling of clear intention and direction in the music. This is well known within improvisation: for example, if you make a mistake, you repeat it several times and let the error become a feature.

My intention of working within restricted musical contexts is to experience, in practical playing what provides the music with intrinsic intention and direction. In this sense, to play with Motorpsycho is an exercise in musically uniform energy.

 

It must also be said that Motorpsycho's music contains several elements that are often left out of more mainstream rock music, for example odd time signatures, melodic modulations, and bi-tonality. Naturally, these elements occur to a smaller extent than within so-called art music, but the use of these elements shows an urge to burst the narrow frames of rock, at the same time as the elements are worked into Motorpsycho's style in such a way that they don't stand out as foreign elements.

 

Playing in musical contexts with high energy and loud volume poses some interesting challenges; the following story sheds some light on the matter:

One day when rehearsing with Motorpsycho in January 2006, I discovered a strange acoustic phenomenon. We were improvising, and as often happens with Motorpsycho, we were in the key of D minor and we were playing pretty loud. I had a nice warm timbre made up of a mellotron string sample. I started out playing the note A, and then moved to D. This did sound quite fat, but kind of boring, I thought. I wanted to try to “tilt” the melodic/harmonic relationship a little, so I went from D to E flat. To my surprise, the E flat did not make any audible sound at all. I tried to turn up my amplifier level, and played the E flat again. It sounded like a D, but a kind of “uncomfortable and not quite clean” D. I realized that the whole rehearsal room was sort of filled up with resonating frequencies in D minor, and that any foreign notes would simply be forced to comply. I experimented a bit more with melodic figures, and if I played a melodic line that logically would include foreign notes (like E flat, for example), I could make them audible by means of context. This made me think (as also stated in the artistic documentation document) about the presence or audibility of a contribution to this musical context. A weakly formulated melodic phrase would simply not make itself heard, apparently for physical and acoustic reasons.

 

It is unquestionable that some details in the expression can to a certain degree be obscured in high volume settings. This requires other ways of thinking about one's own contribution to the whole. Not all the nuances will be heard clearly, but the part of the personal contribution that has the same direction as the united momentum will be heard loud and clear. This does not mean that everyone plays the same thing, but that everyone has the same sense of direction. Other clearly chiseled musical statements will also make themselves heard if the timbre of the instrument is designed with impact in high volume contexts in mind. Examples of such timbres are the electric guitar and typical monophonic analog synthesizers.

 

Concerning the above, it could be considered a valid argument that it is futile to use a complex instrument that generates many simultaneous parts in a musical setting where details are likely to be drowned out. I find that this argument does not hold true. The details most often contribute to the total picture, even if you do not necessarily notice each individual detail any more. The details nevertheless make the sound image richly woven. I see that I have a job to do in the form of timbral creation for the various sources of sound I want to use. I do have instrument timbres that I use with the Marimba Lumina (the percussion midi controller I normally use as an instrumental interface to ImproSculpt/Csound) which manage to stand out in high volume contexts, and I ought to do the same for automatically generated parts. This requires an even more critical eye on the musical statements, and particularly on the clarity of the musical statements that come from automatically generated processes. One of the aspects that makes a (manually played) instrument part stand out clearly in a high volume context is a sort of musical intuition the musician develops as a pure survival instinct in order to formulate statements that have a possibility of being heard. Automatically generated parts do not have this survival mechanism built in (at least not yet).


 

Reflection on the Markov melody generator (06.2006)

(I had implemented a melody generator based on Markov chains. I had put quite a lot of work into it, and it worked correctly in a technical sense. It would analyze monophonic midi files, and create a database with statistics of note combinations. It was meant to be used in the Flyndre installation, but I dropped it some three months before the Flyndre opening.)

 

I think the reason why my Markov melody generator does not work musically is that it tries to “mimic reality”, but it is too simple to do it properly. It will adhere closely to the analyzed input melodies most of the time, but with unmotivated breaks using other melodic figures. To my ears, it sounds too much like an amateur practicing on a difficult melody.
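
A minimal sketch of the kind of first-order Markov analysis and generation described here (my own simplified illustration, not the original generator), assuming the melody is given as a list of midi note numbers:

import random
from collections import defaultdict

def analyze(notes):
    """Count note-to-note transitions in a monophonic melody."""
    table = defaultdict(list)
    for current, following in zip(notes, notes[1:]):
        table[current].append(following)
    return table

def generate(table, start, length):
    """Random walk through the transition table."""
    melody = [start]
    for _ in range(length - 1):
        options = table.get(melody[-1])
        if not options:                       # dead end: restart from a random known note
            options = list(table.keys())
        melody.append(random.choice(options))
    return melody

source = [62, 64, 65, 67, 65, 64, 62, 60, 62, 64, 62]    # a short D minor-ish phrase
print(generate(analyze(source), start=62, length=12))

With such a low order, the output stays close to the source material but switches between its figures without larger-scale motivation, which is roughly the behavior described above.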


 

On the relationship between composing and improvising music (08.2006)

 

The relationship between a finished work and an improvised performance, or, in the case of the Flyndre installation:

The relationship between a finished work (that all the same is in continuous development) and a performance tool for improvisation.

 

The unfinished work has a potential of unknown size, it has promise and it is without borders. The finished work has a clear set of limitations, and it can develop within this frame.

During the final stages of working with Flyndre, I discovered that the work appeared to me to have a decreasing value, because it no longer promised a potential of unknown size. Deciding the limitations and the framework within which the work should be allowed to develop appeared to me as a theft of the potential I had imagined in the earlier phases of the work. I realize that I now imagine the next version of ImproSculpt fulfilling this unknown potential, and I know I will be disappointed again. The moment I define something, I feel I take away the possibility for this something to be something else. In this, I see a good reason to continue doing improvised performances, because the limitations and framework in that setting are more ephemeral and volatile.

I also think that when I’m getting bored by an otherwise interesting project, there is reason to believe I’ve done everything I can to make it good.


 

About generating musical events in real time (03.2007)

One of the conceptual difficulties in implementing a realtime system for music composition is the issue of timed automation. By timed automation, I mean the system's ability to schedule events precisely in time. On a general level, there is also the problem of system latency, i.e. the time from when an event is triggered until the audio output of the event can be heard. This determines the system's responsiveness to external control, and can be said to be directly related to the amount of processing power available: the lower the latency, the harder the computer must work to complete processing within the buffer set by the latency time. Another aspect of timed automation is the actual scheduling of events in a compositional algorithm. In many cases the composition algorithm is asked to calculate certain values depending on the current musical situation. As a simple example, this may be related to generating a melody note that is in a specific harmonic relation to other notes that are already playing. For the calculation to be performed correctly it cannot be done in advance; it has to be done as close as possible in time to the actual generation and playback of the note. Considering that the actual calculation can demand several (in complex cases several hundred, or several thousand) processing steps, it might take some (small amount of) time to complete. The conceptual problem is thus related to when the calculation should be triggered, and how we should try to ensure that the result can be delivered on time. Furthermore, the actual timekeeping method should ideally be closely synchronized to the audio synthesis clock (the sample rate of the audio engine) to enable precise coordination of events. There is a technical problem related to this synchronization, as any process called from the audio synthesis clock must finish (return) within a narrow time window, otherwise the audio synthesis will temporarily be halted while waiting for the called process to finish. Such a halting of the audio engine will be clearly heard in the audio output as “clicks”, or in severe cases as stuttering. If the process called from the audio synthesis clock is allowed to run in an independent thread of execution, the audio engine will not halt, but in this case there is no longer a guarantee of synchronization.

The current ImproSculpt implementation relies on synchronization with the audio synthesis clock, and relies on composition algorithm optimization to avoid audio clicks.  The implementation of the timed automation has been the subject of numerous rewrites during the course of the project, and I feel there is still room (need) for significant improvement in this matter.
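
A simplified sketch of the look-ahead idea implied above (my own illustration, not the ImproSculpt implementation): the calculation for an event is triggered a short, fixed time before the event is due, so that the composition algorithm can inspect the current musical state and still deliver its result to the playback function in time.

import heapq, itertools

LOOKAHEAD = 0.05    # seconds of planning time granted to the composition algorithm

class EventScheduler:
    def __init__(self):
        self.pending = []                   # heap of (due_time, sequence_no, calculation)
        self.counter = itertools.count()    # tie-breaker for events with equal due times

    def schedule(self, due_time, calculate):
        heapq.heappush(self.pending, (due_time, next(self.counter), calculate))

    def tick(self, now, play):
        """Called periodically from a clock synchronized to the audio engine.
        Events whose due time falls within the look-ahead window are calculated
        now and handed to the playback function."""
        while self.pending and self.pending[0][0] <= now + LOOKAHEAD:
            due_time, _, calculate = heapq.heappop(self.pending)
            play(due_time, calculate())     # calculate() may inspect the current musical state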


 

Reflection on hearing a process (03.2007)

After conversations with John Pål Inderberg (at the Jazz dept. at NTNU), I’ve started thinking about ear training issues, about hearing the music internally before actually playing it.

In the context of my interval melody generator, it is appropriate to focus on interval based ear training, and to try to (internally) hear polyphonic interval based melodies, such as those generated by the interval melody algorithm. This is an aspect of traditional ear training. Another, and basically newer, aspect for me is that I realize I am slowly acquiring the ability to hear the result of an algorithmic process. This works in much the same manner as imagining a musical progression when composing in a more traditional sense, e.g. by writing notes on paper. More often than not, new musical ideas that I come up with these days come in the form of an idea for an algorithm. My imagination of how an algorithm might sound, or what musical structures to expect it to generate, might be compared to that of a traditional composer imagining the sound of the orchestra and the musical structure and form when sitting alone with pencil and paper.

It is difficult for me at this point to say anything specific about how to train the ability to hear an algorithm or a process. For me, it seems the ability has slowly evolved by working with these issues over several years. This is a form of tacit knowledge, and I hope to later be able to formulate methods for training this ability.


 

Thoughts on what is considered the composition (the work) when working with software (04.2007)

On occasions prior to this research project, I’ve had discussions with the Norwegian performing rights organization (TONO) about the problem of determining work classification for audio installations based on realtime algorithmic composition. I recognize the difficulty in determining what the actual composition is in such cases, and which parts of the work constitute intellectual property in different respects.

There are at least three distinct parts of an audio installation that can be considered intellectual property; first, there is of course the audio installation as a whole. This can be considered a work of art and as such it is intellectual property. Next, there is the software used to run the installation. The software comprises a different type of intellectual property, with its own legal implications in terms of licensing etc. Third, there is the question of the musical composition. What is the composition, and how is it documented? For traditional compositions written as notes on paper, we commonly recognize the composed work to be represented by the score. In software based algorithmic compositions, the actual composition might be embedded in the software used to create the work. Sometimes, the software code that contains the composition may be intertwined with other software code that makes up the infrastructure enabling the composition code to work. The actual concept of the composition is not clearly documented separately from the tools that make such documentation (or storage) possible. It is as if we could not conceptually distinguish pencil, paper and notes from a minuet written by Mozart. As an additional problem, some algorithmic works have an indefinite duration, so a recording that truly represents the work is not practically feasible (and perhaps not wanted).

As a perspective on this topic, I will try to describe what I consider the composition in the audio installation Flyndre:

 

The software (a version of ImproSculpt) can be considered the instrument for which the composition was written (seen together with Nils Aas’ sculpture and the speaker technique used). The instrument enables the potential for the work to be realized. A small part of the software code handles input parameter mapping and timed automation, and I consider this part of the code[5] to represent the composition.

One could consider this a relatively small amount of code (800 lines including comments and blank lines) for encoding a musical composition with a very long duration. The amount of code used to represent (or notate) a work is not an indicator for the complexity of the work.

 

The following excerpt shows a part of the code (line 1356 to 1366 of the source) that generates a melodic interval series based on date and time data in a very straightforward manner:

 

intervalYear = (year - 2006) + 1
intervalHour = ((hour-1) % 12) + 1
intervalMinute = ((minute-1) % 12) + 1
intervalMonth = self.intervalMonth[month-1]
intervalWeekday = self.intervalWeekday[weekday]
intervalDay = ((day-1) % 12) + 1

# time of day here is a value in range 0-3,
# with 0 indicating night,
# followed by morning, day, evening
timeOfDay = int(hour/6) # divide 24 hours into 4 equal slices
intervalTimeOfDay = self.intervalTimeOfDay[timeOfDay]

# order of interval motifs: W, H, W, Mo, Da, Td, Mi, Td, Y
# (yes, more than one copy of W and Td)
intervalList = [intervalWeekday, intervalHour, intervalWeekday, intervalMonth,
                intervalDay, intervalTimeOfDay, intervalMinute, intervalTimeOfDay,
                intervalYear]


 

Reflection on writing artistic software (04.2007)

The term artistic software is used here to describe software that is written for artistic purposes.

A lot of the time, when writing artistic software, there is a constant compromise between a "patch-up" and proper programming technique.

This is because software written for an artistic purpose is in constant flux as the artistic idea will change while one experiments with it.

Most important for an artist is to "make it work", to make the software support the artistic idea, and that will sometimes make it necessary to cut some corners regarding proper programming technique. For example, when implementing a method for updating some GUI parameters, it would indeed be good to follow a "best practice" and a standard way of implementing this kind of method. Still, sometimes I need to see how the parameters are behaving, quickly, while the artistic idea is still active in my mind. If I think too much about programming technique, and investigate how these GUI updates could best be done in a generic and standardized way, I lose the focus on the artistic idea. While working for an extended period of time, new perspectives on "best practice" within my application arise. From a programming perspective, it would be good to rewrite every old detail to conform to the newly achieved standard. However, this might take several hours, and the currently investigated artistic idea might then be lost or diminished. This necessitates the use of "patch-up" programming in certain cases. The resulting application might have several slightly different ways of implementing very similar tasks. A "standardizing" rewrite of the whole application would be a good thing to do every few weeks or so, to keep some sort of best practice for implementation across the entire application. Another way of doing this might be to accept the patch-up for the time being, expecting several subsequent changes in "best practice", and delaying the standardization run until a later stage when the application is about to be finalized. Come to think of it, as ImproSculpt is an ever moving target, continually developing, the application is never "about to be finalized".


 

Reflection relating to ImproSculpt Classic (04.2007)

ImproSculpt Classic (the previous version of ImproSculpt, as per 2003) was the result of a mature process and as such represents an instrument that is complete in itself. The instrument is complete in so far as it has a set of tools that supplement each other well, and I experience it as complete because I have played a lot with it without doing any major changes to the instrument.

ImproSculpt Classic has 14 sample slots in which live sampled segments may be stored temporarily and used as sources for playback modules. The most commonly used method to assign an audio segment to a playback module can be termed “assign last”: it works by assigning the audio segment that was recorded last to the playback module. Triggering of the segment assignment can be done via GUI or midi control. This method of using live samples and assigning them to playback modules forces the use of the immediate, as an old sample (3-4 samples old) is not readily available for use. There is no “undo” option for segment assignments. The old samples are still stored until 14 more new samples have been recorded (filling all memory slots, then starting to overwrite old slots), but all the samples are only available via a sample slot number, and the only indication to identify a sample is the duration of the sound. The use of “assign last” creates a situation in which I never look for samples visually, but assign the most recently recorded sample to the playback modules by means of hardware controls (midi). When I assign a sample to a playback module, I know that the previous sample assignment will be lost and that it will be impossible to reconstruct it fast enough to make musical use of it (this process will take at least 5 seconds, and a great part of this reaction time consists of the fact that I must find out which sample slot number the sound is to be found in).
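
A toy model of this slot mechanism (my own illustration, not the actual ImproSculpt code): recording fills the 14 slots in a ring, overwriting the oldest slot when full, and “assign last” always hands a playback module the most recently recorded slot.

NUM_SLOTS = 14

class SampleSlots:
    def __init__(self):
        self.slots = [None] * NUM_SLOTS    # each entry could hold (audio_data, duration)
        self.write_index = 0
        self.last_recorded = None

    def record(self, audio_data, duration):
        """Store a new live sample, overwriting the oldest slot when all are full."""
        self.slots[self.write_index] = (audio_data, duration)
        self.last_recorded = self.write_index
        self.write_index = (self.write_index + 1) % NUM_SLOTS

    def assign_last(self):
        """'Assign last': return the most recently recorded slot for a playback module."""
        if self.last_recorded is None:
            return None
        return self.slots[self.last_recorded]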

This method of progressing by cutting off the opportunity to go back forces a way of musical thinking in which you have to work with what you have. Sometimes I am on stage and may have only “bad” samples available (e.g. short elements that do not contain particularly exciting information), but I am nevertheless forced to use these source sounds to create meaningful musical statements. In instances where I have good sounds available, I am forced to decide whether I should dare to make new recordings and assign them to modules; it feels just like operating on a knife's edge, because I risk losing a good sample, and at the same time I want to renew the available sound material because I don't want to work with a small number of sounds recorded early in the performance.

One of the musically effective aspects of live sampling is when the recording is used immediately, because at this time the transformation of the sound appears very distinctively and the transformation may be heard not only as an answer to the sound, but as a direct continuation and development of it.

The reflections in this paragraph apply particularly for the granular playback modules, but are also audible in the randPlayer modules with their half automated assignments (for example assigning “the four shortest sounds currently in memory” to a randPlayer module).

 

(See also the entry Reflection on the current state of the ImproSculpt instrument regarding the segment organizer module to see how the issue of sample assignment has (not) changed).


 

Reflection on programming, when updating the PartikkelCloudDesigner application (05.2007)

(I wrote a version of the PartikkelCloudDesigner application during spring 2006, to compose particle clouds for the Flyndre installation. A year later, I started thinking that other people might benefit from this application and I wanted to publish it. The parameter specifications for the partikkel opcode had changed in the meantime, as the version used in Flyndre was a prototype. I also realized that the python code for the application was not in a state to be published as it was very messy.

The reworked application is described in the artistic documentation, including a link to the source code).

 

Rewriting and updating the PartikkelCloudDesigner did provide some new insights.

I became aware that I have learned quite a bit about programming during the last year, as I can clearly see that the earlier version of this application is extremely messy. I now feel like a teacher marking an undergraduate student assignment.

Cleaning up the structure of the code is pretty quick and easy, even if this application has an extensive amount of GUI controls. I restructured the best part of the application in two hours. Today (May 24th), I rewrote the code for the GUI and decreased the amount of code from 3700 lines to 550 lines while maintaining exactly the same functionality. This was done by using dynamic evaluation of code, allowing Python itself to generate Python code as text and then evaluating the text as code.
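
A toy illustration of this idea (not the PartikkelCloudDesigner code; the function make_slider is hypothetical): many near-identical GUI parameter controls are generated by building the Python statements as text and executing them, instead of writing each statement by hand.

param_names = ["grainrate", "grainsize", "pitch", "pan", "amplitude"]

code_lines = []
for i, name in enumerate(param_names):
    code_lines.append(
        "self.slider_%s = make_slider(label='%s', row=%d)" % (name, name, i))

generated_code = "\n".join(code_lines)
# in the application, exec(generated_code) would create all the controls in one go;
# here we just show the generated statements
print(generated_code)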

Now it is actually possible for humans to read the code and understand something.


 

Reflections relating to playing with feedback (05.2007)

To perform with the feedback instrument (in ImproSculpt4) feels like swimming in the dark. I have a rough knowledge of what the instrument will sound like, but small changes in the input parameters may result in dramatic changes in the sound produced. At the same time, it may be difficult to change a resonating frequency that has taken a good hold in the system. This paradoxical behavior, that it may be difficult to influence the instrument in the desired direction, combined with super-sensitivity to the input parameters, makes me associate such an instrument with complex processes in nature. It is as if the instrument is constantly on the verge of chaos. Nevertheless, it is worth mentioning that the instrument is strictly deterministic. If the exterior conditions and parameter settings are the same, the instrument will behave exactly the same from performance to performance; there are no stochastic processes involved.

 

Several years ago, I became aware that my instrument (as any musician’s instrument) encompasses all of the technical system from my own fingers until the sound reaches the audience. In many settings, this includes a PA system of a certain size. It was a new feeling to experience this big PA system as a part of the instrument, responsive and under fingertip control. I have internalized this feeling and do not normally think too much of it. But during playing on the feedback instrument, I once again became aware of this phenomenon. With the feedback instrument, it becomes quite evident that the instrument’s timbral space is the whole concert hall and that the space and the PA system constitute important parts of the instrument. This adds to the feeling of “swimming in the dark” because the instrument is physically bigger in scope than you are able to reach with your hands. You cannot physically embrace the instrument.

 

After the performance with Ingrid Lode on the feedback instrument (Dora, May 8th 2007), I thought that this was an exciting way of playing together. The interaction between us as performers was very interdependent; each action from one of us directly influenced what the other was able to get out of the instrument. In a way, we played one and the same instrument from different directions. In this performance, we lacked the clear-cut musical intentions that may give the musical piece a clear direction. Some more experimentation and practice is needed for us to be able to follow up each other’s initiatives, and to be able to create some sort of dramaturgical development.

back to top

 

Reflection on programming (05.2007)

When I program large applications, I need to keep rigid discipline and order to keep track of all methods and variables. When a software bug becomes apparent, I notice that I react with irritation (maybe I could use some sleep?), as if it were something that obstructs my progress. I wonder if I have taught myself a way of handling software problems that is not entirely healthy, and whether I transfer this attitude to other aspects of life. When something is not optimal in my (personal) life, I start (with irritation) to look for “the bug”. Life cannot be handled as a computer program.

back to top

 

Are we in control? (06.2007)

Reflections when reviewing and editing the material recorded in studio session with Hagg Quartet (recorded November 2005)

 

What I am about to say may be read as a grave detraction of the music recorded; I am not at all certain that what I am about to say is correct, and it is not my intention to invalidate the musical statements in the music. But a reflection may allow itself to assess qualities of the expression without saying unambiguously that this is correct.

Rest assured that all the musicians that took part in the recording sessions are proficient performers and improvisers that I admire highly, and I do not want to put any of them in disgrace. Rather, the comments reflect on what we were able to accomplish as a group in these specific sessions.

When I listen to these recordings, I sometimes think that it sounds like an improvisation from “the Dungeon Dimensions”, to use an expression borrowed from Terry Pratchett. In Pratchett’s “Discworld”, there are “monsters from the Dungeon Dimensions”, creatures from another dimension that try to get into “our” world. They imitate creatures that exist here, for example humans, but they do not get it quite right, as it were… The representations of human beings that they constitute are flawed by substantial defects, for example looking like a human being, but not knowing how to perform simple operations like walking by moving one leg in front of the other. In essence, a very comical apparition. When I use this analogy on music, I mean music that is played by someone who has heard about free improvisation, but does not have a full grasp of what a musical statement is and what it means. I have a feeling that we, during the performances, present musical statements that we are not able to follow up. This may be caused by the fact that we do not have control over what the musical statements we introduce into the performance mean, and thus we are not able to create a “logical” continuation or development of certain statements.

I sometimes experience similar things when I listen to other musicians’ free improvisation as well, and in composed music for that matter, but not as often. This also applies to recognized performers. An example that comes to mind just now is a concert with Bill Bruford and Michiel Borstlap during the Trondheim Jazz Festival May 31st. Here, I felt that the musical statements were isolated and that the music did not have intention or coherence. The musical statements were set forth unrelated to any larger structure of form or development. I am not able to define clearly what the “coherence” is supposed to be in this context. Music that is experienced as meaningful and coherent perhaps has a form in which the statements receive a logical follow-up (whatever “logical” may mean in the creation of a musical answer). It is hard to pinpoint what is lacking, but it is easy to point out that something is lacking.

back to top

 

Reflection on free improvisation (06.2007)

Lately, I’ve been wondering about the meaning of a musical statement, and the “logical” continuation of, or answer to, a musical statement. The problems I encounter on this subject are most acutely present in free collective improvisation, possibly due to the absence of set building blocks that can aid in giving the performance a distinct direction. What I try to identify here is hard to pinpoint with words, but it revolves around the issue that the musicians in an ensemble have a common understanding of the sum of everybody’s contribution. This does not mean that everyone has to pull in the same direction, or that everybody must convey the same thoughts, but that everyone playing (or keeping quiet) has an understanding of how his contribution affects the total result.

I have often experienced (when playing with Kanon, with Carl Haakon Waadeland, with Motorpsycho) that it might be very successful to build a long free improvisation on a commonly sketched plan. By a sketched plan, I mean some elements or situations that should be included in the performance; these could be specific composed themes, or vague suggestions on instrumentation (player constellations) or mood, arranged in order and perhaps with an approximate time at which to appear in the performance. Such a plan can help each performer to focus on the common result and the large form. In ensembles that work together over extended periods of time, where the musicians know each other extremely well, such structural sketches might become redundant because everyone just knows (or feels) “where we are going together”.

back to top

 

Reflection on singing for a baby (09.2007)

(My fiancée Randi Martine and I had a baby in August 2007)

 

This may be both pretentious and of a private nature, but as this document should contain my personal reflections, it might nonetheless be relevant. When I sing for the baby, I don’t care to use the traditional children’s songs; instead I’ve sung improvised atonal melodies with abstract phonemes in a rather abrupt rhythmic manner. I’ve noticed that if I manage to create phrases with a good shape (e.g. using the traditional intensity and pitch shape with an apex approximately 2/3 into the phrase), it seems the baby finds my singing interesting. If I deliberately try not to create good phrasing, or sing outright gibberish, he quickly loses interest and looks away. This may indicate some sort of fundamental musical principle, or it might just be the imagination of a proud father.

back to top

 

Reflection on the interaction between programming and performance activities (10.2007)

When constructing a new instrument, there is a continuous interaction between the experiences (with the instrument) as a performer and the implementation of new features. Oftentimes, the need arises for small additions to, and variations in, the instrument. This process is never finished. When practicing on any instrument, the experimental performer will look for “new features” or functions in the instrument and explore these. When (as in my case) it is possible to design new functionality for the instrument as a direct consequence of experiences as a performer on the same, the work of adding new features becomes an integrated part of practicing on the instrument. When I prepare myself for a performance, I routinely program and fix bugs more than I actually practice playing music on the instrument. I check that everything is technically in good order. If the instrument works as I expect it to, performing music on it is not a problem. The act of programming substitutes for actually rehearsing on the instrument. This is probably related to the fact that I think musically about programming, that the programming is focused on enabling the solution of musical problems.

I am aware that the balance between technical and musical preparation is a common problem for everyone that designs their own electronic instruments. The common advice from experienced performers (to less experienced ones) is usually that “you’ve got to stop programming at some point and start making music”. This seemingly good advice does not hold true for me, though.

back to top

 

Reflection on the setting of a structured frame for improvisation (10.2007)

In several settings within this project, I have come across the challenge of “how to structure a (rather) free improvised concert”, i.e. how to take some sort of compositional control over an improvised performance. In the case of solo playing, this is easier than in the case of ensemble playing, because I may set up an approximate frame for myself and dynamically adhere to (or deviate from) that frame in relation to the musical situation that emerges during the performance. In the case of ensemble playing, the performers must find ways to agree on the structural aspects of the performance. In ensembles that have played together for many years, one may obtain an intuitive interaction related to the structure and organization of a performance (as a “composed” large-scale form); in other contexts, such structures must be agreed upon in advance in some form or another. In the work towards the final concert, I have been struck by the thought that it is strange that one has not found a self-evident method to solve this issue. Here, I work with musicians that are very competent improvisers with considerable experience; all of them have played together earlier, if not in this particular constellation. One would think that we, during the practice situation, would be able to take advantage of the ensemble's entire experience and quickly find methods to obtain control over the performance's overall form. In my experience, this is not always so. A question that often comes up in such a work situation is “are we to use a composed theme in order to generate musical ‘meeting points’ in the performance?”, and if so, “how are these themes to be organized?” with reference to order, cues to begin or end such themes, etc. The use of composed themes, even if they may be very loosely formulated, may generate a controlled large-scale form and create an indicator for a dramatic curve in the improvised parts of the performance. I find the interaction between the composed and improvised material very interesting, and the issue of how the material is to be organized appears almost universal for ensemble improvisation.

back to top

 

Reflection after a practice session with Stian Westerhus October 15th 2007 (10.2007)

At this point in time, the ImproSculpt module randPlayer has a rather limited range of expression. The same may be said about the feedback instrument, but that is another matter. All in all, I think that each module in ImproSculpt is possibly a bit more limited with reference to expressive flexibility than I had hoped, and this is something I want to work more on in order to solve the problems. For randPlayer, transposition of samples may be of great help, because it brings my output into another frequency range than that of the input audio for sampling (in this context, the music that Stian plays). The fact that ImproSculpt’s output in cases of live sampling lies in the same frequency range as the live input is a natural and obvious problem, since there are multiple representations of the same sounds appearing in the resulting music. This may result in a sound image that is packed in certain frequency ranges, appearing quite impenetrable and making precise orientation difficult (for performers and audience alike). This is a general problem with live sampling which I want to try to solve.

 

Another issue we worked on during this rehearsal was “pointillistic improvisation”, loosely inspired by groups like Office-R(6)[6] and the work of composers like Anton Webern. By “pointillistic” in this context, I mean the technique of splitting up musical phrases between performers, where each performer plays only one (or a few) notes in succession. When practicing this kind of improvisation, it struck me as related to the “sound charts”[7] used by John Cage, in the sense that this is gestural playing based on fragmented phrases. The difference between these techniques is that each phrase has an undetermined content in pointillistic improvisation, whereas sound charts (and the randPlayer) use phrases with a determined content.

 

Improvisation within 12 tone composition (based on the composition “Sjakktrekk”/”Chess move”) was explored, and we tried to loosen the strict systematic requirements in order to spend more energy and focus on interaction and musicianship. We did this by breaking up the 12 tone series into intervallic motifs that could be freely transposed (for example by using the first 4 intervals as a clear motif). By using a very limited and short motif, less concentration was required to perform the music "correctly", and we were able to use more of the performative focus on listening and on creating something together. This worked out rather well, and we went on to attempt to create a fugue-like interplay of melodic lines. The use of intervallic motifs appears to be a good structural tool for creating a frame for improvisation; it is easy to hear what the other musician is doing and which part of the composed material he or she is using at any given time. It appears as if this also creates a tight integration internally in the material, so that the result appears structurally solid, irrespective of the details in our individual choices of notes and rhythm. It also looks as if it works well in this situation to use other parts of the 12 tone series (than the limited first four or five notes) to create contrasting motifs as a break from, or variation of, the core motif.

We discussed the possibility of creating a totally contrasting intervallic motif as a basis/frame for a “B section”, a contrasting section. The 12 tone series in Sjakktrekk is in itself rather limited in terms of intervals, in that it uses the same intervals again and again. It would seem at first sight that it should be easy to find a contrasting intervallic motif, but on closer inspection it is not obvious that one can find intervals that have not already been used or implied by the first intervallic motif. The following gives a musicological explanation for this:

When working with interval series as a melodic device for improvisation (or composition), we use octaviation and inversion freely. This reduces the total number of available interval classes to 6, in accordance with set theory[8]. The initial motif in the “Sjakktrekk” 12 tone series is based on minor thirds and minor seconds. A contrasting interval motif may be based on a perfect fourth and a major second. If you look at these two motifs together, only the major third and the tritone have not been used, and the tritone can be regarded as two minor thirds stacked on top of each other, which means that only the major third is quite “virginal”. This indicates that there are limitations as to which unique interval types are available for making motifs as a basis for improvisation within this type of theory. I want to look into this issue to see whether there are ways to develop new compositions (frames for improvisation) within this paradigm. The musical validity of inversional equivalence for intervals is debated[9], but at the same time it seems to me at least partly valid. I think that in order to create several unique compositions within this pattern, one must first create motifs that are longer and that have more sense of direction; perhaps they should also contain characteristic octaviations and/or inversions. This will make the motif more identifiable and set it more clearly apart from all other possible motifs. Thus it will be easier to compose contrasting motifs. It will automatically become more difficult to improvise on longer motifs, but this is purely a question of practicing the technique. I think we have made a wise choice by beginning to work with short motifs in order to establish a form of ensemble playing within this type of technique, and that the general experience we gain may be transferred to more difficult material at a later stage. This also points at a methodology for teaching others to improvise within the same type of (a-)tonal technique.
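
To make the interval class reduction concrete, here is a small Python sketch (my own illustration, not ImproSculpt code) with the two motifs written out as semitone intervals:

# Reduce intervals to interval classes: with free octaviation and inversion,
# any interval collapses to one of 6 classes (1-6 semitones).
def interval_class(semitones):
    ic = semitones % 12
    return min(ic, 12 - ic)

# Motif from the "Sjakktrekk" series: minor thirds (3) and minor seconds (1).
motif_a = [3, 1, 3]
# A contrasting motif: perfect fourth (5) and major second (2).
motif_b = [5, 2]

used = {interval_class(i) for i in motif_a + motif_b}
remaining = set(range(1, 7)) - used
print("used interval classes:", sorted(used))            # [1, 2, 3, 5]
print("remaining interval classes:", sorted(remaining))  # [4, 6] = major third, tritone
# ...and the tritone (6) is itself two minor thirds (3 + 3), leaving only the
# major third untouched by the two motifs.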

back to top

 

Reflection on the choice of algorithms (11.2007)

In the early phases of this project, I investigated the use of several different types of algorithms. I did some experiments with Lindenmayer systems, Markov chains and cellular automata. As the work progressed, I turned more and more to algorithms related to fuzzy logic and algorithms based on serialism. I realize that I have been searching for algorithms where I could easily hear the development of a source material, and that this method of certifying the algorithms’ musical validity is essential to me. This is akin to traditional methods of improvisation training, where the issue of hearing before playing is stressed (as discussed earlier, e.g. in Reflection on hearing a process). It has also been important that the algorithms develop a source material, and that this source material is extracted from live improvised input.

In the process of selecting and implementing algorithms, I have focused on the issue “how can this be used for musical realtime interaction?”

back to top

 

Reflections related to the work with the software (11.2007)

A major part of the work has consisted in the design of technical implementations; this applies to a number of details that, after they have been completed, may be regarded as trivialities. Examples of these are the design of audio routing and effects processing, straightforward midi controlled instruments, the use of objects in programming, include files and macros, etc. Further, in the same process, I have spent a lot of time on solving technical details, such as “how to sort a list containing events in order to find all possible permutations of the list” (used in the module vector harmonizer; a small sketch is shown below). There are specific computer technical issues that I have tried to solve by searching for information on the Internet and by inquiring in discussion forums (the Csound mailing list, the Python mailing list, asking available academic personnel at NTNU). Nevertheless, I have been forced to find implementations that suit my purpose and test whether each implementation will work in the context (the algorithm) in which I wanted to use it. I have also spent time trying to settle on an architecture (for the computer program and for the flow of signals in the audio processing) that is flexible enough to facilitate later extensions. These tasks have been new to me, and I have tried to predict which demands the software may have to meet in the next few years. I see that the solutions that I have come up with are not perfect, but they are better prepared for future alterations than they would have been if I had not thought about this issue at all.

Early in the project, I spent a lot of time creating a multitimbral instrument configuration for using the Marimba Lumina as a melodic controller. One may think of this as a variation on a well known product: the multitimbral synthesizer, which is easily and commercially available. I would not have had to spend so much time on it if I did not want to get this synthesizer integrated into the same instrument as the rest of ImproSculpt. I chose to integrate this synthesizer because integration provides the opportunity for flexible routing of sound and control signals between the multitimbral synthesizer and other parts of ImproSculpt. Also, I did not find a ready-made architecture for such a synthesizer flexible enough to match my requirements, and as a consequence I designed the synthesizer from scratch in Csound. Important features were program change via midi, a good selection of ready-made instruments (tone generators), and flexible routing of audio signals for effects and audio outputs.

The use of voice groups in ImproSculpt is based on the requirement for a flexible and standardized way of controlling signal routing. Voice groups in ImproSculpt have developed in a direction where they are very similar to the traditional configuration of a sound mixer for studio and PA use (indicating that the well tested architecture of the traditional audio mixer is quite good, and suggesting that I might consider changing the name from “voice group” to “mixer channel”).
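
As an aside, the permutation detail mentioned above can be illustrated with a few lines of Python using the standard library (hypothetical note numbers; this is not the vector harmonizer code itself):

# Generate all orderings of a small list of events, e.g. note numbers.
from itertools import permutations

events = [60, 64, 67]  # three hypothetical MIDI note numbers

for perm in permutations(events):
    print(perm)
# (60, 64, 67), (60, 67, 64), (64, 60, 67), (64, 67, 60), (67, 60, 64), (67, 64, 60)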

On the internet, there is a wide selection of instruments written in Csound; some are written for midi/real time use and some are written for deferred time (for use with a Csound score). Csound has opcodes that serve to generalize instruments so that they may be used both from score and via midi, but there are conceptual differences between real and deferred time that make it difficult to standardize an instrument to work in both contexts. The most important reason is that in real time, you do not know how long a note is going to last, while in deferred time you decide the duration of a note before the note is started. For timbral modulations that should develop through the full duration of the note, this carries a conceptual significance, and it has to be solved differently from instrument to instrument. I also wanted to adapt the synthesizer in such a way that I had as many modulation parameters as possible available without having to lay down the Marimba Lumina mallets. This has been solved through extensive use of expression pedals as well as utilization of the Marimba Lumina features, like sending different midi controller data for each mallet when gliding a mallet along the tone bars. I also use the Lumina’s trigger pads for specialized functions in ImproSculpt, like turning the master clock on and off, adjusting the amount of attack in all the sounds, a tap tempo function for the master clock which also sets the tempo for delay effects, and so on.

Early in the project, with reference to software architecture, I made an effort to make clear divisions between the various basic functions in the program, such as GUI, audio processing, and composition logic. In the early stages, I had an “eventCaller” that was a central module able to communicate with other modules (The eventCaller is still such a central module in the current implementation). The architecture was developed further through the diploma project “Software Architecture of the Algorithmic Music System ImproSculpt”[10] with the students Thor Arne Gald Semb and Audun Småge, supervised by Letizia Jaccheri. The current software architecture is based on the research made by Gald Semb and Småge, and I have expanded and modified it as appropriate.

 

I have spent a lot of time trying to develop a Markov based melody generator, without having succeeded (in the musical sense) yet. A main reason for not having succeeded is that I need a melody generator that is able to function with a very limited database of events to choose from, because I want the opportunity for the instrument’s memory to be “empty” when a performance starts. Markov chains work best when there is a large database of events, and optimally, the database is (manually) filtered to only contain events that combine well with each other. During a course with David Cope in Santa Cruz in the summer of 2005 (WACM 05), he confirmed that he prepares his database of events manually in order to achieve the good results he gets with his computer program EMMY.
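
To show where the problem lies, here is a minimal sketch (my own illustration, not the ImproSculpt melody generator) of a first-order Markov chain over pitches; with only a handful of recorded events, the transition table is sparse and the generated melody easily gets stuck or repetitive:

import random

def build_transitions(melody):
    """Count pitch-to-pitch transitions in a recorded melody."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length):
    """Random walk through the transition table."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:            # dead end: no known continuation
            break
        out.append(random.choice(choices))
    return out

recorded = [60, 62, 64, 62, 60, 67]   # a very small, hypothetical database
table = build_transitions(recorded)
print(generate(table, 60, 10))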

 

The work with interval series and intervalMelody has been very much inspired by the course in algorithmic composition WACM 05. Paul Nauert’s work with pitch class sets was in step with my previous interest in serial techniques. On a related subject, Paul Nauert has also worked with scales that do not repeat themselves in octaves, a system that resembles the interval series technique as I learned it from the Norwegian composer Christian Eggen. I still have not used the most theoretical implications of pitch class sets. I have simplified the rules to be able to control whether the algorithm works correctly by simply listening to what it generates. I think this aspect is extremely important; otherwise I might just as well have used some sort of stochastic scheme instead of a musically inspired algorithm.

The IntervalMelody generator is also inspired by fuzzy logic, a technique that was reviewed by Peter Elsea in the WACM course. This is a technique that coincides to a great degree with the way I think about music: “somewhat more this way, and a bit less that way”. Terms like “somewhat more” and “a bit less” are very much in line with fuzzy logic. I have ported Peter Elsea’s LISP fuzzy library to Python, but I have not formalized my own algorithms to make use of the fuzzy library. I see that my algorithms are “fuzzy related”, and I think there would be an advantage in generalizing the algorithms to use standard routines for fuzzy logic; once this has been done, it would probably be easier to develop this aspect of the algorithms further. At the same time, I feel that I put myself under a constant production pressure to make use of the algorithms, and in such cases during the work process I have sometimes implemented things in the fastest way possible in order to move on. The main target remains: the musical use of the algorithms.
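
As an illustration of the kind of fuzzy reasoning meant here, the following small sketch (hypothetical rules and parameter names, not taken from the ported Elsea library) expresses “somewhat more” and “a bit less” as graded membership values between 0 and 1:

def ramp(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def loudness_is_high(amp):
    return ramp(amp, 0.3, 0.9)

def density_is_low(notes_per_sec):
    return 1.0 - ramp(notes_per_sec, 2.0, 10.0)

# A fuzzy rule: "if loudness is high AND density is low, then lengthen notes a bit".
amp, density = 0.7, 4.0
rule_strength = min(loudness_is_high(amp), density_is_low(density))  # fuzzy AND
note_length = 0.2 + 0.8 * rule_strength   # seconds, scaled by the rule strength
print(round(rule_strength, 2), round(note_length, 2))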

 

I have made attempts to use algorithms like L-systems and cellular automata. I thought these to be very promising algorithms during the early phase of the project, but I have not found a musical application of these algorithms that makes sense to me. I see that the algorithms, technically speaking, have a potential for organic development patterns, but I have still not been able to find a mapping of the algorithms’ output that provides musical meaning for my personal expression. I want to get back to these algorithms in later work and see once more whether I will be able to find out how to use them for my purposes.
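
The following tiny sketch (with an entirely hypothetical mapping of my own, not anything used in ImproSculpt) shows where the difficulty lies: the L-system rewriting itself is simple, but the choice of how its symbols become notes is arbitrary, and it is exactly this mapping step that is hard to make musically meaningful:

rules = {"A": "AB", "B": "A"}   # a classic Lindenmayer rewriting rule

def expand(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

symbols = expand("A", 5)                 # "ABAABABAABAAB"
pitch_map = {"A": 60, "B": 63}           # one of countless possible mappings
melody = [pitch_map[ch] for ch in symbols]
print(symbols)
print(melody)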

back to top

 

Reflection after the final artistic presentation concert, December 1st 2007 (12.2007)

It would be fair to say that I (and, I think, also the other musicians) got a little bit over-excited and some things went a little fast (in terms of tempos, general precision and carefulness). On the other hand, I do like the energy and the attitude of this concert a lot. When listening to the recording of the concert, there are certain sections that would actually make me envious if I heard them played by other musicians in concert, thinking that I would not be able to play in this way myself.

That said, we could have spent more time developing some of the improvised sections. I also remember thinking during the set: “Øyvind, you could try making some more slowly evolving melodies and try using longer notes for a change…” I just could not find the calm to keep that up for long, except in the very beginning and in the interval melody section starting after 53 minutes.

 

Why does it have to be so loud?

Some people did have objections to the sound level used during the concert. A reflection on this may be appropriate. I do agree that it was loud at times, all in accordance with my intentions for the concert. I have not heard such objections from the appraisal committee, but from some people in the audience. Some of the objections concern the loss of detail in the algorithmic output from my instrument due to excessive sound level. I have reflected on playing in high volume settings earlier in this document. I have worked systematically towards a mastery of this dynamic range, and parts of the concert were intended to display this. It is a fair argument that I should also have shown an equal mastery of the extremely soft dynamic range. Playing loud also comes as a physical necessity when employing a loud drummer. With Kenneth (Kapstad), the acoustic sound of the drums, and especially the cymbals, is in itself quite loud. Still, I would want the drums to be amplified to create the fat sound of drums through a PA system. I have become aware lately that I really enjoy sound coming from speakers, that I like amplified sound better than acoustic sound. I also wanted the drumming style provided by Kenneth, obviously for the rhythmic groove based sections, but I also really like the way he solves the more free sections. The general sound level follows as a natural balance to the sound of the drums. Further, playing loud creates a momentum in the music not available by other means. It creates almost a feeling of danger, which I consider a good thing. As I have mentioned earlier, playing in high volume settings demands a sort of survival instinct regarding the shaping of very solid musical phrases. I figured this was an essential feature of my playing (and of ImproSculpt’s output) to display during the final concert.

 

After the concert, I expected to be happy that it all went well. Rather, I was actually sort of sad, almost irritated for several days. I have heard stories of a “post premiere depression”, but never experienced such a thing, at least not to this degree. I realize that the motivational focus for working has been unconsciously focused on this one performance for the last three years, and that it might take some time to adjust and refocus.

back to top

 

Reflection on my own competence as an improviser (12.2007)

Working on the development of a new instrument for improvisation has naturally affected my competence as an improviser. Studying the specific composition techniques involved has led to new (for me) ideas on how to approach improvisation. However, it is difficult for me to objectively analyze my own playing and to compare the “now” to the “then”, in terms of describing specific aspects where I have improved. My personal feeling is that I have become more self-confident about improvising, and I have developed some structural tools for creating coherence in musical statements and musical developments. The most interesting aspect is probably what is hardest to describe: how do I improvise within these techniques after I have familiarized myself with them to the extent that I no longer think about the (compositional) techniques? The critical aspect of this is “that I no longer think”, and as such it makes it hard to pinpoint the essence of the matter in writing. What I do know is that I routinely break “the rules”, but that the rules (the specifics of a particular composition technique) are still in effect, and have a significant impact on the structuring of the resulting music. Some of the rule-breaking occurs due to a conscious choice during performance, while sometimes it happens because there are more things happening simultaneously than I can intellectually process while playing. Going back to the rehearsal room and practicing the specifics of the compositional technique, in terms of both ear training and physical execution, will help the ability to comply with “the rules”, and also create an even better foundation for breaking them. However, none of the insights in this paragraph should come as a surprise to any practicing improviser.

back to top

 

Reflection on the current state of the ImproSculpt instrument (12.2007)

This paragraph is a technical reflection on the current state of the software instrument. It will probably not contain any useful or comprehensible information for a reader who is not familiar with the implementation details of the software. I still think that these technical issues have a natural place in a reflection.

 

The parameter mapping for the partikkel single voice module is mostly a direct mapping of controller input to instrument parameters. This demands detailed, focused attention from the performer, but also allows very precise expressive gestures in an immediate manner. On the other hand, the parameter mapping for the partikkelCloud module is largely a playback of interpolations between different presets of a large array of parameter values. These two approaches are opposites, and a middle ground was implemented in the form of metaparameter control for the partikkelCloud preset automations. However, the metaparameter control was not actively used during the concert on December 1st, as I was largely occupied controlling other voices (partikkel single voice and randPlayer).
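
A minimal sketch of the preset interpolation idea (with hypothetical parameter names, not the actual partikkelCloud preset data) could look like this, where one metaparameter moves the whole parameter set between two stored presets:

preset_a = {"grainrate": 20.0, "grainsize": 80.0, "pan_spread": 0.1}
preset_b = {"grainrate": 200.0, "grainsize": 15.0, "pan_spread": 0.9}

def interpolate(a, b, t):
    """Return parameter values blended between preset a (t=0) and b (t=1)."""
    return {key: a[key] + t * (b[key] - a[key]) for key in a}

# One fader (the metaparameter) now moves the whole parameter set at once.
print(interpolate(preset_a, preset_b, 0.25))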

 

I’ve implemented a segment organizer allowing assignment of the shortest and the latest sampled segments, but in practice I only used the “assign latest recorded segment” method. I think there is a significant gain in further development of the segment organizer. It would be relatively easy to incorporate pitch tracking and centroid analysis of segments (the analysis of segments is already implemented in Csound; the data is just not used for anything other than wave display yet). I also see the need for new segment organizer modes like “assign the brightest and shortest samples with a clear attack, and use only the 20 latest recorded segments to choose from”. It would also be very useful to store “segment assignment presets”, as this would create the possibility of going back to a previous musical state after developing a musical section (and possibly making some mistakes by throwing away good segments).
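
The selection mode described above could be sketched roughly as follows (the analysis fields are hypothetical and this is not the actual segment organizer code):

segments = [
    # each dict is one live-sampled segment with analysis data attached
    {"id": 1, "duration": 0.8, "centroid": 1200.0, "attack": 0.9},
    {"id": 2, "duration": 0.2, "centroid": 4500.0, "attack": 0.7},
    {"id": 3, "duration": 1.5, "centroid": 800.0,  "attack": 0.3},
    # ... more segments, newest last
]

def select_segments(segments, count=4, min_attack=0.5, latest=20):
    """Pick short, bright segments with a clear attack among the latest recordings."""
    pool = [s for s in segments[-latest:] if s["attack"] >= min_attack]
    # bright (high centroid) and short (low duration) first
    pool.sort(key=lambda s: (-s["centroid"], s["duration"]))
    return pool[:count]

print([s["id"] for s in select_segments(segments)])   # e.g. [2, 1]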

 

In the randPlayer module, I’ve added a lot of features during the last few months, like transposition, filtering, panning, effect sends, rhythm tempo factor and so on. Especially the filtering and duration controls provide ways of shaping the overall musical impression generated from the module in an intuitive manner. I still feel that this module is not very flexible in terms of musical expression. I will need to play more with it in different musical settings to determine how to develop it further.

 

Another issue is the “always on” instruments for monophonic melodic sounds: The tone generating instruments for fofVoice and noiseFormant (noiseFormant is a new instrument implemented after the initial submission of the instrument early in November) are a bit heavy on CPU usage. I can clearly see that these monophonic instruments provide valuable instrument timbres that work well with algorithmically generated melodies. They do not have the stiff “note by note” feeling that some of the other melodic instruments have. Development of more monophonic instruments is needed, but it is not technically possible to run many more such instruments in the current setup. Currently, these instruments run at all times, and I use one instrument per voice that could possibly make use of them. In practice, this wastes a lot of CPU resources, as the instruments may not be used at all for long periods of the performance. For the concert on December 1st, I disabled all such instruments for voices where I knew I was not going to use them. This limits flexibility, as I did not have the choice of changing my mind during performance. A method for dynamic allocation of “always on” instruments needs to be devised in order to gain flexibility in this matter. The same method should also be used for audio effect processing instruments.
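
A rough sketch of such dynamic allocation (my own illustration, not the ImproSculpt implementation) could be a small pool that starts instrument instances on demand and hands them out to whichever voice needs one:

class InstrumentPool:
    def __init__(self, start_instance, max_instances=4):
        self.start_instance = start_instance   # callback that would actually start an instrument instance
        self.max_instances = max_instances
        self.free = []
        self.in_use = {}

    def acquire(self, voice):
        """Give a running instance to a voice, starting a new one only if needed."""
        if voice in self.in_use:
            return self.in_use[voice]
        if self.free:
            inst = self.free.pop()
        elif len(self.in_use) < self.max_instances:
            inst = self.start_instance()
        else:
            raise RuntimeError("no free instrument instances")
        self.in_use[voice] = inst
        return inst

    def release(self, voice):
        """Return an instance to the pool when the voice no longer needs it."""
        inst = self.in_use.pop(voice, None)
        if inst is not None:
            self.free.append(inst)

# Usage with a dummy start function standing in for the real instrument start:
pool = InstrumentPool(start_instance=lambda: object(), max_instances=2)
fof = pool.acquire("voice1")
pool.release("voice1")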

 

Regarding rhythm: currently, ImproSculpt only uses precomposed rhythm patterns for algorithmically generated voices (the intervalMelody and randPlayer modules). This is very limiting in terms of expressive flexibility, as the precomposed rhythms will quickly sound stiff and repetitive. The current workaround is to change rhythm patterns frequently, and also to use a slight random deviation in the rhythm pattern selection method. Rhythm pattern recording (and assignment) must be implemented in the near future. A synchronization of ImproSculpt’s master clock to an external audio input source is also needed to enable flexible interplay with live musicians (e.g. drummers). The synchronization method could loosely be based on the sync feature as implemented in the partikkel cloud instruments (with the partikkel and partikkelsync Csound opcodes). The type of synchronization needed is a “soft sync” where ImproSculpt’s master clock can gradually adjust and “gravitate” toward the analyzed pulse of an input signal.
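
The “soft sync” idea could be sketched along these lines (a simplification of my own, not the partikkelsync implementation): on every update, the master tempo is nudged a fraction of the way toward the analyzed tempo of the external signal, so it gravitates toward the external pulse instead of jumping:

def soft_sync(master_bpm, detected_bpm, strength=0.1):
    """Move the master tempo a fraction of the distance toward the detected tempo."""
    return master_bpm + strength * (detected_bpm - master_bpm)

bpm = 120.0
for detected in [128.0, 128.0, 127.0, 129.0]:   # hypothetical beat-tracker output
    bpm = soft_sync(bpm, detected)
    print(round(bpm, 2))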

 

During the concert on December 1st, I heard some clicking sounds during the attack portion of midi triggered notes. I guess this is related to audio dropouts or system latency in some way. Inspection of the multitrack recording from the concert does not show actual dropouts; rather, it shows some sort of “short and terminated attack” a few milliseconds before the actual instrument waveform starts. In any case, I consider such clicking a catastrophe, and the matter must be investigated. I am confident that the issue is not related to CPU overload, as I was actually watching the CPU meter on the computer when I heard these clicks.

back to top

 

Conclusive remarks (01.2008)

 

Seen as a whole, the reflections in this document provide a perspective on the process of working with an artistic development project, working within the scholarship program, and the process of writing the new software instrument. The reflections are of a very diverse nature, and I do not find it appropriate to even try to sum up the contents. For the impatient reader, I would rather encourage reading the paragraph headers to sift through the material. However, some conclusive remarks about the process as a whole may be appropriate.

 

The work has adhered to the original project description in that it has been focused on the making of a new instrument for improvisation, and on performing with this instrument. The incentive has been to search for new approaches to improvisation. One could of course discuss to whom the approach is new. I know that I have learned to approach improvisation in ways that I was unable to before embarking on this project, but I would not be naïve enough to say that no one has improvised in this manner before. There are some examples, however, of techniques that I have not heard others use. The intervalMelody generator coupled with live recording of improvised interval series is one example of a technique that has been utilized in a new manner. Further, the work on the partikkel synthesizer provides an extensive tool for audio transformations. Even if the particle synthesis technique has been used to a great extent in the project, there is still a vast potential waiting to be unleashed.

 

The process has been more divided than originally expected, as performance activities and software development have been executed mostly in parallel. This is due to the simple fact that it was not possible to perform on the new instrument before it was built; still, it seemed reasonable to keep up some performing activity with whatever parts of the instrument were ready at any given time during the project. My own judgment of the instrument development is that it was first ready for use during early summer 2007, as this was the first time all parts of the instrument worked together simultaneously and as a whole.

back to top

 

Further work

 

As a substantial amount of work has been put into making a new instrument for improvisation, one of the obvious tasks for further work is to use the instrument in practical performance situations to explore the possibilities and limitations of the instrument. It is also my firm belief that software development and musical performance are interdependent activities in a context like this, so some further development of the software is highly likely to occur.

The software instrument ImproSculpt in its current form provides a tool for improvisation and realtime composition. With its modular architecture it is well suited for further expansion in terms of more composition modules and a larger selection of audio generating and processing instruments.

 

It would be interesting to implement Dmitri Tymoczko's voice leading principles as described in “The Geometry of Musical Chords” (Tymoczko 2006) and in “Score Generation in Voice-Leading and Chord Spaces” (Gogins 2006). A further investigation of Markov chains as a generative algorithm could also prove fruitful, keeping in mind that this technique needs a large database of analyzed material to generate interesting results. One approach to this problem might be to analyze audio segments instead of note events, as this would certainly provide a large and diverse material for the database. In this case, the audio input could be segmented according to significant changes in amplitude, pitch and spectral content, with a consequent analysis of transitions from segment to segment. In its simplest form, and applied to a speech signal, this technique would conceivably make a poor speech resynthesizer or “talking robot”. In a more refined form, and applied to musical audio material, it might provide an interesting tool for restructuring of live audio input. A further investigation of fuzzy logic routines also seems like a logical next step in terms of developing compositional methods for ImproSculpt. This might also provide a means of algorithm optimization for some existing composition modules in ImproSculpt, such as the intervalVector Harmonizer. Fuzzy logic might conceivably also be utilized as a technique for audio segment reorganization along the same lines as the idea suggested above.
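
The segmentation idea could be sketched roughly as follows (speculative, not existing ImproSculpt code): cut the incoming analysis stream into segments wherever the short-term amplitude changes by more than a threshold, so the segments can later feed a Markov analysis of segment-to-segment transitions:

def segment_by_amplitude(frames, threshold=0.2):
    """frames: list of short-term amplitude values; returns lists of frame indices."""
    segments, current = [], [0]
    for i in range(1, len(frames)):
        if abs(frames[i] - frames[i - 1]) > threshold:   # significant change: new segment
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

amps = [0.1, 0.12, 0.5, 0.55, 0.5, 0.1, 0.08]   # hypothetical analysis frames
print(segment_by_amplitude(amps))               # [[0, 1], [2, 3, 4], [5, 6]]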

The partikkel opcode provides a wide area of further research projects, and also a resource for building new audio generators. A generic instrument architecture for an “analogue-sounding” synthesizer instrument should be implemented based on the partikkel opcode, as this would provide a very flexible audio generator with a huge number of timbral modulation parameters. It would also be sensible to write a partikkel usage tutorial with numerous examples, to help other Csound programmers utilize the power of the partikkel opcode.

To assist interested developers in utilizing ImproSculpt4 as a platform for algorithmic composition software, a tutorial for writing new ImproSculpt modules should be written. This would convey the general architecture and signal flow of ImproSculpt in an exemplified manner, and provide simple templates for timed automation and graphical user interface modules.

Finally, it would be sensible to develop ear training methods to familiarize performers (on ImproSculpt or other more conventional musical instruments) with the composition methods used, allowing a better intuitive understanding of the musical processes involved and allowing improvised use of such techniques with or without the aid of software.

back to top

References

The references are to be found in a separate document here.

 



[1] Also called L-systems. http://en.wikipedia.org/wiki/L-system

[2] http://en.wikipedia.org/wiki/Cellular_automata

[3] See for example http://alimomeni.net/interpolation-spaces and http://www.timblackwell.com/

[4] See for example http://place.unm.edu/relational_art.html

[5] The program code specific to the composition can be found in the method eventCaller.executeInputParameterMappings(), on line 1171 of the source code. This method calls other methods that are related to the actual musical composition, and these other methods can be found immediately following executeInputParameterMappings() in the source code, up to and including eventCaller.updateModulesActive(), ending on line 1971.

[6] http://www.n-collective.com/index.cgi?article=1&dept=groups

[7] Cage’s use of sound charts as a compositional device also inspired the design of the randPlayer module in ImproSculpt. An example of a Cage piece using this technique is “16 dances”, CD. 09026 61574 2 BMG/RCA 1994.

 

 

[8] As based on Allan Forte’s work with pitch class sets and interval vectors, see for example: http://solomonsmusic.net/setheory.htm#3.%20Interval

[9] See for example: http://solomonsmusic.net/setheory.htm#Why%20is%20the%20Solomon%20Prime

[10] http://prosjekt.idi.ntnu.no/sart/publications.php