I had to spend a bit of time yesterday putting together a script for the Innovation Showcase timetabled for Tuesday. Once I had the script, I set about sourcing some interesting visuals to accompany it, then combined the script, the visuals and some demo samples into a presentation introducing the project.
Given that so much of the time so far has been spent courting other scientists and preparing showcase material, it is a little frustrating not to be able to just burrow down into the work.
Frustrating, too, that Lukrecia is still having trouble accessing the data from the AWI server. Hopefully this will be addressed in the next day or so.
Despite these frustrations, I do feel that I made a breakthrough yesterday, and it is partly on account of having to write that script for the showcase.
In the presentation I alluded to a problem that I had only just begun to acknowledge to myself. With the ever-growing complexity of the software, and its near-complete dependence on the data, the sounds coming out of it, whilst satisfyingly complex, were not sounding like anything I would be happy to put my name to as a composer. This put a hook in my lip, and so I spent much of the remaining time yesterday applying myself to the challenge of making the data sound like me, my work, my style, my form, without disturbing the data itself.
I developed an interface that lets me mute channels and adjust the amplitude of each stream independently. With the screen much less cluttered, I could focus more on the sounds than on the architecture.
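For anyone curious about the mechanics, here is a minimal sketch of that per-stream mute/gain idea in Python with NumPy. It is not the actual interface, only an illustration of the mixing stage behind it; the StreamMixer name and the numbers are invented for the example.

```python
import numpy as np

class StreamMixer:
    """Toy per-stream mixer: independent gain and mute, then a weighted sum."""

    def __init__(self, num_streams):
        self.gains = np.ones(num_streams)               # linear amplitude per stream
        self.muted = np.zeros(num_streams, dtype=bool)  # mute flag per stream

    def set_gain(self, stream, gain):
        self.gains[stream] = gain

    def mute(self, stream, state=True):
        self.muted[stream] = state

    def mix(self, streams):
        """streams: 2-D array, one row per data stream, columns are samples."""
        active = np.where(self.muted, 0.0, self.gains)  # muted streams contribute nothing
        return active @ streams                         # weighted sum down to one channel

# Example: three data-driven streams, one second at 48 kHz, middle stream muted.
mixer = StreamMixer(3)
mixer.set_gain(0, 0.8)
mixer.mute(1)
out = mixer.mix(np.random.randn(3, 48000))
```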
It transpires that dropping the tempo and replacing some of the instruments has made a huge difference. I have quite accidentally fallen upon an arrangement of virtual instruments that both complements the sine wave harmonics and reflects the nature of the project.
Glass armonica, the processed Nordic treatments of Ólafur Arnalds' Spitfire library, Imogen Heap's Waterphone from her Soniccouture Box of Tricks library, Spitfire's Tundra strings. They all sit well against one another, and whilst I still have to deal with the obvious looping in the bar structure, I consider this to be a major step forward.
The next thing requiring attention is how to introduce imbalance and change into the system. Technically, it looks onerous to push atonal content through the scale logic of the current model. So I think it has to be a combination of filters, perhaps some field material stored in audio buffers, and some movement algorithms triggering arrays of comb filters, or granular erosion… Oh, something like that. Something like that.
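To make the comb-filter thought a little more concrete, here is a rough sketch, again in Python with NumPy, and again only a guess at how it might work: a bank of parallel feedback combs whose delay lengths (arbitrary primes here) could eventually be driven by a movement algorithm. The function names and values are mine, not anything already in the system.

```python
import numpy as np

def feedback_comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.copy(x)
    for n in range(delay, len(y)):
        y[n] += feedback * y[n - delay]
    return y

def comb_array(x, delays, feedback=0.7):
    """Run the input through several parallel combs and average the result."""
    out = sum(feedback_comb(x, d, feedback) for d in delays)
    return out / len(delays)

# Example: colour a one-second noise buffer (standing in for field material)
# with four combs; a movement algorithm could later modulate these delays.
buffer = np.random.randn(48000) * 0.1
coloured = comb_array(buffer, delays=[113, 197, 311, 431])
```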