Ocean ARTic 7/VI/21

Yesterday evening, Susan and I walked down for an evening swim and a beer. I started talking about my concerns, the issues that required addressing. As is usually the case, the act of talking about it changed the perspective and opened possibilities and proposals to follow up on.

First off, given that I am maxing out the CPU with around 7 elements it seems highly unlikely that I’ll be able to develop an engine that will simultaneously run all 100 elements in the coupled model.

This suggests subgroups - smaller narratives within the overall dataset that illustrate key points - or a series of different engines for different groups of elements. Either way, it implies a huge amount of work. Lukrecia will have to advise on key findings, and locate illustrations of those findings in the overall dataset.

It seems clear, now, that there are three different modes of presentation for the finished commission: installation, performance, recording.

An installation will be the least satisfactory, I think, because there is no live incoming data to add dynamism and surprise. It would need to be simply a captured subset of model data played on a loop in a space with enough visual information to give the experience some kind of context and resonance.

A recording sits well with the idea of having to locate, isolate and work with highlighted aspects of the overall model. It would have the potential to reach a wider audience.

Similarly, a performance would have to isolate key areas, and develop the software output to sit well with live instrumentation. This will be the most interesting option: it will likely enable the release of further Liminal funding from Creative Scotland, and will enable the ensemble to get back together with something concrete to work towards. It suggests, however, an incredible workload for me over the duration of the work.

Two meetings today have proven helpful. The project team, as hoped, seem happy with the proposal that this work receives some kind of performance at the Queen’s Hall. A later exchange with QH suggests that they are very much behind the idea, but the caveat, as ever, remains that their opening date post-lockdown looks like it may shift back again. That is something that is out of any individual’s hands.

I raised the idea of trying to seek out folk in the university who would gain something from coming on board in some capacity. The key thing for me at this point is to avoid becoming any further mired in administration, definition of terms, or re-negotiation of outputs. I need to keep my head down and work purely on the sound. I am approaching that point where my head goes under the surface and I engage with the work on the inside. One day I will unpack that and describe it more clearly, but for the moment…

~

So, in defining how folk may add to the party (and as a draft text for Inge, the partnership contact at Creative Informatics):

Can you take a Max patch and reconfigure it to work in an installation context (rewiring the patch into a computer that can be housed with the installation / converting stereo to multichannel / building in mechanisms to ensure memory is freed up and system resources remain stable)?
Are you a Max/MSP super genius who could take a considerable patch and significantly reduce its CPU consumption?
Are you interested in creating data responsive visualisations to provide a new dimension to a piece of audio art?
Are you a PhD musician who would be curious to work with other classical musicians in exploring new performative processes and techniques with the Black Glass Ensemble, utilising data and semi-improvised responsive playing, pushing the boundaries of timbre and articulation?

~

A second meeting, this time with Finlo Cottier at MASTS, explored the potential - hopefully soon to be realised - of integrating his biological observation research into the overall scheme, to provide an organic sense of nutrients growing and expanding to the point in May - the spring bloom - whereupon there is a collision with the sudden appearance of new zooplankton.

I am considering whether granular synthesis might provide a way of bringing in a sense of frothing growth amassing in density, which in the springtime suddenly collapses in a mirror reaction to the appearance of new zooplankton.
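To sketch the thought outside the patch: granular synthesis scatters tiny windowed fragments of sound, and the number of grains per second can be driven directly from a data value. A toy illustration in Python - the grain pitch, grain length and the normalised "nutrient" curve are all placeholders of mine, not anything from the model:

```python
import numpy as np

SR = 44100  # sample rate

def granular_bloom(density_curve, grain_hz=440, grain_ms=40, seed=0):
    """Scatter short Hann-windowed sine grains into a buffer; more grains
    land where the control curve (values 0..1, one per sample) is high."""
    rng = np.random.default_rng(seed)
    n = len(density_curve)
    out = np.zeros(n)
    g = int(SR * grain_ms / 1000)
    t = np.arange(g) / SR
    grain = np.hanning(g) * np.sin(2 * np.pi * grain_hz * t)
    for i in range(0, n - g, g // 4):
        if rng.random() < density_curve[i]:  # curve sets grain probability
            out[i:i + g] += grain
    return out

# e.g. a curve that swells through spring, then collapses:
curve = np.concatenate([np.linspace(0, 0.9, SR * 4),
                        np.linspace(0.9, 0.05, SR // 2)])
audio = granular_bloom(curve)
```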

We spoke a lot about seasonal rhythms, and the increases in ‘amplitude’ as biomass rises and falls in relation to the surface, and in relation to how much surface ice is present, how strong the sun - or THE MOON - is, and where the food source is at any given time. Another curious observation at this point is the way that he - as a biologist - refers to the water being lighter because there is more sunlight getting through to it, whereas the AWI team consider this to be dark water. Their perspective is to look down on it from above, and the lack of ice causes it to be known as dark water. For the biologist looking up from the depths, of course, the lack of shading ice cover makes the water much lighter.

I think it will add a fascinating new dimension to introduce biological material - increasing the complexity of the inter-relationships, and of the rhythms impacting and driving every aspect of the territory.

In order to keep the team apprised of the palette of sounds being explored, I’ve made the most recent iteration of the instrument ensemble available to listen to on Soundcloud, using the Antarctic polynya data:

https://soundcloud.com/michaelbegg/ocean-artic-data-model-arrangement

This would be a good point to summarise the construction of the program so far…

A global tempo is set. The data file is split into its columns; presently this covers ice area, ice volume, ice thickness, air temperature, wind, salt flux, and salinity of the ocean floor. When the new data arrives from AWI the columns will change, and there will be many more - but the scaling and calibration will remain. The transition, therefore, should be painless and enable us to work together on critical consideration of how best to calibrate and articulate the key events in the narrative.
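For anyone following along outside of Max, the column split amounts to something like this Python sketch - the column names and the CSV layout here are assumed by me rather than taken from the actual AWI file:

```python
import csv

# Columns currently carried in the model data file
# (names are illustrative; the real file's headers may differ).
COLUMNS = ["ice_area", "ice_volume", "ice_thickness",
           "air_temperature", "wind", "salt_flux", "salinity"]

def split_columns(path):
    """Read the data file and return one list of floats per column."""
    streams = {name: [] for name in COLUMNS}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for name in COLUMNS:
                streams[name].append(float(row[name]))
    return streams
```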

The data values are scaled to musically audible values - either MIDI or frequency values. All the values are then filtered to ensure that they remain within a pre-defined key signature/scale, thereby reducing the likelihood of discord and atonality in the ensemble.
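The scaling and filtering step, sketched in the same spirit - the note range and the choice of C major below are placeholders of mine; in the patch the key signature is configurable:

```python
# Scale a raw data value into a MIDI note range, then snap it to the
# nearest pitch in a chosen scale to keep the ensemble tonal.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within an octave

def scale_to_midi(value, lo, hi, note_lo=36, note_hi=84):
    """Linearly map value from [lo, hi] into [note_lo, note_hi]."""
    t = (value - lo) / (hi - lo)
    return note_lo + t * (note_hi - note_lo)

def quantize(note, scale=C_MAJOR):
    """Snap a (possibly fractional) MIDI note to the nearest scale tone."""
    note = round(note)
    octave, pitch = divmod(note, 12)
    nearest = min(scale, key=lambda p: abs(p - pitch))
    return octave * 12 + nearest
```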

With the exception of wind and ice volume, each column is fed to its own array of 6 sinusoidal oscillators, independently pitched in direct relation to the root, and with each element of that array changing amplitude independently. This constitutes a full palette of 30 oscillators providing an ever-shifting, fluid foundation of sound.
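In spirit, each array behaves something like this numpy sketch - though the pitch ratios and amplitude drift rates here are invented for illustration; in the patch each oscillator's relation to the root is set by hand:

```python
import numpy as np

SR = 44100  # sample rate

def oscillator_array(root_hz, seconds=2.0, n_osc=6, seed=0):
    """Six sines pitched in simple ratios to the root, each with its own
    slowly drifting amplitude envelope (drift rates are arbitrary here)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(SR * seconds)) / SR
    ratios = [1, 2, 3, 4, 5, 6][:n_osc]   # illustrative harmonic ratios
    out = np.zeros_like(t)
    for r in ratios:
        drift_hz = rng.uniform(0.05, 0.3)  # independent slow amplitude drift
        phase = rng.uniform(0, 2 * np.pi)
        amp = 0.5 * (1 + np.sin(2 * np.pi * drift_hz * t + phase))
        out += amp * np.sin(2 * np.pi * root_hz * r * t)
    return out / n_osc                     # keep the sum within [-1, 1]
```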

In addition, each element is simultaneously fed to an instance of a VST instrument (at present the Massive X synthesiser or the Kontakt sampler). These instruments are loaded, as recorded above, with a well-balanced series of relevant instruments.
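Outside of Max, the equivalent plumbing is just MIDI out to whatever host carries the VST instrument. A minimal sketch with the mido library - the port name is a placeholder for whatever virtual MIDI bus the host exposes:

```python
import time
import mido

# Send quantized notes to the MIDI port the VST host (Massive X /
# Kontakt, in a DAW or in Max) is listening on.
with mido.open_output("IAC Driver Bus 1") as port:  # placeholder port name
    for note in [60, 64, 67]:  # e.g. quantized data values
        port.send(mido.Message("note_on", note=note, velocity=80))
        time.sleep(0.25)
        port.send(mido.Message("note_off", note=note))
```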

Because the system itself is fastened to a global clock, I am free to set up different tempo values for individual tracks - quarter notes, quarter-note triplets, etc.
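The arithmetic behind that is simple enough to note down: with one global tempo, each track derives its own pulse as a fraction of the beat. The track names and subdivision values below are mine, for illustration only:

```python
# With a single global tempo, each track derives its own pulse from the
# same clock. 1.0 is a quarter note, 2/3 a quarter-note triplet, 0.5 an
# eighth note, and so on.
TEMPO_BPM = 72
BEAT_SEC = 60.0 / TEMPO_BPM

SUBDIVISIONS = {
    "ice_area": 1.0,         # quarter notes
    "air_temperature": 2/3,  # quarter-note triplets
    "salinity": 0.5,         # eighth notes
}

for track, sub in SUBDIVISIONS.items():
    print(f"{track}: one event every {BEAT_SEC * sub:.3f} s")
```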

What I am trying to achieve at this early point is to create a multitude of possible points of entry for other data streams to come in and directly affect the way that the core set of instruments plays, sounds, performs, falls out of tune, changes in amplitude, etc. I am trying to create a sound ecosystem to reflect the multiple rhythmic elements and interactions of the territory.

Once the ecosystem is secure, the system will be edited to enable individual data streams to be output as a readable musical score. This is the crossover point where data, technology and acoustic playing begin to gather in a liminal territory. This is where the alchemy begins to suggest itself.
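When the time comes, exporting a stream as score material could be as plain as writing its quantized notes to a MIDI file for notation software to ingest. A first-pass sketch, again with mido, and again with the rhythm fixed to a single subdivision for simplicity - the filename and tempo are placeholders:

```python
import mido

def stream_to_midi(notes, path="stream_score.mid", tempo_bpm=72, beat_div=1.0):
    """Write a sequence of quantized MIDI notes to a file that notation
    software can open as a first-pass score (one track, fixed rhythm)."""
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.MetaMessage("set_tempo", tempo=mido.bpm2tempo(tempo_bpm)))
    step = int(mid.ticks_per_beat * beat_div)
    for note in notes:
        track.append(mido.Message("note_on", note=note, velocity=72, time=0))
        track.append(mido.Message("note_off", note=note, time=step))
    mid.save(path)

stream_to_midi([60, 62, 64, 67])  # e.g. a handful of quantized values
```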