Ocean ARTic Journal 15/VI/21

A couple of days ago, two new data dumps appeared in the shared Dropbox folder, along with a text file explaining what was there. This is the data that Lukrecia has been wrestling to download from the server, and which I have been eagerly anticipating.

This is the coupled model data for the Arctic research project around which the work is to be formed.

We are beginning by taking a core sample of information. I received crystal clear instructions…

The data is the result of ensemble coupled-model simulations for a period of one year (starting 1st of June), conducted with the global model AWI-CM (Stulic 2015, Semmler et al. 2016). It comprises daily average values for the chosen climate variables over the Arctic Circle (AC, north of 66°N) from the control ensemble (AWICM_CTRL.txt) and from the ensemble with reduced sea-ice thickness (AWICM_RED.txt).


1 = ice, daily average sea-ice concentration (%)
2 = hice, daily average sea-ice thickness (m)
3 = tair, daily average surface temperature (K)
4 = mslp, daily average sea level pressure (Pa)
5 = prec, daily average precipitation (kg/m²/day)
6 = evap, daily average evaporation (kg/m²/day)

For each column:

1st value = min*
2nd value = max*
3rd value = June 1st
...continuation of the daily values over a year...
367th value = May 31st

*Note that the range (for the same variable) differs between CTRL and RED. For the common range of a variable, take the smaller min and the larger max between the two experiments.
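For my own reference, a sketch of reading one of these files under that layout - assuming plain whitespace-separated columns, which is my guess at the format rather than anything stated in the notes:

```python
# Column order and the min/max/daily row scheme come from the notes above;
# the whitespace-separated layout is an assumption.
VARS = ["ice", "hice", "tair", "mslp", "prec", "evap"]

def parse_columns(text):
    """Split a file's text into {variable: (vmin, vmax, daily_values)}."""
    rows = [[float(x) for x in line.split()] for line in text.strip().splitlines()]
    cols = list(zip(*rows))  # transpose rows -> columns
    return {name: (col[0], col[1], list(col[2:])) for name, col in zip(VARS, cols)}

def common_range(ctrl, red, name):
    """Common range for a variable: the smaller min and the larger max
    between the two experiments, as the notes instruct."""
    cmin = min(ctrl[name][0], red[name][0])
    cmax = max(ctrl[name][1], red[name][1])
    return (cmin, cmax)
```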

To these columns I added a date identifier to make it easier for us to bounce back and forward through the data to gain a clearer indication of changes.
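The date identifier itself is trivial to generate - a sketch, with the year 2000 as a placeholder since the model year is not specified:

```python
from datetime import date, timedelta

def date_labels(start=date(2000, 6, 1), n_days=365):
    """One calendar date per daily row, starting 1st of June.
    The start year is a placeholder; the model year is arbitrary."""
    return [start + timedelta(days=i) for i in range(n_days)]
```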

Because of the prior work done on assembling the sound engine utilising the Antarctic data, it only took a couple of hours to swap out the old data, import the new (beginning with the control year), and update the scale values to keep the new data within audible range.
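The rescaling is plain linear min-max mapping; a minimal sketch, with the target frequency band a placeholder choice of mine rather than the engine's actual settings:

```python
def scale_to_audible(value, vmin, vmax, lo_hz=110.0, hi_hz=1760.0):
    """Map a value from its data range [vmin, vmax] into an audible
    frequency band. The 110-1760 Hz band is a placeholder, not the
    patch's actual settings."""
    t = (value - vmin) / (vmax - vmin)  # normalised position, 0..1
    return lo_hz + t * (hi_hz - lo_hz)
```

The same normalised position t is what a slider indicator can display directly.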

The back end of the program is now so cluttered and confusing that I am going to have to put time aside to clean it up and organise the patch cables. In the meantime I have worked more on a GUI and added a couple of features to make it more usable for sharing and discussing the processing and the sounds of the shifting currents of data.

I added a slider for each data feed, with an indicator showing where in the min-max range the current value sits.

I also imported an mc granular build from yesterday’s tutorials and, whilst waiting for info from Finlo, mapped it to the precipitation data, using a field recording of snowmelt water as the source for the grains. The effect is encouraging, though listening to hour after hour of water samples is making me pee much more often. It seems to be a truism about water recordings that you have to get something really special to NOT remind you of bathrooms, toilet bowls and bladder functions.

With the sounds now coming out of the engine I am faced with a new series of questions and considerations.

I am enjoying the sense of unity among all the feeds, the sense of ensemble is working well, and I do get a sense of weight and expanse, of scale - an overall force which accommodates these smaller movements. But how far to go in order to alert the listener to what is changing, what is losing stability, what is lost?

By swapping the data source to the RED data set, which has Arctic sea ice removed for several weeks in the summer, and applying a different scale to the output - super Locrian - the listener becomes aware that the overall mood is slightly more anxious, for want of a better word. The full movements of the ensemble are still there, for sure, but things do seem a little more uncertain. Or is that just me making this analysis whilst in full control of the tempo and the key signature? I can edit the time easily and bounce back and forward in the dataset to hear these clear and sudden transitions. It is unlikely - unless someone steps forward with a web solution to enable listeners to manipulate the data themselves - that the transitions would be quick enough to be noticed. The old - and false - myth about boiling a frog comes to mind.
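For reference, the super Locrian (altered) scale is the semitone set {0, 1, 3, 4, 6, 8, 10}. A sketch of snapping a data value to it - the root note and octave range here are placeholder choices, not the patch's actual settings:

```python
SUPER_LOCRIAN = [0, 1, 3, 4, 6, 8, 10]  # semitone steps of the altered scale

def quantise(value, vmin, vmax, root=60, octaves=2):
    """Snap a data value to a super-Locrian degree spanning `octaves`
    octaves above `root` (60 = middle C). Root and range are
    placeholders, not the patch's settings."""
    t = (value - vmin) / (vmax - vmin)  # normalised position, 0..1
    steps = [o * 12 + s for o in range(octaves) for s in SUPER_LOCRIAN]
    idx = min(int(t * len(steps)), len(steps) - 1)
    return root + steps[idx]
```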

Directly because of this, and in order to make the shifting forces more discernible, I have moved the tempo from a stately 60 up to a more frenetic 240. This, of course, makes the samplers work a lot harder, and I am aware of computer fan noise becoming a constant undertone to running the data.
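Assuming one daily value per beat (the patch may well step differently), the tempo change compresses the year's playback fourfold:

```python
def playback_minutes(n_values=365, bpm=240):
    """Playing time in minutes if each daily value occupies one beat -
    an assumption about the patch, not a statement of how it steps."""
    return n_values / bpm
```

At 60 BPM the year of daily values plays in roughly six minutes; at 240 BPM, in about a minute and a half.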