Conversations in Deep Space
This track is what an image sounds like when treated as an audio signal: an example of synesthesia familiar to dolphins and sonar specialists.
In this particular case, a Google Maps street map of Paris was subjected columnwise to an inverse Fourier transform, as if it were a spectrogram to be decoded into sound. The resulting matrix, with the DC artifact zeroed, is then vectorized and saved in an audio format. The persistent beat in the sound is introduced by the Fourier transform. The orientation of street names produces pings with rising or descending pitch, at frequencies depending on their distance from the horizontal center line of the map. The red, green, and blue image layers are spatialized as sound sources originating at the center, left, and right of the listener. Processing software includes Matlab, Audacity, and Horne Spectrogram.
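The core of the process (columnwise inverse Fourier transform, zeroed DC component, vectorization into a waveform) might be sketched as follows. This is a minimal reconstruction in Python with NumPy and SciPy, not the original Matlab code; the array shapes, the grayscale input, and the normalization step are all assumptions.

```python
import numpy as np
from scipy.io import wavfile

def image_to_audio(img, sample_rate=44100):
    """Invert each image column as if it were a magnitude spectrum.

    `img` is assumed to be a 2-D float array: rows are frequency bins,
    columns are time frames (a spectrogram to be decoded into sound).
    """
    spec = img.astype(float)
    spec[0, :] = 0.0  # zero the DC row (the "DC artifact")
    # Columnwise inverse real FFT: each column becomes a short waveform.
    frames = np.fft.irfft(spec, axis=0)
    # Vectorize: concatenate the frames column by column into one signal.
    audio = frames.flatten(order="F")
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak  # normalize to [-1, 1]
    return audio

# Demo with a random stand-in for the street map (256 bins x 100 frames).
rng = np.random.default_rng(0)
img = rng.random((256, 100))
audio = image_to_audio(img)
wavfile.write("paris_sketch.wav", 44100, (audio * 32767).astype(np.int16))
```

With a real map image, each column of 256 pixels would become a 510-sample frame of audio, so bright pixels far from the bottom row of the image turn into high-pitched pings.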
The image below shows a portion of the soundtrack's spectrogram, closely resembling the original street map, except that it is folded along the middle. The mono spectrogram emphasizes individual pings (light blue spots), while the stereo spectrogram shows color-coded spatial predominance across frequency (vertical axis), time (horizontal axis), and power (top line).
Les Grands Boulevards
Life in Paris as viewed from behind a computer keyboard.
The Alphabets have their own aural character, given by their order of birth. Each can produce only a single utterance, and to say anything meaningful they must conjoin into words. They are very unstable elements, and because of this people tend to speak nonsense. Here we hear a discussion between an American robot walking his British fox, a German dwarf, and a French judge. As the argument heats up, they change frequencies and positions.