Csound with Live Interaction

I had anticipated using Csound mainly as a compositional framework for recorded sound. But it turns out to be useful, and a lot of fun, to be able to control and shape sounds through live interaction as well. Perhaps not for performance, but for sound design within a composition that is intended to be recorded rather than performed live.

Late-modern 64-bit computers are so fast, and Csound so capable, that it really is possible to drive live sound generation in Csound via MIDI. My Pixelbook has a pretty good processor, but it's definitely not one of the latest; it's more a middle-of-the-road type. Still, it can yield ≈2300 MFLOPS at 64-bit double precision on a LINPACK benchmark, which is more than enough processing power for audio-rate signal processing.
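As a minimal sketch of what this looks like (assuming Csound 6 with real-time audio output and a MIDI backend that accepts -Ma for "all devices"; the sample rate, envelope times, and amplitude scaling are arbitrary choices, not anything specific to my setup):

    <CsoundSynthesizer>
    <CsOptions>
    ; real-time audio out, MIDI input from all attached devices
    -odac -Ma
    </CsOptions>
    <CsInstruments>
    sr     = 48000
    ksmps  = 32
    nchnls = 2
    0dbfs  = 1

    instr 1
      icps cpsmidi                      ; pitch from the incoming MIDI note
      iamp ampmidi 0.3                  ; amplitude scaled from MIDI velocity
      kenv madsr   0.01, 0.1, 0.7, 0.3  ; MIDI-aware ADSR, holds while the key is down
      asig vco2    iamp*kenv, icps      ; band-limited sawtooth voice
           outs    asig, asig
    endin
    </CsInstruments>
    <CsScore>
    f 0 3600    ; keep Csound running for an hour, waiting for MIDI events
    </CsScore>
    </CsoundSynthesizer>

With no massign statement, notes arriving on MIDI channel 1 trigger instr 1 by default.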

The Model III EMS could take the form of a variety of Csound configurations offering different synthesis modes. And this could include MIDI control surfaces. So it appears now that Csound is opening the door to the Glass Cockpit synthesizer concept I had contemplated at various stages during development (this started out in the early 1980s as an infatuation with the Synclavier II).
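As one sketch of the control-surface side: a knob or slider can be mapped directly onto a synthesis parameter with ctrl7. The channel and controller numbers below (channel 1, CC 74) are assumptions for illustration; a real surface would be mapped to whatever it actually sends. Dropping this in place of instr 1 in the CSD above adds a controller-swept lowpass filter:

    instr 1
      icps  cpsmidi
      iamp  ampmidi 0.3
      kcut  ctrl7  1, 74, 200, 8000     ; map CC 74 to cutoff, 200 Hz .. 8 kHz
      kenv  madsr  0.01, 0.1, 0.7, 0.3
      asig  vco2   iamp*kenv, icps
      afil  moogladder asig, kcut, 0.4  ; Moog-style 4-pole lowpass
            outs   afil, afil
    endin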

After many years of dealing with temperature compensation for BJTs, and with analog components like op-amps and comparators whose DC specifications are only ≈10-bit accurate (unless you pay for a very expensive grade), 64-bit digital floating-point offers a tremendous amount of freedom: all arithmetic, linear and non-linear, along with numerous transcendental functions, works drift-free and repeatably.
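A small illustration: the exponential converter at the heart of a 1 V/octave analog VCO needs a matched BJT pair and a tempco resistor to stay in tune, while in software the same function is one exact expression. This is a hypothetical score-driven instrument (the sweep range and base frequency are arbitrary; poscil falls back to Csound 6's built-in sine when no table is given):

    instr 2
      kvolts line   0, p3, 5                    ; sweep a virtual CV from 0 to 5 V
      kcps   =      27.5 * exp(log(2) * kvolts) ; drift-free 1 V/octave conversion
      asig   poscil 0.3, kcps
             outs   asig, asig
    endin

A score line such as "i 2 0 10" then sweeps five octaves upward from 27.5 Hz over ten seconds, with no trimming or warm-up drift.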

One of the other things I've discovered in making this shift in emphasis to software synthesis is that virtually all of the waveform signal processing, frequency synthesis, and other sound synthesis research I've done over the last few decades is still usable. Csound can emulate all of it, and generally without problems like offset voltages, temperature-induced drift, and other anomalous behavior.
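Ring modulation is a concrete case: a hardware four-quadrant multiplier has offset voltages that leak carrier and modulator into the output, whereas here the multiply is exact and the suppression is perfect. A minimal sketch with arbitrary frequencies, fitting the same CSD scaffold as above:

    instr 3
      acar  poscil 1, 440       ; carrier
      amod  poscil 1, 137       ; modulator
      aring =      acar * amod  ; ideal four-quadrant multiply, no offsets
            outs   aring*0.3, aring*0.3
    endin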
