I'm sure the saga of trying to decide analog versus digital, and software versus hardware, must seem like a meandering, even chaotic, path. I suppose the long arc of synthesizer development does not have to make sense. It has been a search, but one that yielded a tremendous amount of learning.
I still miss the idea of constructing a digital sound system with TTL or CMOS logic; with bit-slice and microprogramming; or even with an FPGA. And of course, I still miss pure analog, which I worked very hard at using for synthesizers. What has happened is that 64-bit Intel and ARM microprocessor SoCs have become so powerful that live multitimbral digital sound synthesis and signal processing is now possible at low cost and low power.
While I truly love analog, the "Aha!" for me was realizing that I can get more synthesizer done faster with code than with soldering. The floating-point performance alone makes a measurable difference for sound generation and signal processing, compared with other digital hardware approaches. Combine fast arithmetic with high-capacity memory and the ubiquity of inexpensive 24-bit high-sample-rate audio CODECs, and suddenly it's a different world.
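To make that concrete, here is a minimal sketch of "synthesizer in code," using only the Python standard library. This is purely my illustration, not the code of any product mentioned here: a floating-point sine oscillator rendered to a 16-bit WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 48000  # Hz; a common rate for inexpensive audio CODECs

def sine_tone(freq_hz, duration_s, amplitude=0.5):
    """Generate a sine tone as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    step = 2.0 * math.pi * freq_hz / SAMPLE_RATE
    return [amplitude * math.sin(step * i) for i in range(n)]

def write_wav(path, samples):
    """Write float samples to a 16-bit mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit PCM
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

if __name__ == "__main__":
    # One second of concert A, rendered entirely in floating point.
    write_wav("a440.wav", sine_tone(440.0, 1.0))
```

A few dozen lines replace an oscillator board; that is the trade I was weighing against soldering.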
A huge nudge in the general direction of software synthesis happened in 2016, when I got a Raspberry Pi 3B. I played with a number of software synthesizers and MIDI hookups. It was incredible that it worked so well, even without a realtime Linux kernel! This was the start of thinking about the "glass cockpit" type of synthesizer I wanted to build, a kind of homage to the elegance of the Synclavier II, which I totally worshiped during my strongly DSP-oriented sound days in the 1980s.
In the Summer of 2019, I got a Raspberry Pi 4B, an incredible upgrade. I liked this little computer so much that I got more of them. Two are dedicated to driving Software Defined Radio (SDR) boxes in my amateur radio shack. With their small HDMI displays, these are basically "glass cockpit" radios. A third unit serves as a general-purpose CAD workstation with a larger HDMI display -- I'm thinking of eventually upgrading to a 4K display, which the 4B supports! And I have a fourth unit sitting unboxed, awaiting any new project for which I could use it.
I further expanded my "glass cockpit" concept of a (digital) synthesizer last December by adding a Google Pixel Slate. Later, I added the Google keyboard, making it practical to run Linux under Crostini. I was interested in this platform for my photography work, but its support for both Android and Linux also makes it an excellent platform for sound. The Slate has an elegant glass form factor and a wonderful UX. Chrome OS already had pretty good support for USB audio, and with the 88.* series it's quite good now. My Focusrite Scarlett 2i4 just plugs in and works.
In this new year of 2021, I keep thinking about software synthesis, signal processing, and sound recording in various new ways. The FL Studio Mobile and Audio Evolution Mobile apps for my Google Pixel Slate are absolutely incredible! On top of that, there is also an awesome Csound app for Android that I really like. And I kept thinking about the amazing things that could be done using the Raspberry Pi 4B as a building block, especially given the possibility of touch interfaces.
But what was off-putting about the Raspberry Pi 4B was the work required for hardware integration, as well as the low-level software development needed to handle I/O expansion. Too much hardware development, like the analog design work, was already slowing down getting to any kind of completed synthesizer. Designs would need to be completed, PCBs would need to be made, and only then could software integration really start.
Several high-performance CODEC HATs are available for the Raspberry Pi. These HATs, though, interfere with also having separate I/O for indicators, valuators, and some kinds of displays, like 7-segment LEDs or alphanumeric displays. An exception was the Raspberry Pi LCD touch screen, but that would also require software bundling. There did not seem to be one kind of HAT, or HAT plus extension, that did everything a synthesizer would need. I would have to do all that engineering myself, and that looked like more months of development effort. On top of that, even the best CODEC benefits from a low-latency or realtime Linux kernel, and configuring such a kernel is an engaging effort on its own.
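Beyond building the kernel itself, part of that configuration effort is just granting audio processes realtime privileges. On a typical Linux system this ends up as a PAM limits fragment along these lines (an illustrative example, not the exact setup of any distribution or of Zynthian; the group name and values vary):

```
# /etc/security/limits.d/audio.conf -- typical low-latency audio settings
# (illustrative; assumes an "audio" group, as on many Linux distributions)
@audio   -   rtprio    95         # allow realtime scheduling priority
@audio   -   memlock   unlimited  # allow locking audio buffers in RAM
```

Small on its own, but it is one of many such details that a pre-engineered image takes off your plate.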
One good compromise I had looked at before is the PiSound, which I bought a few years ago. I did like it conceptually, but not its form factor, nor its normalization and valuators. While flexible yet simple, it was not really what I wanted for a synthesizer: some kind of visual display, with related valuators and controls. In particular, an integrated visual display, not requiring a separate monitor. In fairness, the Raspberry Pi ecosystem does support such a thing, but there was still the matter of sufficient I/O for valuators and other displays (even LEDs), and also for the CODEC.
I'd seen the Zynthian V3 before, but was not interested at first, because it only supported the Raspberry Pi 3B. I wanted something specifically for the much more powerful Raspberry Pi 4B. Finally, that development team released the Zynthian V4! Like the V3, this unit had all the hardware engineering and realtime kernel work pre-done: valuators, a display, and an awesome CODEC. Plus further extendable I/O.
So, this weekend, I ordered the Zynthian V4 from zynthian.org. Excited! Cannot wait to get this SDIY kit, and put it together! LOL: No soldering required!
It now seems that the Model III EMS can become a collection of "glass cockpit" items, like the Zynthian V4 and the Google Pixel Slate apps, together with the various MIDI controllers I've collected during development.
Experimental sound composition is finally coming into view!