Dan Stevens was the developer of Synther 7, the synthesiser for the Dragon, which squeezes more out of the Dragon than you'd think possible.
The Story behind Synther7

Because the CoCo's clock was only 973 kilohertz - less than 1/1000th the speed of 2011 computers - I was not able to use a constant sample rate. Frankly, the concept never occurred to me. Instead, the sample rate determined the pitch, and I used the 16-bit capabilities of the CPU to govern the sample rate precisely. This gave every voice a bit of a whine, but there were no other sampling artifacts. One could call this a 'dynamic sample rate'.

The Radio Shack Color Computer which I used was a 4K machine that I had expanded to 16K. I still have it, as well as the little TV I used for a monitor, although the cassette recorder is long gone. I saved my compiles onto cassette, developing the program in segments as I went. This allowed me to load the source code for a segment, then edit it, compile it and save the resulting binary with a "load at" instruction. The final product needed to be loaded into the computer from the several binaries on tape, then copied from the computer in one final overall tape save.

A first discovery was that everything I did had a 60 Hz buzz in it. I had several books with specs for the Motorola MC6809E CPU, and it was pretty obvious that the designers were using the AC power supply to trigger a vectored interrupt that read the keyboard. Well, I wanted to read the keyboard in my own way, so a little poke let me turn off the interrupt.

I pulled the waveform from a table, sending each value successively out to the D/A circuitry, then waiting for n clock cycles until the next value needed to be sent out the door. That waiting period gave me a chance to check the keyboard.

The keyboard was logically an 8 x 8 matrix. Eight "vertical" wires crossed eight "horizontal" wires. When a key was pressed, one of the vertical wires would touch one of the horizontal wires. Reading the keyboard consisted of sending '00000001', then '00000010', then '00000100', and so on, into one dedicated memory location, then at each 'send', reading another location to see if there was a non-zero result and, if so, which bit was set.

A single step in the first part of this process - outputting one byte and seeing if the other byte was non-zero - used few enough clock cycles that it could fit within a sample cycle. So the computer checked for a non-zero return byte for '00000001', then sent a byte to the D/A interface, then paused for n clock cycles to complete the pause between samples for the particular pitch of the note being played, then checked for a non-zero return byte for '00000010', sent the next byte from the waveform table to the D/A interface, and waited again. There were eight little routines that checked the eight 'send' wires for the keyboard.

But it needed to do more things while a note was sounding, so I added more little routines interspersed between the keyboard-check routines. I added a couple of shells that clocked through a whole other set of routines. The computer would cycle through the keyboard-check stuff, then hit a shell, and every time it hit that shell, the shell would take another step through a bunch of other routines. I visualized it as a wheel in a wheel, with the second wheel orthogonal to the first. A procedure in the second wheel would need to see the set of keyboard checks go by several times before it could repeat.
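To make the inner wheel concrete - one keyboard 'send' wire checked, one sample pushed to the D/A, then a cycle-counted wait sized for the pitch - here is a rough sketch in C. The real program was hand-written 6809 assembly; the table length, helper routines and port handling below are assumptions for illustration, not the original code.

```c
/* Sketch of the interleaved player loop in C; the hardware accesses are
 * stand-ins, and the constants are assumptions for illustration only. */
#include <stdint.h>

#define CPU_HZ    973000UL  /* the clock rate quoted above */
#define TABLE_LEN 32        /* assumed length of the waveform table */

/* Stand-ins for the memory-mapped keyboard and 6-bit D/A hardware. */
static volatile uint8_t dac_port, kbd_column_port, kbd_row_port;

static void    dac_write(uint8_t s)         { dac_port = s & 0x3F; }
static void    kbd_select_column(uint8_t m) { kbd_column_port = m; }
static uint8_t kbd_read_rows(void)          { return kbd_row_port; }
static void    wait_cycles(uint16_t n)      { (void)n; /* stand-in for exact NOP/BRN padding */ }

static const uint8_t waveform[TABLE_LEN] = { 0 };  /* one cycle of the wave */

/* The pitch sets the gap between samples: step through the table once per
 * period and the note comes out at pitch_hz - the 'dynamic sample rate'.
 * (The real code also allowed for the cycles the check and write take.) */
static uint16_t cycles_per_sample(uint16_t pitch_hz)
{
    return (uint16_t)(CPU_HZ / ((uint32_t)pitch_hz * TABLE_LEN));
}

/* Play one note, checking one keyboard 'send' wire between samples. */
void play_note(uint16_t pitch_hz)
{
    uint16_t wait = cycles_per_sample(pitch_hz);
    uint8_t  column = 0, i = 0;

    for (;;) {
        kbd_select_column((uint8_t)(1u << column));  /* drive one column */
        if (kbd_read_rows() != 0)
            return;                                  /* a key is down    */
        column = (uint8_t)((column + 1) & 7);

        dac_write(waveform[i]);                      /* next sample out  */
        i = (uint8_t)((i + 1) % TABLE_LEN);
        wait_cycles(wait);                           /* pad to the pitch */
    }
}
```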
The computer had a "NOP" assembler command that told it to do nothing for two clock cycles. It also had a "BRN" command which told it to branch to nowhere for three cycles. Between the two of them, I could construct a delay that was precise to one clock cycle.

When a key was pressed, while the computer was figuring out what to do about it, the sound of the note would, of course, stop for about a 20th of a second. But if the keypress was for another musical tone, that mini-pause was quite all right - the pause psychologically added weight to the next tone sounding.
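Since NOP burns two cycles and BRN three, any whole number of spare cycles from two upward can be padded exactly: use one BRN when the count is odd and fill the rest with NOPs. A tiny helper - an illustration of the arithmetic, not part of Synther 7 - that works out the split:

```c
/* Any n >= 2 can be written as 2*nops + 3*brns, which is what made the
 * delays exact to the clock cycle. */
#include <stdio.h>

static void plan_delay(unsigned n)
{
    if (n < 2) {                       /* 0 or 1 spare cycles can't be padded */
        printf("%u cycles: not reachable with NOP/BRN alone\n", n);
        return;
    }
    unsigned brns = n & 1u;            /* odd counts need exactly one BRN */
    unsigned nops = (n - 3u * brns) / 2u;
    printf("%u cycles = %u x NOP (2) + %u x BRN (3)\n", n, nops, brns);
}

int main(void)
{
    plan_delay(57);    /* 57 = 27 NOPs + 1 BRN */
    plan_delay(58);    /* 58 = 29 NOPs         */
    return 0;
}
```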
The volume of the sound was controlled by decrementing or incrementing the sample value so that the effective waveform faded toward the centerline. The quieter sound had a little less character, but was still recognizable until almost the very end. This allowed decay and release effects.
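A minimal sketch of that fade, assuming unsigned 6-bit samples centred on 32 (the actual sample range and step size aren't given above):

```c
/* Nudge one sample a step toward the centreline; applied across the whole
 * table, the waveform keeps its shape but shrinks in amplitude. */
#include <stdint.h>

#define CENTRE 32u   /* assumed midpoint of the 6-bit D/A range (0..63) */

static uint8_t fade_toward_centre(uint8_t sample)
{
    if (sample > CENTRE) return (uint8_t)(sample - 1);  /* decrement above */
    if (sample < CENTRE) return (uint8_t)(sample + 1);  /* increment below */
    return sample;                                      /* already silent  */
}
```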
The second summer, I added new features to the product and called it the Synther77. This new version let you save songs to tape in a format similar to MIDI, with note number and duration for each note. Syn7's five voices had grown to 50. But it still was limited by the waveforms available to the 6-bit D/A interface, as six bits produced only 64 different analog voltage levels.
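The exact tape layout isn't described beyond a note number and a duration for each note, so the record below is only a guess at what such a format might have looked like:

```c
/* Hypothetical Synther77 song event - the text above only says the format
 * stored a note number and a duration per note, MIDI-style. */
#include <stdint.h>

typedef struct {
    uint8_t note;       /* note number               */
    uint8_t duration;   /* length, in some time unit */
} song_event;

/* A saved song would then simply be a sequence of these events. */
```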
Along the way, I had fun with a Forth compiler for the CoCo, using it with a 6809 version of the little 4-voice synth that was popular on the 6502 (Apple) machines. Forth lets you aggregate primitives into complex commands. I could type in "17 events" and it would create 17 random chords. Then I typed "intersperse" (I think) and it would insert chords that interpolated between the random chords, the pitch for each voice being halfway between the nth random note and the (n+1)th random note for that voice. This would double the length of the piece! Use the command several times, and there were scales running up and down in each voice. I developed about a dozen commands. One command put a major chord at the end, so all this randomness led to a fine cadence.
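The "intersperse" step is easy to sketch outside Forth: between every pair of consecutive chords, insert a new chord whose pitch in each voice is halfway between them, roughly doubling the piece. A sketch in C, with an assumed four-voice chord layout rather than the original Forth words:

```c
/* Insert a midpoint chord between each consecutive pair of chords. */
#include <stddef.h>

#define VOICES 4

typedef struct { int pitch[VOICES]; } chord;

/* Writes 2*n - 1 chords into out: the originals with midpoints between.
 * Repeated applications give the runs of passing notes described above. */
size_t intersperse(const chord *in, size_t n, chord *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        out[k++] = in[i];
        if (i + 1 < n) {
            for (int v = 0; v < VOICES; v++)
                out[k].pitch[v] = (in[i].pitch[v] + in[i + 1].pitch[v]) / 2;
            k++;
        }
    }
    return k;
}
```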
Using the command to put a major chord at the end of the piece in a 31-tone composition resulted in a closing cadence where notes just sort of edged themselves into the final position - very dramatic, and something never heard before. I got two 'A's in music composition for this project.