Computer music at New Mexico Tech
Around this time last year, my friend Eric Sewell said he was going to teach a computer music class at New Mexico Tech. I had been thinking about taking a class, probably a math class like linear algebra, but on a lark I decided this would probably be interesting. I’ve always loved music, but never really learned an instrument, and I thought, maybe if I involve the computer I can get further than if it’s about dexterity and years of tedious practice.
I started the class with no real expectations. We’d be using Csound. The lectures were a mixture of technical stuff about using Csound, basic concepts of synthesis, and listening to works by pioneering composers and musicians in the realm of computer music and synthesis. I didn’t enjoy many of those works except on an intellectual level, but they were all fascinating, and a few I have listened to a couple times since, like The Wild Bull by Morton Subotnick.
I had a “haha embarrassing” moment when I realized I could probably replicate the sound from the opening of Regular Show with Csound, and talked to my friend the professor about it. With a hardware synth, this is just opening and closing the filter on a simple saw wave.
Hardware synthesis - Uno Synth
I think I was in class for 3-5 weeks before I realized we were actually talking about synthesizers. I started to think it would be much easier to make music if I had a hardware synthesizer and thus could turn knobs rather than having to create envelopes in Csound, run Csound, listen to the output, and then go back and try to guess what settings had been interesting. So mid-February I bought an Uno synth. If you want to know how it sounds, here’s Jade Wii playing it.
I chose this synth mainly because it had a nice analog sound, was inexpensive, did not have a real keyboard, and wasn’t a Korg. At the time, I was doing a lot of synth hardware comparisons and very nearly got a Korg Monologue, but decided the sound was too clean and cold. I was also intimidated by real keyboards because I had no formal music training, wasn’t planning on getting any formal music training, and just wanted to make interesting sound. My friend Drew Medlin came over with his synths and I found myself quite unable to do anything interesting with him, partly because I just had no familiarity with the keyboard.
I found that I did greatly prefer interacting with the synth via the knobs, so much so that I found it very hard to be excited about using Csound again. When Covid hit, I had a nice excuse to drop the class, which was going to culminate in some kind of two-minute composition that I was also terrified of making. I also felt like Csound was so wide-open in terms of what it would let you do that it was impossible to figure out how to do rhythm effectively, and so there was no way to structure a song. I had to basically draw out on paper what I thought I wanted, program it in, compile/run, then tweak it and try again. It was reminding me too much of programming. I decided I wanted to escape from the computer altogether, and get deeper into an instrument.
Do I recommend the Uno Synth?
I realize some people reading this may be looking for reinforcement on the idea that maybe they should get the Uno synth. It is, after all, one of the most affordable analog synthesizers out there.
I do not recommend the Uno synth. I say that without hate. There are aspects of the design of the Uno that make it hard to enjoy. One is that you want to power it with batteries, because if you use USB you get noise. But there are parameters in the patches that can only be reached by the companion applications, which must connect to it over USB. Ironically, the noise setting does not appear to “stick” unless you reach it from the companion application.
The USB connection, on the other hand, draws a large amount of power—this is an analog synthesizer, after all. Well, I found that the power draw was enough to damage my cheap Aukey USB-C port dongle thing. Maybe you could just power it over batteries? Yes, except the thing has a runtime of 1-2 hours on 4 AA batteries, and they strongly recommend against rechargeable batteries. That makes sense, the power draw on this thing is enormous, and it starts acting funny in all sorts of surprising ways when the batteries start going low.
You can get around the USB noise problem by buying a USB isolator (?) for $40. Thanks.
My kids dropped it once, and that was enough to dislodge something in the battery case, so that now it can no longer really take AA batteries. I have worked around that by stuffing a wad of aluminum foil in there, but guess what, I now have noise issues in AA battery power mode.
I think it has a good sound, on the filthy side which I like, but I found these little frustrations depleting, and even getting around these, I started to see that it wasn’t really for me, although I made other mistakes first.
If you were me, just starting out, and your budget is $200, you should probably allocate your funds elsewhere for your synthesis journey.
In April, I became aware of the Elektron Model:Cycles. I was very intrigued by FM synthesis from the class, and really felt like the complete absence of anything like a rhythm primitive in Csound was driving me away. This device appeared and was built around both FM synthesis and a very powerful rhythm concept, capable of doing polymeter or polyrhythms easily. Online, people talk about “the Elektron workflow” a lot and it was intriguing to me. Plus, it would keep me away from the computer. And then I saw this video, which basically sold me on the sound of it.
I made quite a few songs (or at least songish performances) with the Cycles, which you can find on Soundcloud if you really want to suffer. I didn’t realize it at the time, but it has a feature that novices greatly appreciate: it appears as a USB audio device when you plug it into a computer, and no hardware audio interface is needed for you to record it! This is a significant benefit for sharing what you do without having to throw out another $100–200.
I dove into the Cycles pretty hard for a month or so. I became a little disillusioned that I couldn’t make anything as nice as Jeremy from Red Means Recording, as if a guy playing music for a month on one device really had a chance to catch up to a professional with decades of experience. I didn’t realize it until months later, but a blank pattern on the Cycles has one of each “machine,” which kind of implicitly guided me towards trying to make songs with one kick, one snare, one cymbal, one tom, one chord progression, and one melody line. In fact, I ran into the same obstacle as with the Uno pretty quickly: it does not really have a keyboard and I didn’t really know how to play a keyboard even if it had one, so the beautiful melodic lines that Jeremy and others are able to make on these devices were just out of reach for me unless I bumbled into one by accident, which mostly I did not. I expected that I could make a decent drum line (there was a Pocket Operator PO-12 “Rhythm” purchase that I forgot to mention somewhere between January and May), and I could, so most of the frustration came from the melody/chord side of the house. And the keyboard problem was getting untenable.
Do I recommend the Model:Cycles?
I have mixed feelings about the Model:Cycles today. I do use it from time to time, but although the presets show you a really wide variety of sounds, I mostly feel like the stuff I am making is all sounding about the same. I find FM synthesis fascinating and thought the kiddie pool version of it in the Cycles would be both powerful and easy to use. That thesis is wrong, but whether it’s because I’m too immature or it’s not that powerful, I don’t know.
I assumed because I enjoyed using the PO-12 that the “Elektron workflow” would be a natural fit for me. I did not find that to be the case. At the time I was trying to avoid using the computer as much as possible, and my assumption was that being marooned away from the computer would force me to get better at playing the instrument. I think considerable effort must go into programming a track on the Model:Cycles. I’m quite open to the possibility that it just requires more work from me than I have been willing to put in.
If you are curious about the Elektron workflow, both the Model:Cycles and the Model:Samples are affordable ways to find out if it’s for you. The instrument is sort of a curiosity for me now. I think of it more as a drum machine, but I don’t enjoy using it as much as the PO-12.
Subtractive synthesis is just one type of synthesis, but it’s widely used in hardware, especially analog synthesizers and analog-modelling digital synthesizers. The “four part” subtractive synth is a classic design, and something that I eagerly tried to replicate in Csound for the class. The idea that presets could create so many distinct and amazingly different sounds from just a few parameters fascinated me. I discovered Syntorial, which is basically ear training for subtractive synthesis, and became enamored with the concept of hearing a patch and being able to recreate it just by knowing which knobs to twist.
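If you’ve never seen the “four part” design spelled out, it’s roughly: oscillator → filter → amplifier, with an envelope shaping the result over time. Here’s a rough Python sketch of that signal chain (all the names and numbers are my own illustration, not from Csound or any particular synth):

```python
import math

def saw(freq, t):
    """Naive sawtooth oscillator: ramps from -1 to 1 once per cycle."""
    phase = (t * freq) % 1.0
    return 2.0 * phase - 1.0

def one_pole_lowpass(samples, cutoff, sample_rate):
    """Very crude one-pole lowpass; opening the cutoff lets more brightness through."""
    # Coefficient from the standard RC-filter approximation.
    a = math.exp(-2.0 * math.pi * cutoff / sample_rate)
    out, prev = [], 0.0
    for s in samples:
        prev = (1.0 - a) * s + a * prev
        out.append(prev)
    return out

def adsr(n, sample_rate, a=0.01, d=0.1, s=0.6, r=0.2):
    """Linear ADSR amplitude envelope over n samples."""
    a_n, d_n, r_n = int(a * sample_rate), int(d * sample_rate), int(r * sample_rate)
    s_n = max(n - a_n - d_n - r_n, 0)
    env = []
    for i in range(a_n):                      # attack: 0 -> 1
        env.append(i / max(a_n, 1))
    for i in range(d_n):                      # decay: 1 -> sustain level
        env.append(1.0 - (1.0 - s) * i / max(d_n, 1))
    env.extend([s] * s_n)                     # sustain
    for i in range(r_n):                      # release: sustain -> 0
        env.append(s * (1.0 - i / max(r_n, 1)))
    return env[:n]

SR = 8000
N = SR  # one second of audio
raw = [saw(110.0, i / SR) for i in range(N)]                     # oscillator
filtered = one_pole_lowpass(raw, cutoff=800.0, sample_rate=SR)   # filter
voice = [f * e for f, e in zip(filtered, adsr(N, SR))]           # amp + envelope
```

Sweeping `cutoff` up and down over time is exactly the “opening and closing the filter on a saw wave” trick from earlier.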
I emailed Joe shortly after the pandemic started asking if he was offering discounts, and indeed, he gave me a discount immediately and then did a big public offer. I got Syntorial, and am about halfway through it now. A major lesson of this year has been that there just isn’t enough time for my hobbies, and I have a burgeoning pile of tutorial material I still need to get through, and Syntorial is in that pile.
I enjoyed taking what I was learning from Syntorial and applying it on the Uno synth, but the Uno synth was showing its limits. I started thinking about a larger synthesizer, one with a real keyboard that would enable me to actually learn how to play a little bit, or at least understand some music theory. I waffled between wanting something small that I could use on the couch, like the Yamaha Reface CS, and something larger, like the Arturia MatrixBrute. But then something awful happened.
I fell in love with the Moog sound.
If it isn’t a Moog, is it really a synthesizer? – Tyler Cecil
I saw two ways forward: I could buy one synth and be done buying synths for my life, or I could screw around with a small one, and keep getting larger ones and spending more and more until eventually I just couldn’t postpone it any more. Thankfully Liz understood and there was a big sale, so in June I got the Matriarch. Here’s Lisa Bella Donna playing it, and here’s another excellent 15 minute demo if you want a sense of the sound.
Syntorial had taught me enough that I knew my way around the synth immediately, as far as sound design. I still did not know how to play the keyboard. And I began to feel an intense guilt that I had this miraculous instrument that I couldn’t really play.
I contacted Eric and he agreed to give me some lessons, not oriented towards piano performance, but more towards music theory, basic chords, scales and dexterity for playing the synth. We did this for a few months. I got to where I can do the scales in various keys, play a major chord in various keys, and improvise in a way that sounds good to me. I am a lot less embarrassed now than I was, although I should of course pour more time into this.
Following this, I realized it was pretty dumb to avoid the keyboard just because I did not know how to play it. Even if you can’t play it, music theory is written on the keyboard, and it makes more tactile sense than capacitive pads, and more musical sense than the totally linear arrangement of keys on the Model:Cycles.
Do I recommend the Moog Matriarch?
Yes, unquestionably. The sound is just unbelievable. At the time I got it I was also thinking about modular synthesis, and the idea of being able to safely and freely rewire things physically was really alluring to me. I liked the idea of doing extensive modulation through patch cables. Since then, I am mostly satisfied with the built-in routing and I don’t really see much need for manual cable patching, at least so far. The sound is just great, and it looks fucking awesome.
I’m no longer significantly interested in modular synths. I see the appeal. But I just don’t see myself spending $400 on a boutique oscillator or filter or delay module. That said, I don’t think Moog has a model of synth I would rather have at a similar price point. I could see the value of being able to store presets, but I kind of like that I have to turn the knobs to get the sound I want, and “debug” the patch when it doesn’t sound right. The “init patch” doesn’t scare me at all. More than presets, I think I would like having an internal mod matrix. The Sequential Pro 3 has a fun mod matrix routing mechanism that is appealing to me, but the sound isn’t as good to my ears. Moog One has something cool for this too, but it’s $6500 for the 8-voice model. I don’t think I could ever drop that kind of money on a synth. So around this price point, there just isn’t anything comparable in my opinion, and if you think you might want it, you probably do.
When I was in my early-mid teens, before MP3 really became a thing, I discovered these MIDI files on my computer. You could open one of them and the computer would play a song; no lyrics, but it was pretty fun anyway. So you see why until this year I thought MIDI was a file format for music notes. This is a big misunderstanding. MIDI is actually a protocol for high-level music communication, at the abstract level of notes and parameter changes, and it has a central role in synthesis because it wires everything together.
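To make that concrete: a MIDI message is just a few bytes on a wire. Here is a minimal Python sketch (the function names are my own, not from any library) that assembles the raw bytes of a Note On message the way any keyboard, sequencer, or controller would:

```python
def note_on(note, velocity, channel=0):
    """Build the three raw bytes of a MIDI Note On message.

    Status byte: 0x90 ORed with the channel (0-15). The note number
    and velocity are 7-bit data bytes (0-127); middle C is note 60.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(note, channel=0):
    """Note Off uses status byte 0x80; velocity is commonly sent as 0."""
    return bytes([0x80 | channel, note, 0])

# Middle C at full velocity on the first channel:
msg = note_on(60, 127)
```

Nothing about pitch as audio lives in those bytes; a synth receiving them decides what sound to make. That is the whole trick of MIDI: it is notes and knob movements, not sound, which is why a file of these messages (a MIDI file) played back through a computer’s built-in synth sounded like “a song with no lyrics.”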
I became aware of the fact that Csound had MIDI support during the in-person portion of the class, but this didn’t turn into any sort of useful information. It started to dawn on me that to make use of MIDI one needed a controller—a physical keyboard—and since I was developing a shine for hardware synths, it seemed irrelevant.
Once I got the Model:Cycles, I noticed that the manual had information about both sending MIDI from the Cycles and receiving MIDI on it. So the Cycles could control other synths, like a keyboard, but it could also be controlled by a MIDI controller. This was a bit of a noodle baker for me. I also began to notice that there were, broadly speaking, two groups of people doing YouTube videos about synthesizers. The larger group, people like Andrew Huang, talked about “music production”; while they might have and use hardware synthesizers, they tended to bring everything back into their digital audio workstation (DAW). I had no idea what a DAW was, other than some kind of software, and that meant it was going to be like Csound and therefore garbage and bad. If I was going to use a computer to make music, it would be like programming, and it would be precise and therefore good, but I didn’t want it to be like programming, at least not yet. So I tended to focus on the second group, the so-called “DAWless producers.” But even they were using MIDI and talking about how useful it was, so clearly something was missing.
The first piece of software I used that gave me the feeling I could actually make songs was ORCΛ. It was this program that finally drove home for me what MIDI was about. I could wire up ORCΛ to Pilot, the companion synth, or I could wire it up to GarageBand, or, I discovered, I could wire it up to the Model:Cycles and record the output in GarageBand. So I did this and made a few little demonstrations.
ORCΛ is really fun, but kind of a terrible environment for composition. An interesting idea. Not quite right for me though.
At some point this year I started chatting with Drew Medlin about synths and synth stuff, and he has exerted a small but unyielding amount of pressure on me, both to produce bits fit for sharing with him and to keep the conversation going about music and synthesis. Early on, we were both using GarageBand to record. He invested in different things than I did: he wound up with a flagship OB-6 synthesizer, a couple of Roland desktop recreations, and a KeyStep Pro, and he was willing to play software instruments. So to him the DAW was something that existed mainly to crop the performance, which was mostly a one-shot affair with one instrument. I was ardently avoiding the computer because I wanted to allow myself some time to develop depth with the keyboard and synthesis, for fear that if I involved the computer it would turn into another kind of programming for me.
In November I guess I came around the corner on that and decided that it was time to figure out a DAW. I wasn’t so much interested in software instruments as in the idea of combining several recordings into one. I had used an iPhone app called Koala Sampler to play some simple sample-based stuff. I started to wonder what a DAW actually does: whether it could be more than a way of gluing together recordings, and what else it might offer. Andrew Huang’s channel, especially the “four producers, one sample” videos like this one, was starting to convince me that there was something interesting going on.
I dinked around with Ableton Live and immediately saw something greatly more powerful than GarageBand. The clip launcher seemed like such an obvious improvement. I have no desire to play live music, but as a way of organizing the composition into phrases, it made a lot of sense. I was starting to enjoy the idea of it, and then my Mac’s battery died and I was without a Mac for two weeks over Thanksgiving. So I took the opportunity to see if there was anything out there for Linux.
I discovered several options. The hideous but widely-loved Reaper, the beautiful but incomprehensible Waveform, and Bitwig Studio. I spent a few days “used car shopping” between these alternatives and I tried them all out, and the only one that survived the process was Bitwig. I wound up actually buying the $30 tutorial course for beginners by Thavius Beck before buying the program itself, which is one of the best decisions I made all year. Simply having someone patiently tell you what all the panels do, how the shortcuts work, and the overall approach to using the program, made a huge difference in my comfort and understanding.
Like Live, Bitwig has a clip launcher/timeline dichotomy. It’s also more cross-platform, and it looks very beautiful. It handled the display resolution in Linux perfectly (Waveform did too). But it was when I started to learn more about it that I became convinced it was not only a powerful and useful program, but actually an amazing feat of engineering that would provide a powerful basis for experimentation long into the future for me.
The sense I get from other DAWs is that most of your work is going to be done by loading various plugins. Many of the flagship software synthesizers out there have their own internal LFO and modulation systems. Bitwig, on the other hand, understands modulation on a very deep level, so the necessity just isn’t there for you to rely on other soft synths to provide it. Bitwig is also highly recursive. Tracks can be groups of tracks, recursively; instruments can be instrument layers; devices can contain other devices or effects; and there are note effects that come before audio effects and can be chained. Many devices have internal effects chains as well. I sense the presence of powerful and intuitive data structures underlying the system, and that makes it very natural to me as a programmer.
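As a programmer’s illustration of why that recursion feels so natural: the structure resembles the classic composite pattern, where a container is itself an instance of the thing it contains. This is purely my own sketch with hypothetical names, not Bitwig’s actual internals:

```python
class Device:
    """A named processor. Hypothetical sketch, not Bitwig's API."""
    def __init__(self, name):
        self.name = name

    def describe(self, depth=0):
        return "  " * depth + self.name

class Chain(Device):
    """A device that contains other devices -- the composite pattern.

    Because a Chain is itself a Device, chains nest to any depth,
    which mirrors how tracks hold groups, instruments hold layers,
    and devices hold other devices.
    """
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def describe(self, depth=0):
        lines = [super().describe(depth)]
        for child in self.children:
            lines.append(child.describe(depth + 1))
        return "\n".join(lines)

# A track holding a two-synth instrument layer plus a delay effect:
rig = Chain("Track", [
    Chain("Instrument Layer", [Device("Synth A"), Device("Synth B")]),
    Device("Delay"),
])
```

Once everything is a `Device`, any operation you define once (rendering, modulation routing, serialization) works at every level of nesting for free, which is my guess at why the whole program feels so uniform.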
The coup de grâce is Poly Grid, which is a fully modular synthesis environment. I had tried VCV Rack and Voltage Modular at some point and found them unintuitive, and one of them seemed kind of like a money pit. The Bitwig Grid eliminates a great deal of pain thanks to pre-cords, the ability to swap components out by right-clicking on them, and a plethora of excellent built-in components. It’s not trying to imitate a Eurorack modular synthesizer package, so visually it’s much clearer to see what’s going on. Polymer also makes it easy to convert a configured 4-function synth into a grid setup for further tinkering. The community of YouTube people who actually use Bitwig are extremely smart and creative, and their video tutorials are just amazing: Tâsche Teaches on Euclidean Rhythms is a great example, along with Polarity Music on creating chords.
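If Euclidean rhythms are new to you, the idea is simply to spread some number of hits as evenly as possible across a number of steps. Here’s a tiny Python sketch using a modular-arithmetic shortcut (it produces rotations of the canonical Bjorklund patterns, which is usually fine for sequencing, where you can rotate the pattern yourself anyway):

```python
def euclid(pulses, steps):
    """Spread `pulses` hits as evenly as possible over `steps` slots.

    A hit lands on step i whenever (i * pulses) mod steps wraps below
    pulses -- a shortcut that yields rotations of the classic
    Bjorklund/Euclidean patterns.
    """
    return [(i * pulses) % steps < pulses for i in range(steps)]

def render(pattern):
    """Draw a pattern as x (hit) and . (rest)."""
    return "".join("x" if hit else "." for hit in pattern)

# Three hits over eight steps gives the tresillo feel: x..x..x.
tresillo = render(euclid(3, 8))
```

A lot of the grooves you hear people build in the Grid tutorials are some stack of these, with different `pulses`/`steps` pairs running on different voices so they drift in and out of phase.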
Seeing the world with new eyes
I have sort of a twenty year gap in my musical taste. I would characterize my parents’ taste in music as Woodstock. They loved the Beatles, and from my early childhood they are the first band I really think about liking. I grew up in the 80s, by the way. By high school I had made it into the early 70s and became quite obsessed with Led Zeppelin. By the time I graduated, my favorite band was Blue Öyster Cult, and I was starting to develop a taste for metal. In college, I had a friend who was into metal, so I discovered Death, who are responsible for one of my favorite songs and the eponym of the site, and had gotten in fairly deep with Megadeth, developed a real love for the band Mekong Delta (I was an early member of their internet forum), and was playing Primal Fear for my friends, who hated it but did begrudgingly admit that the musicians were more talented than Metallica. So it would be safe to say that I had a profound love of electric guitar. I made a point of trying to inculcate in my kids an appreciation of how amazing it is that this instrument works through electromagnetism and not sound; the pickups are not microphones.
Beneath all that rock and metal, there had been a subterranean current of electronica. Before my friend helped me get deeper into metal at college, my big discovery had been Astral Projection, and I had loved Juno Reactor. The first CD I bought with my own money was Ace of Base, which is the sort of deeply embarrassing thing you don’t admit to your wife until you’ve been married for a year or so. I had owned albums by Aphex Twin. And by the time I did meet my wife, I was warming to pop music. That has become a full-fledged love, to the point that for the past ten years or so I would say to people that I love metal and pop and that was about it. This last year, for instance, I mostly listened to Charli XCX’s how i’m feeling now, which I think summarized the year for me.
So it came as a bit of a shock to discover this year, that many of the pop bands that I loved, I loved because of their use of synthesizers. Devo, CHVRCHES, Charli XCX, The Naked and Famous. It is blindingly obvious, but I didn’t really realize it until this year, that in fact I have loved the synthesizer for most of my life without realizing it, with considerable ignorance. I had often wondered at amazing “sounds” in different songs, but apparently not deeply enough to figure out how they were made. Even probably my favorite all time metal album, Rising by Rainbow, opens with a lengthy Moog synthesizer solo and has duelling solos between synth and guitar in the final act of the final song.
Musically, this year has been a constant moment of awe and discovery for me. The wonder of it all. And I owe this to Eric Sewell in first for opening my eyes, and to Drew Medlin in second for urging me on to keep them open. And to Tyler Cecil, probably the only one to actually read this blog, for providing guidance and counterpoint at various critical moments, and making it so that this actually gets written down. And to my wife Liz, who has supported my interest with patient ears, financial flexibility, and tolerated many YouTube videos on these topics. Whatever pleasing sounds I have learned to make are only in my mind because she delights me.