I am 73 years old, retired, and living in Toronto, Ontario. I took piano lessons for several years as a child, but none of that technical ability at the keyboard has survived. My first brush with electronic music was in the mid-to-late 1980s. The computer was a Commodore 64 and the software was from Dr. T. The Commodore 64 lasted a year and then I switched to an Atari 1040ST, still with the Dr. T suite of software. The weak link in the chain was that, with the equipment available to me at the time, there was no way to digitally record and master the compositions. The few that I did finish were stored on cassette tape, which has long since deteriorated.

In any event, fast forward to about three or four months ago, when a dear friend from the Yoga Nidra community motivated me to get back into music composition. I've done more interesting stuff in the past four months than I did in several years when I first started to dabble in this, and I can't believe how much functionality can be harnessed in front of a basic PC laptop.

My current project is a bit weird, and probably out of scale for my ability to manage it. Over the past week, with the help of ChatGPT, I have managed to actually build a device in Max for Live. I'm using this device to help me frame working in a 19-tone equal temperament scale. One reason I chose that tuning is that Ableton ships with a large number of esoteric scale tunings, including one with 19 tones, so I didn't have to manage that on my own (although you can also import any scale in the Scala format). I am looking to do a kind of row-based, serial work in the style of Schoenberg or Stockhausen. One of the things I wanted was to generate a random ordering of all 19 notes that selects each note exactly once. Randomization in Ableton is easy, but out of the box it picks notes truly at random rather than reshuffling a specific serial progression. I wouldn't have been able to get to where I am in Ableton without being able to rely on AI to help troubleshoot and explain how and why I should be doing things a certain way, and I definitely would not have been able to build this device without the over-the-shoulder help provided by ChatGPT, in spite of a number of its mistakes and hallucinations. So that's what I'm working on for the moment.
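For anyone curious what the "each note exactly once" part amounts to, here is a minimal Python sketch of the idea. The actual device lives in Max for Live, so the names, the 19-step constant, and the note-number mapping below are just illustrative assumptions: the point is that shuffling all 19 scale degrees gives a tone row with no repeats, whereas independent random picks would repeat notes before the row is complete.

```python
import random

EDO = 19          # 19-tone equal temperament: 19 scale degrees per octave
BASE_NOTE = 60    # arbitrary reference note number, purely for illustration

def random_row(steps: int = EDO) -> list[int]:
    """Return a random permutation of the scale degrees 0..steps-1."""
    degrees = list(range(steps))
    random.shuffle(degrees)   # each degree appears exactly once
    return degrees

if __name__ == "__main__":
    row = random_row()
    print("Tone row (scale degrees):", row)
    # Map each degree onto a note number, one per degree, no repeats.
    print("As note numbers:", [BASE_NOTE + d for d in row])
```

That's the whole trick: a shuffle of a fixed set, rather than a stream of independent random notes, which is what Ableton's stock randomization gives you.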
One of the problems with this hobby is that there is so much cool stuff out there, and so much more coming available, that my attention keeps getting distracted. In the past week I came across Strudel, a web-based code front end for live-coding a musical performance, and I'm having to resist the temptation to poke my nose into it. https://youtu.be/HkgV_-nJOuE?si=ooubalHNWF5pEUNl One of the things I find is that these applications are so powerful and complex that I need to practice using them every day. I think if I went for more than a day or two without using Ableton, some of what I've learned would simply start to evaporate.
Another absolutely cool piece of software I have my eye on is GRM Atelier. It has taken sound-design microsurgery to a whole other level. The good news, for my focus at least, is that right now it's only available for Mac, and I'm a Windows user, so all I can do is watch the YouTube clips as they come out. Apparently, though, a Windows version is expected in a couple of months. https://youtu.be/Ke79pvKxGGA?si=wE0v2dmurS1xysYI

And finally, one of the distractions I have incorporated into my journey is getting familiar with the Composers Desktop Project, particularly its Sound Threads nodal interface. It's like a more granular and less integrated version of GRM Atelier, but it lets you manipulate sound in an extremely detailed manner. It's useful for doing interesting manipulations of sample clips, though it's practically impossible to envisage what you're going to get out the other end. If you're into glitch, it's an incredible tool. https://youtu.be/ydEAYLshiNg?si=fCsWnuBA-anbXU46 In any event, I'm having more fun with all this than I have had in years.