home

Combobulator is an audio effect plugin developed by Datamind Audio that uses generative AI to resynthesize an input signal in the "style" of the audio supplied in a selected dataset, or "artist brain".
At the time of writing (July 3rd, 2024), the plugin is still in its beta phase (I think), but there are already a number of models available, trained on the music of various artists such as Woulg, Mr. Bill, Encanti, and others.

As for my involvement with this plugin, I have three models available: one comes free with the plugin, and the other two can be purchased as a bundle for 33 USD.

Acoustic model - This model is included with the plugin by default upon purchase. It was trained on a lot of my own personal acoustic recordings of things like marimba, piano, assorted percussion instruments from around my school's band room, a bunch of synthesized/physically modelled sounds, and a bunch of other assorted percussive/acoustic material. I don't have the folder in front of me right now (I deleted it from my computer because it took up too much space; it lives on Google Drive now). I personally much prefer the way this one sounds; just be aware that it tends to struggle with louder inputs, so try turning the input gain down if you're not getting any sound out of it.

Synthetic model - This model can be purchased bundled with the combined model for 33 USD. It was trained on a lot of my released music, some of my stems, a lot of my WIPs, a lot of my sound design mudpies, and assorted other random sound design/music material; basically anything I've made that isn't specifically acoustic. This one tends to produce very chordal and melodic sounds similar to some of my pads made by resampling reverb tails, but it also has a lot of potential for percussive outputs.

Combined model - This model can be purchased bundled with the synthetic model for 33 USD. Its training data is a combination of the synthetic and acoustic models' datasets. I didn't experiment with this one very much, so I don't have the greatest insight into what sounds it tends to output.

If you're interested in learning how this technology works, you should watch this IRCAM lecture and demonstration: this is the link you should click

Before any red flags are raised about the many concerns surrounding generative AI in art: this is, in my opinion, ethical. I voluntarily provided Datamind with the rights to the assets the models were trained on, and I will be receiving 50% after expenses. The models were trained on Datamind's own supercomputer, which primarily uses solar energy, and on zero-emission cloud services. The training data I provided is kept securely in the possession of the training team and will never be used for anything else. If you want me to provide additional information, be sure to ask.


thumbnail on the website:




In the future I may upload some videos to either my YouTube channel or this page demonstrating what the models sound like, but I'm currently unable to do that because I'm operating on airplane wifi, so dealing with any files over the internet that exceed like... 2 or 3 megabytes is gonna be very painful (although it would be sorta funny-- like imagine hosting a Minecraft server from your laptop over airplane wifi lmfao Hilarious!).