What might be the surprising benefits of AI-generated music?
February 17, 2022 12:03 AM   Subscribe

Whenever my first reaction to something is focused on its downside, I try to consider or imagine the upside. Yes, the songs that AI creates today are barely listenable, but it will inevitably create great music. What amazing things might be created? What experiments would you want to see?

The obvious idea is generating covers of songs using unexpected styles.

A less obvious idea that I thought of: The ability to create new albums for music artists who died at a young age. (E.g., 2pac, Sublime, Nirvana, Jimi Hendrix, Selena.)

What else can you think of?
posted by shrimpetouffee to Media & Arts (9 answers total) 1 user marked this as a favorite
 
Responses to this depend heavily on how you define AI. You might get a lot of insight from how Brian Eno speaks on the subject (there are many, many more interviews like this out there). He's been using AI to generate music for a long time now (and his first generative music app, Bloom, is what made me get my first smartphone). Bloom still exists, and I still have it and listen to it. It makes beautiful, maybe even sublime music in part because it's aiming at a reasonable goal. It's making ambient music--with or without human input--based on front end programming that sets boundaries of what does and doesn't sound good. The AI, then, has the functionally infinite patience to continually make these gentle, slight, evolutionary changes to how a piece of ambient music unfolds. That's an inhuman ability mapped to human tastes, which is a good way to think about AI making music.
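Just to illustrate what I mean by boundaries plus infinite patience, here's a toy Python sketch (nothing to do with Bloom's actual code; the scale, probabilities and durations are all invented) of constrained generative music: a slow random walk over a fixed handful of notes, with rests and long durations keeping it gentle:

    import random

    SCALE = [60, 62, 65, 67, 70, 72]  # MIDI note numbers, an invented pentatonic-ish set

    def generate_phrase(length=16, seed=None):
        """Drift up or down the scale one step at a time, sometimes resting."""
        rng = random.Random(seed)
        idx = rng.randrange(len(SCALE))
        phrase = []
        for _ in range(length):
            idx = max(0, min(len(SCALE) - 1, idx + rng.choice([-1, 0, 1])))
            note = None if rng.random() < 0.2 else SCALE[idx]  # None is a rest
            duration = rng.choice([1.0, 2.0, 4.0])             # beats, kept slow
            phrase.append((note, duration))
        return phrase

    for note, dur in generate_phrase(seed=42):
        print(note, dur)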

Since there's no shortage of humans who are fully capable of making barely listenable music in the spirit of experimentation and discovery, I often hear AI's musical value described as an avenue for welcoming in serendipity and unexpected consequences. In that article linked above, Eno says he's "interested in finding out what new technologies can do, primarily because they so often can do something nobody ever thought they could do. They were invented to do one thing, but you can be sure they can do something else much better."
posted by late afternoon dreaming hotel at 12:43 AM on February 17, 2022 [2 favorites]


I just heard plant-wave music for the first time today. It has a human element in the stimulus-to-sound translation. I'd be interested in how AI could interpret plant-energy stimulation as music, and in AI using other sensory apparatus to collect and interpret stimuli and turn them into earth music.
posted by Thella at 1:06 AM on February 17, 2022 [2 favorites]


Music and acoustics figure in sci-fi, e.g. Richard Morgan's Altered Carbon, but while I have a vivid imagination I can't get there.

What would that be like? It would almost certainly be noise to (most of) our ears. What will another century of ideas, cultural change, new noises and deep space travel do to how we make and hear (and feel, see, taste) music? I'd love to see/hear some possible trajectories for Drill and Grime and Gqom over the next century.
posted by unearthed at 1:29 AM on February 17, 2022 [1 favorite]


My understanding is that the exact way the AI comes up with new pieces is something of a black box. The general parameters can be set and known, but how the weights of the nodes or neurons end up set cannot be interpreted by humans.

If this becomes possible, and humans can truly understand how an AI generates Chopin pieces, it might yield new insights about music in general, and about composition, creativity, etc.

And it could be a stepping stone toward true AI. If you successfully simulate enough of the systems a human mind employs (creativity, speech, emotion, greed, whatever) and combine them all, you could wind up with something that behaves quite humanlike.
posted by SweetLiesOfBokonon at 2:23 AM on February 17, 2022 [1 favorite]


The main way I can think of that AI is applied to (visual) art at the moment is neural style transfer: you give an AI a photo and get it back in the style of, say, Monet. I imagine something similar would be possible with music, and could be kinda neat. You could play a melody on a MIDI keyboard and hear it as an Eddie Van Halen guitar solo with his particular nuances, or even as a fully fleshed-out orchestral arrangement by your favourite composer.
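(For the curious: the image version usually works by matching feature correlations via "Gram matrices". Here's a rough Python/PyTorch sketch of that loss, with random tensors standing in for real encoder activations; it illustrates the idea only, and isn't a working audio style-transfer system.)

    import torch

    def gram_matrix(features):
        """features: (channels, frames) activations from some encoder layer."""
        c, t = features.shape
        return features @ features.t() / (c * t)

    def style_loss(generated_feats, style_feats):
        """Penalize differences in correlation structure, i.e. the 'style'."""
        return torch.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)

    # toy demo: random tensors standing in for real encoder activations
    gen = torch.randn(64, 128)
    sty = torch.randn(64, 128)
    print(style_loss(gen, sty).item())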
posted by Ned G at 4:03 AM on February 17, 2022 [1 favorite]


I know pretty much nothing about AI so take this with a large grain of salt, but maybe... it could simulate collaborations that could never have happened in history? What if Hendrix lived in Mozart's day or vice versa, or they each lived in their own times with their own influences, but got the gift of time travel and had the chance to jam? (I mean, I guess Hendrix had the option to listen to Mozart and be influenced by his music, but maybe AI could force some kind of a more direct collaboration that might be built out of iterations of back and forth between the two of them?)
posted by penguin pie at 4:31 AM on February 17, 2022 [2 favorites]


Best answer: (note: I teach classes on AI). I think the day when an AI produces a truly original composition of the quality of a three-minute pop song, much less Chopin, is a long, long way away. Music is full of implicit structure and subtleties spread out over time, which makes it a very hard problem. The current deep learning tools that generate images and text (e.g. OpenAI's) are really doing a very sophisticated form of mimicry: you give them a ton of input, and they produce new works that are "like" those works, but "like" is a pretty shallow standard. They don't have an artistic intention, or a message that they're trying to convey. "Stochastic parrots" is the phrase that Timnit Gebru and colleagues have used to describe it. It's awesome for a lot of applications, but there's no ghost in the machine.

Where I think we will see some impressive and interesting things is with artists who use AI as a compositional tool (which is how I see Eno using it). Not so much "create me a sonata" as "let's use this tool to generate some new ideas and snippets and sounds", much in the way that samplers didn't replace musicians, but gave artists a new medium and palette.

We're also seeing AI and ML applied to more mundane tasks, with really cool results. In the recent Get Back documentary, deep learning was used to pull out the audio of John and Paul talking from a ton of background noise. That sort of very clever adaptive signal processing could be really useful for restoration and remastering, acoustics and other specific audio tasks.
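That bespoke separation model isn't something you can reproduce in a few lines, but as a rough, classical stand-in, librosa's harmonic/percussive separation shows the same general idea of splitting one recording into cleaner components (the filenames here are placeholders):

    import librosa
    import soundfile as sf

    # load a mixed recording (placeholder filename)
    y, sr = librosa.load("session.wav", sr=None)

    # split the mix into a harmonic (tonal) layer and a percussive (transient) layer
    y_harmonic, y_percussive = librosa.effects.hpss(y)

    sf.write("session_harmonic.wav", y_harmonic, sr)
    sf.write("session_percussive.wav", y_percussive, sr)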
posted by chbrooks at 8:37 AM on February 17, 2022 [5 favorites]


Extrapolation isn't a strength of deep learning. Interpolation from the training data is basically how the linear algebra works, so any surprising stuff it finds will already be latent in the training data.

We might get something that finds mash-up patterns between songs you'd never have put together; again, that's data already present in the music in the training set.

We won't find new harmonies without changing the rules of the music we train the machine learning on -- not 12 = 2*2*3 notes to a doubling of frequency (an octave), but 18 = 2*3*3 or 20 = 2*2*5 intervals, which give different subgroup structure.
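(To make that arithmetic concrete: with N equal divisions of the octave, step k above a reference pitch sits at f0 * 2**(k/N). A quick Python sketch, assuming A4 = 440 Hz as the reference:)

    f0 = 440.0  # reference pitch, assumed A4

    for n in (12, 18, 20):
        freqs = [round(f0 * 2 ** (k / n), 1) for k in range(n + 1)]
        print(f"{n}-EDO:", freqs)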

(Can I ask about the premise of the question: how do we predict surprising things? Doesn't anticipating them kill the surprise? Like, is AI entirely an unknown-unknown thing, in that it is all surprising?)
posted by k3ninho at 10:07 AM on February 17, 2022 [1 favorite]


Best answer: As late afternoon dreaming hotel says, the most interesting positive potential is likely when it becomes a tool for composers and musicians: it encourages them to break out of their box a little, suggests a new direction they otherwise wouldn't have taken, and so on.

I know people who have been using various tools for that kind of purpose for about 30 years now. So it's not an altogether new thing at all. If AI tools can do that "help generate good new ideas" thing better, they will be used - at least by some people.

Another use I could see exploding long before some kind of "AIChopin" takes over at the top of the charts is as a sort of composer's or arranger's assistant, in the sense that the composer or arranger lays out the broad direction of a piece or arrangement, and then the AI fills in a bunch of details according to those instructions. Then the composer/arranger tweaks things large and small until satisfied. Maybe you go back and completely re-do the instructions altogether, or maybe you tweak a few individual notes or lines. In the end, there is your new piece or arrangement, of which all the main ideas are from the composer but the majority of the busywork/gruntwork has been done by the AI under the composer's direction.

I could especially see this taking root fastest and first in areas like creating music for TV shows and videos - the kind of thing where you need many hours of music. Again I would see a human overseeing and directing this process but there is so much music to be written that I could see AI taking over more of the gruntwork of filling out specific arrangements etc over time, if it is really capable of doing so.

In short, it's far more likely to show up in the industrial-music type of situation, where you need to churn out large quantities of product in a short time frame, as opposed to the pinnacle of art music, where authorship and individually personal expression are valued above all, or even at the top of popular music genres, where again you're looking at personal expression, trying to say something in the context of existing genres and other works, and so on.

And even more, the musical skill needed to write 2.5 minutes of melody and chord progression is not really the most expensive or most vitally missing skill just begging to be filled in by technology. You've got 30 or 50 people hanging around any given recording studio who have the technical chops to write a song in an hour or two, or certainly a day or two. The question comes down more to WHY you want to write a song with given chords, melody, style, etc. That is where the AI is just not going to be that helpful, because simply having the technical ability to put something together does not help answer that question. So much of what goes on in this space is responding to (and often, against) very new and fast-moving trends. AI is always going to be a step behind in that arena, because by definition it is derivative of what was already there when it was programmed.

You're going to be reprogramming it again every week or two, and even then it will be behind. At the cutting edge, musicians are always looking to the next new thing and never at the last.

Still, there are plenty of areas where being able to quickly and easily mimic an existing style could come in handy and I suspect that is where AI is more likely to be useful.

Another topic: Writing and arranging music is very, very time consuming and tedious. We're already using computers to help with a lot of this - programs like MuseScore, Sibelius, etc. take a ton of the busywork out of preparing a score, creating the individual parts, etc. But if AI could help that process along even more, that could be tremendously helpful.

One avenue where AI could be tremendously helpful, if it could be made to work, is in preparing scores and parts. The programs I mentioned above are like "word processors for music", but it still takes a mountain of hand-tweaking to turn a finished musical composition into a readable and beautiful score.

Point is, AI might help - and in fact be most useful - in areas other than just 'being the composer'. There might be a bunch of areas in between "here is my musical idea" and "here is my completed musical score" or "here is my completed performance or recording of the work" where AI-type things could help.
posted by flug at 8:50 PM on February 17, 2022 [2 favorites]


This thread is closed to new comments.