From the creation of Alan Turing’s melody-making machine in 1951 to David Bowie’s digital lyric randomiser, music and artificial intelligence have long been intertwined, but curiosity around untapped potential remains as strong as ever.
In June this year, NME published an article that piqued my interest: “YouTuber creates fake Nirvana song using artificial intelligence.” I was curious to hear the results, and I was not disappointed.
Funk Turkey, the alias behind the creation, used lyrics.rip to scrape the Genius lyrics database, then used a Markov chain to pen lyrics in the style of the late Kurt Cobain. To the untrained ear, the resultant track, entitled “Smother”, wouldn’t feel out of place on an early Nirvana album.
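For the curious, the idea behind a Markov chain lyric generator is simple: record which words follow which in a corpus, then random-walk those transitions to spit out new lines. Here's a minimal sketch in Python; the toy corpus and function names are my own illustration, not lyrics.rip's actual implementation:

```python
import random

def build_markov_chain(text, order=1):
    """Map each word (or word tuple) to the words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain to produce a new line of 'lyrics'."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(key)
        if not followers:  # dead end: no word ever followed this one
            break
        word = rng.choice(followers)
        out.append(word)
        key = tuple(out[-len(key):])
    return " ".join(out)

# Toy corpus standing in for a scraped lyrics database
corpus = "come as you are as you were as I want you to be"
chain = build_markov_chain(corpus)
print(generate(chain, length=8, seed=1))
```

With an `order` of 1, the output only ever knows the previous word, which is why Markov lyrics feel locally plausible but globally unhinged; raising `order` makes lines more coherent at the cost of parroting the source more directly.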
And Funk Turkey didn’t stop there… their cheekily titled AC/DC song “Great Balls” boasts over a million views on YouTube. Metallica, Nickelback, Iron Maiden and Red Hot Chili Peppers have also been subjected to the Turkey treatment, to varying degrees of awesomeness.
In September 2016, “Daddy’s Car” – purportedly the first entire pop song ever written by AI – was unleashed by the Sony CSL Research Laboratory. Described as being “in the style of the Beatles”, it sounds more like ELO meets Supertramp, trying to imitate the Beatles after smoking a bit too much herbal tea… Even so, the process is interesting.
Researchers at Sony developed “FlowMachines” - a system that learns music styles from a huge database of songs. Exploiting unique combinations of style transfer, optimisation and interaction techniques, FlowMachines composes novel songs in many styles. There is human intervention, however, in the form of French composer Benoît Carré who is credited with arranging and producing the song, as well as penning the somewhat questionable lyrics.
The YouTube comments, however, are the gift that keeps giving… not least Greg Letourneau’s ever so apt reference to Back to the Future:
“I guess you guys aren’t ready for that yet. But your kids are gonna love it…”
So, what could this mean for the future of the music industry? Will AI unlock opportunities for up-and-coming musicians to hit new heights, or simply serve to regurgitate the lyrics of yesteryear?
According to a report from the McKinsey Global Institute, 70% of companies will have adopted at least one AI technology by 2030, and the music industry is no exception.
Forbes addressed this topic in a thought piece by Bernard Marr in July 2019, and put an end to the “if / when” debate straight out of the gate:
"The days of debating if artificial intelligence (AI) will impact the music industry are over. Artificial intelligence is already used in many ways. Now it's time to consider how much it will influence how we create and consume music. Just as it does for other industries, in the music industry, AI automates services, discovers patterns and insights in enormous data sets, and helps create efficiencies. Companies in the music industry need to accept and prepare for how AI can transform business; those that won't will be left behind."
Marr’s words fire a warning shot to traditionalists: evolve or die. But surely this sentiment applies to distributors and platforms, rather than musicians themselves? Arguably not...
"AI-based mastering services such as LANDR provide musicians with a more affordable alternative to human-based mastering, and so far more than 2 million musicians have used it to master more than 10 million songs. While there is still a creative component involved in audio mastering and some prefer to rely on humans to do this work, AI makes the services accessible to artists who wouldn’t be able to master their songs otherwise."
And it’s not just bona fide “artists” who can benefit from the advances in AI technology; ordinary folk can too.
Tools such as AIVA allow complete novices like myself to compose so-called “emotional soundtrack music”. To my mind, that conjures up images of employing the puppy dog eyes routine to get out of the washing-up, but each to their own!
Simply choose from pre-set algorithms to compose new tunes in your chosen style: modern cinematic, electronic, pop, ambient, rock… and my own personal favourite, sea shanty! The beauty of tools such as AIVA is that they cut out the middleman. Instead of paying one-off fees to platforms such as Audio Jungle for “Generic Ambient Happy Sounds Part IV”, you pay a monthly subscription to create and download as many tracks as you like.
A rock music track entitled “On the Edge” generated by AIVA has over 232k views on YouTube. The user comments are fascinating: “A whole new genre: post-apocalyptic rock”, “It's... Surprisingly good. Now the only thing you've got left to do is to teach the AI to sing”.
However, the downsides to AI are also exposed: “That's why I'm no longer a music composer but a programmer”, “I never knew it was possible to be excited and worried about the future at the same time”, and “Don't you all realize the past dreams are happening now. Pretty soon we will make robots that look and sound like us. And then the music they would create would blow our minds but not theres [sic].”
Whatever your view, AI is here to stay. With tens of thousands of new tracks uploaded to Spotify daily, AI plays a critical role in sorting through them to deliver personalised recommendations to listeners. Marr’s article points to the end of the music genre as we know it, as AI-generated playlists - enabled by big data - focus on determining what is and isn’t “good” music, irrespective of style. And you’d have to agree, the bots are doing a damn good job. I have discovered more exciting new music from lesser-known artists in the past six months on Spotify than I did in years of watching Top of the Pops.
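One simple way a recommender can rank tracks “irrespective of style” is to describe each track as a vector of audio features and compare vectors directly, rather than grouping by genre label. A toy sketch in Python; the track names, feature values and three-feature scheme are invented for illustration, and real systems like Spotify's are vastly more sophisticated:

```python
import math

# Hypothetical audio-feature vectors (say, tempo / energy / mood), scaled 0-1.
# All numbers here are made up purely to demonstrate the idea.
tracks = {
    "The Power of Love": [0.8, 0.9, 0.9],
    "Back in Time":      [0.7, 0.8, 0.8],
    "Smother":           [0.5, 0.7, 0.3],
    "Ambient Happy IV":  [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the feature vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recommend(seed, k=2):
    """Rank every other track by similarity to the seed track, genre be damned."""
    seed_vec = tracks[seed]
    ranked = sorted(
        (t for t in tracks if t != seed),
        key=lambda t: cosine(seed_vec, tracks[t]),
        reverse=True,
    )
    return ranked[:k]

print(recommend("The Power of Love"))  # most similar tracks first
```

Feed it a seed track and it returns the nearest neighbours in feature space, which is the kernel of how a radio-style playlist can surface a lesser-known track that simply *sounds* right next to your starting song.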
And it’s great for a shot of nostalgia too. On a recent drive home from visiting family, I kicked off a Spotify Radio playlist with Huey Lewis and the News’ “The Power of Love” (naturally!) and basked in musical gold for the entire 90-minute journey.
It’s safe to say that at some point in the not-too-distant future, that catchy tune we hear on the radio will have been created entirely by AI. The question is: will we be able to differentiate between human-created music and that of its AI counterparts? And, more importantly, will we even care?