There’s a lot of buzz around the ramifications of Machine Learning and Artificial Intelligence (A.I.) and a vivid discussion of how these technologies might affect our future. From self-driving car prototypes that will ideally improve road safety (but could also result in the loss of millions of driving jobs) to mobile apps that make our photos look like Van Gogh paintings, every new A.I. breakthrough can still surprise us and change our expectations of what an algorithm can do. When algorithms become capable of tasks that we previously thought only humans could perform, our human psychology kicks in: we become emotionally attached and the conversation gets polarized. Lately, such polarized discussions have reached the music industry: Who’s going to lose their jobs? Are we heading towards a world where audio engineers will be replaced by machines? Will robots replace musicians or producers? What about post-production professionals?
But what exactly are we talking about when we use the terms Machine Learning or A.I.? Before trying to answer any polarizing questions, let’s take a step back and make sure we’re all talking about the same thing. Strictly speaking, Machine Learning is a subfield of A.I. concerned with algorithms that allow a computer to learn to perform a specific task. By “learning” here we mean that the computer can perform the task, but does not understand “why” or “how” in the way a human does. A.I. is a much broader field (one that encompasses Machine Learning) that aims to create computers that can truly reason, understand and create knowledge. But all this is a bit abstract, so let’s use an audio software example to understand the fundamental difference between a traditional algorithm and a Machine Learning algorithm.
accusonus drumatom is a drum bleed reduction tool that was released back in 2014. At the time, it was arguably one of the first music software products to use Machine Learning.
Drumatom is a standalone application that analyzes your multitrack drum recording, learns from the microphone signals and, after some heavy calculations, identifies microphone bleed in your individual drum tracks. Once the analysis is complete, you can adjust the level of bleed in your drum tracks using just two knobs!
What’s going on under the hood is that drumatom “learns” what part of the microphone signal is useful and what is bleed, using just the multitrack drum recording. Since it doesn’t “understand” what microphone bleed is, it will identify different things as bleed depending on the input tracks. The results will vary depending on the number of microphones, their type and position, and so on. Although drumatom has a very straightforward interface, to get the previously-thought-impossible functionality of adjusting drum bleed in your microphone recordings, you need to experiment and develop an intuition about which data to use (i.e. which combinations of microphone tracks) so that drumatom can “learn” to identify bleed correctly.
Now compare this behaviour with that of a typical audio effect like an equalizer. Of course you need skill to properly set up your EQ in a mix, but when you boost the frequency range around 4 kHz by 5 dB, you always apply the same filter, no matter the audio input. Products based on Machine Learning, on the other hand, “learn” and change their behaviour based on the input you feed them.
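To make the contrast concrete, here is a toy Python sketch (not accusonus code; both functions and their parameters are invented for illustration). The first function applies the same fixed EQ-style boost to any input; the second derives its threshold from the input signal itself, so its behaviour changes from recording to recording:

```python
import numpy as np

def fixed_eq_boost(signal, sr, center=4000.0, gain_db=5.0):
    """Traditional DSP: apply the same bell-shaped boost around 4 kHz
    to any input -- the gain curve never changes with the signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    # Gaussian bell on a log-frequency axis, about one octave wide
    gain = 1.0 + (10 ** (gain_db / 20.0) - 1.0) * np.exp(
        -0.5 * ((np.log2(freqs + 1e-9) - np.log2(center)) / 0.5) ** 2
    )
    return np.fft.irfft(spectrum * gain, n=len(signal))

def data_dependent_gate(signal, frame=512):
    """'Learning-flavoured' processing: the gate threshold is estimated
    from this particular input, so two different recordings are
    processed differently."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energies = (frames ** 2).mean(axis=1)
    threshold = np.median(energies)  # derived from the data itself
    mask = (energies >= threshold).astype(float)
    return (frames * mask[:, None]).reshape(-1), threshold
```

Feed two recordings with different level distributions to `data_dependent_gate` and you get two different thresholds; feed them to `fixed_eq_boost` and the gain curve is identical both times. Real products like drumatom learn far richer models than a single threshold, but the principle is the same: the processing is a function of the data.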
Automatic For The People
Now, will these algorithms learn so much that they could replace music makers? In the three years since drumatom was released, new Machine Learning products have appeared (see for example Audionamix ADX or iZotope’s Neutron). With some rare exceptions, most of these products do not try to fully automate sound engineering tasks or replace the humans involved in the music making process. On the contrary, they try to take advantage of the huge potential of these new algorithms to make music making faster and more fun, and to take sound processing a step further. This is arguably something new. Despite the fact that music makers are all about inspiration and forward thinking, the music software industry hasn’t evolved as much as it could have. We never took seriously the fact that almost everyone now carries a mobile computer more powerful than the one that sent Apollo 11 to the moon. Instead of experimenting with new ideas, the industry has fixated on recreating our analog past in the digital world. But these new algorithms have the potential to bring a true paradigm shift: they can enable products that solve real-world problems once thought unsolvable and open up new creative possibilities for artists.
An example is Regroover, a recent accusonus product that we describe as a beat un-mixing plugin. Regroover uses Artificial Intelligence to offer a workflow that was previously unimaginable: producers can take what used to be static or even “boring” audio samples and make them sound unique and exciting. Asked how this makes its way into a product, Alex Tsilfidis, accusonus co-founder and CEO, explains: “Our goal with this plugin wasn’t just to un-mix drum loops into their original sources, but to produce musically meaningful layers that open up new creative possibilities. It took us many years to fine-tune the A.I. that looks into the drum loops and ‘learns’ what we think are musically meaningful layers.”
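Regroover’s actual algorithms are proprietary, but the general family of “un-mixing” techniques is often illustrated with non-negative matrix factorization (NMF), which splits a magnitude spectrogram into spectral templates and their activations over time. The sketch below is a minimal, generic NMF with the classic Lee–Seung multiplicative updates, not accusonus code:

```python
import numpy as np

def nmf(V, n_layers=2, n_iter=200, seed=0):
    """Minimal non-negative matrix factorization (Euclidean cost,
    multiplicative updates).  V is a non-negative freq-by-time matrix
    (e.g. a magnitude spectrogram), approximated as W @ H:
    W holds spectral templates, H their activations in time --
    one (template, activation) pair per extracted layer."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_layers)) + 1e-3
    H = rng.random((n_layers, T)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each rank-1 term `W[:, k:k+1] @ H[k:k+1, :]` is one “layer” of the loop. A real product would go much further (applying soft masks back to the complex spectrogram, and, as Tsilfidis notes, tuning the model so the layers are musically meaningful rather than merely mathematically valid).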
Despite the fact that most music-related Machine Learning products aren’t meant to replace jobs, the discussion about whether those jobs will be replaced continues. But this discussion forgets that we are in a special industry. The idea that algorithms will replace humans in tedious and boring tasks is appealing, but the same idea becomes weak when applied to creative tasks such as music making. We are blessed to talk to hundreds of people involved in music making (in every possible role), and every single one of them sees their involvement as one of the best things that ever happened to them. This is why, even if someone invents the perfect algorithm for automatic composition and music production, that poor algorithm will have to compete against millions of passionate and inspired composers and producers. And these great human beings will also be able to communicate with their fans, build relationships and friendships, and have fun along the way. So none of them is in danger of losing their job to a learning algorithm anytime soon.
When looking at the industry’s current set of tools, Tsilfidis asks a counterpoint question: “This new breed of algorithms can solve actual problems and will bring a paradigm shift in music making. What if innovative companies focused on that instead of spending their resources on emulating historic analog boxes that have been modelled many times over? The reception of accusonus’ products has proven that music creators will embrace new workflows and love the tools that challenge their perceptions of what is possible with audio. This collective effort will create a far more exciting future for music makers, and it’s certainly not the hypothetical jobless dystopia that many people love to talk about.”