Is Google Displacing Musicians With Its New Generative AI System, MusicLM? (Part 1 of a 2-Part Series)
Although not yet released due to copyright issues, MusicLM is a signal of what’s to come in using AI to generate music from text descriptions.
What is the impact of ingesting all of humanity's musical know-how into an AI brain that can be queried simply by describing your musical tastes? Who needs to be a musician to entertain when we will soon be able to create our own music even more easily, tapping into the genius of every musician who has ever made a recording?
Google researchers say MusicLM is based on a model generating high-fidelity music from text descriptions such as “a calming violin melody backed by a distorted guitar riff”. You can find the details on GitHub.
MusicLM is built on a neural network trained on a large dataset of over 280,000 hours of music, enabling it to automatically produce innovative tracks spanning diverse instruments, genres, and concepts based on text descriptions.
Essentially, the AI attempts to mimic a human brain by ingesting all the musical patterns and sound frequencies it is exposed to. One only needs to search for AI-generated Carti tracks on YouTube, such as Digital Butterflies, to hear this type of technology in action.
Like a magic wand, MusicLM produces higher-fidelity audio than earlier systems; you can even hum a melody to steer the model toward the beat you want to hear. According to Google researchers, the model generates music at 24 kHz that remains consistent over several minutes.
This is indeed a sign of what's to come in the music world. We must answer harder questions and build strategies for effective policy and legislation, such as:
- What is the risk of AI algorithms creating their own compositions and work, and who owns this work: the AI or the human?
- Who owns the music when it's a blend of everything on the world wide web, a new song created from the brilliance of our musicians?
- When you purchase music, are you also purchasing the right to use its audio as AI training data?
Since YouTube sensation and American Idol alum Taryn Southern started composing music with AI, musicians globally have been trying to understand AI's impact on their craft.
What is clear is that we need to improve our legislation regarding ownership of musicians' music and understand how AI algorithms should be treated and managed in the music industry.
Google, Meta, Microsoft, OpenAI, and many other AI market leaders will continue to advance the frontiers of every industry using AI, but we as humans have an ethical responsibility to think harder about the world we want to create and leave for future generations.
If you are a Board Director, CEO, or C-level executive in the music industry, learning more about AI is a business imperative: you need to understand its long-term effects on the industry and shape the world you want to protect. "Human brains and musical talent" have value, and we are rapidly commoditizing precious creative DNA into bits and bytes, with major consequences for our musicians' creative ownership entitlements.
At the very least, ask the hard questions and do some scenario risk analysis.
To support future research, Google has also publicly released MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
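To make the idea of a "music-text pair" concrete, here is a minimal Python sketch. This is an illustrative assumption, not the actual MusicCaps schema: the field names, class, and example captions below are invented for explanation only.

```python
from dataclasses import dataclass

# Hypothetical sketch of a music-text pair, loosely inspired by MusicCaps.
# Field names and examples are illustrative assumptions, not the real schema.
@dataclass
class MusicTextPair:
    clip_id: str   # identifier of the source audio clip
    caption: str   # rich free-text description written by a human expert

# Invented example pairs (not real MusicCaps entries).
pairs = [
    MusicTextPair("clip-001", "a calming violin melody backed by a distorted guitar riff"),
    MusicTextPair("clip-002", "an upbeat synth-pop track with a driving drum machine beat"),
]

def find_by_keyword(pairs, keyword):
    """Return all pairs whose caption mentions the keyword (case-insensitive)."""
    return [p for p in pairs if keyword.lower() in p.caption.lower()]

print([p.clip_id for p in find_by_keyword(pairs, "violin")])  # → ['clip-001']
```

Pairings like these are what let researchers train and evaluate models that map free-text descriptions to audio.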