A new study shows that artificial intelligence can reconstruct music resembling songs people have recently heard by analyzing their brain scans.
By harnessing brain imaging data, AI can generate music similar to the songs the scanned subject was listening to.
According to a collaborative study by Google and Osaka University, artificial intelligence (AI) can analyze a person's brain activity to generate a song similar in genre, rhythm, instrumentation, and mood to one they recently heard.
The researchers developed an AI-based system called Brain2Music, which uses brain imaging data to produce music resembling a segment of a song the scanned person had recently heard.
Co-author Timo Denk, a software engineer at Google in Switzerland, stated: “The mood aspect of the reconstructed music is about 60% similar to the original. The genre and instruments in both the original and reconstructed pieces match quite well. Among all genres, AI distinguishes best in classical music.”
Co-author Yu Takagi, an assistant professor of computational neuroscience and AI at Osaka University in Japan, said the ultimate goal of the research is to shed light on how the brain processes music.