The article examines artificial intelligence and its relation to mixing, along with emerging trends in how individuals mix. The author notes that mixing is a job that requires subjective preference, artistic judgment, and other creative processes. The author reviews several projects by different researchers who aim to use artificial intelligence to automate the mixing process, either to simplify the sound engineer's work or to replace the mixing engineer altogether.
The first project discussed is by Stylianos-Ioannis Mimilakis and a group of researchers, who set out to determine whether the tonal balance of a recording can be automated. The researchers suggested that a primary task of mastering is to reduce undesired masking effects by adjusting the equalization, and that achieving this requires enhancing specific frequency bands in light of the audio content's musical key (Rumsey, 2013). In research by Zheng Ma, Dawn Black, and Josh Reiss, the authors found that mastering and mixing engineers tend to favor certain frequencies subconsciously when equalizing music. In another paper, Simpson and Sandler showed that, using an auditory analysis and loudness model, mixing can be based primarily on the relative loudness of each mix source as a particular listener actually hears it.
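To make the loudness-based idea concrete, the following is a minimal sketch (not the method of any of the papers above) of balancing tracks by their measured level. It assumes a crude RMS-in-dB estimate as a stand-in for a true perceptual loudness model, and a hypothetical target level of -18 dBFS:

```python
import math

def rms_db(samples):
    """Root-mean-square level of a track in dBFS (a simple loudness proxy)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def balance_gains(tracks, target_db=-18.0):
    """Gain in dB to apply to each track so its RMS level matches a
    common target -- a crude stand-in for the perceptual loudness
    models discussed in the article."""
    return [target_db - rms_db(t) for t in tracks]

# Two toy "tracks": one loud, one quiet.
loud = [0.5, -0.5] * 100
quiet = [0.05, -0.05] * 100
gains = balance_gains([loud, quiet])
# The quieter track receives 20 dB more gain than the louder one.
```

A real system along the lines the article describes would replace the RMS proxy with an auditory model that accounts for masking between sources, but the overall shape (measure each source, then set gains relative to a perceptual target) would be similar.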
According to Gorlow and Marc, the quality of a mix depends on the number of sound sources, and the spatial spread has a similar effect. Melchior and colleagues researched how intuitive controls can make 3-D sound production more efficient (Rumsey, 2013). They suggest that while gesture controls can be highly intuitive, they may perform poorly in terms of workflow integration, space requirements, and accuracy.
Reference
Rumsey, F. (2013). Mixing and Artificial Intelligence. Journal of the Audio Engineering Society, 61 (10), 806-809.