ICASSP 2011, IEEE

Automatic music tagging via PARAFAC2

Automatic music tagging is addressed by resorting to auditory temporal modulations and Parallel Factor Analysis 2 (PARAFAC2). The starting point is to represent each music recording by its auditory temporal modulations. An irregular third-order tensor is then formed: the first slice contains the vectorized training temporal modulations, while the second slice contains the corresponding multi-label vectors. PARAFAC2 is employed to effectively harness the multi-label information for dimensionality reduction. Any vectorized test auditory representation of temporal modulations is first projected onto the semantic space derived via PARAFAC2, yielding a coefficient vector. The annotation vector is then obtained by multiplying this coefficient vector by the left singular vectors of the second slice (i.e., the slice associated with the label vectors). The proposed framework outperforms state-of-the-art auto-tagging systems when applied to the CAL500 dataset in a 10-fo...
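The project-then-annotate pipeline described in the abstract can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's method: it replaces the coupled PARAFAC2 factorization with a truncated SVD of the feature slice to build a "semantic space", and uses a least-squares map to play the role of the label slice's left singular vectors. All data shapes and names (`X_feat`, `X_lab`, `k`) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in data; dimensions are illustrative, not those of CAL500.
rng = np.random.default_rng(0)
n_train, d_feat, n_labels, k = 50, 120, 10, 8

# Slice 1: vectorized auditory temporal modulations of the training set.
X_feat = rng.standard_normal((n_train, d_feat))
# Slice 2: corresponding binary multi-label vectors.
X_lab = (rng.random((n_train, n_labels)) > 0.7).astype(float)

# Simplified "semantic space": truncated SVD of the feature slice
# (a stand-in for the PARAFAC2 factorization of the irregular tensor).
U, s, Vt = np.linalg.svd(X_feat, full_matrices=False)
V_k = Vt[:k].T                       # (d_feat, k) projection basis

# Coefficients of the training recordings in the semantic space.
C_train = X_feat @ V_k               # (n_train, k)

# Linear map from semantic coefficients to the label space, standing in
# for multiplication by the label slice's left singular vectors.
W, *_ = np.linalg.lstsq(C_train, X_lab, rcond=None)   # (k, n_labels)

# Annotate a new recording: project its modulation vector, then map
# the coefficient vector into the label space.
x_test = rng.standard_normal(d_feat)
coeffs = x_test @ V_k                # (k,) coefficient vector
annotation = coeffs @ W              # (n_labels,) tag relevance scores
```

In the actual system the two slices are factorized jointly via PARAFAC2, so the projection and the label mapping come from one coupled decomposition rather than two separate least-squares steps as above.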
Yannis Panagakis, Constantine Kotropoulos
Added: 21 Aug 2011
Updated: 21 Aug 2011
Type: Journal
Year: 2011
Where: ICASSP
Authors: Yannis Panagakis, Constantine Kotropoulos