Sound Research WIKINDX
Cano, P., Koppenberger, M., le Groux, S., Ricard, J., Wack, N., & Herrera, P. (2005). Nearest neighbor automatic sound annotation with a WordNet taxonomy. Journal of Intelligent Systems, 24(2/3), 99–111.
Added by: sirfragalot (01/26/2006 11:52:40 AM) Last edited by: sirfragalot
Resource type: Journal Article
BibTeX citation key: Cano2005
Categories: General, Typologies/Taxonomies
Keywords: Semantic categorization, Synchresis/Synchrony
Creators: Cano, le Groux, Herrera, Koppenberger, Ricard, Wack
Collection: Journal of Intelligent Systems
Presents experimental results for a general sound annotator that allows sound [FX] to be selected by sound [property] categorization rather than by text [descriptive] selection.
See also (Xu, Duan, Cai, Chia, Xu, & Tian 2004; Khan, McLeod, & Hovy 2004)
Khan, L., McLeod, D., & Hovy, E. (2004). Retrieval effectiveness of an ontology-based model for information selection. The VLDB Journal, 13(1), 71–85.
Xu, M., Duan, L.-Y., Cai, J., Chia, L.-T., Xu, C., & Tian, Q. (2004). HMM-based audio keyword generation. Lecture Notes in Computer Science, 3333, 556–574.
Summarizing other research, the authors state that "one of the main problems faced by natural sounds and sound effects classifiers is the lack of a clear taxonomy." For musical instruments, "there is a parallelism between semantic and perceptual taxonomies" that does not exist for "every-day sound classification."
For example, musical instruments can be grouped as sustained or not sustained, string, brass, etc. However, "one can find hissing sounds in categories of 'cat', 'tea boilers', 'snakes'. Foley artists exploit this ambiguity" by using sounds unconnected with the visual object.
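The nearest-neighbor annotation approach the entry summarizes can be sketched minimally: each sound is reduced to a feature vector, and an unlabelled sound inherits the label of its closest labelled example. The feature values, labels, and distance choice below are invented for illustration and are not the paper's actual data or feature set.

```python
import math

# Hypothetical 2-D "perceptual" feature vectors with category labels.
# In the paper's setting, labels would come from a WordNet-based taxonomy
# and features from audio analysis; these toy values are assumptions.
labelled = [
    ((0.9, 0.1), "hiss"),    # e.g. snake, kettle, cat
    ((0.1, 0.9), "thud"),
    ((0.5, 0.5), "rustle"),
]

def annotate(features, examples=labelled):
    """Label a new sound with the label of its nearest neighbour (Euclidean)."""
    return min(examples, key=lambda ex: math.dist(features, ex[0]))[1]

print(annotate((0.8, 0.2)))  # -> hiss
```

A hissing kettle and a hissing snake would land near each other in such a feature space even though they sit in distant taxonomy branches, which is exactly the semantic/perceptual mismatch the note describes.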