Sound Research WIKINDX

List Resources

Displaying 1 - 7  of 7 (Bibliography: WIKINDX Master Bibliography)
Drewes, T. M., Mynatt, E. D., & Gandy, M. 2000, April 2–5, Sleuth: An audio experience. Paper presented at the 6th International Conference on Auditory Display, Atlanta.   
Last edited by: Mark Grimshaw-Aagaard 16/09/2005, 13:26
When presented with a dense soundtrack in an audiovisual setting, our visual system helps to "disambiguate unclear or complex audio stimuli."
Familant, M. E., & Detweiler, M. C. (1993). Iconic reference: Evolving perspectives and an organizing framework. International Journal of Man-Machine Studies, 39(5), 705–728.   
Last edited by: Mark Grimshaw-Aagaard 24/08/2005, 14:22

Contains a critique of other iconic taxonomies, including those of Gaver and Blattner et al. (Gaver 1986; Blattner, Sumikawa, & Greenberg 1989). The suggestion is that they fail to distinguish between sign and referent relations. The authors propose a taxonomy comprising:


  • Direct reference: Signal ----> Sign Referent/Denotative Referent (identical referents).
  • Indirect reference: Signal ----> Sign Referent ----> Denotative Referent.

The referent relation (between Sign and Denotative referents) is classified by the commonalities (or lack thereof) between the feature sets of S and D:


  • Part-part: S and D share a subset of features.
  • Part-whole: all the features of S are a subset of D.
  • Whole-part: all the features of D are a subset of S.
  • Identical: S and D have the same set of features.
  • Disjoint: S and D have no features in common.

The most common signs are part-part and part-whole.
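Since the five categories are defined purely by set relations between the two feature sets, they can be expressed directly with standard set operations. The following is a minimal sketch (the function name, the example features, and their Python representation as sets are assumptions for illustration, not from the paper):

```python
def referent_relation(s, d):
    """Classify the relation between a sign referent's feature set (s)
    and a denotative referent's feature set (d), following the five
    categories described by Familant & Detweiler (1993)."""
    s, d = set(s), set(d)
    if s == d:
        return "identical"    # same set of features
    if not s & d:
        return "disjoint"     # no features in common
    if s < d:
        return "part-whole"   # all features of S are a subset of D
    if d < s:
        return "whole-part"   # all features of D are a subset of S
    return "part-part"        # S and D share only a subset of features

# Hypothetical example: a sign's features vs. a richer denotative referent.
print(referent_relation({"container", "discard"},
                        {"container", "discard", "irreversible"}))
# → part-whole
```

Note that the order of the tests matters: "identical" and "disjoint" must be checked before the strict-subset tests, since Python's `<` operator on sets is a proper-subset test and would otherwise never be reached for equal sets.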



Blattner, M. M., Sumikawa, D. A., & Greenberg, R. M. (1989). Earcons and icons: Their structure and common design principles. Human-Computer Interaction, 4, 11–44.
Gaver, W. W. (1986). Auditory icons: Using sound in computer interfaces. Human-Computer Interaction, 2, 167–177.
Gaver, W. W. (1986). Auditory icons: Using sound in computer interfaces. Human-Computer Interaction, 2, 167–177.   
Last edited by: Mark Grimshaw-Aagaard 08/01/2008, 08:51
Gaver identifies three mappings between data and its representation, which he applies to sound:

  1. Symbolic -- arbitrary mapping.
  2. Metaphoric -- similarities between data and representation, which may be structure-mapped (structural similarities) or metonymic.
  3. Nomic -- a direct relationship between the representation of the sound source and the sound. Gaver terms these types of sound auditory icons.
LoBrutto, V. (1994). Sound-on-film: Interviews with creators of film sound. Westport, CT: Praeger.   
Last edited by: Mark Grimshaw-Aagaard 06/09/2005, 13:25
Interviewing Gary Rydstrom (credits include Terminator 2 and Jurassic Park): "When kids make sound effects to talk about things, those aural semantics come from Warner Brothers cartoons."
An interesting section in which Mark Mangini discusses the use of made-up onomatopoeic words to describe sounds.
Moncrieff, S., Venkatesh, S., & Dorai, C. 2003, July 6–9, Horror film genre typing and scene labelling via audio analysis. Paper presented at the International Conference on Multimedia and Expo.   
Added by: Mark Grimshaw-Aagaard 15/12/2008, 03:11
Affect events are indexical by nature with a "high level of semantic association between the sound energy and affect events" and this "can be extended to attribute a semantic correlation between affect events and the broader thematic content of the film."
Schafer, R. M. (1994). The soundscape: Our sonic environment and the tuning of the world. Rochester, VT: Destiny Books.   
Last edited by: Mark Grimshaw-Aagaard 14/02/2014, 16:44
"Sounds may be classified in several ways: according to their physical characteristics (acoustics) or the way in which they are perceived (psychoacoustics); according to their function and meaning (semiotics and semantics); or according to their emotional or affective qualities (aesthetics). While it has been customary to treat these classifications seperately, there are obvious limitations to isolated studies."
Velivelli, A., Ngo, C.-W., & Huang, T. S. (2003). Detection of documentary scene changes by audio-visual fusion. Lecture Notes in Computer Science, 2728, 227–238.   
Added by: Mark Grimshaw-Aagaard 09/06/2005, 11:21
For documentaries, the authors define six audio classes:

  • Speech
  • Speech + Music
  • Music
  • Speech + Noise
  • Noise
  • Silence
WIKINDX 6.8.2 | Total resources: 1301 | Username: -- | Bibliography: WIKINDX Master Bibliography | Style: American Psychological Association (APA)