Our spatial hearing is truly multimodal, i.e., it relies strongly on information and cues obtained from senses other than audition. This is demonstrated by numerous practical cases (e.g., motional cues, visual dominance, bone conduction) as well as by scientific studies.
In a typical spatial hearing experiment, only the sense of hearing is under test, and the influence of the other senses is suppressed by the experimental design; this is not how (directional) hearing naturally functions.
There is relatively little research on multimodal experiments applying virtual 3-D sounds, which allow full control of natural-like soundscapes reproduced over headphones.
So far, HRTFs have been investigated mostly unimodally, using only the sense of audition, except for motional cues obtained via dynamic head tracking. This has been due to technical difficulties, the complexity involved, and a lack of cross-modal knowledge.
Given the opportunity, I would like to continue my research on multimodal interactions in spatial hearing, applying 3-D sound (based on measured or modeled HRTFs).
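The core of HRTF-based 3-D sound reproduction over headphones is convolving a mono source signal with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. A minimal sketch in Python (NumPy only) is shown below; the toy HRIRs here, a pure interaural time and level difference, are placeholder assumptions standing in for measured or modeled data:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear HRIRs -> stereo (N, 2) array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy HRIRs approximating a source to the listener's left: the right ear
# receives the sound ~8 samples later (ITD) and attenuated (ILD).
# A real experiment would substitute measured or modeled HRIRs here.
fs = 44100
hrir_l = np.zeros(32); hrir_l[0] = 1.0
hrir_r = np.zeros(32); hrir_r[8] = 0.5

t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 440 * t)  # 100 ms, 440 Hz test tone
stereo = render_binaural(tone, hrir_l, hrir_r)
print(stereo.shape)                 # (len(tone) + 31, 2)
```

In practice the HRIRs are interpolated and updated as the head moves (dynamic head tracking), which is what supplies the motional cues mentioned above.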
Neurobehavioral research on spatial hearing
Although our pioneering paper on the human cortical representation of virtual auditory space revealed fundamental issues, many questions about cortical processing remain to be investigated. Modern cortical measurement techniques, such as EEG, MEG, fMRI, TMS, and PET, gain significantly from the use of natural (three-dimensional) sound stimuli based on HRTFs. However, 3-D sound production requires high-quality equipment (such as KAR Audio stimulators) and expertise in stimulus generation. Having already over five years of experience in this area, I would like to continue collaborative work on this topic.