The Institute of Communication Acoustics is actively engaged in speech and audio signal processing, machine learning, and hearing research. We welcome students wishing to do their student projects, Bachelor's, and Master's theses with us, and we offer the opportunity to participate in, and contribute to, advancing the technologies in this field.
Speech is not only the most important means of communication between humans; it also plays an important role in speech-controlled human-machine interfaces.
Our research area encompasses a wide range of topics, e.g.:
- Music Signal Processing for Cochlear Implants
- Melody Recognition in Cochlear Implant Listeners
- Privacy in Acoustic Sensor Networks
- Speech Enhancement
- Statistical Models of Speech and Audio Signals
- Source Localization and Separation
- Robust Speech Transmission and Voice over IP
- Auditory Virtual Environments
- Speech Intelligibility Prediction
Figure 1 shows the interconnection between various aspects of speech signal processing in, for instance, smartphones. The acoustic signal is picked up by a single microphone or an array of microphones. Using arrays enables spatial selectivity but requires, in practice, either a priori knowledge of the speaker positions, automatic speaker localization algorithms, or so-called 'blind' approaches.
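The spatial selectivity mentioned above can be illustrated with a minimal delay-and-sum beamformer: each microphone signal is advanced by its known arrival delay so that copies of the target signal add up coherently. The two-microphone geometry, sampling rate, and integer-sample delays below are illustrative assumptions for the sketch, not details from the text.

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Align each channel by its steering delay and average the channels.

    signals: (num_mics, num_samples) array of microphone signals
    delays:  per-microphone arrival delays in seconds (positive = later)
    fs:      sampling rate in Hz
    """
    num_mics, n = signals.shape
    out = np.zeros(n)
    for m in range(num_mics):
        shift = int(round(delays[m] * fs))
        # Advance each channel by its arrival delay so all copies line up.
        out += np.roll(signals[m], -shift)
    return out / num_mics

# Toy example: the same 1 kHz tone reaches mic 1 three samples
# later than mic 0 (e.g., due to a different path length).
fs = 16000
t = np.arange(1024) / fs
clean = np.sin(2 * np.pi * 1000 * t)
mics = np.stack([clean, np.roll(clean, 3)])

# Steering toward the source realigns the channels.
aligned = delay_and_sum(mics, delays=[0.0, 3 / fs], fs=fs)
```

In a real array the steering delays come from the (estimated) source direction and microphone spacing; here they are simply given. Uncorrelated noise on the individual channels would be attenuated by the averaging, which is the essence of the spatial gain.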
Figure 1: Speech-signal processing in smart phones
Often, noise reduction (the separation of information-bearing signals from interference, based on statistical features) and echo compensation are also performed at this stage. The resulting signal is compressed and then either transmitted (e.g., over a mobile channel) after error-control coding has been applied, or fed to a speech-controlled human-machine interface.
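One classical instance of statistics-based noise reduction is magnitude spectral subtraction: an average noise magnitude spectrum, estimated during speech pauses, is subtracted frame by frame from the noisy spectrum. The sketch below uses rectangular, non-overlapping frames and a synthetic tone-plus-noise signal purely for illustration; practical systems use overlapping windows and adaptive noise tracking.

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag, frame_len=256):
    """Subtract an estimated noise magnitude spectrum, frame by frame.

    noisy:     1-D noisy signal
    noise_mag: estimated noise magnitude spectrum (length frame_len // 2 + 1)
    """
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        # Subtract the noise magnitude estimate; floor at zero
        # (half-wave rectification) to avoid negative magnitudes.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        # Keep the noisy phase, as is standard in spectral subtraction.
        out[start:start + frame_len] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

rng = np.random.default_rng(0)
fs = 8000
n = 4096
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.1 * rng.standard_normal(n)
noisy = clean + noise

# Estimate the noise magnitude spectrum from a noise-only stretch
# (in practice obtained during speech pauses).
frames = noise[:2048].reshape(-1, 256)
noise_mag = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

enhanced = spectral_subtraction(noisy, noise_mag)
```

The residual fluctuations that survive the subtraction are the source of the well-known "musical noise" artifact, which is one reason more refined statistical estimators are an active research topic.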