Sound classification in hearing aids inspired by auditory scene analysis (Q2502767)
From MaRDI portal
scientific article
Statements
Publication date: 13 September 2006
Summary: A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as a Bayes classifier, a neural network, and a hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
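To make the pipeline described in the summary concrete, the following is a minimal sketch of a feature-extraction stage followed by a minimum-distance classifier, the simplest of the classifiers compared in the article. The feature set here (RMS level, zero-crossing rate, and a crude amplitude-modulation depth) is a stand-in assumption for illustration; the article's auditory-scene-analysis features (harmonicity, onsets, rhythm) are considerably more elaborate, and all function names are hypothetical.

```python
import numpy as np

def extract_features(signal):
    """Toy feature vector: RMS level, zero-crossing rate, and a crude
    amplitude-modulation depth (illustrative assumptions, not the
    article's feature set)."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    envelope = np.abs(signal)
    mod_depth = envelope.std() / (envelope.mean() + 1e-12)
    return np.array([rms, zcr, mod_depth])

def minimum_distance_classify(x, class_means):
    """Assign x to the class whose mean feature vector is nearest in
    Euclidean distance -- the minimum-distance rule mentioned in the
    summary."""
    labels = list(class_means)
    dists = [np.linalg.norm(x - class_means[c]) for c in labels]
    return labels[int(np.argmin(dists))]
```

In practice, the class means would be estimated from labeled training sounds (one mean vector per class), and classification of a new frame reduces to one distance computation per class, which is why such rule-based and minimum-distance schemes are attractive for the limited compute budget of a hearing aid.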
Keywords: hearing aids; sound classification; auditory scene analysis