A group of Russian scientists from Peter the Great St. Petersburg Polytechnic University (SPbPU) is developing a new digital system to process speech in real-life sound environments, for example when many people talk simultaneously during a conversation.
The team simulated the sensory coding of sounds by modelling the mammalian auditory periphery. In the newly published study, the researchers state that they developed methods for acoustic signal recognition based on this peripheral coding.
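The article does not describe the specific peripheral model used, but a common way to approximate the mammalian auditory periphery in software is a gammatone filterbank followed by half-wave rectification, which crudely mimics auditory-nerve firing rates. The sketch below is illustrative only; the filter parameters and channel count are assumptions, not details from the study.

```python
import numpy as np

def erb(fc):
    # Equivalent rectangular bandwidth of a cochlear filter at centre
    # frequency fc (Glasberg & Moore approximation), in Hz
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    # Impulse response of a 4th-order gammatone filter centred at fc
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * erb(fc)
    ir = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def peripheral_code(signal, fs, centre_freqs):
    # Filter the signal through the bank and half-wave rectify each
    # channel, a crude stand-in for auditory-nerve firing rates
    channels = []
    for fc in centre_freqs:
        out = np.convolve(signal, gammatone_ir(fc, fs), mode="same")
        channels.append(np.maximum(out, 0.0))
    return np.array(channels)

fs = 16000
t = np.arange(fs // 10) / fs                 # 100 ms test signal
tone = np.sin(2 * np.pi * 800 * t)           # pure tone at 800 Hz
freqs = np.geomspace(100, 4000, 16)          # 16 channels, log-spaced
code = peripheral_code(tone, fs, freqs)
print(code.shape)                            # (16, 1600)
```

The channels whose centre frequencies lie near the tone respond most strongly, so the 2-D output already encodes spectral content in a place code, much like the auditory nerve does.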
They partially reproduced the processes the nervous system performs while the information is being processed, and integrated them into a decision-making module that determines the type of the incoming signal.
The lead author, Anton Yakovenko of SPbPU, said, "The main goal is to give the machine human-like hearing to achieve the corresponding level of machine perception of acoustic signals in the real-life environment."
According to Yakovenko, the source dataset consists of the responses of the scientists' auditory nerve model to vowel phonemes.
The researchers add that the data was processed by a special algorithm that performs structural analysis to identify the neural activity patterns the model uses to recognise each phoneme. The proposed approach combines self-organising neural networks with graph theory.
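The study's exact algorithm is not published in this article, but the combination it names can be sketched: train a self-organising map (SOM) on activity-pattern vectors, then build a graph over the map's units, connecting grid neighbours with similar weight vectors, so that connected components act as clusters of activity patterns. Everything below (grid size, threshold, the synthetic two-cluster data standing in for nerve responses) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=30, lr0=0.5, sigma0=2.0):
    # Minimal self-organising map; weights shaped (rows, cols, dim)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)
            sigma = max(sigma0 * (1 - step / n_steps), 0.5)
            # best-matching unit: closest weight vector to the sample
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its grid neighbourhood towards the sample
            g = np.exp(-np.sum((coords - np.array(bmu))**2, axis=-1)
                       / (2 * sigma**2))
            w += lr * g[..., None] * (x - w)
            step += 1
    return w

def som_graph(w, threshold):
    # Connect grid-adjacent units whose weight vectors are similar
    rows, cols, _ = w.shape
    adj = {}
    for r in range(rows):
        for c in range(cols):
            adj[(r, c)] = []
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    if np.linalg.norm(w[r, c] - w[nr, nc]) < threshold:
                        adj[(r, c)].append((nr, nc))
    return adj

def components(adj):
    # Connected components of the unit graph = discovered clusters
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n])
        seen |= comp
        comps.append(comp)
    return comps

# Two synthetic "activity pattern" clusters standing in for the
# auditory-nerve responses to two different vowels
a = rng.normal(0.0, 0.05, (40, 8))
b = rng.normal(1.0, 0.05, (40, 8))
w = train_som(np.vstack([a, b]))
comps = components(som_graph(w, threshold=0.5))
print(len(comps))
```

After training, samples from the two synthetic "vowels" map to different regions of the grid, and the thresholded unit graph breaks along the boundary between those regions, which is the structural-analysis idea in miniature.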
According to the team, analysing the reactions of the auditory nerve fibres made it possible to identify vowel phonemes correctly under significant noise exposure, surpassing the most common methods for parameterising acoustic signals. They believe the newly developed methods should help create a new generation of neurocomputer interfaces and "provide better human-machine interaction".
The research also has great potential for practical application, particularly in cochlear implantation (the surgical restoration of hearing), the creation of new bioinspired approaches to speech processing, sound source separation and recognition, and computational auditory scene analysis based on machine hearing principles.
As Yakovenko put it, "The algorithms for processing and analysing big data implemented within the research framework are universal and can be applied to tasks that are not related to acoustic signal processing."