PARAMETER MASKS FOR CLOSE TALK SPEECH SEGREGATION USING DEEP NEURAL NETWORKS

A deep neural network (DNN) based close talk speech segregation algorithm is introduced. One microphone is placed near the speaker to capture the target speech, as the term close talk indicates, and a second microphone captures the noise in the environment. The time and energy differences between the two microphone signals are used as the segregation cues. A DNN estimator on each frequency channel is used to calculate the parameter masks.
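
As a rough illustration of the two-microphone cues described above, the sketch below extracts an energy-difference and a time-difference (phase-based) feature for every T-F unit. The STFT front end, the frame settings, and the `two_mic_cues` helper are assumptions made for illustration; the article does not specify the actual T-F decomposition, and in the proposed system these cues would feed a separate DNN estimator on each frequency channel rather than be used directly.

```python
# Hypothetical sketch of two-microphone cue extraction (not the authors' code).
import numpy as np

def stft(x, frame_len=512, hop=256):
    """Hann-windowed STFT returning a (frames, bins) complex matrix."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def two_mic_cues(close_mic, far_mic):
    """Per T-F unit energy-difference and time-difference (phase-based) cues."""
    X1, X2 = stft(close_mic), stft(far_mic)
    eps = 1e-12
    # Energy difference: log power ratio between the close and far microphones.
    energy_diff = 10.0 * np.log10((np.abs(X1) ** 2 + eps) /
                                  (np.abs(X2) ** 2 + eps))
    # Time difference: inter-microphone phase difference per T-F unit,
    # a common proxy for the difference in arrival time.
    time_diff = np.angle(X1 * np.conj(X2))
    return np.stack([energy_diff, time_diff], axis=-1)  # (frames, bins, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    close = rng.standard_normal(16000)          # stand-ins for real recordings
    far = 0.5 * close + rng.standard_normal(16000)
    cues = two_mic_cues(close, far)
    print(cues.shape)                           # (frames, frequency bins, 2 cues)
```

In such a setup, the cue pair for each frequency channel would be concatenated over a context of frames and passed to that channel's DNN, which outputs the mask value for the corresponding T-F unit.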

The parameter masks represent the proportion of target speech energy in each time-frequency (T-F) unit. Experimental results show the good performance of the proposed system: the signal-to-noise ratio (SNR) improvement is 8.1 dB in the 0 dB noise condition.
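
The following sketch shows, under stated assumptions, how a mask of this kind can be defined as the fraction of target energy in each T-F unit and how an SNR improvement figure can be measured after masking. The oracle mask computed from the premixed target and noise stands in for the DNN-estimated mask, and the non-overlapping FFT frames are a simplification; the printed number is a toy result, not the 8.1 dB reported in the article.

```python
# Illustrative sketch (not the authors' code): a parameter mask as the target
# energy fraction per T-F unit, and the SNR-improvement measurement.
import numpy as np

def parameter_mask(target_spec, noise_spec, eps=1e-12):
    """Target energy fraction per T-F unit; values lie in [0, 1]."""
    t_pow = np.abs(target_spec) ** 2
    n_pow = np.abs(noise_spec) ** 2
    return t_pow / (t_pow + n_pow + eps)

def snr_db(signal, residual, eps=1e-12):
    """SNR in dB given the clean signal and the error/noise residual."""
    return 10.0 * np.log10(np.sum(signal ** 2) / (np.sum(residual ** 2) + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = np.sin(2 * np.pi * 440 * np.arange(16384) / 16000)
    noise = rng.standard_normal(16384)
    noise *= np.sqrt(np.sum(target ** 2) / np.sum(noise ** 2))   # mix at 0 dB SNR
    mixture = target + noise

    # T-F masking with an oracle parameter mask; in the proposed system the
    # mask would instead be estimated by the per-channel DNNs.
    T, N, M = (np.fft.rfft(x.reshape(-1, 512), axis=1)
               for x in (target, noise, mixture))
    mask = parameter_mask(T, N)
    enhanced = np.fft.irfft(M * mask, n=512, axis=1).reshape(-1)

    before = snr_db(target, mixture - target)
    after = snr_db(target, enhanced - target)
    print(f"SNR improvement: {after - before:.1f} dB")
```

The SNR improvement is simply the output SNR of the masked signal minus the input SNR of the mixture, which is how a figure such as the reported 8.1 dB gain at 0 dB input would be obtained.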
