News-20141021
Dr. Andrew Abel from the University of Stirling, UK, will visit CSLT from 10/23 to 10/28, as part of our NSFC-RSE joint research project.
Time: 2014/10/24, 10:00--11:00 AM Location: FIT 1-414
Cognitively Inspired Multimodal Speech Filtering
Dr Andrew Abel, University of Stirling, Scotland

In recent years, the established link between the various human communication production domains has become more widely utilised in the field of speech processing. Research has demonstrated that intelligently integrated audio and visual information can play a vital role in speech enhancement. The background and limitations of current hearing aid technology are discussed, and this forms the basis of a discussion of the cognitive aspects of speech, considering not simply the mechanical actions of hearing but also aspects such as body language, familiarity, and vision. Of particular interest is the potential use of visual information in future designs of hearing aid technology. A novel two-stage speech enhancement system, making use of audio-only beamforming, automatic lip tracking, and visually derived speech filtering, was initially developed and its potential evaluated. However, people make use of audio and visual cues only intermittently, and a speech filtering system may be presented with widely differing scenarios (intermittent audio and visual input, varying noise levels), so a single speech filtering approach may produce inadequate results. To alleviate this, a cognitively inspired, fuzzy-logic-based multimodal speech filtering system, which considers audio noise level and visual signal quality in order to carry out more intelligent, automated speech filtering, has been developed and evaluated. Finally, some ideas for future developments, taking into account more cognitive aspects of speech as part of a comprehensive speech processing system, are discussed.
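To make the fuzzy-logic control idea concrete, here is a minimal Python sketch of how an audio noise estimate and a visual quality score might be fuzzified and combined into a mixing weight between audio-only and visually derived filtering. This is not Dr Abel's implementation: the membership breakpoints, the two rules, and the visual_weight function are all illustrative assumptions.

def ramp_up(x, a, b):
    """Membership value: 0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def ramp_down(x, a, b):
    """Membership value: 1 below a, 0 above b, linear in between."""
    return 1.0 - ramp_up(x, a, b)

def visual_weight(snr_db, visual_quality):
    """Map an audio SNR estimate (dB) and a visual quality score (0..1)
    to a weight in [0, 1] for the visually derived filter.
    All breakpoints and rules are illustrative assumptions."""
    # Fuzzify the two inputs into membership values.
    snr_low  = ramp_down(snr_db, -5.0, 10.0)        # audio is noisy
    snr_high = ramp_up(snr_db, 0.0, 15.0)           # audio is clean
    vis_poor = ramp_down(visual_quality, 0.2, 0.7)  # lips poorly tracked
    vis_good = ramp_up(visual_quality, 0.2, 0.7)    # lips well tracked

    # Rules: lean on vision when audio is noisy AND vision is usable;
    # lean on audio-only filtering when audio is clean OR vision is poor.
    rule_visual = min(snr_low, vis_good)   # fires toward weight 1.0
    rule_audio  = max(snr_high, vis_poor)  # fires toward weight 0.0

    # Weighted-average defuzzification over singleton outputs {1.0, 0.0}.
    total = rule_visual + rule_audio
    return rule_visual / total if total > 0.0 else 0.5

# Noisy audio with a well-tracked speaker favors the visual filter;
# clean audio or an unreliable visual stream falls back to audio-only.
print(visual_weight(-5.0, 0.9))   # ~1.0
print(visual_weight(15.0, 0.2))   # ~0.0

The enhanced output would then be formed as weight * visual_filtered + (1 - weight) * audio_filtered; the weighted-average defuzzification gives a smooth crossfade between the two filtering modes rather than a hard switch, which matches the intermittent, varying-noise scenarios described above.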
Biography:
Dr Andrew Abel graduated from Stirling University in Scotland, UK, with First Class Honours in Computing Science in 2007. He received his PhD from the same university in 2012, after conducting pioneering research in the COSIPRA Laboratory, led by Prof. Amir Hussain, into developing cognitive multimodal signal-image processing algorithms for enabling next-generation multimodal hearing-aid and listening device technologies. His interdisciplinary research has led to a growing number of publications in leading journals and international conferences. Since 2012, he has been employed as a postdoctoral research assistant, working on electronics development and testing as part of the MEMS/CMOS microphone project (EPSRC Grant EP/G062609/1), focusing particularly on evaluation and experimentation with evolutionary microphone technologies. In addition, Dr Abel maintains an interest in signal and image processing, recently developing neuro-inspired techniques for speech segmentation, with a view to applying them in further cognitively inspired processing research.