| 2009 |
| 17 | EE | Shoko Araki, Tomohiro Nakatani, Hiroshi Sawada, Shoji Makino: Stereo Source Separation and Source Counting with MAP Estimation with Dirichlet Prior Considering Spatial Aliasing Problem. ICA 2009: 742-750 |
| 2008 |
| 16 | EE | Dorothea Kolossa, Shoko Araki, Marc Delcroix, Tomohiro Nakatani, Reinhold Orglmeister, Shoji Makino: Missing feature speech recognition in a meeting situation with maximum SNR beamforming. ISCAS 2008: 3218-3221 |
| 15 | EE | Tomohiro Nakatani, Biing-Hwang Juang, Takuya Yoshioka, Keisuke Kinoshita, Marc Delcroix, Masato Miyoshi: Speech Dereverberation Based on Maximum-Likelihood Estimation With Time-Varying Gaussian Source Model. IEEE Transactions on Audio, Speech & Language Processing 16(8): 1512-1527 (2008) |
| 14 | EE | Tomohiro Nakatani, Shigeaki Amano, Toshio Irino, Kentaro Ishizuka, Tadahisa Kondo: A method for fundamental frequency estimation and voicing decision: Application to infant utterances recorded in real acoustical environments. Speech Communication 50(3): 203-214 (2008) |
| 2007 |
| 13 | EE | Tomohiro Nakatani, Takafumi Hikichi, Keisuke Kinoshita, Takuya Yoshioka, Marc Delcroix, Masato Miyoshi, Biing-Hwang Juang: Robust blind dereverberation of speech signals based on characteristics of short-time speech segments. ISCAS 2007: 2986-2989 |
| 12 | EE | Biing-Hwang Juang, Tomohiro Nakatani: Joint Source-Channel Modeling and Estimation for Speech Dereverberation. ISCAS 2007: 2990-2993 |
| 11 | EE | Tomohiro Nakatani, Keisuke Kinoshita, Masato Miyoshi: Harmonicity-Based Blind Dereverberation for Single-Channel Speech Signals. IEEE Transactions on Audio, Speech & Language Processing 15(1): 80-95 (2007) |
| 2006 |
| 10 | EE | Kentaro Ishizuka, Tomohiro Nakatani: A feature extraction method using subband based periodicity and aperiodicity decomposition with noise robust frontend processing for automatic speech recognition. Speech Communication 48(11): 1447-1457 (2006) |
| 9 | EE | Tomohiro Nakatani, Masato Miyoshi, Keisuke Kinoshita: Blind dereverberation of monaural speech signals based on harmonic structure. Systems and Computers in Japan 37(6): 1-12 (2006) |
| 2005 |
| 8 | EE | Keisuke Kinoshita, Tomohiro Nakatani, Masato Miyoshi: Harmonicity Based Dereverberation for Improving Automatic Speech Recognition Performance and Speech Intelligibility. IEICE Transactions 88-A(7): 1724-1731 (2005) |
| 2004 |
| 7 | EE | Kazushi Ishihara, Tomohiro Nakatani, Tetsuya Ogata, Hiroshi G. Okuno: Automatic Sound-Imitation Word Recognition from Environmental Sounds Focusing on Ambiguity Problem in Determining Phonemes. PRICAI 2004: 909-918 |
| 2003 |
| 6 | EE | Tomohiro Nakatani, Masato Miyoshi, Keisuke Kinoshita: One Microphone Blind Dereverberation Based on Quasi-periodicity of Speech Signals. NIPS 2003 |
| 1998 |
| 5 | | Tomohiro Nakatani, Hiroshi G. Okuno: Sound Ontology for Computational Auditory Scene Analysis. AAAI/IAAI 1998: 1004-1010 |
| 1997 |
| 4 | | Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawabata: Understanding Three Simultaneous Speeches. IJCAI (1) 1997: 30-35 |
| 1996 |
| 3 | | Hiroshi G. Okuno, Tomohiro Nakatani, Takeshi Kawabata: Interfacing Sound Stream Segregation to Automatic Speech Recognition - Preliminary Results on Listening to Several Sounds Simultaneously. AAAI/IAAI, Vol. 2 1996: 1082-1089 |
| 1995 |
| 2 | | Tomohiro Nakatani, Hiroshi G. Okuno, Takeshi Kawabata: Residue-Driven Architecture for Computational Auditory Scene Analysis. IJCAI 1995: 165-174 |
| 1994 |
| 1 | | Tomohiro Nakatani, Hiroshi G. Okuno, Takeshi Kawabata: Auditory Stream Segregation in Auditory Scene Analysis with a Multi-Agent System. AAAI 1994: 100-107 |