2008
7. Takeshi Mizumoto, Ryu Takeda, Kazuyoshi Yoshii, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno: A robot listens to music and counts its beats aloud by separating music from counting voice. IROS 2008: 1538-1543
6. Kazumasa Murata, Kazuhiro Nakadai, Kazuyoshi Yoshii, Ryu Takeda, Toyotaka Torii, Hiroshi G. Okuno, Yuji Hasegawa, Hiroshi Tsujino: A robot uses its own microphone to synchronize its steps to musical beats while scatting and singing. IROS 2008: 2459-2464
5. Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno: An Efficient Hybrid Music Recommender System Using an Incrementally Trainable Probabilistic Generative Model. IEEE Transactions on Audio, Speech & Language Processing 16(2): 435-447 (2008)
2007
4. Kazuyoshi Yoshii, Kazuhiro Nakadai, Toyotaka Torii, Yuji Hasegawa, Hiroshi Tsujino, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno: A biped robot that keeps steps in time with musical beats while listening to music with its own ears. IROS 2007: 1743-1750
3. Kazuyoshi Yoshii, Masataka Goto, Hiroshi G. Okuno: Drum Sound Recognition for Polyphonic Audio Signals by Adaptation and Matching of Spectrogram Templates With Harmonic Structure Suppression. IEEE Transactions on Audio, Speech & Language Processing 15(1): 333-345 (2007)
2006
2. Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno: Hybrid Collaborative and Content-based Music Recommendation Using Probabilistic Model with Latent User Preferences. ISMIR 2006: 296-301
2004
1. Kazuyoshi Yoshii, Masataka Goto, Hiroshi G. Okuno: Automatic Drum Sound Description for Real-World Music Using Template Adaptation and Matching Methods. ISMIR 2004