2008
18. Kazuhiro Otsuka, Shoko Araki, Kentaro Ishizuka, Masakiyo Fujimoto, Martin Heinrich, Junji Yamato: A realtime multimodal system for analyzing group meetings by combining face pose tracking and speaker diarization. ICMI 2008: 257-264
17. Kazuhiro Otsuka, Junji Yamato: Fast and Robust Face Tracking for Analyzing Multiparty Face-to-Face Meetings. MLMI 2008: 14-25
2007
16. Shiro Kumano, Kazuhiro Otsuka, Junji Yamato, Eisaku Maeda, Yoichi Sato: Pose-Invariant Facial Expression Recognition Using Variable-Intensity Templates. ACCV (1) 2007: 324-334
15. Kazuhiro Otsuka, Hiroshi Sawada, Junji Yamato: Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances. ICMI 2007: 255-262
2006
14. Kazuhiro Otsuka, Junji Yamato, Yoshinao Takemae, Hiroshi Murase: Quantifying interpersonal influence in face-to-face conversations based on visual attention patterns. CHI Extended Abstracts 2006: 1175-1180
13. Kazuhiro Otsuka, Junji Yamato, Yoshinao Takemae, Hiroshi Murase: Conversation Scene Analysis with Dynamic Bayesian Network Based on Visual Head Tracking. ICME 2006: 949-952
2005
12. Yoshinao Takemae, Kazuhiro Otsuka, Junji Yamato: Automatic video editing system using stereo-based head tracking for multiparty conversation. CHI Extended Abstracts 2005: 1817-1820
11. Yoshinao Takemae, Kazuhiro Otsuka, Junji Yamato: Effects of Automatic Video Editing System Using Stereo-Based Head Tracking for Archiving Meetings. ICME 2005: 185-188
10. Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato: A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances. ICMI 2005: 191-198
9. Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato, Hiroshi Murase: Probabilistic Inference of Gaze Patterns and Structure of Multiparty Conversations from Head Directions and Utterances. JSAI Workshops 2005: 353-364
2004
8. Yoshinao Takemae, Kazuhiro Otsuka, Naoki Mukawa: Impact of video editing based on participants' gaze in multiparty conversation. CHI Extended Abstracts 2004: 1333-1336
7. Kazuhiro Otsuka, Naoki Mukawa: Multiview Occlusion Analysis for Tracking Densely Populated Objects Based on 2-D Visual Angles. CVPR (1) 2004: 90-97
6. Kazuhiro Otsuka, Naoki Mukawa: A Particle Filter for Tracking Densely Populated Objects Based on Explicit Multiview Occlusion Analysis. ICPR (4) 2004: 745-750
2003
5. Yoshinao Takemae, Kazuhiro Otsuka, Naoki Mukawa: Video cut editing rule based on participants' gaze in multiparty conversation. ACM Multimedia 2003: 303-306
2000
4. Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki, Haruhiko Kojima: Memory-Based Forecasting for Weather Image Patterns. AAAI/IAAI 2000: 330-336
1999
3. Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki, Haruhiko Kojima: Memory-Based Forecasting of Complex Natural Patterns by Retrieving Similar Image Sequences. ICIAP 1999: 874-
1998
2. Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki: Image Sequence Retrieval for Forecasting Weather Radar Echo Pattern. MVA 1998: 238-241
1997
1. Kazuhiro Otsuka, Tsutomu Horikoshi, Satoshi Suzuki: Image velocity estimation from trajectory surface in spatiotemporal space. CVPR 1997: 200-205