Future person localization in first-person videos

Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato: “Future Person Localization in First-Person Videos”, [arXiv preprint]

First-Person Vision with Multiple Wearable Cameras

First-person vision is an emerging topic in computer vision that makes use of wearable cameras (e.g., Google Glass, GoPro HERO) to solve a variety of vision problems. We envision a future where people wear such cameras as daily necessities, much like the smartphones most of us carry today, and we have been developing techniques and applications enabled by using multiple wearable cameras collectively.

Privacy-preserving visual learning

Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato: “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, IEEE International Conference on Computer Vision (ICCV2017), Venice, Italy, Oct 2017 [arXiv preprint]

Quickly scanning first-person videos

Keita Higuchi, Ryo Yonetani, Yoichi Sato: “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM Conference on Human Factors in Computing Systems (CHI2017), Denver, CO, USA, May 2017 [PDF]

Visual motif discovery

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Visual Motif Discovery via First-Person Vision”, European Conference on Computer Vision (ECCV2016), Amsterdam, Netherlands, Oct 2016 [PDF]

Ego-surfing first-person videos

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (in press) [arXiv preprint]

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015), Boston, MA, USA, Jun 2015 [CVPR2015 version] [Extended version (arXiv)]

Discovering objects of joint attention

Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: “Discovering Objects of Joint Attention via First-Person Sensing”, IEEE CVPR Workshop on Egocentric (First-Person) Vision (EGOV2016), Las Vegas, NV, USA, Jun 2016 [PDF]

Recognizing micro-actions in interaction

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016), Las Vegas, NV, USA, Jun 2016 [PDF] [errata]

How does gaze help remote collaboration?

Keita Higuchi, Ryo Yonetani, Yoichi Sato: “Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks”, ACM Conference on Human Factors in Computing Systems (CHI2016), San Jose, CA, USA, May 2016 [PDF]

Predicting where we look in videos

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Predicting Where We Look from Spatiotemporal Gaps”, ACM International Conference on Multimodal Interaction (ICMI2013), Sydney, Australia, Dec 2013

Recognizing attentive states from gaze

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Multi-mode Saliency Dynamics Model for Analyzing Gaze and Attention”, Eye Tracking Research & Applications (ETRA2012), Santa Barbara, CA, USA, Mar 2012
