Federated Learning with Heterogeneous Resources in Mobile Edge

Takayuki Nishio, Ryo Yonetani: “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge”, arXiv:1804.08333, 2018 [arXiv preprint]

Future Person Localization in First-Person Videos

Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato: “Future Person Localization in First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR, spotlight presentation), 2018 [arXiv preprint]

Privacy-Preserving Visual Learning with Homomorphic Encryption

Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato: “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Conference on Computer Vision (ICCV), 2017 [cvf page] [arXiv preprint]

Quickly Scanning First-Person Videos with Elastic Timeline

Keita Higuchi, Ryo Yonetani, Yoichi Sato: “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM Conference on Human Factors in Computing Systems (CHI), 2017 [pdf]

Discovering Scenes of Common Interest from First-Person Videos

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Visual Motif Discovery via First-Person Vision”, European Conference on Computer Vision (ECCV), 2016 [Springer] [preprint]

Person Localization in First-Person Videos with Ego-Motion Signatures

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), in press [arXiv preprint]

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015 [cvf page]

Discovering Objects of Joint Attention with Wearable Eye Trackers

Yifei Huang, Minjie Cai, Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: “Temporal Localization and Spatial Segmentation of Joint Attention in Multiple First-Person Videos”, International Workshop on Egocentric Perception, Interaction, and Computing (EPIC), 2017 [cvf page]

Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: “Discovering Objects of Joint Attention via First-Person Sensing”, IEEE CVPR Workshop on Egocentric (First-Person) Vision (EGOV), 2016 [cvf page]

Recognizing Micro-Actions from Paired First-Person Videos

Ryo Yonetani, Kris Kitani, Yoichi Sato: “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 [cvf page] [errata]

How Can Gaze Help Remote Collaboration?

Keita Higuchi, Ryo Yonetani, Yoichi Sato: “Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks”, ACM Conference on Human Factors in Computing Systems (CHI), 2016 [pdf]

Learning to Predict Where We Look in Videos

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Predicting Where We Look from Spatiotemporal Gaps”, International Conference on Multimodal Interaction (ICMI), 2013

Recognizing Attentive States from Gaze

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Multi-mode Saliency Dynamics Model for Analyzing Gaze and Attention”, Eye Tracking Research & Applications (ETRA), 2012
