Publications

DBLP | Google Scholar

Conference Papers

  • Takayuki Nishio and Ryo Yonetani: “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge”, arXiv:1804.08333, 2018 [project]
  • Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato: “Future Person Localization in First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR, spotlight presentation), 2018 [project]
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato: “Exploring the Role of Tunnel Vision Simulation in the Design Cycle of Accessible Interfaces”, International Cross-Disciplinary Conference on Web Accessibility (Web4All), 2018
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris Kitani, Yoichi Sato: “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Conference on Computer Vision (ICCV), 2017 [project]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato: “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM Conference on Human Factors in Computing Systems (CHI), 2017 [project]
  • Ryo Yonetani, Kris Kitani, Yoichi Sato: “Visual Motif Discovery via First-Person Vision”, European Conference on Computer Vision (ECCV), 2016 [project]
  • Ryo Yonetani, Kris Kitani, Yoichi Sato: “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 [project]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato: “Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks”, ACM Conference on Human Factors in Computing Systems (CHI), 2016 [project]
  • Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015 [project]
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Predicting Where We Look from Spatiotemporal Gaps”, International Conference on Multimodal Interaction (ICMI), 2013 [project]
  • Ryo Yonetani, Akisato Kimura, Hitoshi Sakano, Ken Fukuchi: “Single Image Segmentation with Estimated Depth”, British Machine Vision Conference (BMVC), 2012
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Multi-mode Saliency Dynamics Model for Analyzing Gaze and Attention”, Eye Tracking Research & Applications (ETRA), 2012 [project]
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: “Gaze Probing: Event-Based Estimation of Objects Being Focused On”, International Conference on Pattern Recognition (ICPR, IBM Best Student Paper Award), 2010

Journal Papers

  • Ryo Yonetani, Kris Kitani, Yoichi Sato: “Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures”, accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  • Kei Shimonishi, Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Learning Aspects of Interest from Gaze (IN JAPANESE)”, Human Interface, 16(2), pp. 103-114, 2014
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama: “Learning Spatiotemporal Gaps between Where We Look and What We Focus on”, IPSJ Transactions on Computer Vision and Applications, 5, pp. 75-79, 2013
  • Ryo Yonetani, Hiroaki Kawashima, Takekazu Kato, Takashi Matsuyama: “Modeling Saliency Dynamics for Viewer State Estimation (IN JAPANESE)”, IEICE Transactions on Information and Systems, J96-D(8), pp. 1675-1687, 2013
  • Akisato Kimura, Ryo Yonetani, Takatsugu Hirayama: “Computational Models of Human Visual Attention and Their Implementations: A Survey”, IEICE Transactions on Information and Systems, E96-D(3), pp. 562-578, 2013
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: “Mental Focus Analysis Using the Spatio-temporal Correlation between Visual Saliency and Eye Movements”, IPSJ Journal, 52(12), 2011
  • Erina Ishikawa, Ryo Yonetani, Takatsugu Hirayama, Takashi Matsuyama: “Analysis of Gaze Mirroring Effects (IN JAPANESE)”, IPSJ Journal, 52(12), pp. 3637-3646, 2011
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: “Gaze Probing: Event-based Estimation of Objects Being Focused on (IN JAPANESE)”, Human Interface, 12(3), pp. 125-135, 2010

Workshop Papers and Extended Abstracts

  • Navyata Sanghvi, Ryo Yonetani, Kris Kitani: “Learning Group Communication from Demonstration”, RSS Workshop on Models and Representations for Natural Human-Robot Communication, 2018
  • Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima: “Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos”, ACM Conference on Human Factors in Computing Systems Late-Breaking Work (CHI-LBW), 2018
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato: “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM SIGGRAPH Asia Emerging Technologies (SIGGRAPH-ASIA-ETECH), 2017
  • Yifei Huang, Minjie Cai, Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: “Temporal Localization and Spatial Segmentation of Joint Attention in Multiple First-Person Videos”, International Workshop on Egocentric Perception, Interaction, and Computing (EPIC), 2017
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato: “Rapid Prototyping of Accessible Interfaces with Gaze-Contingent Tunnel Vision Simulation”, ACM SIGACCESS International Conference on Computers and Accessibility (ASSETS), 2017
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris Kitani, Yoichi Sato: “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), 2017
  • Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato: “Discovering Objects of Joint Attention via First-Person Sensing”, IEEE CVPR Workshop on Egocentric (First-Person) Vision (EGOV), 2016
  • Kei Shimonishi, Hiroaki Kawashima, Ryo Yonetani, Erina Ishikawa, Takashi Matsuyama: “Learning Aspects of Interest from Gaze”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction (GazeIn), 2013
  • Ryo Yonetani: “Modeling Video Viewing Behaviors for Viewer State Estimation”, ACM Multimedia Doctoral Symposium (ACMMM-DS), 2012
  • Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama: “Semantic Interpretation of Eye Movements Using Designed Structures of Displayed Contents”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye Gaze and Multimodality (GazeIn), 2012