I am a Research Scientist at CyberAgent, working on machine learning, computer vision, and their integration with other fields such as robotics and HCI.

Twitter: @RYonetani / Google Scholar / GitHub

💡 Research Interests

  • Machine Learning: Neural Planner; Federated Learning; Transfer Learning
  • Computer Vision: First-Person Vision; Visual Forecasting; Human Sensing
  • Multi-Agent Systems: Multi-Agent Path Planning; Multi-Agent Reinforcement Learning

🎓 Education

  • Ph.D. in Informatics @ Kyoto University (2013.11)
  • M.S. in Informatics @ Kyoto University (2011.3)
  • B.E. in Electrical and Electronic Engineering @ Kyoto University (2009.3)

💼 Work Experience

  • Research Scientist at CyberAgent AI Lab (2023.4-Present)
  • Project Associate Professor at Hyper Vision Research Laboratory, Keio University (2024.4-Present)
  • Project Senior Assistant Professor at Hyper Vision Research Laboratory, Keio University (2021.4-2024.3)
  • Principal Investigator at OMRON SINIC X (2020.4-2023.3)
  • Senior Researcher at OMRON SINIC X (2019.1-2020.3)
  • Visiting Scholar at Carnegie Mellon University (2016.9-2017.8)
  • Assistant Professor at the University of Tokyo (2014.4-2018.12)

📄 Publications

Peer-reviewed papers

  • Hikaru Asano, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno, “Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2023 [paper] [project page]
  • Kazumi Kasaura, Shuwa Miura, Tadashi Kozuno, Ryo Yonetani, Kenta Hoshino, Yohei Hosoe, “Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control with Action Constraints”, IEEE Robotics and Automation Letters (RA-L), 2023 [paper] [project page]
  • Masafumi Endo, Tatsunori Taniai, Ryo Yonetani, Genya Ishigami, “Risk-aware Path Planning via Probabilistic Fusion of Traversability Prediction for Planetary Rovers on Heterogeneous Terrains”, International Conference on Robotics and Automation (ICRA), 2023 [paper] [project page]
  • Kazumi Kasaura, Ryo Yonetani, Mai Nishimura, “Periodic Multi-Agent Path Planning”, AAAI Conference on Artificial Intelligence (AAAI), 2023 [paper] [project page]
  • Kazumi Kasaura, Mai Nishimura, Ryo Yonetani, “Prioritized Safe Interval Path Planning for Multi-Agent Pathfinding With Continuous Time on 2D Roadmaps”, IEEE Robotics and Automation Letters (RA-L), 2022 [paper] [project page]
  • Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki, “CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022 [paper] [project page]
  • Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki, “Path Planning using Neural A* Search”, International Conference on Machine Learning (ICML), 2021 [paper] [project page]
  • Kazutoshi Tanaka, Ryo Yonetani, Masashi Hamaya, Robert Lee, Felix von Drigalski, Yoshihisa Ijiri, “TRANS-AM: Transfer Learning by Aggregating Dynamics Models for Soft Robotic Assembly”, International Conference on Robotics and Automation (ICRA), 2021 [paper] [project page]
  • Felix von Drigalski, Masashi Hayashi, Yifei Huang, Ryo Yonetani, Masashi Hamaya, Kazutoshi Tanaka, Yoshihisa Ijiri, “Precise Multi-Modal In-Hand Pose Estimation using Low-Precision Sensors for Robotic Assembly”, International Conference on Robotics and Automation (ICRA), 2021 [paper]
  • Hiroaki Minoura, Ryo Yonetani, Mai Nishimura, Yoshitaka Ushiku, “Crowd Density Forecasting by Modeling Patch-based Dynamics”, IEEE Robotics and Automation Letters (RA-L), 2020 [paper]
  • Jiaxin Ma, Ryo Yonetani, Zahid Iqbal, “Adaptive Distillation for Decentralized Learning from Heterogeneous Clients”, International Conference on Pattern Recognition (ICPR), 2020 [paper]
  • Mai Nishimura, Ryo Yonetani, “L2B: Learning to Balance the Safety-Efficiency Trade-off in Interactive Crowd-aware Robot Navigation”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020 [paper]
  • Mohammadamin Barekatain, Ryo Yonetani, Masashi Hamaya, “MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics”, International Joint Conference on Artificial Intelligence (IJCAI), 2020 [paper] [project page]
  • Rie Kamikubo, Naoya Kato, Keita Higuchi, Ryo Yonetani, Yoichi Sato, “Studying Effective Agents in Remote Sighted Guidance for People Navigating with Visual Impairments”, ACM Conference on Human Factors in Computing Systems (CHI), 2020
  • Navyata Sanghvi, Ryo Yonetani, Kris Kitani, “Modeling Social Group Communication with Multi-Agent Imitation Learning”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2020 [paper]
  • Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani, “Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data”, IEEE International Conference on Communications (ICC), 2020 [paper]
  • Takayuki Nishio, Ryo Yonetani, “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge”, IEEE International Conference on Communications (ICC), 2019 [paper]
  • Nathawan Charoenkulvanich, Rie Kamikubo, Ryo Yonetani, Yoichi Sato, “Assisting Group Activity Analysis through Hand Detection and Identification in Multiple Egocentric Videos”, ACM Conference on Intelligent User Interfaces (IUI), 2019
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Vol. 40, Issue 11, pp. 2749-2761, 2018 [paper]
  • Yuki Sugita, Keita Higuchi, Ryo Yonetani, Rie Kamikubo, Yoichi Sato, “Browsing Group First-Person Videos with 3D Visualization”, ACM International Conference on Interactive Surfaces and Spaces (ISS), 2018
  • Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato, “Future Person Localization in First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR, spotlight presentation), 2018 [paper] [code]
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato, “Exploring the Role of Tunnel Vision Simulation in the Design Cycle of Accessible Interfaces”, International Cross-Disciplinary Conference on Web Accessibility (Web4All), 2018
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato, “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Conference on Computer Vision (ICCV), 2017 [paper]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM Conference on Human Factors in Computing Systems (CHI), 2017
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Visual Motif Discovery via First-Person Vision”, European Conference on Computer Vision (ECCV), 2016 [paper]
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 [paper]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks”, ACM Conference on Human Factors in Computing Systems (CHI), 2016
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Ego-Surfing First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015 [paper]
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama, “Predicting Where We Look from Spatiotemporal Gaps”, International Conference on Multimodal Interaction (ICMI), 2013
  • Ryo Yonetani, Akisato Kimura, Hitoshi Sakano, Ken Fukuchi, “Single Image Segmentation with Estimated Depth”, British Machine Vision Conference (BMVC), 2012
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama, “Multi-mode Saliency Dynamics Model for Analyzing Gaze and Attention”, Eye Tracking Research & Applications (ETRA), 2012
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama, “Gaze Probing: Event-Based Estimation of Objects Being Focused On”, International Conference on Pattern Recognition (ICPR, IBM Best Student Paper Award), 2010

Workshop papers, extended abstracts, and preprints

  • Toshinori Kitamura, Ryo Yonetani, “ShinRL: A Library for Evaluating RL Algorithms from Theoretical and Practical Perspectives”, NeurIPS Deep RL Workshop, 2021 [paper] [code]
  • Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, Yoshitaka Ushiku, “Decentralized Learning of Generative Adversarial Networks from Multi-Client Non-iid Data”, arXiv preprint, 2019 [paper]
  • Navyata Sanghvi, Ryo Yonetani, Kris Kitani, “Learning Group Communication from Demonstration”, RSS Workshop on Models and Representations for Natural Human-Robot Communication, 2018
  • Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima, “Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos”, ACM Conference on Human Factors in Computing Systems Late-Breaking Work (CHI-LBW), 2018
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM SIGGRAPH Asia Emerging Technologies (SIGGRAPH-ASIA-ETECH), 2017
  • Yifei Huang, Minjie Cai, Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato, “Temporal Localization and Spatial Segmentation of Joint Attention in Multiple First-Person Video”, International Workshop on Egocentric Perception, Interaction, and Computing (EPIC), 2017
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato, “Rapid Prototyping of Accessible Interfaces with Gaze-Contingent Tunnel Vision Simulation”, ACM SIGACCESS International Conference on Computers and Accessibility (ASSETS), 2017
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris Kitani, Yoichi Sato, “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), 2017
  • Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato, “Discovering Objects of Joint Attention via First-Person Sensing”, IEEE CVPR Workshop on Egocentric (First-Person) Vision (EGOV), 2016
  • Kei Shimonishi, Hiroaki Kawashima, Ryo Yonetani, Erina Ishikawa, Takashi Matsuyama, “Learning Aspects of Interest from Gaze”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction (GazeIn), 2013
  • Ryo Yonetani, “Modeling Video Viewing Behaviors for Viewer State Estimation”, ACM Multimedia Doctoral Symposium (ACMMM-DS), 2012
  • Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama, “Semantic Interpretation of Eye Movements Using Designed Structures of Displayed Contents”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye Gaze, Multimodality (GazeIn), 2012

Academic activities

Awards

  • Outstanding Reviewer @ CVPR 2017
  • IBM Best Student Paper Award (Track IV: Biometrics and Human Computer Interaction) @ ICPR 2010

Reviewer experience

  • Computer vision: CVPR, ICCV, ECCV, BMVC, ACCV, WACV, PAMI, IJCV
  • Robotics: ICRA, IROS
  • Machine learning and AI: IJCAI, AAAI, NeurIPS, ICML, ICLR
  • Others: CHI, ICPR