I am a Senior Research Scientist leading the Activity Understanding Team at CyberAgent AI Lab. My expertise spans machine learning, robotics, and computer vision, with a particular focus on human activity sensing and understanding. I am also a Project Associate Professor at Hyper Vision Research Laboratory, Keio University.

X: @RYonetani / Google Scholar / GitHub

💡 Research Interests

  • Machine Learning: Neural Planner; Federated Learning; Transfer Learning
  • Wearable Sensing: Human Sensing; First-Person Vision; Visual Forecasting
  • Multi-Agent Systems: Multi-Agent Path Planning; Multi-Agent Reinforcement Learning

🎓 Education

  • 2013: Ph.D. in Informatics @ Kyoto University
  • 2011: M.S. in Informatics @ Kyoto University
  • 2009: B.E. in Electrical and Electronic Engineering @ Kyoto University

💼 Work Experience

  • 2023-Present: Senior Research Scientist at CyberAgent AI Lab
  • 2024-Present: Project Associate Professor at Hyper Vision Research Laboratory, Keio University
  • 2021-2024: Project Senior Assistant Professor at Hyper Vision Research Laboratory, Keio University
  • 2020-2023: Principal Investigator at OMRON SINIC X
  • 2019-2020: Senior Researcher at OMRON SINIC X
  • 2016-2017: Visiting Scholar at Carnegie Mellon University
  • 2014-2018: Assistant Professor at the University of Tokyo

📄 Publications

Peer-reviewed papers

  • Akira Kasuga, Ryo Yonetani, “CXSimulator: A User Behavior Simulation using LLM Embeddings for Web-Marketing Campaign Assessment”, ACM International Conference on Information and Knowledge Management (CIKM), 2024 [paper]
  • Hikaru Asano, Ryo Yonetani, Taiki Sekii, Hiroki Ouchi, “Text2Traj2Text: Learning-by-Synthesis Framework for Contextual Captioning of Human Movement Trajectories”, International Natural Language Generation Conference (INLG), 2024 [paper] [code]
  • Ryo Yonetani, Jun Baba, Yasutaka Furukawa, “RetailOpt: Opt-In, Easy-to-Deploy Trajectory Estimation from Smartphone Motion Data and Retail Facility Information”, ACM International Symposium on Wearable Computing (ISWC) Notes, 2024 [paper] (Best Paper Honorable Mention)
  • Kohei Honda, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno, “When to Replan? An Adaptive Replanning Strategy for Autonomous Navigation Using Deep Reinforcement Learning”, International Conference on Robotics and Automation (ICRA), 2024 [paper] [project page]
  • Hikaru Asano, Ryo Yonetani, Mai Nishimura, Tadashi Kozuno, “Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2023 [paper] [project page]
  • Kazumi Kasaura, Shuwa Miura, Tadashi Kozuno, Ryo Yonetani, Kenta Hoshino, Yohei Hosoe, “Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control with Action Constraints”, IEEE Robotics and Automation Letters (RA-L), 2023 [paper] [project page]
  • Masafumi Endo, Tatsunori Taniai, Ryo Yonetani, Genya Ishigami, “Risk-aware Path Planning via Probabilistic Fusion of Traversability Prediction for Planetary Rovers on Heterogeneous Terrains”, International Conference on Robotics and Automation (ICRA), 2023 [paper] [project page]
  • Kazumi Kasaura, Ryo Yonetani, Mai Nishimura, “Periodic Multi-Agent Path Planning”, AAAI Conference on Artificial Intelligence (AAAI), 2023 [paper] [project page]
  • Kazumi Kasaura, Mai Nishimura, Ryo Yonetani, “Prioritized Safe Interval Path Planning for Multi-Agent Pathfinding With Continuous Time on 2D Roadmaps”, IEEE Robotics and Automation Letters (RA-L), 2022 [paper] [project page]
  • Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki, “CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022 [paper] [project page]
  • Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, Asako Kanezaki, “Path Planning using Neural A* Search”, International Conference on Machine Learning (ICML), 2021 [paper] [project page]
  • Kazutoshi Tanaka, Ryo Yonetani, Masashi Hamaya, Robert Lee, Felix von Drigalski, Yoshihisa Ijiri, “TRANS-AM: Transfer Learning by Aggregating Dynamics Models for Soft Robotic Assembly”, International Conference on Robotics and Automation (ICRA), 2021 [paper] [project page]
  • Felix von Drigalski, Masashi Hayashi, Yifei Huang, Ryo Yonetani, Masashi Hamaya, Kazutoshi Tanaka, Yoshihisa Ijiri, “Precise Multi-Modal In-Hand Pose Estimation using Low-Precision Sensors for Robotic Assembly”, International Conference on Robotics and Automation (ICRA), 2021 [paper]
  • Hiroaki Minoura, Ryo Yonetani, Mai Nishimura, Yoshitaka Ushiku, “Crowd Density Forecasting by Modeling Patch-based Dynamics”, IEEE Robotics and Automation Letters (RA-L), 2020 [paper]
  • Jiaxin Ma, Ryo Yonetani, Zahid Iqbal, “Adaptive Distillation for Decentralized Learning from Heterogeneous Clients”, International Conference on Pattern Recognition (ICPR), 2020 [paper]
  • Mai Nishimura, Ryo Yonetani, “L2B: Learning to Balance the Safety-Efficiency Trade-off in Interactive Crowd-aware Robot Navigation”, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020 [paper]
  • Mohammadamin Barekatain, Ryo Yonetani, Masashi Hamaya, “MULTIPOLAR: Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics”, International Joint Conference on Artificial Intelligence (IJCAI), 2020 [paper] [project page]
  • Rie Kamikubo, Naoya Kato, Keita Higuchi, Ryo Yonetani, Yoichi Sato, “Studying Effective Agents in Remote Sighted Guidance for People Navigating with Visual Impairments”, ACM Conference on Human Factors in Computing Systems (CHI), 2020
  • Navyata Sanghvi, Ryo Yonetani, Kris Kitani, “Modeling Social Group Communication with Multi-Agent Imitation Learning”, International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2020 [paper]
  • Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani, “Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data”, IEEE International Conference on Communications (ICC), 2020 [paper]
  • Takayuki Nishio, Ryo Yonetani, “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge”, IEEE International Conference on Communications (ICC), 2019 [paper]
  • Nathawan Charoenkulvanich, Rie Kamikubo, Ryo Yonetani, Yoichi Sato, “Assisting Group Activity Analysis through Hand Detection and Identification in Multiple Egocentric Videos”, ACM Conference on Intelligent User Interfaces (IUI), 2019
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Ego-Surfing: Person Localization in First-Person Videos Using Ego-Motion Signatures”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Vol.40, Issue 11, pp.2749-2761, 2018 [paper]
  • Yuki Sugita, Keita Higuchi, Ryo Yonetani, Rie Kamikubo, Yoichi Sato, “Browsing Group First-Person Videos with 3D Visualization”, ACM International Conference on Interactive Surfaces and Spaces (ISS), 2018
  • Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato, “Future Person Localization in First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR, spotlight presentation), 2018 [paper] [code]
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato, “Exploring the Role of Tunnel Vision Simulation in the Design Cycle of Accessible Interfaces”, International Cross-Disciplinary Conference on Web Accessibility (Web4All), 2018
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, Yoichi Sato, “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Conference on Computer Vision (ICCV), 2017 [paper]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM Conference on Human Factors in Computing Systems (CHI), 2017
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Visual Motif Discovery via First-Person Vision”, European Conference on Computer Vision (ECCV), 2016 [paper]
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 [paper]
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks”, ACM Conference on Human Factors in Computing Systems (CHI), 2016
  • Ryo Yonetani, Kris Kitani, Yoichi Sato, “Ego-Surfing First-Person Videos”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015 [paper]
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama, “Predicting Where We Look from Spatiotemporal Gaps”, International Conference on Multimodal Interaction (ICMI), 2013
  • Ryo Yonetani, Akisato Kimura, Hitoshi Sakano, Ken Fukuchi, “Single Image Segmentation with Estimated Depth”, British Machine Vision Conference (BMVC), 2012
  • Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama, “Multi-mode Saliency Dynamics Model for Analyzing Gaze and Attention”, Eye Tracking Research & Applications (ETRA), 2012
  • Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama, “Gaze Probing: Event-Based Estimation of Objects Being Focused On”, International Conference on Pattern Recognition (ICPR, IBM Best Student Paper Award), 2010

Workshop papers, extended abstracts, and preprints

  • Toshinori Kitamura, Ryo Yonetani, “ShinRL: A Library for Evaluating RL Algorithms from Theoretical and Practical Perspectives”, NeurIPS Deep RL Workshop, 2021 [paper] [code]
  • Ryo Yonetani, Tomohiro Takahashi, Atsushi Hashimoto, Yoshitaka Ushiku, “Decentralized Learning of Generative Adversarial Networks from Multi-Client Non-iid Data”, arXiv preprint, 2019 [paper]
  • Navyata Sanghvi, Ryo Yonetani, Kris Kitani, “Learning Group Communication from Demonstration”, RSS Workshop on Models and Representations for Natural Human-Robot Communication, 2018
  • Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima, “Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos”, ACM Conference on Human Factors in Computing Systems Late Breaking Work (CHI-LBW), 2018
  • Keita Higuchi, Ryo Yonetani, Yoichi Sato, “EgoScanning: Quickly Scanning First-Person Videos with Egocentric Elastic Timelines”, ACM SIGGRAPH Asia Emerging Technologies (SIGGRAPH-ASIA-ETECH), 2017
  • Yifei Huang, Minjie Cai, Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato, “Temporal Localization and Spatial Segmentation of Joint Attention in Multiple First-Person Videos”, International Workshop on Egocentric Perception, Interaction, and Computing (EPIC), 2017
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato, “Rapid Prototyping of Accessible Interfaces with Gaze-Contingent Tunnel Vision Simulation”, ACM SIGACCESS International Conference on Computers and Accessibility (ASSETS), 2017
  • Ryo Yonetani, Vishnu Naresh Boddeti, Kris Kitani, Yoichi Sato, “Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption”, International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), 2017
  • Hiroshi Kera, Ryo Yonetani, Keita Higuchi, Yoichi Sato, “Discovering Objects of Joint Attention via First-Person Sensing”, IEEE CVPR Workshop on Egocentric (First-Person) Vision (EGOV), 2016
  • Kei Shimonishi, Hiroaki Kawashima, Ryo Yonetani, Erina Ishikawa, Takashi Matsuyama, “Learning Aspects of Interest from Gaze”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction (GazeIn), 2013
  • Ryo Yonetani, “Modeling Video Viewing Behaviors for Viewer State Estimation”, ACM Multimedia Doctoral Symposium (ACMMM-DS), 2012
  • Erina Ishikawa, Ryo Yonetani, Hiroaki Kawashima, Takatsugu Hirayama, Takashi Matsuyama, “Semantic Interpretation of Eye Movements Using Designed Structures of Displayed Contents”, ICMI Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye Gaze, Multimodality (GazeIn), 2012

Academic activities

Awards

  • Best Paper Honorable Mention, ACM International Symposium on Wearable Computing (ISWC) Notes and Briefs, 2024
  • Outstanding Reviewer, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017
  • IBM Best Student Paper Award (Track IV: Biometrics and Human Computer Interaction), International Conference on Pattern Recognition, 2010

Chair experience