Ryo Yonetani, Ph.D.

Senior Researcher at OMRON SINIC X Corporation

Cooperative Research Fellow at the Sato Lab, Institute of Industrial Science, The University of Tokyo

Email: ryo.yonetani +at+ sinicx.com

» Introduction in Japanese


Research Interests

Computer Vision (First-Person Vision) and Machine Learning (Federated Learning)

I like to use wearable cameras for a variety of human sensing problems. Much of my recent work focuses on how multiple wearable cameras can be used collectively to analyze group activities.

» Project: First-Person Vision with Multiple Wearable Cameras

» Datasets for First-Person Vision

I am also interested in decentralized training methods such as Federated Learning, which leverage the distributed data and computational resources of multiple client devices to enable large-scale training.

» Project: Practical Considerations of Decentralized Training
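
For readers unfamiliar with the idea, here is a minimal sketch of one round of federated averaging in the spirit of FedAvg: each client trains locally on its private data, and the server only aggregates model weights. This is illustrative only, not code from any of the projects above; the logistic-regression client model, the function names, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training: logistic-regression SGD on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))        # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)  # mean log-loss gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: clients train locally, then the
    server averages their weights in proportion to local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes / sizes.sum())

# Toy run: three clients with private (synthetic) data.
# Only model weights ever leave each client; raw data stays local.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
print("global weights after 10 rounds:", w)
```

Real systems layer much more on top of this loop (client selection, communication constraints, privacy mechanisms), which is exactly where the practical considerations studied in the project above come in.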

News

  • (June 14, 2019) We will present our recent work on decentralized GANs at the CV-COPS workshop in conjunction with CVPR’19. Poster here.
  • (May 24, 2019) New work on decentralized learning of GANs is now available on arXiv.
  • (March 6, 2019) New work on multi-agent imitation learning is available on arXiv.
  • (February 7, 2019) Paper on Federated Learning was accepted to the IEEE International Conference on Communications (ICC).
  • (January 12, 2019) I will serve as a publicity chair at ACCV 2020. See you in Kyoto!
  • (January 1, 2019) After five wonderful years at UTokyo, I joined OMRON SINIC X as a senior researcher.
  • (December 16, 2018) One paper accepted by ACM IUI.
  • (October 4, 2018) One paper accepted by ACM ISS.
  • (June 26, 2018) Released demo code for the correlation-based person search presented in our TPAMI paper.
  • (June 19, 2018) We will organize a workshop on human attention and intention understanding in conjunction with ACCV’18. See you in Perth! [Workshop Website].
  • (April 24, 2018) We posted a new article to introduce our activities on decentralized training.
  • (April 24, 2018) We posted a new work on federated learning combined with mobile edge computing (project page).
  • (April 17, 2018) We have released part of the data used in our ECCV’16 work at Datasets for First-Person Vision.
  • (February 19, 2018) Our work on future localization has been accepted to CVPR’18! (Project page).
  • (February 19, 2018) One paper accepted to CHI’18 Late Breaking Work.
  • (February 16, 2018) I will serve as a program committee member for the CVPR2018 workshop on Challenges and Opportunities for Privacy and Security.
  • (January 10, 2018) We have updated [PEV dataset], a dataset of first-person videos recorded during dyadic conversations presented at CVPR’16.
  • (December 1, 2017) We posted a new work on future localization in first-person videos on arXiv (Project page).
  • (October 24, 2017) Our paper on people identification for first-person videos has been accepted to IEEE TPAMI.
  • (August 28, 2017) Our paper on multi-camera first-person vision has been accepted to an ICCV workshop.
  • (July 16, 2017) Our paper on privacy-preserving visual learning has been accepted to ICCV2017.
  • (May 31, 2017) We will present our work on privacy-preserving visual learning at the CVPR workshop on Challenges and Opportunities for Privacy and Security.
  • (December 11, 2016) Our paper has been accepted to CHI2017.
  • (November 19, 2016) We will be organizing the First International Workshop on Human Activity Analysis with Highly Diverse Cameras at WACV2017.
  • (September 1, 2016) I will work as a visiting scholar at the CMU Robotics Institute for a year.
  • (July 12, 2016) Our paper has been accepted to ECCV2016.
  • (June 16, 2016) The extended version of our CVPR2015 work (ego-surfing first-person videos) is now available on arXiv.
  • (May 17, 2016) I will be serving as an organizing committee member for IAPR MVA2017.
  • (April 20, 2016) Our paper has been accepted to EGOV2016.
  • (March 6, 2016) Our paper has been accepted to CVPR2016.
  • (January 18, 2016) I will be serving as a sponsorship chair for ICMI2016.
  • (January 15, 2016) Our paper has been accepted to CHI2016.