Ryo Yonetani, Ph.D.

Research Associate in the Institute of Industrial Science at the University of Tokyo

Technical Advisor at OMRON SINIC X Corporation (effective June 21, 2018)

mail: yonetani +at+ iis.u-tokyo.ac.jp | ryo.yonetani +at+ sinicx.com

» Introduction in Japanese


Call for Interns at OMRON SINIC X

OMRON SINIC X Corporation (OSX) is OMRON’s brand-new start-up company located next to the University of Tokyo’s Hongo Campus. We are looking for intern students with strong backgrounds in computer vision, machine learning, or robotics. Accepted students will be offered a base salary of 200k-400k JPY per month (full-time), with housing and transport fully covered. If you are interested in a research stay in Tokyo, please feel free to contact internships@sinicx.com. More information is available at: [Call for Interns].


Research Interest

Computer Vision (First-Person Vision) and Machine Learning (Federated Learning)

I like to use wearable cameras for various human sensing problems. Much of my recent work has focused on how multiple wearable cameras can be used collectively to analyze group activities.

» Project: First-Person Vision with Multiple Wearable Cameras

» Datasets for First-Person Vision

I am also interested in decentralized training, such as Federated Learning, which leverages highly distributed clients to train deep neural networks while preserving data privacy.

» Project: Practical Considerations of Decentralized Training
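For readers unfamiliar with the setting, the sketch below illustrates the basic federated averaging loop behind this line of work: clients train locally on their own data, and the server only aggregates model parameters, so raw data never leaves a client. The linear model, synthetic client data, and hyperparameters are illustrative assumptions and do not correspond to any specific paper.

```python
# Minimal sketch of a federated averaging round (FedAvg-style).
# The model, clients, and data are hypothetical placeholders; real systems
# add client sampling, secure aggregation, compression, etc.
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear model with squared loss, using only that client's data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, client_datasets):
    """Server broadcasts the global model, collects locally updated weights,
    and averages them weighted by each client's dataset size."""
    updates, sizes = [], []
    for features, labels in client_datasets:
        updates.append(local_update(global_weights, features, labels))
        sizes.append(len(labels))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])
    # Synthetic data split across three clients; only weights are shared.
    clients = []
    for n in (50, 80, 30):
        X = rng.normal(size=(n, 3))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))
    w = np.zeros(3)
    for _ in range(20):
        w = federated_round(w, clients)
    print("estimated weights:", w)
```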

News

  • (December 16, 2018) One paper accepted to ACM IUI.
  • (October 4, 2018) One paper accepted to ACM ISS.
  • (June 26, 2018) Released demo code for the correlation-based person search presented in our TPAMI paper.
  • (June 19, 2018) We will organize a workshop on human attention and intention understanding in conjunction with ACCV’18. See you in Perth! [Workshop Website].
  • (April 24, 2018) We posted a new article introducing our activities on decentralized training.
  • (April 24, 2018) We posted a new work on federated learning combined with mobile edge computing (project page).
  • (April 17, 2018) We have released part of the data used in our ECCV’16 work at Datasets for First-Person Vision.
  • (February 19, 2018) Our work on future localization has been accepted to CVPR’18! (Project page).
  • (February 19, 2018) One paper accepted to CHI’18 Late Breaking Work.
  • (February 16, 2018) I will serve as a program committee member for the CVPR2018 workshop on Challenges and Opportunities for Privacy and Security.
  • (January 10, 2018) We have updated [PEV dataset], a dataset of first-person videos recorded during dyadic conversations presented at CVPR’16.
  • (December 1, 2017) We posted a new work on future localization in first-person videos on arXiv (Project page).
  • (October 24, 2017) Our paper on people identification for first-person videos has been accepted to IEEE TPAMI.
  • (August 28, 2017) Our paper on multi-camera first-person vision has been accepted to an ICCV workshop.
  • (July 16, 2017) Our paper on privacy-preserving visual learning has been accepted to ICCV2017.
  • (May 31, 2017) We will present our work on privacy-preserving visual learning at the CVPR workshop on Challenges and Opportunities for Privacy and Security.
  • (December 11, 2016) Our paper has been accepted to CHI2017.
  • (November 19, 2016) We will be organizing the First International Workshop on Human Activity Analysis with Highly Diverse Cameras at WACV2017.
  • (September 1, 2016) I will work as a visiting scholar at the CMU Robotics Institute for a year.
  • (July 12, 2016) Our paper has been accepted to ECCV2016.
  • (June 16, 2016) The extended version of our CVPR2015 work (ego-surfing first-person videos) is now available on arXiv.
  • (May 17, 2016) I will be serving as an organizing committee member for IAPR MVA2017.
  • (April 20, 2016) Our paper has been accepted to EGOV2016.
  • (March 6, 2016) Our paper has been accepted to CVPR2016.
  • (January 18, 2016) I will be serving as a sponsorship chair for ICMI2016.
  • (January 15, 2016) Our paper has been accepted to CHI2016.