
Biography

Lei Shi has been a post-doctoral researcher in the Perceptual User Interfaces group since March 2022. He received his PhD from the University of Antwerp, Belgium, where he was a member of the InViLab research group. He holds a master's degree (cum laude) in Mechatronics from Tallinn University of Technology, Estonia, and a bachelor's degree in Electrical Engineering and Automation from Shanghai Maritime University, China. His research interests include eye tracking, intention prediction, and graph neural networks.


Teaching

2022
Machine Perception and Learning (Tutor), summer semester (Master)

Machine Perception and Learning (Teaching Assistant), winter semester (Master)

Open Thesis Projects

 
Evaluating Embedding Methods of Action in Goal Prediction Task (Bachelor/Master)

Next Question Prediction in Goal-oriented Visual Question Answering by a Theory of Mind Model (Bachelor/Master)

Inferring Other Agents' Goal in Collaborative Environments Using Graphs (Bachelor)

Publications

  1. Improving Neural Saliency Prediction with a Cognitive Model of Human Visual Attention

    Ekta Sood, Lei Shi, Matteo Bortoletto, Yao Wang, Philipp Müller, Andreas Bulling

Proc. 45th Annual Meeting of the Cognitive Science Society (CogSci), pp. 3639–3646, 2023.


  2. Exploring Natural Language Processing Methods for Interactive Behaviour Modelling

    Guanhua Zhang, Matteo Bortoletto, Zhiming Hu, Lei Shi, Mihai Bâce, Andreas Bulling

    Proc. IFIP TC13 Conference on Human-Computer Interaction (INTERACT), pp. 1–22, 2023.


  3. Evaluating Dropout Placements in Bayesian Regression Resnet

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Journal of Artificial Intelligence and Soft Computing Research, 12(1), pp. 61–73, 2022.


  4. Gaze Gesture Recognition by Graph Convolutional Networks

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Frontiers in Robotics and AI, 8, 2021.


  5. GazeEMD: Detecting Visual Intention in Gaze-Based Human-Robot Interaction

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Robotics, 10(2), pp. 1–18, 2021.


  6. A Bayesian Deep Neural Network for Safe Visual Servoing in Human–Robot Interaction

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Frontiers in Robotics and AI, 8, pp. 1–13, 2021.


  7. Visual Intention Classification by Deep Learning for Gaze-based Human-Robot Interaction

    Lei Shi, Cosmin Copot, Steve Vanlanduit

IFAC-PapersOnLine, 53(5), pp. 750–755, 2020.


  8. A Deep Regression Model for Safety Control in Visual Servoing Applications

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Proc. IEEE International Conference on Robotic Computing (IRC), pp. 360–366, 2020.


  9. A Performance Analysis of Invariant Feature Descriptors in Eye Tracking based Human Robot Collaboration

    Lei Shi, Cosmin Copot, Stijn Derammelaere, Steve Vanlanduit

    Proc. International Conference on Control, Automation and Robotics (ICCAR), pp. 256–260, 2019.


  10. Application of Visual Servoing and Eye Tracking Glass in Human Robot Interaction: A case study

    Lei Shi, Cosmin Copot, Steve Vanlanduit

    Proc. International Conference on System Theory, Control and Computing (ICSTCC), pp. 515–520, 2019.


  11. Automatic Tuning Methodology of Visual Servoing System Using Predictive Approach

    Cosmin Copot, Lei Shi, Steve Vanlanduit

    Proc. IEEE International Conference on Control and Automation (ICCA), pp. 776–781, 2019.
