In the context of Moonshot Goal 1, robotic avatars are envisioned as tools for enhanced autonomy, accessibility, and superhuman action. However, user acceptance and control efficiency depend not only on performance, but also on the felt experience of control and on the user's trust in the system. A diminished sense of agency leads to loss of trust, disengagement, and system rejection.
This workshop addresses a pressing gap: how to design, evaluate, and prototype avatar control systems that support a robust sense of agency, especially under constraints of latency, AI assistance, or limited embodiment, from a multidisciplinary and transdisciplinary perspective.
Aim & Scope
This workshop explores how the Sense of Agency (SoA), the perceptual feeling of being in control of one's actions and their effects, shapes the experience, usability, and adoption of robotic avatars.
Robotic avatars are intelligent physical or virtual bodies that users can project themselves into, enabling remote action, communication, or augmentation beyond biological limitations. The workshop connects cognitive neuroscience with robotics, presenting SoA as a measurable and designable quality that determines whether users identify with and effectively use robotic embodiments.
Key Topics
Perceptual feedback and synchronization in avatar control
Measuring and designing sense of agency
Trust in post-biological control systems
Latency compensation and AI assistance
Multimodal interfaces for robotic embodiment
Workshop Structure
This is a hands-on workshop in which participants will experiment with, engineer, and experience differences in perceived sense of agency under varying control parameters.
The workshop is structured into three cohesive sessions, balancing theory, experimentation, and synthesis:
Session 1: Theoretical Foundations of Avatar Agency (60 minutes)
Invited Talk 1: The Neuroscience of Agency in Embodied Systems (TENTATIVE TITLE)
Invited Talk 2: Liberating the Body: Cognitive Symbiosis with Robotic Agents (TENTATIVE TITLE)
Talk + Live Demo: A ROS 2 Platform for Synchronized Multimodal Data Replay and Perceptual Prototyping
Session 2: Hands-On Prototyping
Participants self‑organize into teams and select from three tracks, designed for scalability and inclusivity:
Track A: GUI‑based simulation of action–effect latency and visual/motor distortion
Track B: ROS 2 node tuning using pre‑recorded EMG, motion, and video data
Track C: Conceptual design of robotic avatar systems with enhanced perceptual feedback
All tracks aim to explore how system‑level design choices affect perceived control and embodiment.
A modular ROS 2-based platform will be available for participants to download and use to create, modify,
and experiment first-hand with a user-avatar system. Coding skills are welcome but not required.
Bringing your own laptop for experimenting with the platform is strongly encouraged.
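To give a flavor of the kind of manipulation Track A targets, the sketch below shows one minimal, hypothetical way to inject a controllable action–effect delay into a user–avatar control loop. It is not part of the workshop platform; the class name and tick-based interface are illustrative assumptions, chosen so the idea stays self-contained without ROS 2 dependencies.

```python
from collections import deque


class LatencyBuffer:
    """Delays action -> effect feedback by a fixed number of control
    ticks, emulating transmission latency in a user-avatar loop.

    Hypothetical sketch for illustration; the real workshop platform
    exposes this kind of parameter through ROS 2 nodes instead.
    """

    def __init__(self, delay_ticks: int):
        self.delay_ticks = delay_ticks
        self._queue = deque()  # (release_tick, command) pairs, in order

    def push(self, tick: int, command: str) -> list:
        """Register a command issued at `tick` and return every earlier
        command whose delay has now elapsed (i.e. the visible effects)."""
        self._queue.append((tick + self.delay_ticks, command))
        released = []
        while self._queue and self._queue[0][0] <= tick:
            released.append(self._queue.popleft()[1])
        return released


# Example: with a 3-tick delay, the effect of "grasp" only becomes
# visible once the loop reaches tick 3.
buf = LatencyBuffer(delay_ticks=3)
print(buf.push(0, "grasp"))    # []  (nothing released yet)
print(buf.push(3, "release"))  # ['grasp']
```

Sweeping `delay_ticks` while a user operates such a loop is the simplest experimental knob for degrading the sense of agency: the action–effect mapping stays correct, only its timing shifts.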
Session 3: Presentations and Theoretical Integration (90 minutes)
Team presentations, demo showcases
Interactive panel discussion: What Makes an Avatar Feel Like Me?
Summary and guidelines for designing SoA‑aware control architecture
Submission Information
Besides regular papers, position papers and survey papers are also welcome.
Posters are also accepted. All contributions to the workshop must be submitted in the standard
IEEE format as specified in the AIxRobotics guidelines.
Papers will be refereed and accepted on the basis of their merit, originality, and relevance. Each paper will be reviewed by
at least two Program Committee members. Papers can be submitted via email to: arkairosworkshop@gmail.com