Problem
Human–Robot Collaboration (HRC) promises performance gains in domains where humans and robots must act simultaneously, yet most autonomy frameworks treat control as discrete or hierarchical rather than shared
Existing shared autonomy approaches often lack a principled, continuous formulation, making it difficult to reason about autonomy, trust, and cognitive load in a measurable and repeatable way
Why it matters
If shared autonomy is to be deployed in safety-critical domains (space operations, surgery, tri-manual assembly), autonomy must be continuous, interpretable, and tunable, not binary or opaque
Understanding how varying autonomy levels affect human performance, trust, and cognitive load is essential to designing systems that assist rather than overwhelm human operators
A mathematically grounded autonomy formulation enables systematic experimentation instead of ad-hoc tuning or post-hoc explanations
Approach
Adopted a Plan–Act interpretation of the Sense–Plan–Act taxonomy, extending prior work that focused solely on execution-level autonomy
Designed a task environment that forces concurrent collaboration, preventing sequential task decomposition and exposing real coordination effects
Implemented autonomy as a continuous control blending problem, allowing smooth transitions between human-led and robot-led behaviour
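The control-blending idea above can be sketched as a convex combination of the two command signals. This is a minimal illustration, not the project's actual implementation; the function name `blend_control` and the use of velocity-style command vectors are assumptions.

```python
import numpy as np

def blend_control(u_human, u_robot, alpha):
    """Convex combination of human and robot commands.

    alpha = 0.0 gives fully human-led control; alpha = 1.0 fully robot-led.
    Because the weights are non-negative and sum to one, the output always
    lies on the line segment between the two commands, so varying alpha
    moves smoothly between human-led and robot-led behaviour.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("autonomy level alpha must lie in [0, 1]")
    u_human = np.asarray(u_human, dtype=float)
    u_robot = np.asarray(u_robot, dtype=float)
    return alpha * u_robot + (1.0 - alpha) * u_human
```

Validating alpha at the boundary keeps the combination convex, which is what makes the interpolation well-behaved at every autonomy level.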
Key Insight
Reframed acting autonomy as a signal-fusion problem, not a role-assignment problem
Demonstrated that continuous autonomy can be cleanly expressed as a convex combination of human and robot intent, providing both mathematical rigour and intuitive interpretability
Showed that autonomy gains emerge not from replacing human control, but from attenuating noise and hesitation in human input while preserving user intent
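The noise-attenuation claim can be seen directly in a toy simulation: if the human command is the true intent plus zero-mean noise, and the robot tracks the intent, then blending preserves the intent while shrinking the noise by a factor of (1 - alpha). The noise level and sample count here are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

intent = 1.0                            # operator's true intended command
noise = rng.normal(0.0, 0.3, 10_000)    # tremor / hesitation on the joystick
u_human = intent + noise                # what the joystick actually reports
u_robot = np.full_like(noise, intent)   # assume the robot tracks intent well

for alpha in (0.0, 0.5, 0.8):
    blended = alpha * u_robot + (1 - alpha) * u_human
    # The mean (the intent) is preserved, while the noise standard
    # deviation is attenuated by a factor of (1 - alpha).
    print(f"alpha={alpha}: mean={blended.mean():.3f}, std={blended.std():.3f}")
```

Because the blend is linear, no part of the operator's intended signal is replaced; only the noise component is scaled down.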
Contribution
My specific contribution to the project:
Implemented acting autonomy in software as a convex combination of human and robot control signals. This formulation guaranteed:
Smooth interpolation between human-dominant and robot-dominant control
Predictable system behaviour across autonomy levels
A direct experimental handle for studying autonomy–trust–performance trade-offs
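The three guarantees above follow from the linearity of the convex combination: sweeping a single scalar traces a straight line in command space. A short sketch, with hypothetical 2-D velocity commands standing in for the real joystick and planner signals:

```python
import numpy as np

def blend(u_human, u_robot, alpha):
    """Convex combination: alpha = 0 human-dominant, alpha = 1 robot-dominant."""
    return alpha * np.asarray(u_robot) + (1.0 - alpha) * np.asarray(u_human)

u_human = np.array([0.2, -0.1])   # hypothetical joystick velocity (vx, vy)
u_robot = np.array([0.5,  0.0])   # hypothetical planner velocity (vx, vy)

# Sweeping alpha interpolates linearly between the two inputs, so one
# scalar serves as the experimental handle on the autonomy level.
for alpha in np.linspace(0.0, 1.0, 5):
    print(f"alpha={alpha:.2f} -> u={blend(u_human, u_robot, alpha)}")
```

This is what makes the autonomy level a clean independent variable for trust and performance studies: each setting of alpha produces a predictable, repeatable control law.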
Result
Higher acting autonomy correlated with improved task performance, increased trust, and reduced cognitive load
Planning autonomy showed similar but weaker effects, suggesting execution-level assistance delivers the strongest immediate benefits
Participants perceived autonomy changes consistently with the mathematical formulation, supporting the interpretability of the approach
Despite the small pilot-study sample, trends were stable enough to justify further investigation and scaling
The novelty and stability of the results led to this project being selected for development into a publishable research paper
Endeavour Day! We got to show off our project to university students, staff, industry professionals, and wider audiences
Task-bed setup with colour-coded 'pucks' and 'goals' for our human–robot collaboration experiment
I ran a live demo in the lab with the robot and joystick to record pilot results for our study