These are examples of research projects currently funded by TRAILS that illustrate the sorts of research questions and methodologies the TRAILS-SURF Fellows may work on. They are grouped by the research thrusts on which the summer research will focus: Participatory Design, Methods and Metrics, and Evaluating Trust. (See the TRAILS website for descriptions of these focus areas.)

Participatory Design

Enhancing Trust Building in Blind-Sighted Teamwork Through the Design of Embodied AI

Trust building is crucial for teamwork and is largely driven by nonverbal cues. However, blind individuals face challenges in perceiving and utilizing these cues, causing imbalances in blind-sighted teamwork. This proposal addresses two research questions: (RQ1) how to design embodied AI that enables the exchange of nonverbal cues between blind and sighted individuals; and (RQ2) how both groups value teamwork facilitated by embodied AI.

Participatory Design of Trustworthy Embodied AI for Team-based At-home Support for Autistic Children

Access to healthcare support and medical assistance still poses severe barriers for most families of autistic children, and inequities in service and accessibility persist for patients from diverse racial, ethnic, and family-income backgrounds. To address these barriers, this TRAILS pilot project will explore strategies to bridge the gap in basic autism-specific support for home environments and assess whether an intelligent companion animal-robot (i-CAR) can serve as part of a caregiver support team, building trust through socio-emotional interactions and personalized assistance. The proposed Trustworthy Human Animal-Robot Interaction (THARI) framework will be developed through participatory design processes and aims to enhance emotional and behavioral perception and interaction policy learning, providing team-based, personalized health support and developmental assistance in response to diverse stakeholder needs.

Methods and Metrics

Effort-aware Fairness: Measures and Mitigations in AI-assisted Decision Making

Although popular AI fairness metrics have uncovered bias in AI-assisted decision-making outcomes, they do not consider how much effort an individual has spent to reach their current input feature values. We propose a new philosophy- and law-grounded way to conceptualize, evaluate, and achieve Effort-aware Fairness. Our goal is to understand how to develop a philosophy- and law-grounded Effort-aware Fairness metric, how to evaluate Effort-aware Fairness on fairness datasets with time-series features, and how to design novel algorithms that achieve long-term Effort-aware Fairness. Answers to these research questions will enable AI model auditors to uncover, and potentially correct for, unfair decisions against individuals who have spent significant effort to improve but remain stuck with undesirable feature values due to systematic discrimination against their groups.

Copyright and IP Issues in Generative AI: A Combined Law and CS Perspective