Home

I am Zahra, a Ph.D. student in Computer Science at Arizona State University. I work in the Yochan research group directed by Prof. Subbarao Kambhampati. My research focuses on developing computational models and frameworks for trust and for understanding human cognitive states, with the goal of enhancing human-AI interaction. I have research experience in diverse areas including automated planning, AI, human-AI interaction, human-robot interaction, reinforcement learning, machine learning, data-driven modeling of human behavior, statistical modeling, decision-making frameworks, and game theory. You can find the latest version of my CV here.

Publications

  • A Mental Model based Theory of Trust
  • Z. Zahedi, S. Sreedharan, S. Kambhampati
    XAI Workshop, IJCAI 2023
  • Trust-Aware Planning: Modeling Trust Evolution in Iterated Human-Robot Interaction
  • Z. Zahedi, M. Verma, S. Sreedharan, S. Kambhampati
    HRI 2023
  • Explicable or Optimal: Trust Aware Planning in Iterated Human Robot Interaction
  • Z. Zahedi, S. Sreedharan, K. Valmeekam, S. Kambhampati
    Accepted as a stand-alone video, ICRA 2023
  • 'Why didn't you allocate this task to them?' Negotiation-Aware Explicable Task Allocation and Contrastive Explanation Generation
  • Z. Zahedi, S. Sengupta, S. Kambhampati
    Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
  • A Mental-Model Centric Landscape of Human-AI Symbiosis
  • Z. Zahedi, S. Sreedharan, S. Kambhampati
    R2HCAI Workshop, AAAI 2023
  • Modeling the Interplay between Human Trust and Monitoring
  • Z. Zahedi, S. Sreedharan, M. Verma, S. Kambhampati
    Proceedings of 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction
  • Z. Zahedi, M. Verma, S. Sreedharan, S. Kambhampati
    XAIP and PlanRob Workshop, ICAPS 2021
  • Game-theoretic Model of Trust to Infer Human’s Observation Strategy of Robot Behavior
  • S. Sengupta*, Z. Zahedi*, S. Kambhampati
    R4P Workshop, R:SS 2021
  • 'Why not give this work to them?' Negotiation-Aware Task-Allocation and Contrastive Explanation Generation
  • Z. Zahedi*, S. Sengupta*, S. Kambhampati
    Cooperative AI Workshop, NeurIPS 2020
  • To Monitor or Not: Observing Robot's Behavior based on a Game-Theoretic Model of Trust
  • S. Sengupta*, Z. Zahedi*, S. Kambhampati
    21st International Workshop on Trust in Agent Societies (co-located with AAMAS), (2019)
  • Towards Understanding User Preferences for Explanation Types in Model Reconciliation
  • Z. Zahedi*, A. Olmo*, T. Chakraborti, S. Sreedharan, S. Kambhampati
    HRI Late Breaking Report, (2019)
  • Seeking Nash equilibrium in non-cooperative differential games
  • Z. Zahedi, A. Khayatian, M. M. Arefi, S. Yin
    Journal of Vibration and Control, (2022)
  • Fast convergence to Nash equilibria without steady-state oscillation
  • Z. Zahedi, M. M. Arefi, A. Khayatian
    Systems and Control Letters, (2019)
  • Fast seeking of Nash equilibria without steady-state oscillation in games with non-quadratic payoffs
  • Z. Zahedi, M. M. Arefi, A. Khayatian, and H. Modares
    Proceedings of the 2018 American Control Conference
  • Convergence without oscillation to Nash equilibria in non-cooperative games with quadratic payoffs
  • Z. Zahedi, M. M. Arefi, and A. Khayatian
    25th Iranian Conference on Electrical Engineering (ICEE), published in IEEE Xplore, May 2017
  • Real-time, Simultaneous Multi-Channel Data Acquisition Systems with no time skews between input channels
  • F. Zahedi, Z. Zahedi
    International Journal of Signal Processing Systems (IJSPS), vol. 4, no. 1, pp. 17-21, 2016
  • A review of Neuro-fuzzy Systems based on Intelligent Control
  • F. Zahedi, Z. Zahedi
    Journal of Electrical and Electronic Engineering, vol. 3, no. 2-1, pp. 58-61, 2015
  • Real-time, Simultaneous Multi-Channel Data Acquisition Systems with no time skews between input channels
  • F. Zahedi, Z. Zahedi
    6th International Conference on Signal Processing Systems (ICSPS), 2014
  • Review of Neuro-fuzzy based on Intelligent Control
  • F. Zahedi, Z. Zahedi
    8th Symposium on Advances in Science & Technology, 2014 (in Persian)

Experience

Research Scientist Intern

September 2022 - May 2023
Honda Research Institute, San Jose, CA

In this role, I was responsible for developing a modeling framework to understand and predict human cognitive states, modeling the dynamics of human behavior in human-automation interaction, and creating and validating tools to optimize system performance based on predicted human states.

Awards and Honors

ASU Iranian American Alumni Academic Scholarship

2023

Grace Hopper Scholarship, AnitaB.org

2022

Fulton Fellowship Award

2019-2020

Graduate Fellowship Award

2019 and 2020

Ranked 2nd among Control students at Shiraz University

2016-2017

Granted merit-based admission to the M.Sc. program at Shiraz University

2015

Honored as an Active Student, Shiraz University

2015

Current Research Projects

Mental model based framework of trust - In this work, we contextualize the user's trust, and their consequent choice of whether or not to rely on the AI agent, in terms of their mental model of the agent and their preexisting expectations about the optimal way of solving the task. The agent, on the other hand, can use either behavior generated through its model or explanations about its model to influence both the user's expectations about the agent and their expectations about the task.
Trust-Aware Planning in longitudinal human-robot interaction - In this work, we propose a computational model for capturing and modulating trust in longitudinal human-robot interaction, where the robot integrates the human's trust and expectations into planning to build and maintain trust over the interaction horizon. Once the required level of trust is established, the robot can focus on maximizing the team goal by eschewing explicit explanatory or explicable behavior. The human, in turn, with a high level of trust in the robot, might choose not to monitor it, or not to intervene by stopping it, saving their own resources.
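As a loose illustration of this idea (not the paper's actual model), the loop below shows a robot that pays for explicable behavior while trust is low and switches to optimal behavior once trust crosses a threshold. The update rule, function names, and all parameter values are hypothetical:

```python
# Illustrative sketch only: a minimal iterated trust-update loop.
# The update rule and thresholds are hypothetical, not the paper's model.

def trust_update(trust, matches_expectation, gain=0.2, loss=0.4):
    """Nudge trust up when the robot's behavior matches the human's
    expectation, and down (more sharply) when it does not."""
    if matches_expectation:
        return min(1.0, trust + gain * (1.0 - trust))
    return max(0.0, trust - loss * trust)

def choose_behavior(trust, threshold=0.7):
    """Above the trust threshold the robot can act optimally;
    below it, it pays the extra cost of explicable behavior."""
    return "optimal" if trust >= threshold else "explicable"

trust = 0.3
history = []
for step in range(6):
    behavior = choose_behavior(trust)
    # Toy assumption: explicable behavior always matches expectations;
    # optimal behavior matches them only once trust has been earned.
    matches = behavior == "explicable" or trust >= 0.7
    trust = trust_update(trust, matches)
    history.append((behavior, round(trust, 3)))
```

Under these toy assumptions the robot starts out explicable, trust grows each round, and the robot eventually earns the freedom to behave optimally.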
Game-theoretic model of trust - In this work, we developed a game-theoretic model of trust in a human-robot interaction setting and introduced, as a solution, a trust boundary: the set of appropriate human monitoring strategies given the inferred trust. By choosing a monitoring strategy within that boundary, the human supervisor can ensure that the robot, or the artificial intelligence system, exhibits safe and interpretable behavior.
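The flavor of such a boundary can be conveyed with a textbook inspection game (the paper's actual game and solution concept are richer; the payoffs below are hypothetical): the human need not monitor constantly, only often enough that deviating is unprofitable for the robot.

```python
def min_monitoring_rate(gain, penalty):
    """Smallest monitoring probability q at which deviating is no
    longer profitable for the robot: the expected deviation payoff
    (1 - q) * gain - q * penalty is <= 0 iff q >= gain / (gain + penalty)."""
    return gain / (gain + penalty)

# Toy numbers: if deviating gains the robot 2 but a caught deviation
# costs it 8, monitoring at least 20% of the time deters deviation.
q_star = min_monitoring_rate(2.0, 8.0)
```

Any monitoring rate at or above `q_star` plays the role of a strategy inside the trust boundary: it makes safe behavior the robot's best response.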
Negotiation-aware task allocation - We consider a task-allocation problem in which an AI Task Allocator (AITA) comes up with a fair allocation for a group of humans. When a human believes a counterfactual allocation would be fairer, AITA can provide an explanation in the form of a negotiation tree to convince them otherwise.
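A toy version of the setting can be sketched as follows. The actual AITA reasons over negotiation trees of iterated offers; here, as a simplifying assumption, fairness is stood in for by a minimax cost criterion, and the "explanation" is a single counterfactual comparison:

```python
from itertools import permutations

def fair_allocation(costs):
    """Brute-force the one-task-per-human allocation that minimizes
    the maximum individual cost (a stand-in fairness criterion,
    not the paper's negotiation-based notion).
    costs[i][j] is the cost to human i of doing task j."""
    n = len(costs)
    best = min(permutations(range(n)),
               key=lambda p: max(costs[i][p[i]] for i in range(n)))
    return list(best)

def refute_counterfactual(costs, proposed, counterfactual):
    """A one-step 'negotiation' argument: the counterfactual leaves
    its worst-off human at least as badly off as the proposal does."""
    def worst(alloc):
        return max(costs[i][alloc[i]] for i in range(len(alloc)))
    return worst(counterfactual) >= worst(proposed)

# Two humans, two tasks: human 0 is much better at task 1.
costs = [[4, 1], [2, 3]]
alloc = fair_allocation(costs)  # assigns task 1 to human 0, task 0 to human 1
```

Here `refute_counterfactual(costs, alloc, [0, 1])` confirms that swapping the tasks would make the worst-off human strictly worse off, which is the kind of argument a negotiation tree unfolds step by step.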