Designing AI for Trust and Collaboration in Time-Constrained Medical Decisions: A Sociotechnical Lens

Maia Jacobs
4 min read · Feb 19, 2021


Prototype designed to support antidepressant treatment selection in primary care, including interactive, personalized treatment recommendations.

This post summarizes our research paper, available here. The paper will be presented at the 2021 ACM Conference on Human Factors in Computing Systems.

How can we make machine learning tools helpful and usable in the clinical setting?

The promise of AI in medicine is alluring, yet few machine learning (ML) and AI models for healthcare actually make it into daily clinical practice. Adoption typically stalls because of low user acceptance and designs that fail to account for users’ expectations.

Towards the goal of creating intelligent tools that can account for doctors’ needs and the complexity of medical work, we ran a series of co-design sessions with primary care providers. Through these sessions, we learned what doctors want from ML models, and how they imagine these tools can support their healthcare decisions.

Our findings indicate that current trends in explainable AI may be inappropriate for clinical environments, and we consider paths towards designing these tools for real-world medical systems.

Why we focused on Major Depressive Disorder

In the co-design sessions, we focused on how to design ML tools to support treatment decisions for Major Depressive Disorder (MDD). MDD is an important and interesting context because (1) treatment decisions for MDD are complex tasks, and (2) ML models are already being developed to improve those decisions.

Selecting an effective treatment for a patient with MDD is difficult for several reasons. The majority of mental health care is initiated in primary care settings, yet the amount of training primary care providers receive in managing MDD can vary widely. Further, doctors and patients often use trial and error to find an effective treatment. An estimated one-third of patients fail to reach remission even after four antidepressant trials. In response, the psychiatry community has called for more information on which treatments will be most effective for an individual patient.

What we learned from clinicians:

  1. Include patient preferences. All of the clinicians in this study described treatment decisions as a collaborative process. Clinicians wanted an interactive tool that could account for patient preferences (such as avoiding a particular side effect).
  2. Recommend appropriate clinical processes. Clinicians wanted tools that went further than providing a prediction by connecting the prediction to appropriate actions. For example, for dropout risk (the probability of early treatment discontinuation), clinicians recommended the tool connect that risk to actionable steps, such as slowing the drug titration and shortening follow-up intervals (see the sketch after this list). Participants noted that these steps could help patients at risk of dropout while using resources and procedures already set up within the clinic.
  3. Understand healthcare system resource constraints. All clinicians stressed that they have very limited time with each patient and would not have time during a visit to determine whether a tool was trustworthy. Participants said they would make a one-time decision about whether the tool was helpful, and expected that trust in the technology would be established up front rather than re-evaluated at each decision point.
  4. Engage with domain knowledge. We found that when recommendations diverged from expectations or clinical guidelines, participants became confused and found it challenging to identify appropriate next steps. We see a need for tools that adapt to instances in which the machine learning model output contrasts with existing domain knowledge.
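To make findings 1 and 2 concrete, here is a minimal, purely illustrative sketch of how a tool might filter candidate treatments by a patient’s stated preference and translate a dropout-risk prediction into clinic actions. The treatments, side-effect lists, threshold, and suggested steps are hypothetical assumptions for illustration, not outputs of our models or values from the study.

```python
# Hypothetical sketch: connect a dropout-risk prediction and a patient's
# stated preference to concrete next steps. All data and thresholds below
# are illustrative assumptions, not results from the study.

TREATMENT_SIDE_EFFECTS = {
    "sertraline": {"nausea", "insomnia"},
    "bupropion": {"insomnia", "dry mouth"},
    "mirtazapine": {"weight gain", "sedation"},
}

def filter_by_preference(candidates, avoid_side_effect):
    """Finding 1: drop candidate treatments whose known side effects
    include the one the patient wants to avoid."""
    return [t for t in candidates
            if avoid_side_effect not in TREATMENT_SIDE_EFFECTS.get(t, set())]

def suggest_next_steps(dropout_risk, high_risk_threshold=0.6):
    """Finding 2: translate a dropout-risk probability into clinic
    actions rather than showing the raw prediction alone."""
    if dropout_risk >= high_risk_threshold:
        return ["slow the drug titration",
                "schedule a shorter follow-up interval",
                "flag the care team to check in by phone"]
    return ["standard titration and follow-up schedule"]

if __name__ == "__main__":
    options = filter_by_preference(
        ["sertraline", "bupropion", "mirtazapine"],
        avoid_side_effect="insomnia")
    print("Treatments matching patient preference:", options)
    print("Suggested steps:", suggest_next_steps(dropout_risk=0.72))
```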

Recommendations for designing ML interfaces for primary care:

Based on the above findings, we reflect on lessons for how we design AI for healthcare systems, and offer the following recommendations:

  1. Create multi-user systems for collaborative decision-making. Clinicians’ feedback challenged the assumption that AI-driven tools in healthcare will be single-user systems. Designing such tools for patients as well can give patients a greater voice in their healthcare decisions. Yet few studies have looked at creating AI tools for patients or for patient-provider collaboration.
  2. Connect tools to existing healthcare processes. A consistent theme within this study was that clinicians wanted actionable interventions: tools should explicitly draw the connection between a model’s output and appropriate next steps in existing clinical processes.
  3. Design for resource constraints. Due to time constraints, clinicians wanted decision support tools (DSTs) to display the evidence-based methods used to validate the tool (such as randomized controlled trial results), rather than individual explanations that focus on model features. Without time to review each prediction in detail, greater responsibility must be placed up front on determining when the algorithm is likely to err, and when the tool should perhaps not be shown at all (see the sketch after this list).
  4. Adapt decision support for contrasting information. Based on our results, we see an opportunity to present on-demand explanations that state when, how, and why predictions differ from existing clinical guidelines. We do not yet have established best practices for dealing with contrasting information, but helping clinicians identify the best way to proceed is critical.
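As one way to picture recommendations 3 and 4, the sketch below decides up front whether a prediction should be surfaced at all, and attaches an on-demand note when the model’s suggestion contrasts with a clinical guideline. The eligibility check, validation summary, and guideline text are illustrative assumptions, not part of our study or of any deployed tool.

```python
# Hypothetical sketch of recommendations 3 and 4: gate the prediction up
# front and explain contrasts with guidelines on demand. All checks and
# strings are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

VALIDATION_SUMMARY = "Validated in a retrospective cohort of adult primary-care patients."

@dataclass
class DecisionSupportOutput:
    show_prediction: bool
    recommended_treatment: Optional[str] = None
    validation_summary: Optional[str] = None
    guideline_note: Optional[str] = None  # surfaced only on demand

def build_output(patient, model_recommendation, first_line_guideline):
    # Recommendation 3: if the patient falls outside the population the
    # model was validated on, do not show the prediction at all.
    if patient["age"] < 18:
        return DecisionSupportOutput(show_prediction=False)

    # Recommendation 4: note when and why the model differs from guidelines.
    note = None
    if model_recommendation != first_line_guideline:
        note = (f"Model suggests {model_recommendation}, which differs from "
                f"the first-line guideline ({first_line_guideline}).")

    return DecisionSupportOutput(
        show_prediction=True,
        recommended_treatment=model_recommendation,
        validation_summary=VALIDATION_SUMMARY,
        guideline_note=note,
    )

if __name__ == "__main__":
    patient = {"age": 42}
    print(build_output(patient,
                       model_recommendation="bupropion",
                       first_line_guideline="sertraline"))
```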

Want to know more?

For more details about this study, you can find the full research paper here.


Maia Jacobs

Postdoctoral Fellow at Harvard University. I study how health technology can support people’s changing needs and goals.