An AI-Driven Clinical Decision Support System for Lung Disease Physicians

Leading ongoing UX research and foundational AI-driven design within a complex multi-partner clinical environment

Patient-Quick-Findings.png
Role

UX Research

Design Lead

Stage / Scope

Early UX direction & evaluable concept definition

Methods / Tools

Requirements Analysis

Interaction & workflow structuring

Interface Design

Client / Framework

EU-funded clinical AI consortium / Reichman University

Background

Project Motivation

Joining this EU-funded clinical AI project meant entering a space where much of the early work focused on understanding physicians’ attitudes toward AI rather than shaping a product. Valuable psychological insights existed, but they hadn’t yet been translated into workflows or an initial user experience.

My first responsibility was to build that bridge: define an early direction for the system and create initial interface concepts that physicians could meaningfully respond to. Working within a multi-partner environment that combined clinical, academic, and industry perspectives, I focused on turning existing research into a practical foundation for the product while leaving room for ongoing discovery and refinement.

From a UX perspective, the challenge was to support physicians’ decision-making under time pressure, fragmented data, and limited transparency in AI reasoning.

Approach & Process

A non-linear, research-informed and evaluation-driven workflow

This project evolved through overlapping phases shaped by existing research, tight timelines, and limited access to physicians. My focus was to create structure within these conditions while keeping the process flexible and responsive to new insights.

Requirements Analysis

01

Extracted relevant insights from existing psychological research to define initial product needs and trust-related requirements.

Concept exploration (AI-assisted)

02

Used AI tools (Base44, Lovable, LLMs) to rapidly explore structural and interaction directions, helping surface early patterns worth developing.

Interaction & workflow structuring

03

Shaped the system’s core logic: how information appears, unfolds, and supports different physician decision-making styles.

Interface Design

04

Created initial interface concepts in Figma to make the system tangible and ready for physician feedback.

User evaluations (remote)

05

Reviewed early screens with physicians in remote sessions to validate direction, surface friction points, and guide iteration.

Challenges & Constraints

Defining the core issues shaping early design decisions

Although prior work examined physicians’ attitudes toward AI, it had not yet been expressed as workflows or interface structures. Within this context, my role was to build on existing psychological insights while extending them through UX research, translating both into interaction flows and interface concepts physicians could meaningfully engage with.

At this stage, the work progressed from research insights to an initial design foundation, with interface concepts defined to enable evaluation and discussion with physicians and consortium partners.

Supporting Different Decision-Making Styles

How might we support different clinical decision styles without overwhelming or underserving any physician?

“Some cases only need a quick summary, but others require digging deeper into the details before I can decide.”

User Pain Point

Physicians differ in how much information they need before trusting an AI-generated suggestion. Some prefer a concise overview, while others require deeper evidence before making a decision. A single pathway risks overwhelming one group or underserving the other.

UX Solution

I structured the experience around two complementary modes. Quick Mode presents a high-level view for fast assessment, while Deep Analysis provides a detailed breakdown of the AI’s reasoning, evidence, and supporting data. This dual-mode structure ensures both decision-making styles are fully supported without compromising clarity.

Usage preferences.png

Maintaining Physician Autonomy

How might we provide AI guidance while keeping clinical judgment clearly in the physician’s hands?

“I believe AI can be helpful, but it shouldn’t make the decision for me; the final judgment has to stay mine.”

User Pain Point

Physicians are wary of AI systems that appear authoritative or prescriptive. Because they do not directly input data into the system, its output can feel removed from their own clinical process, so the interface must make clear that final judgment remains with them and that the AI is an assistive tool, not a decision-maker.

UX Solution

To support autonomy, the system presents multiple treatment suggestions rather than a single “correct” path, each labeled with a “High / Medium / Low Match” indicator. This reinforces that the clinician—not the AI—selects the appropriate treatment. When the system detects outdated or insufficient data, it flags this limitation and recommends additional testing, signaling uncertainty rather than authority. UI phrasing such as “AI Suggested Diagnosis” and “AI Suggested Treatment” further underscores that the system offers support, not directives.

Patient-Quick-Findings.png
Treatment-Options.png

Building Trust Through Explainability

How might we help physicians validate AI reasoning without increasing cognitive load?

“If AI suggests something, I need to understand what it’s based on before I can trust it.”

User Pain Point

Physicians cannot rely on AI-generated recommendations without understanding the reasoning behind them. A suggestion without justification is clinically unusable and undermines trust, especially in complex decision pathways.

UX Solution

I designed a structured “AI Reasoning” section within Deep Analysis that surfaces contributing factors, interpretation layers, and evidence the AI used to reach a suggestion. In Quick Mode, contextual panels such as “Targeted Therapy” provide a lighter-weight rationale. Together, these elements give physicians the transparency needed to validate the system’s output without overwhelming them.

Reasoning-Diagram.png
Treatment-Reasoning.png

Ethical Transparency and Appropriate Reliance

How might we make the AI’s confidence and data sources transparent enough for physicians to rely on it appropriately?

“How can I feel confident in a result, or how well it fits my specific patient, if I can’t tell whether the underlying data or evidence is solid?”

User Pain Point

Physicians need visibility into how confident the AI is, how well the patient aligns with specific models or treatments, and where the AI’s information originates. Without clear transparency cues, appropriate reliance becomes impossible and clinical risk increases.

UX Solution

I incorporated transparency signals throughout the interface, including an “AI Confidence” bar to communicate uncertainty, a “Patient Matching” score showing relevance to specific treatment models, and links to external clinical resources to enable traceability. These cues promote responsible use and help physicians understand when the system’s output is strong, uncertain, or requires further validation.

Early Evaluation

Evaluating early concepts with lung disease physicians

Early interface concepts were reviewed remotely with lung disease physicians to assess clarity, clinical relevance, and alignment with real decision-making needs. The goal was not formal usability testing, but early directional feedback to validate the approach and guide ongoing refinement.

What was evaluated
  • Overall structure and information flow

  • Clarity of reasoning and evidence presentation

  • Suitability of Quick vs. Deep modes for different decision styles

  • Communication of uncertainty and trust cues

  • Alignment with clinical autonomy and decision ownership

What we learned
  • Physicians appreciated having both high-level summaries and deeper evidence pathways

  • Reasoning structures helped clinicians understand and validate suggestions

  • Autonomy-supportive content and tone resonated strongly

  • Some phrasing and information-density choices require refinement

  • Feedback confirmed the direction as a strong foundation for ongoing UX development

These early evaluations validated the core design direction and informed the next stage of UX research, where workflow-level testing and iterative refinement will play a central role.

Key Contributions

Establishing the UX groundwork

My work translated early research into a usable UX foundation that physicians and partners could meaningfully engage with.

Created the project’s first evaluable UX direction

Provided a structured foundation that replaced ambiguity and gave the consortium a clear starting point for design and discussion.

Facilitated early physician feedback

Created evaluable interface concepts that allowed the consortium to gather meaningful clinical feedback much earlier than planned.

Helped bridge research and product thinking

Translated psychological insights into actionable product considerations, ensuring the project stayed clinically relevant and grounded.

Takeaways and What’s Next

A Work In Progress

This project has highlighted how deeply clinical decision-making varies, and how design must flex to support both fast, high-level orientation and deeper evidence-driven reasoning. Working at the intersection of psychological research, clinical expertise, and industry expectations reinforced the importance of translating abstract insights into structures clinicians can immediately evaluate. It also emphasized the value of transparency, both in how the system reasons and in the reliability and completeness of its underlying data.

Moving forward, the next phase focuses on conducting more structured evaluations with physicians to refine workflows, phrasing, and information density, and to test how the dual-mode experience supports real diagnostic practices. As technical integration across partners progresses, the UX foundation will continue to evolve through iterative feedback, deeper usability testing, and closer alignment with clinical workflows and model capabilities.