CASE STUDY: MetroHealth System
Implementing AI for Heart Failure Readmission Prediction
Module 1 Case Study • Cardio AI Global Leadership Program

Background

Organization Overview

MetroHealth System is a 600-bed academic medical center serving an urban population of 2.5 million in the Midwest United States. As a safety-net hospital, MetroHealth serves a diverse patient population: 45% of patients are covered by Medicaid, 25% by Medicare, and 30% are commercially insured or uninsured.

Metric | Data
Annual Admissions | 45,000
Heart Failure Admissions | 2,800 per year
30-Day Readmission Rate | 27% (national average: 23%)
Annual HF Penalty (Medicare) | $1.8 million (maximum: 3% reduction)
Cardiology Department | 35 cardiologists, 8 advanced practice providers, 50 nurses

The Challenge

In 2023, MetroHealth faced mounting pressure from multiple stakeholders:

Key Challenges

  • Medicare penalties: The hospital's 27% heart failure readmission rate exceeded the national average, resulting in $1.8 million in annual penalties
  • Quality metrics: Public reporting of readmission rates hurt the hospital's reputation and market position
  • Resource constraints: Limited care coordination staff couldn't provide intensive follow-up to all HF patients
  • Patient outcomes: Readmitted patients experienced worse outcomes and higher mortality
  • Physician burden: Cardiologists struggled to identify which patients needed the most intensive post-discharge support

Traditional risk stratification tools (like the LACE index and HOSPITAL score) had limited predictive accuracy, with AUC values around 0.65. The care coordination team could only manage about 150 high-risk patients per month, representing just 20% of HF discharges.
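
To make this baseline concrete, the sketch below shows how an analytics team might verify the discrimination of a LACE-style score against observed 30-day readmissions in historical discharge data, using scikit-learn. The file name and column names are hypothetical; an AUC near 0.65 means the score ranks patients only modestly better than chance.

```python
# Minimal sketch: checking the discrimination (AUC) of an existing risk score
# such as LACE against observed 30-day HF readmissions in historical data.
# File name and column names ("lace_score", "readmitted_30d") are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

discharges = pd.read_csv("hf_discharges_2023.csv")  # hypothetical extract from the data warehouse

# AUC of 0.5 = no better than chance; 1.0 = perfect ranking of who will be readmitted.
auc = roc_auc_score(discharges["readmitted_30d"], discharges["lace_score"])
print(f"LACE score AUC for 30-day HF readmission: {auc:.2f}")  # ~0.65 per the case

# How many of the patients who were actually readmitted fall in the top 20% of scores?
# This mirrors the care coordination team's capacity constraint.
cutoff = discharges["lace_score"].quantile(0.80)
flagged = discharges["lace_score"] >= cutoff
capture_rate = discharges.loc[flagged, "readmitted_30d"].sum() / discharges["readmitted_30d"].sum()
print(f"Share of readmissions captured in the top 20% of scores: {capture_rate:.0%}")
```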

• • •

The AI Opportunity

Leadership Decision Point

In January 2024, Dr. Sarah Mitchell, the newly appointed Chief Innovation Officer, proposed implementing an AI-powered heart failure readmission prediction system. Dr. Mitchell had recently completed the Cardio AI Global Leadership Program and was enthusiastic about applying AI to address the readmission challenge.

She presented three potential approaches to the executive leadership team:

Option 1
Commercial AI Vendor Solution
Approach: Purchase a pre-built AI readmission prediction tool from a leading healthcare AI company
Cost: $250,000 annual license fee
Timeline: 3-6 months to implement
✓ Pros:
  • FDA-cleared
  • Claimed accuracy (vendor-reported AUC 0.78)
  • Quick deployment
  • Vendor support
✗ Cons:
  • Black box model
  • May not be calibrated for MetroHealth's population
  • Ongoing costs
Option 2
Build In-House AI Model
Approach: Develop a custom AI model using MetroHealth's 10 years of EHR data
Cost: $400,000 upfront (data scientists, infrastructure, development)
Timeline: 12-18 months to develop and validate
✓ Pros:
  • Customized for local population
  • Explainable model
  • Institutional ownership
  • Continuous improvement
✗ Cons:
  • Long timeline
  • Requires data science team
  • Regulatory pathway unclear
  • Higher upfront investment
Option 3
Academic Partnership
Approach: Partner with a nearby university medical school to co-develop an AI model
Cost: $150,000 grant to university + staff time
Timeline: 18-24 months (includes research, publication, deployment)
✓ Pros:
  • Lower cost
  • Access to research expertise
  • Publishable results
  • Educational mission
✗ Cons:
  • Longest timeline
  • Academic priorities may not align
  • Sustainability questions

Leadership Team Perspectives

I support innovation, but my primary concern is patient safety and clinical validity. We need to ensure any AI tool is accurate for our patient population, which is more diverse and has more social risk factors than the populations where these tools were developed. I want to see external validation data before we commit. Also, our cardiologists are already overwhelmed—will this actually reduce their burden or create more work?
— Dr. James Chen, Chief Medical Officer
We need to reduce readmissions, period. That $1.8 million penalty is unacceptable, and we risk losing more if we don't improve. But I need to see a clear ROI. If we can reduce readmissions by even 3-4 percentage points, we save millions. However, I'm concerned about the build option—$400,000 upfront with an 18-month timeline means no benefit for over a year. Can we afford to wait that long?
— Angela Rodriguez, Chief Financial Officer
My team needs better tools to identify high-risk patients, but we're skeptical of AI black boxes. If we can't understand why the model flags a patient as high-risk, we won't trust it. We need transparency and explainability. I also want to make sure this integrates smoothly into our EHR workflow—we don't need another system to log into or another screen to check.
— Dr. Lisa Thompson, Chief of Cardiology
Our care coordination team is the bottleneck. With better predictions, we can focus our limited resources on the patients who need it most. But the prediction is only part of the solution—we also need expanded resources for post-discharge follow-up, medication reconciliation, and social support. AI alone won't solve this problem.
— Marcus Johnson, VP of Population Health
From an IT perspective, I prefer the vendor solution for speed and support, but I worry about data privacy and vendor lock-in. If we build in-house, we maintain control and can continuously improve the model as we collect more data. However, my team is already stretched thin—we'd need to hire data scientists and engineers, which takes time and money.
— Dr. Patricia Lee, Chief Information Officer
• • •

Additional Context

Current State Assessment

Data Infrastructure

  • EHR: Epic (fully implemented 2015)
  • 10 years of structured and unstructured clinical data
  • Data warehouse established but limited analytics capability
  • No dedicated data science team

Clinical Context

  • Heart failure clinic sees 150 patients per week
  • Care coordination team: 3 FTE nurses, 2 FTE social workers
  • Current risk stratification: Nurse assessment + LACE score
  • Post-discharge protocol: 72-hour phone call, 7-day clinic visit (for high-risk patients)

Patient Population Characteristics

  • Average age: 68 years
  • Demographics: 45% African American, 30% White, 15% Hispanic, 10% other
  • Comorbidities: High rates of diabetes (65%), hypertension (85%), kidney disease (40%)
  • Social factors: 35% food insecurity, 25% housing instability, 40% medication non-adherence

Industry Context

Several commercial AI readmission prediction tools have entered the market:

Vendor | Claimed AUC | Deployments | Annual Cost
HealthPredict AI | 0.78 (HF readmission) | 120 hospitals | $250K
CareAI Solutions | 0.76 (all-cause) | 85 hospitals | $300K
Predictive Health | 0.79 (HF readmission) | 200+ hospitals | $275K
⚠️ Important Note: Published studies show that commercial AI tools often underperform when deployed in real-world settings, particularly in diverse patient populations. One recent study found that AUC dropped from 0.78 to 0.68 when a widely used tool was externally validated across five health systems.
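
Before committing to a vendor, one reasonable due-diligence step is to ask the vendor to score a retrospective MetroHealth cohort and check both discrimination and calibration locally. The sketch below assumes such a scored extract exists; the file and column names are hypothetical, and this is an illustration rather than a prescribed validation protocol.

```python
# Minimal sketch: locally validating a vendor model's claims before purchase.
# Assumes the vendor has scored a retrospective MetroHealth cohort; names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

cohort = pd.read_csv("vendor_scored_hf_cohort.csv")  # hypothetical: vendor scores joined to observed outcomes
y_true = cohort["readmitted_30d"]
y_prob = cohort["vendor_risk_score"]  # predicted probability of 30-day readmission

# 1) Discrimination: does the claimed AUC (0.78) hold on MetroHealth's population?
print(f"Local AUC: {roc_auc_score(y_true, y_prob):.2f}")

# 2) Calibration: do predicted risks match observed readmission rates by risk decile?
obs, pred = calibration_curve(y_true, y_prob, n_bins=10, strategy="quantile")
for p, o in zip(pred, obs):
    print(f"predicted {p:.2f}  observed {o:.2f}")
# Systematic gaps here mean the model needs recalibration for the local population,
# even if the ranking (AUC) looks acceptable.
```
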
• • •

Discussion Questions

Part 1: Strategic Analysis (30 minutes)

Question 1:

What are the core business and clinical problems MetroHealth is trying to solve? Prioritize the top three.

Question 2:

Which option (vendor, build, or partnership) would you recommend, and why? Consider timeline, cost, and strategic fit.

Question 3:

How would you address the concerns raised by each member of the leadership team? Develop specific responses to Dr. Chen (CMO), Ms. Rodriguez (CFO), Dr. Thompson (Chief of Cardiology), Mr. Johnson (VP of Population Health), and Dr. Lee (CIO).

Part 2: Stakeholder Mapping (30 minutes)

Question 4:

Map the key stakeholders in this scenario. Who are the champions, skeptics, and neutral parties? How would you engage each group?

Question 5:

What value does the AI solution create for patients, cardiologists, care coordinators, and the hospital system? Are there any stakeholder groups whose needs might conflict?

Part 3: Implementation Planning (30 minutes)

Question 6:

What are the key implementation risks for your chosen option? How would you mitigate them?

Question 7:

Design a 90-day pilot study. What metrics would you track? How would you define success?

Question 8:

How would you ensure the AI tool doesn't perpetuate or worsen existing healthcare disparities for MetroHealth's diverse patient population?

Part 4: Leadership & Change Management (30 minutes)

Question 9:

Apply Kotter's 8-Step Change Model to this scenario. What specific actions would you take at each step?

Question 10:

How would you gain buy-in from skeptical cardiologists who may view AI as a threat or distrust black box algorithms?

Question 11:

What communication strategy would you use to keep all stakeholders informed and engaged throughout the implementation?

• • •

Facilitator Guide

Case Study Objectives

This case study is designed to help participants:

  • Apply strategic decision-making frameworks to real-world AI implementation scenarios
  • Understand multi-stakeholder perspectives in healthcare innovation
  • Evaluate build vs. buy decisions for healthcare AI
  • Practice change management and leadership communication
  • Consider ethical implications including algorithmic bias and health equity

Suggested Discussion Format

Total Time: 2 hours

Time | Activity | Description
0:00 - 0:20 | Individual Reading & Reflection | Participants read the case and formulate initial responses
0:20 - 1:00 | Small Group Discussion | Break into groups of 4-5 to discuss questions and develop recommendations
1:00 - 1:30 | Large Group Presentations | Each group presents its recommendation (5-7 min each)
1:30 - 2:00 | Facilitator-Led Debrief | Synthesize insights, highlight key themes, connect to module concepts

Teaching Notes & Key Points

Strategic Analysis

  • There is no single 'right' answer—each option has merits depending on organizational priorities
  • Vendor solution provides speed but less control; build option offers customization but takes longer
  • Consider hybrid approaches: start with a vendor pilot while developing in-house capability
  • ROI calculation should include avoided penalties, reduced readmissions, and improved outcomes—not just direct costs (an illustrative calculation follows below)
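
A back-of-the-envelope ROI sketch for the vendor option is shown below. The per-readmission cost and the share of the Medicare penalty recovered are assumptions chosen for teaching purposes, not MetroHealth figures; facilitators can ask groups to vary them.

```python
# Illustrative ROI sketch for the vendor option; all unit costs are teaching assumptions.
hf_admissions_per_year = 2_800
readmission_rate_baseline = 0.27
readmission_rate_target = 0.24            # a 3-percentage-point improvement
cost_per_readmission = 15_000             # assumed average cost of an HF readmission (hypothetical)
annual_penalty_avoided = 1_800_000 * 0.5  # assume roughly half the Medicare penalty is recovered (hypothetical)
annual_license_fee = 250_000

readmissions_avoided = hf_admissions_per_year * (readmission_rate_baseline - readmission_rate_target)
direct_savings = readmissions_avoided * cost_per_readmission
net_benefit = direct_savings + annual_penalty_avoided - annual_license_fee

print(f"Readmissions avoided per year: {readmissions_avoided:.0f}")   # ~84
print(f"Direct savings: ${direct_savings:,.0f}")                      # ~$1.26M
print(f"Net annual benefit after license fee: ${net_benefit:,.0f}")   # ~$1.9M under these assumptions
```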

Stakeholder Management

  • CMO concerns about clinical validity are legitimate—external validation is critical
  • CFO wants ROI but may underestimate total cost of ownership (training, integration, maintenance)
  • Cardiologists need transparency and workflow integration—involve them early in design
  • Care coordinators are end users—their buy-in is essential for success

Ethical Considerations

  • MetroHealth's diverse, high-risk population makes algorithmic bias a serious concern
  • Models trained on healthier populations may perform poorly for safety-net hospitals
  • Social determinants of health must be incorporated—not just clinical data
  • Pilot should include subgroup analysis by race, age, and insurance status (see the sketch below)
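
One way to operationalize the subgroup analysis during the pilot is sketched below: compare discrimination and high-risk flag rates across demographic and insurance groups. The data export and column names are hypothetical.

```python
# Minimal sketch: subgroup performance check during the pilot.
# Column names ("race", "insurance", "risk_score", "readmitted_30d") are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

pilot = pd.read_csv("pilot_predictions.csv")          # hypothetical pilot export
cutoff = pilot["risk_score"].quantile(0.80)           # overall 80th percentile = "high-risk" flag

for group_col in ["race", "insurance"]:
    print(f"\nPerformance by {group_col}:")
    for group, subset in pilot.groupby(group_col):
        if subset["readmitted_30d"].nunique() < 2:
            continue  # AUC is undefined if a subgroup has no (or only) readmissions
        auc = roc_auc_score(subset["readmitted_30d"], subset["risk_score"])
        flag_rate = (subset["risk_score"] >= cutoff).mean()
        print(f"  {group}: AUC={auc:.2f}, flagged high-risk={flag_rate:.0%}, n={len(subset)}")
# Large gaps in AUC or flag rates across groups are a signal to investigate bias
# before the tool is used to allocate scarce care coordination resources.
```
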

Implementation Risks

  • Over-reliance on AI without clinical judgment could lead to errors
  • Alert fatigue if too many patients flagged as high-risk
  • Integration challenges with EHR workflow
  • Insufficient post-discharge resources to act on predictions
  • Model drift over time as patient population and practices evolve (a simple monitoring sketch follows below)
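
A lightweight way to watch for model drift after go-live is to track discrimination and calibration month by month from the prediction log, as in the sketch below. The log file and column names are hypothetical.

```python
# Minimal sketch: monthly monitoring for model drift after go-live.
# The prediction log and its column names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

log = pd.read_csv("prediction_log.csv", parse_dates=["discharge_date"])

rows = []
for month, batch in log.groupby(log["discharge_date"].dt.to_period("M")):
    if batch["readmitted_30d"].nunique() < 2:
        continue  # skip months with too few outcomes to evaluate
    rows.append({
        "month": str(month),
        "n": len(batch),
        "auc": roc_auc_score(batch["readmitted_30d"], batch["risk_score"]),
        "mean_predicted_risk": batch["risk_score"].mean(),
        "observed_rate": batch["readmitted_30d"].mean(),
    })

print(pd.DataFrame(rows))
# A sustained fall in AUC, or a widening gap between mean predicted risk and the
# observed readmission rate, suggests the model needs recalibration or retraining.
```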