AI in Healthcare: We’ve Been Here Before
- Chris Grasso
- Oct 9
- 4 min read
In my years as a healthcare CIO, I have seen new technologies arrive with a mix of anticipation and skepticism. One of the clearest examples is when cloud computing began to enter the healthcare space.
The concerns were real:
Would patient data be secure?
Who controls the data once it leaves our servers?
Could we meet HIPAA compliance requirements?
What if the internet went down?
At the time, most of our core systems lived in server rooms we could physically touch, maintain, and lock behind doors. Moving to the cloud meant trusting vendors, encrypting data in transit, and creating entirely new safeguards. It was not just a technical shift; it was a cultural one.

Even my mom got in on the conversation. She asked me, “What is this cloud thing, and where exactly is it?” I told her, “It’s not in the sky, Mom, but it is where a lot of our data is going to live from now on.” She still looked suspicious.
Over time, cloud technology has become the backbone of healthcare operations. Today, encountering an organization that has not moved to the cloud raises concerns about security vulnerabilities and about whether it can keep pace with industry standards. Cloud computing gave us:
Scalable computing power without constant hardware upgrades or capital expenses
Lower hardware and maintenance costs, freeing resources for patient care
Faster innovation cycles from our vendors
Greater system reliability, resilience, and improved disaster recovery
This journey from skepticism to reliance is very similar to the moment we are in now with Artificial Intelligence (AI) in healthcare.
The AI Moment Feels Familiar
Much like the early days of cloud adoption, AI brings both excitement and caution. We are hearing the same kinds of questions:
Can we trust the outputs?
Who controls the data?
Will AI introduce new security or compliance risks?
How do we ensure it is used ethically?
Will AI replace humans, or will it support them?
These questions are valid and deserve careful answers. If the cloud taught us anything, it is that the organizations willing to adopt responsibly and create strong governance early are often the ones who gain the most value. The truth is that AI should never replace the human expertise at the heart of healthcare. AI’s real value is in augmenting clinicians and staff, giving them more time for patient care, reducing the burden of repetitive tasks, and helping us better serve patients.
Lessons from the Cloud Era That Apply to AI
Start with Low-Risk, High-Value Use Cases
Cloud adoption began with email, backups, and collaboration tools, not EHR migrations. AI should follow the same path. Start with opportunities like automating meeting summaries or enhancing patient communication before moving into more complex areas like clinical decision-making.
Invest in Governance Early
Just as we developed security protocols, vendor due diligence, and access controls for the cloud, we now need strong policies for AI. That means governance frameworks, guardrails, clear accountability, privacy protection, bias monitoring, and human oversight.
Bring People Along the Journey
Technology adoption is as much about people as it is about systems. Staff training, transparency, and clear communication are critical. We need to teach staff how to use AI responsibly in healthcare settings, reinforce accuracy, and frame AI as a tool that supports, not replaces, them.
Measure, Iterate, and Scale
We did not move everything to the cloud overnight. AI should follow a similar phased approach: identify use cases, pilot them, measure impact, refine workflows, and then expand adoption once benefits are clear.
Additional Lessons for AI That Go Beyond the Cloud
Patient Impact and Equity
AI must serve patients, not just operations. It holds the potential to improve access, support population health, and personalize care. But if not carefully designed, it could unintentionally widen disparities. Equity and inclusion must be front and center.
Vendor Management and Interoperability
Cloud taught us the importance of vendor due diligence and SLAs. With AI, integration with core systems like EHRs is just as critical. Leaders need to ask vendors how models are trained, how algorithms are validated, where data is stored, and whether data can move seamlessly across platforms.
Regulatory and Legal Readiness
Unlike the cloud era, the regulatory environment for AI is still evolving. Federal and state guidance is emerging, and tools that cross into clinical decision support may face FDA oversight. CIOs must stay proactive to ensure compliance and anticipate the next wave of regulations.
Workforce Evolution and Change Management
AI will reshape roles, requiring reskilling and upskilling. Rather than fearing replacement, staff should see AI as a partner that reduces burnout and gives back time for patient care. Leaders must invest in training and highlight success stories to build trust.
Financial Sustainability and ROI
Cloud shifted IT from capital to operating expenses. AI will create similar shifts, requiring new budgeting models. CIOs will need to balance upfront investments in governance, training, and tools against measurable returns like cost savings, efficiency, and improved outcomes.
Ethics and Trust Beyond Compliance
Compliance is the floor, not the ceiling. Patients and staff deserve transparency about how AI is used, especially in decision-making. AI must be explainable and trustworthy, not a black box. A culture of ethics will be as important as technical safeguards.
Why This Time Is Different, But the Playbook Still Works
AI is advancing much faster than cloud computing did. Tools are improving almost weekly, and vendors are embedding AI into nearly every product. This makes governance, risk assessment, and strategic prioritization even more urgent. Just like with the cloud, initial discomfort will give way to capabilities we will eventually consider essential. We will see faster insights, more efficient workflows, cost savings, and ultimately, better patient care.
Final Thought
As healthcare leaders, we have two choices:
Wait until the technology feels completely safe and proven, which carries the risk of falling behind.
Or take deliberate, measured steps into AI, applying the lessons we learned from previous technology transformations.
We have navigated this kind of change before. We can do it again.