Risks of Remote Patient Monitoring with AI
Remote patient monitoring (RPM) is having a moment. As healthcare pivots toward digital-first, always-on care, RPM devices and platforms are proliferating—promising better outcomes, lower costs, and more convenience for patients and providers alike. But for all the hype, there’s a dark underbelly. The risks of RPM are real, and as of May 2025, they’re more pressing than ever. Let’s unpack what’s happening at the intersection of remote patient monitoring, artificial intelligence, and healthcare regulation—especially in the wake of high-profile stumbles from companies like MAHA, ambitious moves by OpenAI, and a surge of AI startups emerging from Mayo Clinic’s innovation pipeline.
The Promise and Peril of Remote Patient Monitoring
RPM, at its core, is about continuous health monitoring outside traditional clinical settings. Devices like glucose monitors, blood pressure cuffs, and wearable ECG patches collect data in real time, feeding it securely—or so we hope—to healthcare providers[4]. The clinical benefits are clear: earlier intervention, reduced hospitalizations, and more personalized care. But with great data comes great responsibility.
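To make that data flow concrete, here is a minimal sketch of what a single device-to-provider transmission might look like. The endpoint, device ID, and payload schema are all hypothetical; real RPM platforms define their own APIs, authentication, and security requirements.

```python
from datetime import datetime, timezone

import requests  # pip install requests

# Hypothetical ingest endpoint and device ID; not any real platform's API.
INGEST_URL = "https://rpm.example-provider.com/api/v1/readings"
DEVICE_ID = "bp-cuff-0042"

def send_reading(systolic: int, diastolic: int, api_token: str) -> None:
    """Post one blood-pressure reading to the provider over TLS."""
    payload = {
        "device_id": DEVICE_ID,
        "type": "blood_pressure",
        "systolic_mmhg": systolic,
        "diastolic_mmhg": diastolic,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    resp = requests.post(
        INGEST_URL,
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,  # fail fast instead of hanging on a flaky connection
    )
    resp.raise_for_status()  # surface transport or server errors immediately

if __name__ == "__main__":
    send_reading(systolic=128, diastolic=82, api_token="REDACTED")
```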
Recent years have seen RPM adoption skyrocket, driven by pandemic-era urgency and a growing acceptance of telehealth. But as adoption grows, so do the risks—and regulators are paying close attention. The Office of Inspector General (OIG) is auditing RPM programs throughout 2025, scrutinizing everything from billing practices to data security[1]. The stakes are high, and the margin for error is slim.
The Regulatory Crackdown: OIG Audits and Enforcement
If you’re a healthcare provider or vendor in the RPM space, 2025 is the year to tighten up your compliance game. The OIG is looking for red flags: unnecessary device orders, billing for non-existent services, and lax data protection. Operation Happy Clickers, a recent DOJ initiative, exposed mass fraud schemes involving telehealth and durable medical equipment (DME)—and the fallout is still unfolding[3]. The message is clear: if you’re not documenting every interaction and vetting every claim, you’re risking enforcement action.
State licensure laws add another layer of complexity. Telehealth and RPM providers often operate across state lines, but each state has its own rules about patient consent, prescribing, and physician licensing. Navigating this patchwork is a headache for even the most seasoned compliance officers[3].
Cybersecurity: The Elephant in the (Virtual) Room
Let’s face it: patient data is a prime target for hackers. And as RPM devices become more connected, the attack surface grows. Recent headlines have highlighted a particularly thorny issue: many RPM devices route sensitive patient data through servers in countries like China before it reaches U.S. healthcare systems[5]. This isn’t just a privacy concern—it’s a national security risk.
In response, the Department of Justice issued a new rule in April 2025, restricting data transfers to countries of concern, including China. The deadline for compliance is July 8, 2025. Vendors like Smart Meter are now performing forensic analyses of new products to ensure data isn’t exposed to adversarial nations[5]. “Any company not performing this level of analysis is putting patient data at risk,” warns Derek Trauger, CTO of Smart Meter[5].
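A forensic analysis of the kind Smart Meter describes goes far beyond what any short script can do, but a first pass often starts with something simple: pulling the reporting endpoints out of a device's configuration or a packet capture and resolving them, so a reviewer can see where the data is actually headed. The sketch below uses placeholder hostnames and leaves the geolocation and ASN lookup as a manual follow-up step.

```python
import socket

# Hypothetical endpoints pulled from a device's configuration or a packet
# capture; these hostnames are illustrative, not real vendors.
REPORTING_HOSTS = [
    "telemetry.device-vendor.example",
    "sync.cloud-backend.example.cn",
]

def resolve_endpoints(hosts):
    """Resolve each reporting hostname to its IP addresses for review."""
    results = {}
    for host in hosts:
        try:
            infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
            results[host] = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            results[host] = ["unresolvable"]
    return results

if __name__ == "__main__":
    for host, addrs in resolve_endpoints(REPORTING_HOSTS).items():
        # The IPs still need geolocation / ASN lookup against a registry
        # database to establish which jurisdiction they actually sit in.
        print(f"{host}: {', '.join(addrs)}")
```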
Healthcare organizations are also under pressure to strengthen cybersecurity protocols. The FTC and HHS are tightening regulations, requiring multi-layered safeguards for patient data. With AI-driven analytics playing a bigger role in RPM, the need for robust encryption, access controls, and continuous monitoring has never been greater[3].
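Encryption at rest is one layer of that multi-layered approach. Here is a minimal sketch, assuming the Python `cryptography` package and an invented reading format rather than any specific regulatory standard:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In production the key would live in a managed KMS or HSM, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

# A single RPM reading, serialized before storage.
reading = b'{"patient_id": "p-123", "heart_rate_bpm": 74}'

token = cipher.encrypt(reading)    # authenticated symmetric encryption
restored = cipher.decrypt(token)   # raises InvalidToken if tampered with
assert restored == reading
```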
AI in RPM: Opportunities and Pitfalls
Artificial intelligence is turbocharging RPM. AI algorithms can spot trends, predict deterioration, and even trigger automated interventions. But AI also introduces new risks—algorithmic bias, opaque decision-making, and over-reliance on automated systems.
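To see what "spotting trends" means at its most basic, here is a toy sketch that flags readings whose rolling z-score jumps past a threshold. The numbers are invented, and real deterioration models are multivariate, clinically validated, and far more sophisticated; this only illustrates the shape of the idea.

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) pairs whose z-score against a trailing window
    exceeds the threshold -- a stand-in for far richer clinical models."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Simulated heart-rate stream (bpm) with one abrupt spike.
hr_stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 118, 72, 71]
for idx, bpm in flag_anomalies(hr_stream, window=8, threshold=2.5):
    print(f"reading {idx}: {bpm} bpm flagged for clinician review")
```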
OpenAI is making waves with its push into healthcare, partnering with providers to integrate large language models (LLMs) into RPM workflows. The goal? Smarter triage, better documentation, and more efficient care coordination. But as anyone who’s followed AI for years knows, the road to seamless integration is littered with potholes. Model hallucinations, data drift, and explainability challenges are just a few of the hurdles.
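One pattern teams use to blunt the hallucination risk is to force LLM output into a narrow schema and validate it before it ever reaches a clinician. The sketch below assumes the OpenAI Python SDK, an illustrative model name, and invented prompts and fields; it is not a published OpenAI healthcare integration, just one way to keep malformed output out of the workflow.

```python
import json

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the env

ALLOWED_URGENCY = {"routine", "soon", "urgent"}

def triage_note(vitals_summary: str) -> dict:
    """Ask an LLM for a structured triage suggestion, then validate it
    before it touches the clinical workflow. Anything malformed is routed
    to a human instead of being trusted."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Return JSON with keys 'urgency' (routine|soon|urgent) "
                        "and 'rationale' (one sentence). No other text."},
            {"role": "user", "content": vitals_summary},
        ],
    )
    raw = resp.choices[0].message.content
    try:
        suggestion = json.loads(raw)
        if suggestion.get("urgency") not in ALLOWED_URGENCY:
            raise ValueError("urgency outside allowed set")
    except (json.JSONDecodeError, ValueError):
        # Hallucinated or malformed output: escalate rather than guess.
        return {"urgency": "needs_human_review", "rationale": raw}
    return suggestion
```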
Mayo Clinic, meanwhile, is incubating a new generation of AI startups focused on RPM. These ventures are exploring everything from AI-powered wearables to predictive analytics for chronic disease management. The innovation is exciting, but the regulatory and ethical landscape is still catching up.
MAHA’s AI Stumble: A Cautionary Tale
Not every AI story in RPM has a happy ending. MAHA, a once-promising RPM vendor