The Evolving Regulatory Landscape of Artificial Intelligence in Healthcare: Balancing Innovation, Regulation, and the Human Factor
Commentary

J Acad Res Med 2025;15(3):107-108
1. Galatasaray University Faculty of Law, Department of IT Law, İstanbul, Türkiye; Affiliated Fellow, Information Society Project, Yale Law School
Received Date: 10.11.2025
Accepted Date: 27.11.2025
Online Date: 02.12.2025
Publish Date: 02.12.2025

Artificial intelligence (AI) is reshaping healthcare, from diagnostic imaging to predictive analytics and personalised medicine (1). Its potential to improve outcomes, reduce administrative burden, and transform clinical practice is immense. Yet, as algorithms begin to influence life-and-death decisions, society faces a pressing question: how can we regulate AI responsibly without stifling innovation or eroding trust? The answer lies not in choosing between progress and protection, but in creating laws that are both technically informed and firmly grounded in humanity. Regulation is essential, but it must recognise technical constraints, support innovation where appropriate, and, above all, protect patients.

Unlike AI used in purely commercial or logistical settings, healthcare AI operates in an environment where errors can be devastating. Many advanced models function as so-called “black boxes”, offering results without a clear explanation of how they were reached. When a system misdiagnoses a tumour or fails to flag an allergy, the legal question of who is responsible (the developer, the clinician, or the institution) becomes murky. This lack of transparency complicates accountability and poses a real challenge for both law and ethics. Such incidents are not theoretical abstractions. Real-world examples have already revealed racial and socio-economic bias in commercial algorithms that allocate healthcare resources. In some cases, systems offered less care to equally sick patients from under-represented groups, exposing how bias in training data can amplify inequality (2). AI may also produce false diagnoses or erroneous treatment decisions. In a recent example from the United Kingdom, AI-driven administrative operations led to a patient being invited to diabetic eye screening despite never having been diagnosed with diabetes (3). For the individual patient misdiagnosed or misinformed, even the psychological distress caused by a false or delayed diagnosis is a tangible form of harm.

Existing legal frameworks were not designed with such complexity in mind. Traditional doctrines of negligence and product liability assume human agency and direct causation, yet AI blurs both. A radiologist who misses a lesion might face malpractice proceedings, but what if the fault lies in the algorithm’s flawed training data or insufficient validation? Existing regimes rarely address this nuance. The challenge is not merely to patch old laws but to reinterpret them and, when they fall short, rethink accountability for a world in which human and machine decisions are intertwined. The solution cannot be a rigid regulatory code but rather a flexible, adaptive framework capable of evolving with technological change.

Recent regulatory efforts, such as the European Union’s Artificial Intelligence Act (4), signal movement in this direction, despite critics arguing otherwise. The Act’s risk-based approach, which categorises AI systems by their potential impact on safety and fundamental rights, classifies certain medical AI use cases as “high risk” and demands stricter oversight, including risk management, human supervision, and post-market monitoring. Yet this is only one facet. The AI Act, for instance, is rooted largely in product safety law, focusing on the AI system as a marketable item rather than on how it is used over time in clinical contexts (5). Real-world deployment often diverges from testing conditions; algorithms may drift, and datasets may age. Regulation must therefore extend beyond static oversight, ensuring that safety and fairness persist long after an AI tool leaves the laboratory.

Overregulation, however, carries its own risks. Excessive procedural hurdles or inflexible rules can discourage innovation, preventing potentially life-saving tools from reaching patients. Healthcare AI develops rapidly, and laws that take years to adapt may already be obsolete by the time they are implemented. The challenge, then, is to create governance mechanisms that are responsive but not reckless, frameworks that encourage experimentation under supervision, with strong auditing and feedback loops rather than static compliance checklists. Regulatory sandboxes exemplify this approach by allowing iterative learning between developers and regulators, helping both sides refine their understanding of safe innovation. Indeed, the AI Act contains specific provisions on regulatory sandboxes, while the United Kingdom’s “AI Airlock” sandbox allows innovators to test medical AI tools in controlled conditions, balancing safety with experimentation (6).

Data governance remains another area of tension. Health data is the raw material of healthcare AI. Regulations such as the General Data Protection Regulation (7) and the Health Insurance Portability and Accountability Act (8) impose strict rules on collection, storage, and processing, rightly protecting individuals’ privacy and autonomy. Yet AI systems thrive on large and diverse datasets, and overly restrictive access rules can hamper model accuracy, limit representativeness, and inadvertently perpetuate bias. The challenge is not to weaken privacy protections but to enable ethical innovation within them. Privacy-preserving technologies, such as federated learning and differential privacy, can allow insights to be drawn from distributed data without compromising individual identities. Regulatory models should encourage such approaches, demonstrating that privacy and progress need not be opposing forces (9).
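
To make the idea concrete, the sketch below illustrates differential privacy, one of the privacy-preserving techniques mentioned above. It is a minimal, hypothetical Python example, not drawn from any system discussed in this commentary: calibrated Laplace noise is added to an aggregate count so that the released statistic reveals little about any single patient’s record. The dataset, the threshold, and the privacy budget (epsilon) are all illustrative assumptions.

```python
import numpy as np

def dp_count(values, threshold, epsilon, sensitivity=1.0):
    """Release a differentially private count of values above a threshold.

    Laplace noise with scale sensitivity/epsilon makes the released count
    nearly indistinguishable whether or not any one patient's record is
    present (epsilon-differential privacy for a counting query).
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical cohort of HbA1c readings; values and epsilon are illustrative.
readings = [5.2, 6.8, 7.4, 5.9, 8.1, 6.3, 7.9]
print(f"Noisy count of elevated readings: {dp_count(readings, 6.5, 0.5):.1f}")
```

A smaller epsilon gives stronger privacy but noisier statistics; setting it is as much a policy decision as a technical one, which is precisely where regulation and engineering must meet.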

Even with sophisticated legal instruments, the success of AI regulation depends on something less technical but more fundamental: understanding. Regulations only work if the people they are designed to protect can comprehend them. For most patients, and indeed for many clinicians, the mechanisms of AI and the meaning of data disclosures are opaque. Patients are often unaware that an algorithm contributed to their diagnosis or treatment, and even when they are informed, the language used is frequently impenetrable. If individuals cannot understand their rights, the limitations of AI tools, or how to challenge an adverse outcome, then the protections offered by law remain largely theoretical. True transparency is not achieved by publishing complex technical documents but by ensuring that information is conveyed in accessible, human terms. Clinicians need clear guidance on how to explain AI’s role in care, and patients deserve communication that respects both their intelligence and their anxiety. Studies also suggest that communicating AI error rates to patients may reduce clinicians’ perceived legal liability (10).

Ultimately, regulation is not only a technical exercise. AI in healthcare must be governed by principles that place human welfare, dignity, and comprehension at the centre. Innovation is vital; it drives progress and can save lives. In the health domain, however, it should not take precedence over safety or psychological well-being. Every regulation, every ethical guideline, must begin with the patient, not the algorithm. The measure of effective governance is not how many rules are written but how well those rules are understood and trusted by those they aim to protect.

Keywords: Artificial intelligence, privacy, legislation
Financial Disclosure: The author declared that this study has received no financial support.

References

1. Maleki Varnosfaderani S, Forouzanfar M. The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioengineering (Basel). 2024; 11: 337.
2. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019; 366: 447-53.
3. Great Ormond Street Hospital for Children (GOSH). GOSH-led trial of AI-scribe technology shows ‘transformative’ benefits for patients and clinicians across London [Internet]. London: GOSH; 2025 Sep 4 [cited 2025 Nov 1]. Available from: https://www.gosh.nhs.uk/news/researchgosh-led-trial-of-ai-scribe-technology-shows-transformative-benefits-for-patients-and-clinicians-across-london/
4. European Parliament, Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts. Official Journal of the European Union. 2024.
5. Güçlütürk OG. Kodlar ve kanunlar: yapay zekânın regülasyon rotası ve AB yapay zekâ tüzüğü [Codes and laws: the regulatory path of artificial intelligence and the EU Artificial Intelligence Act]. In: Yapay zekâ teknolojilerine akademik bakış. Ankara: Adalet Yayınevi; 2025. p. 109-78.
6. Medicines and Healthcare products Regulatory Agency. AI Airlock: the regulatory sandbox for AIaMD [Internet]. GOV.UK; 2024 [cited 2025 Nov 1]. Available from: https://www.gov.uk/government/collections/ai-airlock-the-regulatory-sandbox-for-aiamd
7. European Parliament, Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. Official Journal of the European Union. 2016; L 119: 1-88.
8. U.S. Department of Health & Human Services. Health Insurance Portability and Accountability Act of 1996: combined regulation text of all rules (45 CFR Parts 160, 162, 164) [Internet]. 2025 [cited 2025 Nov 1]. Available from: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/combined-regulation-text/index.html
9. OECD. Sharing trustworthy AI models with privacy-enhancing technologies. OECD Artificial Intelligence Papers, No. 38. Paris: OECD Publishing; 2025.
10. Bernstein MH, Sheppard B, Bruno MA, Lay PS, Baird GL. Randomized study of the impact of AI on perceived legal liability for radiologists. NEJM AI. 2025; 2: AIoa2400785.