The Use of Artificial Intelligence by the Administration in Health Services and the Resulting Liability in Turkish Law
Commentary
VOLUME: 15 ISSUE: 3
P: 109 - 111
December 2025


J Acad Res Med 2025;15(3):109-111
1. Galatasaray University Faculty of Law Department of Administrative Law, İstanbul, Türkiye
Received Date: 05.11.2025
Accepted Date: 27.11.2025
Online Date: 02.12.2025
Publish Date: 02.12.2025

INTRODUCTION

In recent years, artificial intelligence (AI) has initiated a transformation in the delivery of health services. Utilizing AI alongside traditional methods is a requirement of the principle of adapting public services to modern needs (1). Applications of AI now range from imaging analysis and early, accurate diagnosis to personalized treatment, drug development, home healthcare, and hospital management. AI has influenced the organization and operation of healthcare institutions and helped ensure the effective use of limited resources (2).

Health services are explicitly addressed in Article 56 of the Turkish Constitution, which obliges the State to ensure that everyone lives in physical and mental health and to establish and operate the necessary organizational structure. Thus, the integration and regulation of AI in health services also fall under the responsibility of the State.

The purpose of this article is to discuss the legal basis, limits, and potential liabilities of the administration’s use of AI in the delivery of healthcare services under Turkish law. The use of AI in health services may conflict with the traditional concept of public service and raises new questions of liability.

I. The Use of AI in Health Services

AI is defined as “software that can generate outputs such as content, predictions, recommendations, or decisions for human-defined objectives, influencing the environment with which it interacts” (3). According to the Council of Europe’s 2018 declaration, AI systems “demonstrate intelligent behavior by analyzing their environment and taking action to achieve specific goals, functioning with a certain degree of autonomy” (4). Thus, AI—like a human—has the ability to make decisions independently based on available data (5).

AI offers opportunities that go beyond assisting physicians in organizing, diagnosing, or treating patients; it allows for direct diagnosis or treatment without human medical personnel.

A. The Legal Framework of AI Use in Health Services

Turkish legislation does not currently contain explicit provisions on whether, or to what extent, AI may be used in healthcare. According to Article 1 of Law No. 1219 on the Practice of Medicine and Medical Sciences, only graduates of medical faculties are authorized to diagnose and treat patients. Therefore, delegating such functions entirely to AI is not legally permissible (6). AI may serve only as a decision-support tool that assists or complements the physician’s judgment (7).

Constitutionally, Article 128 provides that essential and permanent duties of the State must be performed by public officials. Therefore, in health services, AI can be used only when the final authority remains with the physician (8). For non-clinical tasks—such as data management, administration, or organization—there is no restriction on the use of AI.

However, the use of AI must also comply with the principle of legality: every administrative act and public service must have a legal basis (9). Thus, AI integration into public healthcare must rest on a statutory foundation.

B. The Classification of AI Use in Health Services

A key legal issue concerns whether AI used to assist in diagnosis or treatment can be classified as a medical device. A “medical device” refers to any instrument, apparatus, software, or accessory used for medical purposes whose principal intended effect is not achieved through pharmacological, immunological, or metabolic means (10).

To qualify as a medical device, a product must aim at diagnosis, prevention, monitoring, prediction, prognosis, treatment, control, or alleviation of disease or injury, or the modification of an anatomical structure or physiological process (11).

Although software is included in the definition, Turkish law lacks explicit provisions regarding software as a medical device. However, Turkish regulations are largely harmonized with the European Union (EU) Medical Device Regulation (2017/745) (12), and thus the EU Medical Software Guide serves as a reference.

According to the EU Guide, software qualifies as a medical device if it performs data processing (not merely storage or transmission), provides outputs, and contributes directly to improving health.

Therefore, AI software that operates autonomously and assists in diagnosis or treatment based on data analysis qualifies as a medical device (6). Conversely, AI used only for recordkeeping or data transfer does not.

If AI is classified as a medical device, it falls under the Medical Device Regulation, subject to limitations and obligations on market placement, operation, and manufacturer or importer liability.

II. Administrative Liability for AI Use in Health Services

Due to its complex algorithmic nature, AI can yield unpredictable results, which may not be fully explained by traditional administrative law principles. For instance, if an AI-based diagnostic tool produces incorrect results causing harm to a patient, it is unclear whether liability rests with the administration, the physician, or the software developer.

Under Articles 40, 125, and 129 of the Constitution, administrative liability arises in two forms: fault-based (service fault) and strict (no-fault) liability.

A. Fault-Based (Service Fault) Liability

A service fault occurs when there is a defect, irregularity, or failure in the establishment, organization, or functioning of a public service. In healthcare, such faults can arise in several AI-related scenarios (13).

If a physician uses an unapproved AI tool—one that lacks Ministry of Health authorization or medical device licensing—any resulting harm constitutes a breach of medical standards, and the administration is liable for the physician’s conduct.

If the AI system is medically approved but harm results from the physician’s misinterpretation of AI-generated data (14), the administration compensates the damage but may seek recourse against the negligent physician (15).

Failure to review or verify AI outputs before acting on them also constitutes a service fault. For example, if a doctor applies AI-generated radiological findings without validation, administrative liability arises.

Because AI systems must be regularly updated with new data to maintain accuracy, the failure of the administration to ensure timely updates constitutes another form of service fault (1).

B. Strict (No-Fault) Liability

Strict liability is based on the principle that damages arising from risky public activities should be borne by society as a whole. It relies on two doctrines: risk liability and equitable distribution of sacrifice.

Under the risk principle, the administration must compensate for damage caused by dangerous tools or activities—even without fault—if such risks are inherent in public service (16).

AI use in healthcare can be considered a hazardous and unpredictable activity (17). AI systems may produce erroneous outcomes if not properly updated, lack sensitivity to complex data, or fail to respond to unexpected events as quickly as humans (18). Therefore, the mere use of AI entails risk (19).

If an AI algorithm, during its self-learning process, produces unforeseen outcomes, or malfunctions due to external data errors, the administration bears strict liability for resulting harm.

When AI systems are developed by private entities, the contractor or software provider may also be liable (20). The administration may compensate the victim and subsequently seek recourse from the developer (21).

CONCLUSION

The use of AI in health services reshapes both the delivery of public healthcare and the scope of administrative liability in Turkish law. Although current legislation lacks explicit regulation, AI can only be used under physician supervision as a decision-support tool; otherwise, it would breach constitutional and legal limits.

Administrative liability should be assessed within the dual framework of fault-based and strict liability, ensuring that AI-based healthcare applications remain safe, transparent, and auditable, as required by the principle of legal security.

Keywords:
Artificial intelligence, healthcare, administrative responsibility
Financial Disclosure: The author declared that this study has received no financial support.

References

1. Yayla A. Artificial intelligence in terms of administrative law: administrative activities, administrative acts with artificial intelligence, and liability [İdare hukuku bakımından yapay zeka: idarenin faaliyetleri, yapay zekalı idari işlemler ve sorumluluk]. Ankara: Seçkin Yayıncılık; 2023.
2. Çeçen Çamlı D. The use of artificial intelligence technologies in surgical nursing: ethical dilemma [Cerrahi hemşireliğinde yapay zekâ teknolojilerinin kullanımı: etik ikilem]. Euroasia Journal of Mathematics, Engineering, Natural & Medical Sciences. 2024; 11: 26-34.
3. European Parliament and Council. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Available from: URL: https://www.eur-lex.europa.eu
4. European Commission. Artificial intelligence for Europe. 2021. Available from: URL: https://eur-lex.europa.eu/legal-content
5. Sarı O. Liability arising from damages caused by artificial intelligence [Yapay zekânın sebep olduğu zararlardan doğan sorumluluk]. TBB Dergisi. 2020; 147: 251-312.
6. Diri F. Assessment of artificial intelligence technology and its implications in the context of Turkish health law [Yapay zeka teknolojisi ve beraberinde getirdiklerinin sağlık hukuku kapsamında değerlendirilmesi]. Bilişim Hukuku Dergisi. 2024; 6: 270-321.
7. Benzer E. The appearance of artificial intelligence in the EU legal order: the EU Artificial Intelligence Act. Journal of Information Law. 2025; 1: 187-221.
8. Constitutional Court. Decision of 22 November 2007; No. 2007/85.
9. Baydemir E. The requirement for a regulatory and supervisory institution in the Turkish administrative organization in the field of artificial intelligence [Türk idari teşkilatında yapay zeka alanında düzenleyici ve denetleyici kurum ihtiyacı]. Kırıkkale Hukuk Mecmuâsı. 2024; 4: 869-900.
10. Medical Device Regulation. Art. 3.
11. Akgün Toker A. Artificial intelligence systems used for diagnosis and treatment in the context of medical device law [Tıbbi cihaz hukuku bağlamında teşhis ve tedavide kullanılan yapay zekâ sistemleri]. Dokuz Eylül Üniversitesi Hukuk Fakültesi Dergisi. 2025; 27: 769-824.
12. EU Regulation No. 2017/745.
13. Duran L. The liability of the Turkish administration [Türkiye idaresinin sorumluluğu]. Ankara: Türkiye ve Ortadoğu Amme İdaresi Enstitüsü; 1974. p. 26.
14. Yüzbaşıoğlu C. Liability arising from health services in terms of the administration and its personnel [İdare ve personel yönüyle sağlık hizmetlerinden kaynaklanan sorumluluk]. İstanbul: On İki Levha Yayıncılık; 2020. p. 253.
15. Yılmaz DH. Fault-based liability of the administration and recourse [İdarenin kusurlu sorumluluğu ve rücu]. İstanbul: Platon Hukuk Yayınevi; 2023. p. 169.
16. Çağlayan R. Strict liability of the administration: historical, theoretical, and practical aspects [İdarenin kusursuz sorumluluğu: tarihsel, teorik ve pratik yönleriyle]. Ankara: Asil Yayın Dağıtım; 2007.
17. Kadıoğlu M, Güçlütürk OG. Artificial intelligence and regulation [Yapay zeka ve regülasyon]. In: Aksoy Retornaz EE, Güçlütürk OG, editors. Emerging technologies and law II: artificial intelligence [Gelişen teknolojiler ve hukuk II: yapay zeka]. İstanbul: On İki Levha Yayıncılık; 2021. p. 75-118.
18. Pirim H. Artificial intelligence. Journal of Yaşar University. 2006; 1: 81-93.
19. Akrurakcı NF. The effect of artificial intelligence on administrative discretion and decision-making [Yapay zekânın idarenin takdir yetkisi ve karar alma mekanizmalarına etkisi]. Journal of Administrative Law and Sciences. 2022; 20: 77-97.
20. Par S. Use of artificial intelligence in the health field and its legal status [Yapay zekânın sağlık alanında kullanımı ve hukuki statüsü]. International Anatolia Academic Online Journal. 2024; 10: 179-96.
21. Sarı O. Liability for damages caused by artificial intelligence [Yapay zekânın sebep olduğu zararlardan doğan sorumluluk]. Journal of the Union of Turkish Bar Associations. 2020; 147: 251-312.