INTRODUCTION
In recent years, artificial intelligence (AI) has begun to transform the delivery of health services. Using AI alongside traditional methods is required by the principle that public services must be adapted to modern needs (1). AI applications now range from imaging analysis to early and accurate diagnosis, from personalized treatment to drug development, and from home healthcare to hospital management. AI has influenced the organization and operation of healthcare institutions and has promoted the effective use of limited resources (2).
Health services are expressly addressed in Article 56 of the Turkish Constitution, which requires the State to ensure that everyone lives in physical and mental health and to establish and operate the necessary organizational structures. Consequently, the integration and regulation of AI in health services also fall within the State's responsibility.
The purpose of this article is to discuss the legal basis, limits, and potential liabilities of the administration’s use of AI in the delivery of healthcare services under Turkish law. The use of AI in health services may conflict with the traditional concept of public service and raises new questions of liability.
I. The Use of AI in Health Services
AI is defined as “software that can generate outputs such as content, predictions, recommendations, or decisions for human-defined objectives, influencing the environment with which it interacts” (3). According to the Council of Europe’s 2018 declaration, AI systems “demonstrate intelligent behavior by analyzing their environment and taking action to achieve specific goals, functioning with a certain degree of autonomy” (4). Thus, AI—like a human—has the ability to make decisions independently based on available data (5).
AI offers opportunities that go beyond assisting physicians with organization, diagnosis, or treatment; it also makes direct diagnosis or treatment possible without human medical personnel.
A. The Legal Framework of AI Use in Health Services
Turkish legislation does not currently contain explicit provisions on whether, or to what extent, AI may be used in healthcare. Under Article 1 of Law No. 1219 on the Practice of Medicine and Medical Sciences, only graduates of medical faculties are authorized to diagnose and treat patients. Delegating these functions entirely to AI is therefore not legally permissible (6). AI may serve only as a decision-support tool that assists or complements the physician's judgment (7).
Constitutionally, Article 128 provides that the essential and permanent duties of the State must be performed by public officials. Since diagnosis and treatment in public healthcare are among these duties, AI may be used only where final decision-making authority remains with the physician (8). For non-clinical tasks, such as data management, administration, or organization, there is no comparable restriction on the use of AI.
However, the use of AI must also comply with the principle of legality: every administrative act and public service must have a legal basis (9). Thus, AI integration into public healthcare must rest on a statutory foundation.
B. The Classification of AI Use in Health Services
A key legal issue concerns whether AI used to assist in diagnosis or treatment can be classified as a medical device. A “medical device” refers to any instrument, apparatus, software, or accessory used for medical purposes whose principal intended effect is not achieved through pharmacological, immunological, or metabolic means (10).
To qualify as a medical device, a product must aim at diagnosis, prevention, monitoring, prediction, prognosis, treatment, control, or alleviation of disease or injury, or the modification of an anatomical structure or physiological process (11).
Although software is included in this definition, Turkish law contains no explicit provisions on software as a medical device. Because Turkish regulations are largely harmonized with the European Union (EU) Medical Device Regulation (2017/745) (12), the EU Medical Software Guide serves as a point of reference.
According to the EU Guide, software qualifies as a medical device if it performs data processing (not merely storage or transmission), provides outputs, and contributes directly to improving health.
Therefore, AI software that operates autonomously and assists in diagnosis or treatment based on data analysis qualifies as a medical device (6). Conversely, AI used only for recordkeeping or data transfer does not.
If AI is classified as a medical device, it falls within the scope of the Medical Device Regulation and becomes subject to its limitations and obligations concerning placement on the market, operation, and the liability of manufacturers and importers.
II. Administrative Liability for AI Use in Health Services
Owing to its complex algorithmic nature, AI can yield unpredictable results, and the liability questions these results raise may not be fully answered by traditional administrative law principles. For instance, if an AI-based diagnostic tool produces incorrect results that harm a patient, it is unclear whether liability rests with the administration, the physician, or the software developer.
Under Articles 40, 125, and 129 of the Constitution, administrative liability arises in two forms: fault-based (service fault) and strict (no-fault) liability.
A. Fault-Based (Service Fault) Liability
A service fault occurs when there is a defect, irregularity, or failure in the establishment, organization, or functioning of a public service. In healthcare, such faults can arise in several AI-related scenarios (13).
If a physician uses an unapproved AI tool, that is, one lacking Ministry of Health authorization or a medical device license, the use itself breaches medical standards, and the administration is liable for any harm resulting from the physician's conduct.
If the AI system is medically approved but harm results from the physician’s misinterpretation of AI-generated data (14), the administration compensates the damage but may seek recourse against the negligent physician (15).
Failure to review or verify AI outputs before acting on them also constitutes a service fault. For example, if a physician acts on AI-generated radiological findings without validating them, administrative liability arises.
Because AI systems must be updated regularly with new data to maintain their accuracy, the administration's failure to ensure timely updates constitutes another form of service fault (1).
B. Strict (No-Fault) Liability
Strict liability rests on the principle that damage arising from risky public activities should be borne by society as a whole. It is grounded in two doctrines: the risk principle and the equitable distribution of sacrifice.
Under the risk principle, the administration must compensate for damage caused by dangerous tools or activities—even without fault—if such risks are inherent in public service (16).
AI use in healthcare can be considered a hazardous and unpredictable activity (17). AI systems may produce erroneous outcomes if not properly updated, lack sensitivity to complex data, or fail to respond to unexpected events as quickly as humans (18). Therefore, the mere use of AI entails risk (19).
If an AI algorithm produces unforeseen outcomes during its self-learning process, or malfunctions because of external data errors, the administration bears strict liability for the resulting harm.
When AI systems are developed by private entities, the contractor or software provider may also be liable (20). The administration may compensate the victim and subsequently seek recourse from the developer (21).
CONCLUSION
The use of AI in health services reshapes both the delivery of public healthcare and the scope of administrative liability under Turkish law. Although current legislation contains no explicit regulation, AI may be used only as a decision-support tool under physician supervision; any broader use would breach constitutional and statutory limits.
Administrative liability should be assessed within the dual framework of fault-based and strict liability, ensuring that AI-based healthcare applications remain safe, transparent, and auditable, as required by the principle of legal security.