The UN calls for legal safeguards for the use of AI in healthcare.

Source: United Nations (translated from Russian)


November 19, 2025 Healthcare

The use of artificial intelligence (AI) in healthcare is rapidly expanding, but basic legal mechanisms to protect patients and healthcare workers are still lacking.

That is the conclusion of a new report from the World Health Organization Regional Office for Europe (WHO/Europe). Across the region, AI technologies are already helping doctors identify diseases, reduce administrative burdens, and communicate with patients.

"AI is changing the way we deliver healthcare, interpret data, and allocate resources. But without clear policies, data protection, legal frameworks, and investment in AI literacy, we risk deepening inequalities rather than reducing them," said Hans Kluge, WHO Regional Director for Europe.

Transforming Healthcare Systems

The report is the first comprehensive assessment of how AI technologies are being implemented and regulated in the healthcare systems of countries in the region. Representatives from 50 of the 53 countries in Europe and Central Asia that are members of WHO/Europe participated in the survey.

While nearly all countries recognize the potential of AI—from diagnostics to surveillance and personalized healthcare—only four countries have a dedicated national strategy, and seven more are in the process of developing one.

Some countries are taking proactive steps. For example, in Estonia, electronic health records, insurance data, and demographic registries have been integrated into a single platform, enabling the use of AI tools.

Finland is investing in training medical professionals to use AI, while Spain is launching pilot projects to use AI for early disease detection in primary care.

Problems and limitations

Meanwhile, regulatory measures in most countries have not kept pace with technological progress. Forty-three countries in the region, or 86 percent, cite legal uncertainty as the main barrier to AI use. Another 39 countries, or 78 percent, cite financial constraints.

Fewer than 10 percent of countries have liability standards for the use of AI in healthcare – a critical element that determines who is responsible when errors or harm occur.

“Despite these challenges, there is broad consensus on policy measures that could facilitate the adoption of AI,” the report says.

Almost all countries believe that clear rules of liability for producers, operators, and users of AI systems are key. Similarly, to build trust, countries recognize the need for guidelines that ensure the transparency, verifiability, and explainability of AI decisions.

Acting in the interests of people

WHO has called on countries to develop AI strategies that align with public health goals.

Experts recommend that countries invest in staff training, strengthen legal and ethical frameworks, engage local communities in decision-making processes, and improve cross-border data management.

“AI has the potential to revolutionise healthcare, but that potential will only be realised if decision-makers put people, and especially patients, at its heart,” said Hans Kluge.

“The choices we make today will determine whether AI will help patients and healthcare workers – or leave them behind,” he added.

Please note: This information is raw content obtained directly from the source. It represents an accurate account of the source's assertions and does not necessarily reflect the position of MIL-OSI or its clients.