UN Human Rights Chief: AI must be developed with inclusiveness and accountability

Translation. Region: Russian Federation

Source: United Nations

February 19, 2026 – Human rights

The UN High Commissioner for Human Rights is convinced that, without the urgent creation of protective mechanisms, artificial intelligence could exacerbate inequality and bias worldwide. Speaking on the sidelines of the India AI Impact Summit 2026 in New Delhi, Volker Türk told the UN News Service that the technology must be regulated within a human rights-based approach that ensures transparency, accountability, and inclusiveness.

Volker Türk: Artificial intelligence is a technological tool, and its development should be based on a risk assessment. It is essential to have rules within which AI is developed, designed, and applied, and it is at this stage that human rights should be emphasized.

UN News Service: What do you see as the most serious risks to human rights in the context of the rapid spread of AI?

Volker Türk: Inequality is a huge problem. That's why I'm glad this summit is taking place in India. It's crucial that such tools are used and developed everywhere.

There's also the issue of bias and discrimination. If data is collected only in one part of the world, or if AI is developed exclusively by men, unconscious bias is inevitably built into the system. We believe it's crucial to consider the interests of vulnerable groups and minorities, as they are often excluded from AI development processes. This is about active participation and a vision for a better world. Human rights provide that vision.

UN News Service: Generative AI is advancing faster than regulation. What protective measures should governments and companies urgently implement?

Volker Türk: Take the pharmaceutical industry, for example: testing [of new drugs] usually takes a very long time because all the risks associated with a new product must be identified before it is launched on the market.

When it comes to AI tools, we must require companies to conduct human rights impact assessments during the development, implementation, and marketing stages of the product.

We see that some companies' budgets exceed those of small countries. If you control technology not only in your own country but globally, you wield power. This power can be used for good—for example, to improve healthcare, education, and sustainable development. But it can also be used for evil—to create lethal autonomous weapons, spread disinformation, hatred, and aggressive misogyny.

UN News Service: What AI governance mechanisms are needed to prevent bias and inequality from worsening?

Volker Türk: I've had the opportunity to speak with people who create artificial intelligence systems. I'm struck by how often they have only a superficial understanding of the fundamental principles when they begin development. It's like Frankenstein's monster: you create something you cannot control from the start. Eventually, the genie is out of the bottle.

Failure to consider the risks and potential threats can cause enormous harm. We saw this in Myanmar, where hatred against the Rohingya spread on social media.

It's crucial to consider the interests of all groups in society, especially women and youth, and to remember that our minds develop in different ways. We don't want to create dependencies that poison the mind and soul. We also need to understand how destructive disinformation can be: it corrodes the social fabric, creating division and polarization, as everyone begins to live in their own bubble.

We also see a lot of misogyny. Many female politicians tell me they're considering leaving politics because of what they encounter on social media.

UN News Service: What do you think the responsible use of AI will look like in five years?

Volker Türk: I hope that we will move towards inclusive AI development, where power is no longer concentrated in the hands of a few companies in North America, and where AI development takes into account the richness and diversity of all communities.

I also hope for an inclusive and meaningful approach that will help us address the many challenges facing the modern world. The climate crisis, access to healthcare, education for all—AI could be a fantastic tool for achieving these goals. But if we don't offer a vision for a better future, the world could become even more polarized, and wars could spiral completely out of human control. And that's extremely dangerous.
