Our own AI or someone else's? The upcoming certification will put an end to foreign neural networks in fleet and warehouse management.

Translation. Region: Russian Federation

Source: KMZ Cargo

An important disclaimer is at the bottom of this article.

New FSTEC and FSB regulations may block the use of foreign algorithms for analyzing cargo flows, predicting equipment wear, and managing warehouses, as such algorithms would be deemed unsafe.

Russia may introduce mandatory certification of AI systems for critical infrastructure and government agencies, RBC reports.

The authorities have prepared a bill that would, for the first time, introduce differentiated regulation of artificial intelligence systems based on their risk level. Mandatory certification by the Federal Service for Technical and Export Control (FSTEC) and the Federal Security Service (FSB) is planned for AI systems used at critical information infrastructure (CII) facilities and in public administration. This follows from the draft law "On the Use of Artificial Intelligence Systems by Agencies That Are Part of the Unified System of Public Authority," a copy of which RBC has obtained.

According to the document, all AI systems will be divided into four risk categories: minimal, limited, high, and critical. High-risk systems will be those used in critical information infrastructure (CII) facilities—communication networks, energy, transportation, finance, and other significant industries. Critical risk implies a threat to life, health, or national security. It is for these two categories that a mandatory certificate of compliance will be required. The new National AI Center for Public Administration will determine the criticality level of each system and maintain a registry of approved solutions.

RBC has learned that testing of high- and critical-risk AI systems is planned at a special testing facility being established at the initiative of the Ministry of Digital Development. Successful completion of the tests will be a prerequisite for approval for use at critical information infrastructure (CII) facilities. Furthermore, the bill explicitly prohibits the use of AI systems whose rights are held by foreign entities on such infrastructure. The authors intend this to ensure technological sovereignty and protect against the risk of data leaks. A Ministry of Digital Development spokesperson, however, declined to comment on the details, stating only that the ministry is "not currently developing legislation" related to such a testing facility.

Market experts have varying assessments of the initiative's potential implications. On the one hand, the need to protect critical assets is undeniable. As Pavel Rastopshin, CEO of Ultimatek Group, notes, AI is already widely used in the energy, transportation, and housing and utilities sectors for predictive analytics, load management, and accident prevention, often based on solutions from international vendors such as Siemens or Schneider Electric. The idea of certifying such systems is understandable, but, according to him, there is currently no workable mechanism for certifying neural networks with unpredictable behavior. "It cannot be tested for all situations, which means security cannot be guaranteed using traditional methods," Rastopshin noted. He acknowledges that regulators may face a dilemma: either impose a restrictive regime that freezes digitalization, or seek fundamentally new approaches to regulation.

On the other hand, a strict ban on foreign AI solutions raises questions about practical implementation and potential costs. Vitaly Popov, Director of the Softline Solutions Department, points out that many companies have already integrated foreign systems, including open-source models, into their processes. Abruptly abandoning them could paralyze workflows. "This is a step backwards in terms of development… Many companies are already using foreign AI solutions… Will we be able to transition to Russian solutions without losses?" the expert asks. He also emphasizes that most domestic AI developments are built on architectures derived from global open source, so a legal ban would not eliminate technological dependence but merely limit access to the most mature models.

Denis Romanov, Director of the Lukomorye AI Product Development Center (part of Rostelecom), points out that not all foreign solutions are equally risky. Open-source models like Llama or Mistral can be deployed locally and, if trained on domestic data and with controlled updates, used with minimal risk. He believes the key factor is not the architecture's origin, but control over the entire chain: deployment, data, and updates.
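For illustration only, and not part of the source article: below is a minimal Python sketch of the kind of controlled local deployment Romanov describes, loading an open-weights model strictly from a vetted local directory with network access to the model hub disabled. It assumes the Hugging Face transformers library; the model path and prompt are hypothetical.

import os

# Block network access to the model hub so weights and tokenizer files can
# only come from a vetted local directory (i.e., updates are controlled).
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/srv/models/open-weights-llm"  # hypothetical local checkpoint path

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize today's warehouse equipment alerts:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In such a setup, oversight reduces to controlling what sits in the local model directory and how it is updated, which is the point Romanov makes about controlling the chain of deployment, data, and updates.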

At the same time, there are warnings that overly strict regulation could harm the still-emerging market. Dmitry Markov, CEO of VisionLabs (MTS), believes that strict measures at this early stage of AI development could hinder technological progress, reduce the competitiveness of Russian developers, and trigger an outflow of qualified personnel.

For context, Russia ranked 28th out of 36 countries in the 2024 Global AI Vibrancy Tool: its strong legislative position contrasts with weak implementation and low research citation rates. LR

Read more: http://logirus.ru/nevs/infrastructure/your_and_or_someone else’s_upcoming_certification_to put an end to_on_foreign_neural networks.html

Publication date: 02/11/2026

Please note: this information is raw content obtained directly from the information source. It is an accurate account of what the source claims and does not necessarily reflect the position of MIL-OSI or its clients.