Transparency of Artificial Intelligence Algorithms
- Author: Talapina E.
Affiliation:
- Institute of State and Law
- Issue: Volume 18, No. 3 (2025)
- Pages: 4-27
- Section: Legal Thought: History and Modernity
- URL: https://journal-vniispk.ru/2072-8166/article/view/318080
- DOI: https://doi.org/10.17323/2072-8166.2025.3.4.27
- ID: 318080
Abstract
In the modern era of rapid artificial intelligence (AI) development, lawyers face the question of how to resolve the "black box" problem: the incomprehensibility and unpredictability of decisions made by AI. Developing rules that ensure the transparency and explainability of AI algorithms allows AI to be integrated into classical legal relations and removes the threat it poses to the institution of legal liability. In private law, the protection of consumers vis-à-vis large online platforms brings algorithmic transparency to the forefront, transforming the very obligation to provide information to the consumer, which can now be described by the formula "know + understand". Similarly, in public law, states are unable to adequately protect citizens from harm caused by dependence on algorithmic applications in the provision of public services; this can be countered only by knowledge and understanding of how algorithms function. Fundamentally new regulation, formulating requirements for the transparency of algorithms, is needed to bring the use of AI within a legal framework. Researchers are actively discussing the creation of a regulatory framework establishing a system of observation, monitoring, and prior authorization for the use of AI technologies. The paper analyzes "algorithmic accountability policies", the "Transparency by Design" framework (addressing the problem throughout the entire AI development process), and the implementation of explainable AI systems. Overall, the proposed approaches to AI regulation and transparency are quite similar, as are the predictions about the mitigating role of algorithmic transparency for trust in AI. The paper also analyzes the concept of "algorithmic sovereignty", understood as the ability of a democratic state to govern the development, deployment, and impact of AI systems in accordance with its own legal, cultural, and ethical norms. This model is designed for the harmonious coexistence of different states, leading to an equally harmonious coexistence between humanity and AI. At the same time, ensuring the transparency of AI algorithms is one direction of general AI governance policy, the most important part of which is AI ethics. Despite its apparent universality, AI ethics does not always take into account the diversity of ethical constructs in different parts of the world, as the African example and fears of algorithmic colonization demonstrate.
About the Author
Elvira Talapina
Institute of State and Law
Corresponding author
Email: talapina@mail.ru
ORCID ID: 0000-0003-3395-3126
Doctor of Sciences (Law), Chief Researcher.
10 Znamenka St., Moscow 119019, Russian Federation.
