The Ministry of Science and Technology has introduced principles aimed at promoting the research and development of safe and responsible artificial intelligence (AI) systems, minimizing negative impacts and controlling risks.
These are the first general principles established in Vietnam for the responsible research and development of AI systems. They are intended for scientific and technological agencies, organizations, and enterprises involved in designing and developing AI systems, under Decision No. 1290 issued by the Ministry of Science and Technology on June 11. The nine principles for responsible AI system research and development are:
Spirit of collaboration and promotion of innovation: Developers should pay attention to the connectivity and interoperability of AI systems, enhancing the benefits of AI by linking systems and improving coordination to control risks. To achieve this, developers must cooperate and share relevant information so that systems can interoperate and interact. Priority should be given to developing AI systems that comply with technical regulations, national standards, or international standards, to standardizing data formats, and to keeping interfaces and protocols open, including application programming interfaces (APIs). Sharing and licensing intellectual property, such as patents, also helps improve connectivity and interoperability around intellectual assets.
Transparency: Developers should pay attention to controlling the inputs and outputs of AI systems and to the interpretability of their analyses, in a manner appropriate to the characteristics of the technology applied and how it is used.
System control capability: One way to assess risk is to test the system in a controlled setting, such as a laboratory or testing environment with security measures in place, before actual deployment. Developers should also attend to system monitoring (using assessment and monitoring tools, or adjustments and updates based on user feedback) and to response measures (such as shutting down the system or disconnecting it from the network).
Safety: Developers must assess, identify, and mitigate risks related to the safety of AI systems.
Security: Developers must pay attention to security, particularly the reliability and resilience of AI systems against attacks and physical accidents, and must ensure the confidentiality, integrity, and availability of information necessary for the safety of the system.
Privacy: AI systems must not violate the privacy of users or third parties. Privacy under this principle covers personal space (peace of life), personal information (personal data), and the confidentiality of communications. Developers can apply measures appropriate to the technology throughout the development process, from the design stage, to avoid infringing on privacy once the system is in use.
Respect for human rights and dignity: When developing AI systems that interact with humans, developers must pay special attention to respecting human rights and dignity and take preventive measures to ensure that human values and social ethics are not violated.
User support: Developers should support users and give them opportunities to choose, for example by creating interfaces that provide timely information and by offering measures that help the elderly and people with disabilities use the systems easily.
Accountability: Finally, developers must fulfill their accountability for the AI systems they have developed in order to maintain user trust.
Robot performing music at the Vietnam Artificial Intelligence Day (AI4VN 2023). (Photo: Thanh Tung)
According to the Ministry of Science and Technology, the principles are meant to guide and orient development, increasing the benefits of AI systems while controlling and minimizing risks during their development and use, and balancing economic, ethical, and legal factors.
Earlier, Deputy Minister of Science and Technology Bui The Duy noted that AI ethics is a complex global issue that many countries and organizations, including UNESCO, are working to address. AI ethics affects many aspects of life, from social and legal matters to political and commercial competition.
Accordingly, Vietnam's principles for AI research and development closely follow the goal of a human-centered society: striking a reasonable balance between the benefits and risks of AI systems, harnessing the advantages of artificial intelligence through research, development, and innovation, and minimizing the risk of rights violations. The principles are also meant to be technologically neutral, so that developers are not constrained by the rapid evolution of AI-related technologies in the future.