Centre for Humanitarian Dialogue, Code of conduct on artificial intelligence in military systems
Status
Voluntary uptake
Type
Year
Stakeholders
Civil Society
Participants
State actors
A draft identifying possible agreement on the principles and limitations of AI use in military systems.
IEEE, P7000 series, IEEE P2247 series and IEEE P2802 (medical)
Status
Technical standard
Type
Process
Year
Stakeholders
Industry
Participants
Organisations that nurture, develop and advance global technologies
Voluntary standards to promote the development of autonomous and intelligent systems in a manner that considers human ethics and wellbeing rather than technological feasibility alone.
Global Partnership on AI (GPAI)
Status
Non-binding
Type
Process
Year
Stakeholders
Governments - Multilateral
Participants
GPAI member countries
GPAI is a multi-stakeholder initiative that aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities across four working groups: Responsible AI; Future of Work; Innovation and Commercialisation; and Data Governance.
OECD Recommendation of the Council on Artificial Intelligence (2019)
Status
Non-binding
Type
Outcome doc
Year
2019
Stakeholders
Governments - Multilateral
Participants
46 countries - 38 member countries and 8 non-members
Intergovernmental policy guidelines on AI to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.
G20 Recommendation on AI (2019)
Status
Non-binding
Type
Outcome doc
Year
2019
Stakeholders
Governments - Multilateral
Participants
G20 member countries
G20 AI Principles for responsible stewardship of trustworthy AI encompass inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
ISO/IEC, JTC 1/SC 42
Status
Technical standard
Type
Process
Year
2017
Stakeholders
Standards C'ty
Participants
All ISO member states
The scope of ISO/IEC JTC 1/SC 42 is standardization in the area of AI: it serves as the focus and proponent for JTC 1's standardization programme on AI and provides guidance to JTC 1, IEC and ISO committees developing AI applications.
ITU, Focus Groups related to various AI fields of application - FG-AI4A and FG-AI4H
Status
Voluntary uptake
Type
Process
Year
Ongoing
Stakeholders
Standards C'ty
Participants
UN member states
FG-AI4A is set up to explore the potential of emerging technologies, including AI, in supporting data acquisition and handling, improving modelling of agricultural and geospatial data, and providing effective communication to optimise agricultural production processes; it also examines key concepts and relevant gaps in the current standardization landscape in agriculture and informs on best practices and barriers related to the use of AI. FG-AI4H is a partnership of ITU and WHO to establish a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.
UNESCO Recommendation on the Ethics of AI (2021)
Status
Non-binding
Type
Outcome doc
Year
2021
Stakeholders
Governments - Multilateral
Participants
UN member states
Through 11 policy areas, the document highlights concrete policy actions to be undertaken for ethical development, deployment and use of AI.
UNGGE on LAWS
Status
Non-binding
Type
Process
Year
Ongoing
Stakeholders
Governments - Multilateral
Participants
UN member states (193)
Open-ended Group of Governmental Experts tasked with designing a normative and operational framework for lethal autonomous weapons systems.
Standards Australia, An AI Standards roadmap
Status
Voluntary uptake
Type
Outcome doc
Year
Stakeholders
Standards C'ty
Participants
Australian businesses and government agencies
Australia's roadmap for engaging with and shaping AI standards development internationally.
Singapore Government, AI Verify: an AI governance testing framework and toolkit
Status
Voluntary uptake
Type
Process
Year
Stakeholders
Government
Participants
Private sector organisations
A pilot for testing the trustworthiness of AI systems deployed by companies in their products and services.
US NIST, Interagency Committee on Standards Policy, AI Standards Coordination Working Group
Status
Regulatory
Type
Process
Year
Stakeholders
Standards C'ty
Participants
US government
The Working Group coordinates government activities related to the development and use of AI standards.
Singapore Government, Compendium of Use Cases: Practical Illustrations of the Model AI Governance Framework
Status
Voluntary uptake
Type
Outcome doc
Year
Stakeholders
Government
Participants
Private sector organisations
A guide that illustrates how organisations have implemented or aligned their AI-related practices with the Model AI Governance Framework.
Singapore Government, Model AI Governance Framework, Ed2
Status
Voluntary uptake
Type
Outcome doc
Year
Stakeholders
Government
Participants
Private sector organisations
Singapore government's voluntary guidance for private sector organisations regarding the responsible use of AI.
Australian Government, AI Ethics Framework (2019)
Status
Voluntary uptake
Type
Outcome doc
Year
Stakeholders
Government
Participants
Australian businesses and government agencies
Governance principles and measures for achieving the best results from AI while maintaining the wellbeing of Australians.
Princeton University, Stanford University and Meta, ImageNet
Status
Type
Process
Year
Stakeholders
Academia
Participants
AI Partnership for Defence
Status
Type
Process
Year
Stakeholders
Governments - Multilateral
Participants
US
NATO Certification (Data and Artificial Intelligence Review Board)
Status
Technical standard
Type
Process
Year
Stakeholders
Government
Participants
Members of NATO Alliance
Standards to translate NATO’s Principles of Responsible Use into concrete checks and balances, notably in terms of governability, traceability and reliability
CEN-CENELEC JTC 21
Status
Technical standard
Type
Process
Year
Stakeholders
Standards C'ty
Participants
EU national standards bodies
Identifies and adopts international standards that are available or under development, and produces standardization deliverables that address European market and societal needs as well as the values underpinning EU legislation, policies and principles.
European Commission, Draft Regulation on AI (2021)
Status
Regulatory
Type
Outcome doc
Year
2021
Stakeholders
Government
Participants
European Commission
The Act follows a risk-based approach, establishing obligations for providers and users depending on the level of risk an AI system can generate.
European Commission, Coordinated Plan on AI (2021)
Status
Non-binding
Type
Outcome doc
Year
2021
Stakeholders
Government
Participants
EU Member Countries
The plan aims to accelerate investment in AI, act on AI strategies and programmes, and align AI policy to avoid fragmentation in Europe.
NATO AI Strategy (2021)
Status
Non-binding
Type
Outcome doc
Year
2021
Stakeholders
Governments - Multilateral
Participants
Members of NATO Alliance
The strategy aims to encourage the responsible development, adoption and use of AI for defence and security purposes, including capability development and delivery, and to identify and safeguard against threats from the malicious use of AI by state and non-state actors.
European Commission White Paper on AI (2020)
Status
Non-binding
Type
Outcome doc
Year
2020
Stakeholders
Government
Participants
EU Member Countries
White paper analyzing the strengths, weaknesses and opportunities of Europe in the global AI market.
NATO Principles of Responsible Use (2021)
Status
Non-binding
Type
Outcome doc
Year
2021
Stakeholders
Governments - Multilateral
Participants
NATO member countries (29)
The principles are based on existing ethical, legal and policy commitments under which NATO has historically operated and continues to operate: Lawfulness; Responsibility and Accountability; Explainability and Traceability; Reliability; Governability; and Bias Mitigation.
Microsoft, Responsible AI Standard v2
Status
Self-regulation
Type
Outcome doc
Year
Stakeholders
Industry
Participants
Self-regulatory
Microsoft's internal product development requirements for responsible AI, with principles similar to those in the previous version.
Google, Responsible AI practices
Status
Self-regulation
Type
Outcome doc
Year
Stakeholders
Industry
Participants
Self-regulatory
Google's internal responsible AI practices guide with principles of fairness, interpretability, privacy, and safety.
Microsoft, Responsible AI Principles
Status
Self-regulation
Type
Outcome doc
Year
Stakeholders
Industry
Participants
Self-regulatory
Microsoft's internal AI deployment guiding principles based on fairness, reliability & safety, privacy, inclusiveness, transparency and accountability.
Google, AI Principles
Status
Self-regulation
Type
Outcome doc
Year
Stakeholders
Industry
Participants
Self-regulatory
Google's internal AI deployment guiding principles based on 7 objectives for AI applications.
Quad Principles on Critical and Emerging Technology Standards
Status
Non-binding
Type
Outcome doc
Year
2023
Stakeholders
Governments - Minilateral
Participants
Quad members Australia, India, Japan and US
Quad partners' non-binding guiding principles on technology design and governance.
Quad Principles on Technology Design, Development, Governance, and Use
Status
Non-binding
Type
Outcome doc
Year
Stakeholders
Governments - Minilateral
Participants
Quad members Australia, India, Japan and US