Published on: October 8, 2025
Source: UN Secretary-General Antonio Guterres (C, Front) speaks during the High-level Meeting to Launch the Global Dialogue on AI Governance at the UN headquarters in New York
https://english.news.cn/20250926/fd02217b60ac45288627714ea5eaffe5/8H4Yten0JM4fxct2.jpeg
During the eightieth session of the United Nations General Assembly, the “High-level Multi-stakeholder Informal Meeting on Launching a Global Dialogue on Artificial Intelligence Governance” emerged as one of the most closely watched agenda items (Floridi et al., 2018). At a time when algorithms are profoundly shaping diplomacy, security, and development, the UN’s attempt to advance a global framework for AI governance is both timely and necessary.
The initiative was formally proposed by the President of the General Assembly in August 2025 (Natorski, 2025). The meeting brought together member states, UN specialized agencies, technology companies, the scientific community, and civil society representatives to address a central question: How can we ensure that the development of artificial intelligence remains bounded by and centered on human interests? A broad consensus emerged that the answer lies not in unilateral state regulation, but in a new model of multilateral governance that integrates scientific evidence, human rights principles, and inclusive participation as the three pillars of AI governance.
Conducted in an “informal” format, the meeting aimed to preserve flexibility for future institutional development. Rather than a policy negotiation, it served as a collective exploration of direction, with transparency, accountability, fairness, and safety emerging as the most frequently cited principles (Dafoe, 2018). Many participants emphasized that the cross-border nature of AI calls for an “ecosystem-based approach to governance” rather than a traditional state-control logic, given that the pace of technological evolution far outstrips that of diplomatic processes.
For many developing countries, the dialogue represented a critical opportunity for “early engagement.” Representatives from Africa, Asia, and Latin America urged that discussions on ethics and regulation be accompanied by attention to technology transfer and capacity-building, and warned that without such measures, global AI governance risks replicating the inequitable structures of past industrial eras. Several delegations proposed the establishment of an “AI Capacity Fund” to support talent development, research institutions, and digital infrastructure in low-income countries, thereby democratizing innovation (Cihon, 2019).
The meeting also explored deeper questions: How can a balance be struck between fostering innovation and mitigating risks? While no unified stance emerged, participants widely agreed that responsible AI is not merely a matter of risk management, but fundamentally concerns how humanity defines “human-centered progress.”
The private sector received unprecedented attention in this forum (Whittlestone et al., 2019). Technology giants, often viewed with caution in UN settings, were recognized as indispensable partners in the governance ecosystem. At the same time, national representatives stressed that corporate involvement must be grounded in transparency and human rights accountability, not self-regulation. This gradually crystallized into a shared understanding that future AI governance must adopt a “co-governance model”: one involving governments, enterprises, academia, and civil society, yet anchored in the ethical baselines of the UN Charter and international human rights law.
One of the most constructive outcomes was the preliminary decision to establish an “Independent Scientific Advisory Panel on Artificial Intelligence.” Modeled in part on the Intergovernmental Panel on Climate Change (IPCC), this panel will conduct regular risk assessments of emerging technologies and provide scientific input for policymaking, serving as a vital mechanism to maintain rationality and stability amid rapid technological change.
Nevertheless, participants widely emphasized that dialogue must swiftly translate into action (Jobin et al., 2019). Multiple countries proposed convening a “Global Summit on AI Governance” by 2026, building on the outcomes of this meeting and the scientific panel’s work to advance coordination mechanisms and potentially lay the groundwork for future international norms or “soft law” frameworks. In his closing remarks, the President of the General Assembly described the meeting as “the beginning of a new social contract between technology and humanity.” This formulation resonated deeply, reframing AI not merely as a “risk” but as a “responsibility,” and shifting the focus of governance from restraint to co-creation (Boddington, 2017).
This UN-led global dialogue did not begin from consensus, but from commitment. In a world increasingly driven by code, the most essential “algorithm” may still be trust.
The 80th UNGA demonstrated that multilateralism is neither dead nor static; it is being recalibrated. The common thread was not ideology but governance realism: an acknowledgment that principles alone cannot sustain legitimacy without measurable action, inclusive participation, and transparent institutions.
From gender equality to global finance to artificial intelligence, the Assembly’s debates reflected a world seeking coherence amid complexity. Each initiative, whether to empower women, reform finance, or regulate technology, points toward a broader transformation.
For all its imperfections, the United Nations remains the only forum capable of weaving these diverse agendas into a shared narrative of human responsibility. As this year’s Assembly reminded the world, progress may be uneven, but dialogue, when grounded in honesty and hope, remains the strongest engine of change.
References
Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Cham: Springer.
Cihon, P. (2019). Standards for AI governance: International standards to enable global coordination in AI research & development. Future of Humanity Institute, University of Oxford.
Dafoe, A. (2018). AI governance: A research agenda. Governance of AI Program, Future of Humanity Institute.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Natorski, M. (2025). Multilateralism in the Global Governance of Artificial Intelligence. arXiv preprint arXiv:2508.15397.
UNESCO. (2022). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.
Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019, January). The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 195-200).
Disclaimer. The views and opinions expressed in this analysis are those of the author and do not necessarily reflect the official policy or position of MEPEI. Any content provided by our author reflects her opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.
About the author:
Ms. Lu DONG: master’s at University College London and intern at MEPEI.

