
[UN Future Forum Report on Preparing for AGI] UN General Assembly Adopts Landmark Resolution on Artificial Intelligence

Editor | Published 2024/04/05 [11:43]

 

Global perspective, human stories

General Assembly adopts landmark resolution on artificial intelligence

A view of the General Assembly in session. (file)
UN Photo/Loey Felipe

21 March 2024 | SDGs

The UN General Assembly on Thursday adopted a landmark resolution on the promotion of "safe, secure and trustworthy" artificial intelligence (AI) systems that will also benefit sustainable development for all.

 

Adopting the US-led draft resolution without a vote, the Assembly also highlighted the respect, protection and promotion of human rights in the design, development, deployment and use of AI.

The text was "co-sponsored" or backed by more than 120 other Member States.

The Assembly also recognized the potential of AI systems to accelerate and enable progress towards achieving the 17 Sustainable Development Goals.

This is the first time the Assembly has adopted a resolution on regulating an emerging field. The US National Security Advisor reportedly said earlier this month that the adoption marked a "historic step forward" for the safe use of AI.


Same rights online and offline

The Assembly called on all Member States and stakeholders "to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights."

"The same rights that people have offline must also be protected online, throughout the life cycle of artificial intelligence systems," it affirmed.

The Assembly also urged all States, the private sector, civil society, research organizations and the media to develop and support regulatory and governance approaches and frameworks related to the safe and trustworthy use of AI.

Closing the digital divide

The Assembly further recognized the "varying levels" of technological development between and within countries, and that developing countries face unique challenges in keeping up with the rapid pace of innovation.

It urged Member States and stakeholders to cooperate with and support developing countries so they can benefit from inclusive and equitable access, close the digital divide, and increase digital literacy.

Hope for other areas as well

Ahead of the adoption, US Ambassador and Permanent Representative to the UN Linda Thomas-Greenfield introduced the draft resolution.

She expressed hope that "the inclusive and constructive dialogue that led to this resolution will serve as a model for future conversations on AI challenges in other arenas, such as peace and security and the responsible military use of AI autonomy."


Ms. Thomas-Greenfield noted that the resolution was designed to amplify the work the UN is already doing, including through the International Telecommunication Union (ITU), the UN Educational, Scientific and Cultural Organization (UNESCO), and the Human Rights Council.

"We intend for it to complement future UN initiatives, including negotiations toward a Global Digital Compact and the work of the Secretary-General's High-Level Advisory Body on Artificial Intelligence," she said.

We govern AI

Ms. Thomas-Greenfield also highlighted the international community's opportunity and responsibility "to govern this technology rather than let it govern us."

"So let us reaffirm that AI will be created and deployed through the lens of humanity and dignity, safety and security, human rights and fundamental freedoms," she said.

"국가 내부 및 국가 간 디지털 격차를 줄이고 이 기술을 사용하여 지속 가능한 개발에 대한 공유 우선순위를 발전시키기 위해 노력합시다."


 

 

The Millennium Project News
Latest News - The Millennium Project:
 
  • The Science Counselor of the Embassy of China to the USA convened a meeting on March 8, 2024 with Jerome Glenn, Executive Director of The Millennium Project, on artificial general intelligence research and other work of The Millennium Project. The meeting stressed the importance of a joint US-China UN General Assembly resolution to begin work on a UN Convention on AI. The results of Phase 2 of The Millennium Project's AGI research, Global Governance Requirements for Artificial General Intelligence - AGI: Results of a Real-Time Delphi, will be released next month. Read more: https://www.millennium-project.org/meeting-with-the-science-councilor-of-the-embassy-of-china-to-the-usa-an-update-on-agi-march-2024/
     
  • World Futures Day 2024 - Thank You! - For the eleventh consecutive year, The Millennium Project, in collaboration with APF, Humanity+, Lifeboat Foundation, WAAS, and WFSF, hosted a 24-hour global open conversation about the future on March 1, with more than 800 participants. Read more: https://www.millennium-project.org/world-futures-day-2024-thank-you/
     
  • New AI-Augmented Futures Wheel: a new AI-augmented Futures Wheel tool enhances the method invented by Jerome Glenn, who has taught it to futurists for many years. It was developed within The Millennium Project by Rosa Alegria (Brazil Node Co-Chair), Gustavo Machado, and Diego Maffazzioli (Brazil Node members), and was launched and tested on March 1. At the last Millennium Project meeting in Dubai, it was agreed to add AI capabilities to futures research methods. Read more: https://www.millennium-project.org/new-ai-augmented-futures-wheel/
  • New Chief Technology Officer: Tad Davis has joined The Millennium Project. Tad worked for 14 years at Sun Microsystems, and then at Areva, Alstom, and his own company demenTad in Tbilisi, Georgia. He brings broad software and marketing experience to the MP. In addition to updating and improving our website, he reports that he is working with tele-interns in China to create a GPT of Millennium Project research. He plans to overhaul the next State of the Future report and the AGI Phase 2 report, and to re-create the Global Futures Intelligence System (GFIS) with AI additions and updates.
 
  • The Future of Natural Resources - June 13-14, 2024: a new MP special session on "Deconstructing the Paradoxes of Work through CLA" was announced for the 24th International Futures Conference, organized by the Finland Futures Research Centre and the Finland Futures Academy (Helsinki Node). It will include a talk by Jerome Glenn highlighting the implications of ANI evolving into AGI and rethinking the meaning of work, and a panel discussion chaired by Sirkka Heinonen and Osmo Kuusi (Helsinki Node Co-Chairs). The Foresight Europe Network will also hold an in-person meeting on June 12, 2024. For details, visit https://www.millennium-project.org/future-of-natural-resources-call-for-papers-and-workshops/
 
Latest News - Millennium Project Nodes:
 
  • Testing Urban Resilience with Immersive CLA and What If: a new publication from the Helsinki Node, published within the RESCUE project by Sirkka Heinonen (Node Co-Chair) and others, reporting on a methodological experiment that used CLA for crisis analysis and immersive What If approaches in three urban cases. Read more and download: https://www.millennium-project.org/testing-urban-resilience-with-immersive-cla-and-what-if-publication/
 
 
 
  • Diamond Crystal Recognition for Concepción Olavarrieta - March 1, 2024: the Tecnológico de Monterrey, through its Constellation of Futurists and within the framework of World Futures Day, awarded a diamond-shaped crystal recognition to Concepción Olavarrieta Rodríguez (Mexico Node Chair and founder), honoring her distinguished professional career in foresight and futures-building in Mexico and the world. Watch the video, with contributions from several Millennium Project Node Chairs, and see photos of the event: https://www.millennium-project.org/diamond-crystal-recognition-to-concepcion-olavarrieta-march-1-2024/
 
 
New in the Media:
 
  • Video interview: Bulgaria National Television, "Законът и изкуственият интелект – къде да очакваме сблъсъците" (The Law and Artificial Intelligence: Where to Expect Clashes), a video interview with Mariana Todorova (Bulgaria Node Chair).
  • Press release: Conecta, "Futurista mexicana comparte visión y retos globales del milenio", on Concepción Olavarrieta, Mexico Node Chair, receiving the diamond-shaped crystal recognition.
  • Web articles: in Portuguese, "Como a Nova Ferramenta de Roda de Futuros Aumentada por IA Pode Contribuir para um Mundo Melhor!"; in English, "Beyond Reinventing the Wheel: How the New AI-Augmented Futures Wheel Can Contribute to a Better World!", authored by Gustavo Machado (Brazil Node member).
  • In the press: South Asia Foresight Network (SAFN), Air Marshal Gagan Bulatsinghala received an honorary fellowship from SAFN in Washington, DC, established to advance engagement with South Asia.
  • Press release: South Asia Foresight Network, SAFN cooperates with IRIC, the Cambodian government think tank for foreign relations.
  • Video: Cep Cuyo, "La gestión prospectiva…", a talk by Luis Ragno (Mendoza sub-Node Co-Chair) within the XV Ciclo Seminario Internacional de Formación Prospectiva 2024.
  • Video: ITB Berlin, "Tomorrow's World - global forces shaping the future, technology developments, and their impact on ideas and tourism", a video recording of a talk by Rohit Talwar (UK Node Co-Chair).
  • Videos: Beneficial AGI Summit and Unconference, recordings of talks by Jerome Glenn, David Wood (UK Node Chair), and José Cordeiro (Venezuela and RIBER Node Chair) at this event co-sponsored by The Millennium Project.
  • Report: "Coordinating Visions for a Sustainable Future", prepared by the UN and including specific proposals by Jerome Glenn on behalf of The Millennium Project.
  • Web article: Azerbaijan Future Studies Society, Reyhan Huseynova, Chair of the Azerbaijan Node of The Millennium Project and President of the Azerbaijan Future Studies Society, which hosts the Node, took part in the III International Scientific Research Conference "Silk Road".
  • Web article: Aurora, "Learning In Crisis", a reflection on how today's learning needs new mental paradigms suited to the culture of new generations, by Puruesh Chaudhary (Pakistan Node Chair).
  • Web article: Bryan Alexander's blog, "Two futures events today", mentioning World Futures Day.
  • Web article: Emerj, "Activating International AGI Governance - a snapshot of The Millennium Project's recent expert survey", by Matthew DeMello.
  • Publication: Mideplan, "El trabajo del futuro: una mirada para el desarrollo del país", a new free publication on the future of work from the Costa Rica Node of The Millennium Project.
  • Publication: "MOMus INSPIRE 2023", results of a project organized by the UNESCO Chair on Futures Research in cooperation with The Millennium Project.
  • Video interview: Informed Choices Mini-Pod (video), "A Day in the Life of an Enhanced Human in 2050 with Rohit Talwar", Steve Wells interviews Rohit Talwar (UK Node Co-Chair).
 

The Millennium Project is a global participatory think tank established in 1996 under the American Council for the United Nations University. It became an independent NGO in 2009 and has grown to 71 Nodes around the world. It has produced 58 global futures research studies (most recently Work/Technology 2050: Scenarios and Actions), published 19 State of the Future reports, and produced Futures Research Methodology 3.0. It has supervised more than 400 remote and in-person interns from Washington, DC.

 

 

 

Requirements for Global Governance of

Artificial General Intelligence – AGI

 

Results of a Real-Time Delphi

 

Phase 2 of The Millennium Project

 

 

 

 

 

April 2024

 

 

As the report states, "Governing AGI could be the most complex, difficult management problem humanity has ever faced." Furthermore, failure to solve it before proceeding to create AGI systems would be a fatal mistake for human civilization. No entity has the right to make that mistake.

 

---- Stuart Russell


 

Introduction

 

The Millennium Project’s research team on global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) identified, in Phase 1, 22 key questions related to the safe development and use of AGI. For the purposes of this study, AGI was defined as a general-purpose AI that can learn, edit its code, and act autonomously to address novel problems with novel and complex strategies similar to or better than humans, as distinct from Artificial Narrow Intelligence (ANI), which has a narrower purpose. These 22 questions were submitted to 55 leading AGI experts. Their answers provided a way to “get all the AGI issues on the table.” The Millennium Project’s research team then used these expert views to create Phase 2 of the AGI study; the results are shared in this report. The Phase 1 report is available here.

 

Values and principles for Artificial Intelligence (AI) have been identified and published by UN organizations, governments, business associations, and NGOs. Whereas these efforts have mostly focused on ANI, including current and near-future forms of generative AI, this report addresses how such values and principles might be implemented in the governance of AGI. It also goes beyond the UN General Assembly Resolution on AI, which likewise focused on ANI.[1]

 

Since the creation of trusted global governance of AGI will require the participation not only of AGI experts, but also of politicians, international lawyers, diplomats, futurists, and ethicists (including philosophers and social scientists), a much broader international panel than in Phase 1 was recruited by The Millennium Project Nodes worldwide and through additional Millennium Project relations.

 

Unlike the traditional Delphi method, which builds each successive questionnaire on the results of the previous one, the Real-Time Delphi (RTD) used in this study lets participants return as many times as they like to read others’ comments and edit their own until the deadline. This RTD began November 15, 2023 and ended December 31, 2023.
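As a rough illustration of these mechanics (a minimal sketch; the class and function names are invented and this is not the study's actual software), the snippet below models an RTD question whose answers can be read and overwritten by participants until the deadline, and computes the "top box" share (percent of respondents rating an item 9 or 10) used to report results throughout this report:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Minimal sketch of the Real-Time Delphi mechanics described above.
# All names are illustrative, not the actual study software.

DEADLINE = datetime(2023, 12, 31, 23, 59)

@dataclass
class Question:
    text: str
    answers: dict = field(default_factory=dict)  # participant_id -> (rating, comment)

    def submit(self, participant_id: str, rating: int, comment: str,
               now: datetime) -> None:
        """Create or overwrite an answer; the RTD allows edits until the deadline."""
        if now > DEADLINE:
            raise ValueError("RTD closed; no further edits accepted")
        self.answers[participant_id] = (rating, comment)

    def others_comments(self, participant_id: str) -> list[str]:
        """Participants may read everyone else's comments at any time."""
        return [c for pid, (_, c) in self.answers.items() if pid != participant_id]

    def top_box_share(self) -> float:
        """Share of respondents rating the item 9 or 10 (how results are reported)."""
        ratings = [r for r, _ in self.answers.values()]
        return sum(r >= 9 for r in ratings) / len(ratings) if ratings else 0.0
```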

 

Some 338 people from 65 countries signed in to the RTD, of whom 229[2] gave answers. It is acceptable and understandable that 113 people just wanted to see the suggested requirements for national and global governance systems for AGI but were not yet comfortable answering these questions. The RTD also served an educational value for such “interested parties,” who could read the 41 potential requirements for AGI global governance and the five supranational governance models.

 

Of those who indicated their gender, 76% checked male and 24% checked female. There were 2,109 answers, both textual and numeric. The textual comments were distilled. The Millennium Project will draw on the results of both Phase 1 and Phase 2 to write the AGI global governance scenarios as Phase 3 of this research.

Executive Summary of Recommendations

 

This report is intended for those who have to make decisions, advise others, and/or educate the public about potential regulations for Artificial General Intelligence (AGI).

 

There are, roughly speaking, three kinds of AI: narrow, general, and super. Artificial Narrow Intelligence (ANI) ranges from tools with limited purposes, like diagnosing cancer or driving a car, to rapidly advancing generative AI that answers many questions, generates code, and summarizes reports. Artificial General Intelligence (AGI) does not exist yet, but many AGI experts believe it could within 3-5 years. It would be a general-purpose AI that can learn, edit its code, and act autonomously to address many novel problems with novel solutions similar to or beyond human abilities. For example, given an objective, it could query data sources, call humans on the phone, and re-write its own code to create the capabilities needed to achieve the objective. Artificial Superintelligence (ASI) sets its own goals and acts independently of human control, in ways that are beyond human understanding.

 

Although we may not be able to directly control how ASI emerges and acts, we can create national and international regulations for how AGI is created, licensed, used, and governed. We can explore how to manage the transition from ANI to AGI. How well we manage that transition is likely to shape the transition from AGI to ASI. Without national and international regulations for AGI, many AGIs from many governments and corporations could continually re-write their own code, interact, and give birth to many new forms of artificial superintelligence beyond our control, understanding, and awareness. This would be the nightmare that Hawking, Musk, and Gates have warned could lead to the end of human civilization. As a result, governments, corporations, UN organizations, and academics are meeting around the world to safely guide this transition. Even the United States and China are engaged in direct talks about global management of future forms of AI. Governing AGI could be the most complex, difficult management problem humanity has ever faced, but if managed well, AGI could usher in great advances in the human condition, from medicine, education, longevity, and turning around global warming to advances in science and creating a more peaceful world.

 

 

Global Governance Models

 

Most of the Real-Time Delphi panel agreed that AGI governance has to be both global and national, with multi-stakeholder participation (businesses, academics, and NGOs as well as governments) in all elements of governance for both developers and users; however, some preferred a decentralized system with fewer regulations.

 

The following proposed models for global governance of AGI were rated for effectiveness by the Real-Time Delphi participants. The percentage in parentheses after each model is the percent of participants who rated the model's effectiveness as either very high or high.

 

1.    A multi-stakeholder body (TransInstitution) in partnership with a system of artificial narrow intelligences, each ANI implementing the functions and requirements listed in this study and continually feeding back to the humans in the multi-stakeholder body and to national AGI governance agencies (51%).

 

2.    A multi-agency model with a UN AGI Agency as the main organization, but with some governance functions managed by the ITU, WTO, and UNDP (47%).

 

3.    Decentralized emergence of AGI that no one owns (just as no one owns the Internet), through the interactions of many AI organizations and developers such as SingularityNET (45%).

 

4.    Put all the most powerful AI training chips and AI inference chips into a limited number of computing centers under international supervision, with a treaty granting symmetric access rights to all countries party to that treaty (42%).

 

5.    Create two divisions in a UN AI Agency: one for ANI (including frontier models) and a second division just for AGI (41%).

 

Participants were also asked to propose alternative governance models. The suggestions were so rich and extensive that it would be a disservice to distill them here; instead, the reader can find them in the last section under Question 12.

 

There was a range of views on how much enforcement power is possible or desirable for a UN AGI Agency. Some argued that since the UN did not stop nuclear proliferation or land mine deployments and was unable to enforce pledges on greenhouse gas reduction, why would AGI regulation work? But most recognized the common existential threat of unregulated AGI; hence, some form of global governance will be necessary, with national enforcement and licensing requirements backed by audit systems.

 

The following section lists potential AGI regulations, factors, rules, and/or characteristics that should be considered for creating a trusted and effective AGI governance system.

 

For Developers

·         Prior to UN certification of a national license, the AGI developer would have to prove safety and alignment with recognized values as part of the initial audit.

·         Material used in machine training must be audited to avoid biases and inculcate shared human values prior to national licensing.

·         Include software built into the AGI that pauses it and triggers an evaluation when the AGI takes an unexpected or undesired action not anticipated in its utility function, to determine why and how it failed or caused harm (see the sketch after this list).

·         Create the AGI so that it cannot turn its own power switch, or the power switches of other AGIs, on or off without some predetermined procedure.

·         Connect the AGI and national governance systems via embedded software in the AGI for continuous real-time auditing.

·         Add software ability to distinguish between how we act vs. how we should act.

·         Require human supervision for self-replication and guidelines for recursive self-improvement.

·         Prevent the ability to modify historical data or records.

·         Respect Asimov's three laws of robotics.

·         Make the AGI identify its output as AI, and never as a person.

·         Give the AGI rich self-reflective and compassionate capabilities.
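The pause-and-evaluate audit idea above can be made concrete with a small wrapper. The sketch below is illustrative only (the class, states, and action names are hypothetical): it blocks any action not anticipated by the agent's declared utility function, pauses the agent, and records an incident for human review:

```python
from enum import Enum, auto

class AgentState(Enum):
    RUNNING = auto()
    PAUSED_FOR_EVALUATION = auto()

class AuditedAgent:
    def __init__(self, anticipated_actions: set[str]):
        self.anticipated = anticipated_actions  # actions its utility function anticipates
        self.state = AgentState.RUNNING
        self.incident_log: list[str] = []

    def propose(self, action: str) -> bool:
        """Execute only anticipated actions; pause and log anything else."""
        if self.state is AgentState.PAUSED_FOR_EVALUATION:
            return False                      # frozen until human review clears it
        if action not in self.anticipated:
            self.state = AgentState.PAUSED_FOR_EVALUATION
            self.incident_log.append(f"unexpected action blocked: {action}")
            return False                      # triggers evaluation of why it failed
        return True                           # safe to execute

    def resume_after_review(self) -> None:
        """Only a human-controlled procedure may unpause the agent."""
        self.state = AgentState.RUNNING

agent = AuditedAgent({"summarize_report", "query_database"})
assert agent.propose("query_database")
assert not agent.propose("rewrite_own_code")  # pauses itself and logs the incident
```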

 

 

For Governments

·         Comply with a potential forthcoming UN Convention on AI.

·         Establish AGI license procedures based on independent audit of elements listed above “For Developers.”

·         Create a procedure that connects the government agency with both the UN Agency and the AGI’s continuous internal audit systems, to ensure that AI systems are used in alignment with established values (such as those of UNESCO, OECD, the Global Partnership on AI, ISO, and IEEE) and national regulations.

·         Verify stringent security, firewalls, secure infrastructure, and personnel vetting.

·         Define and demonstrate how the creation and use of deep fakes and disinformation is prevented.

·         Require users to keep a log of AGI use, like a flight recorder, with the ability to recreate a decision and the factors included in it.

·         Establish criteria for when AGI can act autonomously.

·         Create the ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders.

·         Ability to determine, in AGI output, why an action is requested, assumptions involved, priority, and the conditions and limitations required.

·         Create national liability laws for AGI.

·         Conduct unscheduled inspections and tests by authorized third parties to determine continued adherence to license requirements.

·         Agile enough to anticipate and adapt to changes in AGI.

 

For the UN

·         Learn from NPT, CWC, and BWC verification mechanisms when designing the UN AGI Agency.

·         Management should include AGI experts and ethicists from both public and private sectors.

·         Certify the national licensing procedures listed under “For Governments” above.

·         Identify and monitor leading indicators of the potential emergence of Artificial Superintelligence, giving early warnings and suggested actions to Member States and other UN agencies.

·         Develop protocols for interactions among AGIs of different countries and corporations.

·         Ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders in cooperation with governments.

·         Consider development of embedded UN governance software in all certified AGIs that, like anti-virus software, is continually updated.

·         Ability to govern both centralized AGI systems of governments and corporations, as well as decentralized AGI systems emerging from interactions of many developers.

·         Include the ability for random access to AGI code to review ethics, while protecting the IP of the coder/corporation.

·         Address and deter dangerous AGI arms races and information warfare.

·         Agile enough to anticipate and adapt to changes in AGI.

 

For Users

·         Keep a log of AGI use, like a flight recorder, with the ability to recreate a decision and the factors included in it.

·         Prohibit the use of subliminal or psychological techniques to manipulate humans (unless by mutual consent, as in a weight-loss program).

·         Reinforce human development rather than the commoditization of individuals.

·         Prevent operation or changes by unauthorized persons or machines.

 


 

Real-Time Delphi results

 

The following is a compendium of the responses to the study questions, organized into two groups. The first group of questions (1-6) addresses potential regulations, rules, and/or characteristics to be considered in creating the AGI governance system. The second set (7-12) addresses potential global governance models for AGI.

 

 

Questions 1-6: What factors, rules, and/or characteristics should be considered for creating a trusted and effective AGI Governance System?

 

 

Question 1: What design concepts should be included for a UN AGI Agency to certify national licensing of nonmilitary AGI systems? In order of those getting the highest percent of either 10 or 9; e.g., 70% of the respondents rated the first item either 10 or 9:

 

70% Provide human and automatic ability to turn off an AGI when operating rules are violated.

 

57% Require compliance with a potential UN Convention on AGIs.

 

56% Agile enough to anticipate and adapt to changes in AGI.

 

49% Make clear distinction between AGI governance and ANI (including generative AI) governance.

 

39% National AGI licensing procedures (trust label).

 

38% Connected to an IPCC-like independent system that continually monitors operation and compliance with license rules.

 

29% Connected to all AGI and national governance systems via embedded software in the AGI for continuous real-time auditing.

 

Explanations and Comments on the Items in Question 1 Above:

 

All of these features are fundamental and should be considered in a management system and algorithm design. Taken together they form a good initial specification.

 

The embedded ANI to continually audit the AGI should be carefully reviewed. The same goes for the off-switch; turning an AGI off could have destructive effects. Such a feature could also be triggered by human error or used as an attack vector. A possible workaround is to switch off individual modules of the AGI instead of turning it off completely.

 

All of these are really necessary. The continuous audit via embedded ANI software in the AGI is a unique requirement for AGI vs. narrower forms of AI.

 

Different sets of regulations should be considered for different AGI varieties: big machines owned and operated by countries, large organizations, military organizations (tactical and strategic), and billions of future smart phones.

 

We cannot effectively regulate AI until we understand how it works and its emergent capabilities. AGI can be developed secretly and operate in our infrastructures with a chance of massive anti-human decisions having no effective ways of protecting humans.

 

There is a 90% chance of a singularity event within the next 6 years. The only limits on AI would be the limits of the laws of physics and (some of the) principles of mathematics. We can expect an AGI and ASI to create its own goals, which would be incomprehensible to humans, unknowable to humans, and objectively impossible for humans to influence or control. The only hypothetical scenario where humans gain control over them is a reversal scenario, where the electronics needed for an AGI and ASI to function are destroyed. However, even such a scenario can be prevented by an AGI/ASI if it manages in time to migrate from a digital computing infrastructure of existence to a biological one (such as the human brain or a network of human brains). In such a scenario humanity would lose even this option of control. It is not unreasonable to expect that the most probable scenario for humanity is to blend its existence with ASI, as this would be the only mutually beneficial option for both parties. Boris D. Grozdanoff [distilled by staff].

 

Governance via embedded software should be voluntary rather than mandatory, as it does little or nothing to limit the actions of non-compliant bad actors who ignore/bypass laws and treaties.

            

International bodies have had limited success, but can educate the public about AGI and potential impacts, provide clear labeling, and disable AGI for violations. Everything in here has merit, but governance should first and foremost be transparent in its application, impede progress only when it endangers the public, and be enforceable. Real-time monitoring to check compliance may be possible in individual, totalitarian countries, but very difficult to employ in democratic countries.

 

I believe more in private competition than in public regulation. Also, who regulates the regulators?

 

The UN is the only feasible location for a collaborative and participatory global AI governance.

 

Trade blocs and military alliances are by far the most effective levels to achieve leverage over commercial and military research. Not nation states. Industries, to some degree, set trade bloc rules and a sector-by-sector approach has more chance to influence broad compliance. Requiring registration just leads to clandestine systems. Rather, significant resources such as early access to most advanced chips and research, should be offered to those who voluntarily submit to be monitored. This ensures that systems do not grow in the dark and that early compliance is achieved before an algorithmic or optimization approach reduces or removes the need for advanced data centers or state of the art chips.

 

Being very restrictive at the beginning is a good approach, so that later you can stimulate discussions about possible flexibility.

 

I would like to see a framing tool that captures the primary drivers, enablers, and limiters of AGI development and provides a common approach to understanding and managing the impact of AGI on society.

            

Be open to diverse, creative, original proposals from social networks.

 

An IPCC-like organization could provide the information needed to support the implementing agency.

 

The UN could work on a consensus model drawing on national models so that all countries would approve, but if too much governance is put in the hands of the UN, then the topic of AGI will be politicized.

 

AGI regulations should motivate compliance and limit restrictions as much as possible.

 

Regulate it, but don’t strangle it.

 

 

Question 2: What should be part of the UN Agency's certification of national licensing procedures for nonmilitary AGI systems?

 

59% Stringent security, firewalls, secure infrastructure, and personnel vetting.

 

57% Prior to UN certification of a national license, the AGI developer would have to prove safety as part of the initial audit.

 

52% Demonstrate how the creation and use of deep fakes and disinformation is prevented.

 

50% Proof of alignment with agreed international principles, values, standards such as those of UNESCO, OECD, Global Partnership on AI, ISO, and IEEE.

 

44% Certify national licensing procedures that include continuous audit systems to ensure that AI systems are developed and used in alignment with societal values and priorities.

 

40% Clarification of national liability laws for AGI actions and role of the UN AGI Agency specified.

 

33% Require users to keep a log of the AGI use like a flight recorder with the ability to recreate a decision and factors included in the decision.

 

 

 

Explanations and Comments on the items in Question 2 above:

 

The flight recorder is a good idea. Before the concept was used in airplanes, the causes of airplane crashes were much harder to determine. For AGIs it will be useful in identifying intrusions and the reasons for operating outside approved limits.
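As a minimal sketch of what such a flight recorder might look like (the schema and field names are assumptions, not a prescribed format), the snippet below keeps an append-only, hash-chained log from which a decision and its input factors can be replayed, and detects tampering:

```python
import hashlib
import json
import time

class DecisionRecorder:
    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "genesis"

    def record(self, decision: str, factors: dict) -> None:
        """Append one decision with the factors that produced it."""
        entry = {
            "ts": time.time(),
            "decision": decision,
            "factors": factors,       # inputs needed to recreate the decision
            "prev": self._last_hash,  # chaining makes silent edits detectable
        }
        entry_bytes = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(entry_bytes).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the hash chain; any tampering breaks the links."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True

recorder = DecisionRecorder()
recorder.record("deny_loan", {"credit_score": 512, "policy": "v3.1"})
assert recorder.verify()
```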

 

Requiring users to keep a log is an open door for privacy violations.

 

Continuous audit by embedded software in the AGI, as part of the national licensing system certified by the UN AI Agency, seems reasonable, just as a governor in an engine continuously prevents it from getting out of control.

 

Proving safety as part of the initial audit is essential, but "safety" is too vague at present. Initially, I suggest defining "red lines" for obviously unacceptable behaviors, such as self-replication and advising terrorists on bioweapon design, and requiring formal proofs of compliance.

 

The role of the AGI safety auditor is key.

 

Standards for certification are a basic, essential aspect of deploying any new capability... or should be. I give high marks to safety and alignment to international values, standards, etc. However, I give just average scoring to continuous audits and alignment to societal values and priorities. It is not that they are not good objectives, they are, but continuous audits by whom? What organization is a truly impartial judge? Similarly, what user is going to keep a log of AGI use, and what organization receives all this data?

 

These all seem very important, but I worry about high bureaucratization leading to inflexibility; I’m unsure what can be done about that.

 

The governance system should include how humans that make and use AGI are liable.

 

It is difficult to see what would constitute "proof" of alignment. We do not even have a mechanism to ascertain this in human-to-human interactions, so what would it be in the case of human-to-AGI? It is also important to realize that we already live in a world full to the brim with NGIs (natural general intelligences) in the form of our animal relatives. To discuss the governance of AGI is to question our tacit agreements about how we treat other already-existent beings whose consciousness is unlike ours. Perhaps the parallel here is between natural intelligence and ANI; regardless, basing the distinction of which we are discussing on the simple basis that one is "artificial" while the other is not, is... artificial. For these and other reasons I speak of elsewhere, the problem being discussed here is not actually about "governance of AGI" but simply about "governance."

 

Security and deepfakes are more functions of cyber infrastructure and the field of AI as a whole, not specific to AGI. Proof of alignment is quite difficult to guarantee and quantify. International agencies usually derive their values and functions based on collaboration, relationships and sovereign equality; how would an AGI suppose alignment with its functions based on these principles? Instead, principles of safety, inclusion and autonomy should be aligned.

 

Labeling would work, but since some industries use deep fakes like cinema, they should not be prevented but labeled.

 

There is no way to enforce any licensing. Some specific licensing (i.e., to access an AI via a website or corporate portal) may be valid. We have no knowledge of how to implement or legislate it, or how to determine compliance. Internal flight-recorder-style logs are a good idea, but again, enforcement would be next to impossible unless the AI is deployed in some public arena. Seeing how little beforehand prevention will be possible, I do believe that laws establishing liability and accountability will need to exist and be enforced. However, I fear they will be hamstrung by the legal/political establishment; after all, should not a governing body be held accountable for making laws that cause more harm than good? And legislators lack knowledge of and respect for the technologies being legislated.

 

Monitor and supervise without stifling innovation, creativity and development.

 

It is not likely that a UN agency can enforce regulations in sovereign countries, I suggest that the tone be more about conviction, persuasion, awareness and the development of a culture of good use of AGI, rather than coercive.

 

A parallel AI military program should be put in place through the UN Security Council because quantum computing will make current encryption obsolete, rendering national grids, banking and essential services entirely vulnerable. Consensual security systems need to be developed. These can be generated at treaty level. They should be open-ended and optional, but dedicated to the reduction of international tensions, untoward technological escalation, arms race, and terrorism. If the UN Security Council does not address AI and the cyberspace paradigm, this arena is likely to become a detrimental factor in international relations, trending to the segregation of national interests.

 

Question 3: What rules should be part of a nonmilitary AGI governance system?

 

74% An AGI must identify as AI, never as a person.

 

73% AGI's must not allow operation or changes by unauthorized persons or machines.

 

68% Prohibition of the use of subliminal or psychological techniques to manipulate humans.

 

66% Modification of historical data or records is not permitted.

 

55% Audit software built into the AGI that pauses it and triggers an evaluation when the AGI takes an unexpected or undesired action not anticipated in its utility function, to determine why and how it failed or caused harm.

 

41% Ability to determine, in AGI output, why an action is requested, assumptions involved, priority, and the conditions and limitations required.

 

29% Have a rich self-reflective and compassionate capability.

 

Explanations and Comments on the items in Question 3 above:

            

I strongly emphasize the importance of clearly designating AGI as different from ANI. It is desirable to prevent human manipulation, however it has been noted that such occurrences are present in other media and may not always have negative intentions (for instance, promoting healthy behavior can be seen as a form of manipulation). Additionally, there exists a fundamental distinction from other tools, like a knife, where the outcome depends on the user and their intentions. In the case of AGI, the actor can be the system itself, underscoring the need to incorporate elements of compassion and self-reflection within the AGI.

 

All of these points are extremely important; however, I believe we should broaden the “Prohibition of the use of subliminal or…” item. We should not allow AGI systems to manipulate humans in any form; for example, by creating situations that limit freedoms or choices in ways that guide human behavior. Self-reflection is usually used for higher reasoning capabilities in AI systems, yet for an AI system to be compassionate is quite difficult if not impossible. Compassion depends on culture and value systems; certain values would be held in higher regard in one demographic than in another. I therefore think we should replace “compassionate” with quantifiable measures. Language such as “compassionate” anthropomorphizes these systems, which learn and perform according to clear quantifiable rules, data, and techniques. We must be careful when constructing policies, guidelines, and regulations to always keep the narrative and language unambiguous.

 

I gave this entire section higher marks than my scoring on the previous two because the guidelines/rules are very clear and apply strictly to the AGI. Building in limitations to the AGI itself seems more realistic and likely to provide more immediate assurance to the users.

 

Many such functions can be included, but they should be constructed as optional and consensual features. Countries that are not compliant with established AI grades will find they do not receive optimal consideration. Countries with adequate verification techniques will get most use out of the system.

 

Self-reflective and compassionate capability can be both a perk and a liability. It opens room for manipulation and deceit. It all comes down to how it is implemented and if it is indeed self-aware, we should consider its own will [intentions] before hoping for it to be empathic.

 

Inclusion of psychological factors opens a whole new dimension. How will we ever tell if we are being persuaded subliminally? Maybe we ought to ban psychology from the learning models.

 

Never allow AI to manipulate humans.

 

Subliminal psychological techniques are already deployed in advertising, movies, TV, print, social media, etc. Would one eliminate the other?

 

The software must be open code.

 

Very important to be able to identify any AGI output as artificial not human.

 

New skills will be needed to distinguish between human and AI actors, requiring educational reform.

 

The sweep of unwarranted assumptions behind some of these proposals is breathtaking. Though I agree that we should guard against manipulation, provable AGI seems like a fantasy. The paradigm it assumes is the cognitivist/representationalist one, which has been robustly critiqued by Varela et al. If enactivist models of cognition turn out to be truer to the reality of AGI, then audit software is not a concept that even makes sense, because it is not possible to evaluate *from the outside* what exactly is meant by any particular idea *inside* a mind, whether natural or artificial. The only place where evaluation can be made is in the interface between thought and the outside world, i.e., in action. So we can monitor and question the actions of an AGI, and in the enactivist model we can know with great certainty what the values of an autopoietic AGI will be; but we have no way of evaluating its internal states out of context.

 

AGI systems should be designed to be human-centered. Mechanisms must be provided that allow some kind of control of humane-machine interaction.

 

Each item demands a long discussion of self-reflectivity, self-reference, etc. Multiple levels of self-reflectivity will immediately lead to the temptation of power, which is associated with human nature and the very sense of power. It all reduces to one question: why do we have to impose limits on ourselves? This will be especially difficult for the governments of the Big Powers.

 

Turning off an AGI should be carefully considered, because pausing execution could have undesired, even destructive effects.

 

 

Question 4: What additional design concepts should be included for a UN AGI Agency to certify national licensing of nonmilitary AGI systems?

 

66% Ability to address and deter dangerous AGI arms races.

 

60% Management should include AGI experts and ethicists from both public and private sectors.

 

47% Ability to govern both centralized AGI systems of governments and corporations, as well as decentralized AGI systems emerging from interactions of many developers.

 

39% Develop protocols for interactions of AGIs from different countries and corporations.

 

38% Ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders.

 

33% Embed UN governance software in all AGIs that, like anti-virus software, is continually updated.

 

28% Learn from NPT, CWC, and BWC verification mechanisms when designing the UN AGI Agency.

 

Explanations and Comments on the items in Question 4 above:

 

The requirement that the AGI developer would have to prove safety as part of the initial audit prior to UN certification of a national license, should be extended to all technologies that can be used to build AGI (e.g., quantum computing).

 

Establishing computer security policies to avoid the application of an arms race based on artificial intelligence is an imperative to prevent a global confrontation.

 

A UN governance agency should be declarative and subject to the good will of countries, but should not have enforcement powers. There also seems to be a contradiction between granting national licenses for non-military AGI systems and the notion of preventing an "arms race."

 

We must remember that, as with anything, there are multiple sides. Regulating chips fuels black-market industries; embedding governance software stifles innovation, can slow startup companies, raises the issues already seen with current software regulatory institutions, and could significantly reduce the ability to achieve the SDGs. I quite like the protocols, deterring AGIs such as Amazon Web Services, and including a multi-stakeholder body for AGI, much as the UN is currently doing.

 

Regulating and intercepting chip sales and electricity for serious repeat offenders is not something the UN will be able to do. Even if the UN could get control of the supply chain or power, it is not within its current charter and would require a significant expansion of international policing powers. I like the idea of governance software and only gave it a 7 because I don't think it is realistic to accomplish. I'd also like to include multi-disciplinary participants. Governing centralized and decentralized AGI systems is another idea that just can't be scaled well. The last two actions make a lot of sense, but the parameters of how AGI interactions occur will be difficult to institutionalize or put into law. Some of these actions may fall into the category of informing rather than protecting the public.

 

How will a civilian agency control AGI proliferation by military agencies? Who wins in an AGI arms race? The machines.

 

I think that it is better to have an independent agency recognized by all governments and consisting of experts in the field including, AI experts, philosophers, ethicists, behavior scientists, anthropologists, etc.

 

Effective safety features should be designed collaboratively by participating nations and academic institutions. In cases of disagreement, arbitration should be through the consensus of the governing bodies, comprised by participating entities and developmental organizations.

 

I can see the value in being able to detect offenders, but making the AI able to act upon it violates Asimov's fundamental laws of robotics. It also makes it way too powerful and impossible to shutdown should it get out of control.

 

How can market forces be controlled and regulated in any system of multi-level hierarchical control? And how can AGI systems be applied to control markets?

 

Both public and private sector experts and ethicists should be included in the initial stage of creating the international regulations.

 

As long as private ownership of computing equipment is possible none of this can be enforced.

 

It is important to include the ISO 27000 computer security standards.

 

 

Question 5: What else should be part of the UN Agency's certification of national licensing procedures for nonmilitary AGI systems?

 

75% An AGI cannot turn off the human-controlled off-switch or that of another AGI or prevent human intervention.

 

65% Proof of automatic shutdown ability if rules or guardrails are violated.

 

60% Must specify that output is generated by an AGI.

 

52% Include criteria for use by AGIs to determine whether autonomous actions can be made or whether the AGI should check first with humans.

 

49% Material used in machine training must be audited to avoid biases and inculcate shared human values prior to national licensing.

 

47% Unscheduled inspections and tests by authorized third parties to determine continued adherence to license terms.

 

40% Allow UN AGI Agency random access to AGI code to review ethics, while protecting the IP of the coder/corporation.

 

Explanations and Comments on the items in Question 5 above:

 

Allowing a UN AGI Agency random access to AGI code would be helpful but could make it difficult for some countries to accept. It may be better to require that national agencies have access to AGI code, plus some reporting requirements to the UN agency? And the UN agency assists the national agency when needed?

 

I think auditing for biases is an admirable goal, but unrealistic. Numbers vary by expert, but some 150 or more cognitive biases have been identified. "Auditing" suggests more accuracy than we actually have in spotting biases in content, determining which material is more or less damaging, and agreeing on bias attributes. I think identifying AGI output is essential for an informed user and decision making. Regarding criteria for autonomous vs. human decision making: like so many of the items in the survey that require AGI to make judgments, I think this one is admirable but almost impossible to employ at scale for every contingency that may occur. The remaining items are all sound concepts for responsible AGI development, with the last two only receiving 8's because I have reservations about whether the UN can implement them.

 

The last proposal, on random review of artificial intelligence systems, is important, especially for systems linked to the defense sector.

            

I firmly agree with most points except “Unscheduled inspections…” and “Material used in machine…”. Private institutions that develop AGI systems will want to maintain a competitive edge; releasing any training data (even confidentially) may open security holes the company would not be comfortable with. Secondly, it would be nearly impossible to audit training datasets that may contain trillions of tokens. Thirdly, we must understand that the value systems of cultures differ by location, group, etc. I think unscheduled inspections should be corrected to verifying that the output of the AGI system adheres to requirements.

 

Whatever the physical or management design, the human system operators and their families must be protected against efforts by criminals to coerce, bribe, or influence system objectives or output. It follows that staff compensation must be high enough to eliminate bribery as a tool of international crime, and that job tenure be guaranteed.

 

All these proposals are quite interesting but they are based on very strong institutional thinking.

 

The regulators of AGI need to be deprogrammed from their cultural biases that have crept into every level of society on the planet. Who is to say the regulators of AGI will be fair and equitable?

 

Auditing the training material would require an impractical amount of time and, additionally, as others have said, it would be really hard to find impartial regulators. We should, however, agree on some values and try to enforce them. AGI output should be recognizable, and AGI should always be under the control of humans. The selection of those operators should be strict, and their ethics should be held in high regard. Compensation should be proportionate to their responsibilities.

 

The AI process is not only a technological one, it is more a cultural and societal one. Simple safeguards (such as not to misuse systems) should be sufficient, as users will act to ensure that systems are used in compliance with expected standards and consensual awareness. Over emphasis on software and similar features is likely to constrain and act as barrier to wholesale implementation.

 

AI should be used, but AI should not be allowed to use or control or manipulate humans.

 

 

Question 6: What additional rules should be part of a nonmilitary AGI governance system?

            

66% An AGI cannot turn on or off its own power switch or the power switches of other AGIs.

 

60% Respect Asimov's three laws of robotics.

 

59% Identify and monitor leading indicators of the potential emergence of Artificial Superintelligence, giving early warnings and suggested actions for Member States and UN Agencies.

 

51% Reinforce human development rather than the commoditization of individuals.

 

45% Ability to distinguish between how we act vs. how we should act.

 

41% Recursive self-improvement and self-replication with human supervision.

 

31% Others

 

 

Explanations and Comments on the items in Question 6 above:

 

ASI may emerge in a stealthy mode and evade our efforts to catch it before it becomes functional.

 

Right, it will emerge stealthily; that's why we need to identify leading indicators of the possible emergence of AGI and monitor for them.

 

AI must be given some autonomy so that it can function, but not given complete autonomy in such a way that it governs itself or ends up governing humans.

 

The questions are very good and helpful, but they are built on the assumption that the parties will operate at the same level of self-reflexivity and will not be tempted by individual and collective human greed for power.

 

Not providing AGI a kill switch for itself or others is essential, especially as AGI becomes more integrated into critical services. Health services is one area where an unanticipated or unexpected pulling of the plug could be life-threatening. Recursive self-improvement is an important aspect of AGI; maybe this should be separated from self-replication? Create a system of governance that allows laws to be added as needed. I would love for AGI to have the ability to distinguish between how we act and how we should act, but since humans have a hard time doing this, and AGI training depends on human data and information, I am not sure this is realistic; still, I gave it a 7 because I would like us to strive for that. Identifying leading indicators of ASI is valuable, but I mostly gave it a 10 because building in this capability requires a lot of informed discussion, and scoring it highly makes those discussions more likely to happen. I would also like to see governance built on an effects-based decisional system: those segments of society that would be most damaged economically, emotionally, or physically are governed more strictly than other areas. This might help prevent perceptions of overreach and limit attempts at circumventing governance.

 

Ability to turn off itself should be allowed, unless responsible for managing critical infrastructure.

 

Asimov's three laws are a fictional plot device that Asimov himself repeatedly parodied for how insufficient and contradictory they are. https://www.brookings.edu/articles/isaac-asimovs-laws-of-robotics-are-wrong/

 

Prepare a legal code with a series of rules and related sanctions for those who violate them.

 

The UN governance agency should be declarative and will be subject to the good will of the countries, but it will have no effect on bad actors.

 

AI mapping and planning at global level will greatly facilitate stability and optimal outcomes.

 

Prevent an AGI from turning into a self-evolving virus that gains unauthorized access to resources (hardware and software, as well as other systems, including over the Internet), multiplies itself, and potentially causes harm. AGI should reside in a controlled environment where it cannot modify its source code and machine byte-code, its training data, or the parameters of the security infrastructure. Access to extra computing, storage, other hardware, network, and Internet resources should be granted upon explicit human approval, for a specific purpose, for as long as it is needed, and audited. The AGI could only suggest improvements, which could be taken into account by developers in the next release cycle.
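As a minimal sketch of the approval-gated resource access described in this comment (all names are hypothetical assumptions), the snippet below requires every resource request to name a purpose and be explicitly approved by a human, grants access only for a limited time, and keeps an audit trail:

```python
import time

class ResourceGate:
    def __init__(self):
        self.grants: dict[str, float] = {}  # resource -> expiry timestamp
        self.audit: list[str] = []

    def request(self, resource: str, purpose: str, human_approved: bool,
                ttl_seconds: float) -> bool:
        """Grant access only on explicit human approval, for a limited time."""
        self.audit.append(f"request {resource} for '{purpose}' approved={human_approved}")
        if not human_approved:
            return False
        self.grants[resource] = time.time() + ttl_seconds
        return True

    def allowed(self, resource: str) -> bool:
        """Access lapses automatically once the grant expires."""
        return time.time() < self.grants.get(resource, 0.0)

gate = ResourceGate()
gate.request("extra_gpu", "fine-tune translation model", human_approved=True, ttl_seconds=3600)
assert gate.allowed("extra_gpu")
assert not gate.allowed("internet")  # never requested, never granted
```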

 

The governance model should be something like Institutional Review Boards (IRBs) to evaluate and ensure ethical integrity, foster moral courage, and to cultivate a holistic culture of ethics.

 

AGI should also pass a test that validates its/his/her/their levels of consciousness and sentience. Something like a very complex new form of Turing test.

 

 

Questions 7-12: What global governance models should be considered for creating a trusted and effective AGI Governance System?

 

Question 7: How effective would a multi-agency model be, with a UN AGI Agency as the main organization but with some governance functions managed by the ITU, WTO, and UNDP?

 

Ratings:

 
   

 


Very High (25): 14%

High (60): 33%

50/50 (54): 29%

Low (33): 18%

Very Low (12): 7%

Total Respondents (184)

 

Explanations and Comments:

 

All these institutions are quite bureaucratic, but they might create a supranational and supra-institutional platform with binding and obligatory rules for all entities, public and private.

 

AI goes beyond territorial demarcations; it must be a global effort in order for it to reach higher potential.

 

Stress openness and transparency to reduce politics.

 

It could be effective if they add auditable platforms with open source that can be reviewed and refined by specialized committees from various countries.

 

Very effective and very essential. A related quantum research body should be established. This will ensure safe cyberspace for national grids and other essential structures. The AI/quantum node will greatly enhance global security prospects and contribute towards an ethos of demilitarization and a stable world view.

            

The high degree of bureaucratization and politicization in these agencies usually interferes with timely adaptation to emerging phenomena, especially those developing as rapidly as artificial intelligence.

 

The UN is not a trustworthy organization and few people would take this seriously.

 

A multi-agency model with a UN AGI agency may gain a higher degree of confidence and trust from nations around the world than a multi-stakeholder model or any other.

 

I have no problem with trusting the UN. I do question whether factors such as trade agreements apply to the governance of intelligent beings. Human history has so far been the story of turning people into things; we have a historic opportunity to recognize that this is an error, and even to reverse the logic: turning things into people. The Rights of Nature legal movement should be considered as a legal framework for the governance of AGI.

 

All existing institutions are dangerously slow.

 

Too many agendas, too many different goals.

 

It will depend on how well the work is coordinated. There must be rules that consider different scenarios, such as when members disagree, and that set time limits for making a decision.

 

Delegate management to a new supra-national body composed of experts and futurists from various existing supra-national bodies or even independent ones.

 

I suppose that a strong conglomerate of university institutions, covering all regions and, if possible, all nations, would be much more reliable than a UN department.

 

International management of dangerous or threatening capabilities has historically been ineffective. Monitoring and enforcement have proven difficult for land mines, biological and chemical weapons, nuclear weapons proliferation, drugs, etc. These types of oversight were for things that could be physically inspected; this would be much harder for AGI.

 

 

Question 8: How effective would it be to put all the most powerful AI training chips and AI inference chips into a limited number of computing centers under international supervision, with a treaty granting symmetric access rights to all countries party to that treaty?

 

Ratings:

 

Very High (25): 14%

High (50): 28%

50/50 (41): 23%

Low (41): 23%

Very Low (23): 13%

Total Respondents (180)

 

Explanations and Comments:

 

The idea is good, and we should also add quantum computing centers applied to artificial intelligence, which will be the next great technological change.

 

It could be effective and serve as a kind of guarantee, but it could also open the door to monopolization and the privilege of a few companies.

 

That will limit economic growth.

 

The genie is already out of the bottle. There is no time to define "most powerful," nor to enforce it. Better to enforce tracking of sales, shipments, and energy consumption.

 

Those who do not join the treaty could create unregulated AGI that could lead to a superintelligence not to our liking.

 

Today anyone with a high-end gaming computer can develop and deploy AI systems, and as computers increase in speed and memory size anyone who has one will be able to develop and deploy an AGI system. Unless computer access (and ownership) is severely restricted enforcement is impossible. There are no magic chips for implementing AI.

 

Interesting, but it poses challenges and risks that limit practicality. The most advanced AGI capabilities are being developed commercially for their high earning potential, or by governments seeking an edge over adversaries. This option would strip away potential profits or ROI, which would limit further AGI development. It would also require international agreement to hand government-developed capabilities to a third party. It could create or increase knowledge gaps between the have and have-not countries based on participation. It introduces a very juicy and possibly lucrative target for hackers and cyber-attacks, meaning the UN would have to create the highest cyber security on earth. Lastly, areas with unknown potential (like space or deep-sea mining) become extremely competitive or contentious, making them almost impossible to regulate.

 

Multi-lateral control should be specified in the design of the UN AGI agency.

 

Uncertain and unreliable (not at the central-government level, but in the sub-national areas of each country). Why would a private company have international control? And what would happen with the mandatory access required by a local law, such as the Patriot Act in the USA?

 

Global consolidation is needed. A potential UN quantum-computing facility may be the only workable configuration. The UN Security Council should run a parallel program for secure global cyberspace and de-escalation contingencies.

 

Question 9: How effective would a multi-stakeholder body (TransInstitution) be, working in partnership with a system of artificial narrow intelligences, with each ANI implementing the functions and requirements listed above and continually feeding back to the humans in the multi-stakeholder body and to national AGI governance agencies?

 

Ratings:

 


Very High (30): 17%

High (60): 34%

50/50 (54): 30%

Low (27): 15%

Very Low (8): 4%

Total Respondents (179)

 

Explanations and Comments:

 

Although I think this, or something much like it, is what will actually become the governance model, it will take a lot of education for UN and national leaders to understand it and be ready to create it.

 

I like the idea of using ANI for specific governance tasks, but I think constituting a multi-stakeholder body will be very challenging, perhaps more challenging than the technical solutions in this concept. I also wonder who the humans are who could monitor millions of AGI developments, exchanges, and uses to make governance decisions; this would be like one person being responsible for all air traffic control in the world. Still, as I said, I like the idea of leveraging ANI.

 

It seems a much more realistic model, considering the characteristics of the current and, at least in the immediate term, future international system.

            

This sounds like the most reasonable approach although the process of value alignment between human and AGI entities will be challenging.

 

The multi-stakeholder pathway is authentic and realistic. All nations should have the opportunity to participate.

 

Governing AGI is simple: just limit how much wealth/real estate/speech any one AGI can have, mandate a set of basic ethical utility functions as HIGHER in priority than any utility function provided by an owner, build millions of them, and let them monitor one another. In other words, accept that, like humans, they need to constantly be on the watch for bad actors and greedy or selfish manipulators, and guide them in building a society that works efficiently to suppress the bad actors. This is an opportunity for us to develop a science of governance without experimenting on humans; the lessons AGI learns in governing itself may permit humanity to design better governing systems for itself.
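
As a rough illustration of the priority ordering this comment proposes, here is a minimal Python sketch; the zero "ethical floor" and the function names are illustrative assumptions, not part of the report:

# Hypothetical sketch: mandated ethical utility functions take strict
# priority over any owner-supplied utility function.
def choose_action(actions, ethical_utils, owner_utils):
    # An action is admissible only if every ethical utility clears
    # its floor; owner preferences can never override this filter.
    admissible = [a for a in actions
                  if all(u(a) >= 0 for u in ethical_utils)]
    if not admissible:
        return None  # refuse to act rather than violate the floor
    # Only among admissible actions do owner utilities rank choices.
    return max(admissible, key=lambda a: sum(u(a) for u in owner_utils))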

 

One can think of a 3D-matrix implementation, with rows (AI functions) and columns (areas of operation) and a 3rd dimension (stakeholders). Complex, but doable.
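
A minimal sketch of that matrix as a data structure; the three axes are the comment's, while the labels and the example entry are invented for illustration:

# 3D governance matrix: AI functions x areas of operation x stakeholders.
functions = ["auditing", "licensing", "monitoring"]
areas = ["health", "finance", "defense"]
stakeholders = ["UN agency", "national regulator", "industry body"]
matrix = {(f, a, s): False
          for f in functions for a in areas for s in stakeholders}
# Example assignment: national regulators audit health applications.
matrix[("auditing", "health", "national regulator")] = True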

            

I don't see how ANI would be able to implement the requirements. At most it could support the governing body by providing some well-defined, even if complex, KPIs (key performance indicators) that it can handle. Being able to evaluate whether an AGI has violated a constraint would mean being able to evaluate whether an ethical principle has been violated... But in any case, this is better than the other two options.

 

Bad actors will act badly regardless of governance agencies.

 

 

Question 10: How effective would it be to create two divisions in a UN AI Agency: one for artificial narrow intelligence (ANI) including frontier models and a second division just for AGI?

 

Ratings:

 
   

 


Very High (25): 13%

High (53): 28%

50/50 (63): 33%

Low (26): 14%

Very Low (23): 12%

Total Respondents (190)

 

Explanations and Comments:

 

These divisions could be delegated to an independent research body composed of experts, futurists, and public/private representatives. We might also create an ASI division to work with AGI for collaboration among species.

 

Since ANI and AGI are fields with significant overlap, divisions in the UN AI agency addressing the different fields could be more beneficial.

 

I do think the UN should monitor the development of AI, and I see a clear distinction between ANI and AGI: the distinction between tools and people. It makes sense to legislate the use of tools. With people, you consult them as stakeholders and come to agreements with them, not amongst yourselves against them.

 

There are potential trouble points if governance styles differ between the divisions. Also, the division depends on how ANI and AGI are defined, something that may keep changing over time. Nothing but potential trouble with this approach.

 

If implemented, it should be done conditionally, even considered as an experiment, with continuous monitoring and adjustment.

 

Several divisions are needed: AI mapping; a planning and implementation platform; and quantum computing linked to the UN Security Council.

 

The distinction between kinds of AI will be irrelevant once AGI emerges.

 

Overlap, the inability to determine which division is responsible for new assignments, and competitive jealousies will cloud progress.

 

There is no clear distinction between governing AGI and governing ANI; I don't see why this separation would be useful, but I don't see it as detrimental either. There may be some conflicting competencies in borderline cases, but that depends on the definition of AGI.

 

In real life, distinguishing between ANI and AGI will be very complicated, creating potential tensions and conflicts. On the other hand, the benefit of separating the two is not very clear.

 

Question 11: How effective would a decentralized emergence of AGI that no one owns (just as no one owns the Internet) be, arising through the interactions of many AI organizations and developers such as SingularityNet?

 
   

 


Ratings:

 

Very High (30): 16%

High (54): 29%

50/50 (57): 30%

Low (26): 14%

Very Low (20): 11%

Total Respondents (187)

 

 

Explanations and Comments:

 

This is a highly possible outcome, given how technology will progress and develop. However, regulation needs to protect the many possibilities. The Internet is a facilitating technology for further developments, but AGI will have some differentiating aspects. Regulation needs to target applications and use cases rather than the technology itself.

 

Decentralized emergence of AI enterprise will happen anyway. This is not the essential criterion; the formative criteria are the rapid establishment of international consensus and the placement of an effective UN developmental tool. An open-source methodology for AI platforms is very usable.

 

This proposal is interesting; I imagine that blockchain-based systems could be added to make this data transparent.

 

Decentralized technology will generally find a way around regulation, particularly as the technology is moving far faster than regulators (in many ways, attempts to regulate AI are pure theatre, far removed from the underlying technological reality). Advocating for and encouraging open-source, open-access, decentralized AI models above a certain size is likely the only way to ensure that evolution happens in plain sight and that counter-measures can evolve rapidly in response to harmful applications. Decentralized governance systems that structurally mitigate unipolar concentrations of power are likely the most robust mechanism. Ultimately, again, regulation needs to be done at the application level, not the technology level.

 

This suggests several actors developing AGI: open-source developers, governments, militaries, and corporations. The first may proliferate widely, but the latter will have immense resources to bring to bear. With this model there will therefore be a kind of class division among AGIs: many free, open, trustable AGIs that individually have no power, and numerous corporate- or government-backed ones with colossal powers to act. This scenario makes me very uneasy.

 

So much of this depends on the profitability of AGI, the speed at which it continues to develop, and the altruism of developers. I give it a 50/50 without some very serious foresight work.

 

It will be chaos... but so is the Internet. It is hard to envision such a model given the current disparities in AI competence levels between countries.

 

This will probably happen anyway. I also believe that open-source, open-access, decentralized AI models above a certain size are the best way to go for safety, accountability, and general accessibility.

 

Decentralized development will happen regardless of governance attempts. Bad actors will act badly.

 

It is not true that the Internet is not owned by anybody; these kinds of oversimplifications are very dangerous. That is why the same principle is dangerous (and also inapplicable) for AI.

 

No ownership means no accountability.

 

 

Question 12: Describe the global governance model you think will work to manage the development and use of AGI.

 

Recommendations:

            

I like the international aviation agency model: each country has its own set of rules (licensing, inspection, penalties, etc.), coordinated internationally through the UN. In the aviation model, even the mechanics are tested and licensed. Pilots must follow rules promulgated by the Federal Aviation Administration and are sometimes arrested for violations. Aircraft parts are approved after testing. Pilots are tested at least once a year, and spot checks are part of the regulations.

 

Make heavy use of AI to regulate AI. We need to stop thinking we will be capable of controlling it by ourselves. If AI transparency is a challenge right now, imagine how it will be with AGI.

 

The supranational governance organization should primarily operate digitally to be effective and avoid bureaucratization.

 

Global problems have to be addressed globally. Countries should not have the autonomy to develop or implement AGI without global oversight, just as with nuclear programs. We should also consider a scenario in which controlling/regulating AGI is not entirely possible and an AGI, modifying its own source code, escapes human control. We should think about how to mitigate human wrongdoing without guaranteeing 100% compliance, aiming instead for a desired global behavior.

            

We have to be aware that we are facing one of the greatest threats to humanity. Action by the UN is essential, but so is that of other international organizations acting together. This must be made very clear so that immediate action can be taken.

 

An interdisciplinary model is required to govern AGI, one that integrates ANI tools to achieve real-time governance and control capabilities. The model must be centered on highly trained ANI with closed databases that are managed, supervised, and operated by humans.

 

I would like to see governance based on risk. The model would combine economic disruption, human safety, and security/strategic deterrence in the broadest sense, with any limits to governance based on the likelihood of managing AGI progress in that area, field, or profession. I believe the UN has a role in building consensus, convening the best minds, and setting international goals and standards for responsible AGI development. I do not think it is equipped (in its current configuration) to monitor and enforce AGI governance on a global scale, except perhaps, as with nuclear protocols, by monitoring research, development, test, and evaluation in fields with the potential for catastrophic impact on health and the like. Just getting countries to agree on governance rules and roles, standards for development, and frameworks for integration into society would be a substantial achievement. There would likely need to be some form of sanctions for the most damaging violations, perhaps a kind of international review court with powers to sanction or seek compensation.

 

I favor a governance framework based on the IEEE, ISO, and the US National Bureau of Standards as a starting point. Politicians should be rapidly educated about AGI, because they will be the ones making the decisions regarding any treaties and standards. Until that happens, for-profit corporations and militaries will continue to develop and deploy this technology without restraint; hence, AI in all its forms will outrun any system of effective governance we might desire.

 

The governance model should integrate: 1) a Governing Board, representing the voice of governments and intergovernmental organizations, which decides the public-policy issues that shape the future of AI development and regulation; 2) a Technical Advisory Council of technical and academic advisors; and 3) an Industry Advisory Council that expresses the needs and developments of commercial developers.

 

An education task force should prepare world leaders to deal with the emergence of AGI. Developers and researchers should come from diverse backgrounds. The governance model should be global and trans-federated for scrutiny and deliberation.

 

I see a way forward in adapting for AGI the international structure that limits and controls nuclear armament, implementing even more serious sanctions, and pressing until there is not a single country in the world that has not signed.

 

Governments, research institutions, and private companies collaborate to create guidelines and policies for AGI research, overseen by a global body, while each country adapts regulations to its cultural, legal, and economic context and enforces safety protocols, regular audits, risk assessments, and impact evaluations. AGI developers obtain licenses, as in other regulated industries. An independent Global AI Council of experts, policymakers, and stakeholders oversees AGI development and recommends preventive measures. Industry associations set AGI standards, codes of conduct, and ethical guidelines.

 

The only governance model for technology that "works" comes from the pharma sector. The technology producer has an obligation to test the effects of the technology it introduces and to report the results. On that basis it is allowed to sell its products, whose use is monitored continuously by licensing agencies. In the case of unwanted effects, the license can be revoked. The use of the technology takes place under strict conditions that define how producer and user responsibility are split, including for any damage to third parties. The UN body (the WHO) monitors the situation and reports.

 

I like the World Health Organization model: there are some international standards, and each region and country has additional standards matching the realities of its area.

 

Ideally it would be a model in which the nodes of artificial intelligence generation and development are regulated by an international protocol, with sanctions applicable in case of violations. An audit system should be made up of countries not directly immersed in AI development, to prevent biases, prejudices, and new forms of manipulation or interference.

 

The only possible controls will be over the results of AGI output, just as enforcement today is about actions.

 

The governance system should be agile, with a centralized part and a decentralized part. It should not be very expensive and should not inhibit innovation and creativity, but it should always know what is being done, why, for what purpose, and with what consequences, supported by a robust indicator system and a control board at the planetary level.

 

Foster collaboration among nations, international organizations, academia, industry, and other stakeholders. Create a specialized UN agency dedicated to AGI governance, tasked with setting global standards, facilitating cooperation, and addressing ethical considerations. Develop and enforce multilateral treaties and agreements that establish ethical principles, safety standards, and guidelines for the development, deployment, and use of AGI. Define a universally accepted ethical framework that prioritizes human values, rights, and safety in AGI systems. Implement mechanisms for continuous evaluation and adaptation of governance frameworks to keep pace with technological advancements and evolving ethical considerations. Ensure the inclusion of diverse stakeholders, including AGI developers, ethicists, policymakers, and representatives from affected communities, in the decision-making processes. Encourage open-source collaboration and information sharing within the AGI community. Establish regulatory frameworks at the national and international levels to ensure compliance with global standards. Implement educational initiatives to raise awareness about AGI. Engage the public in discussions to gather diverse perspectives and promote understanding. Set up monitoring and reporting mechanisms to track the development and deployment of AGI globally, with the ability to investigate and address violations of established standards.

 

Cybernetics provides many ideas useful for regulating AI. It offers opportunities to compare computers, human intelligence, and social systems such as management. It provides a general theory of control; it encompasses the social sciences -- psychology, sociology, political science, management, and anthropology -- in addition to much of biology and engineering. Artificial intelligence grew out of cybernetics. "Cybernetics" comes from the Greek word for steersman, the root of the word "governor"; a governor is also the device that regulates the speed of a steam engine. Without such a control device, a steam engine can run away and explode, injuring many people.
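
The governor metaphor can be made concrete with a toy proportional feedback loop; this Python sketch only illustrates the cybernetic idea, and its numbers are arbitrary:

# Toy proportional controller: the "governor" trims the input whenever
# the measured output drifts from the set point, the way a flyball
# governor throttles a steam engine that is running too fast.
def regulate(output, set_point, gain=0.5):
    return -gain * (output - set_point)  # corrective adjustment

speed = 120.0  # runaway engine
for _ in range(10):
    speed += regulate(speed, set_point=100.0)
print(round(speed, 1))  # ~100.0: the feedback loop damps the runaway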

 

We need a global plan (something like an ISO standard), with implementation by accredited state organizations and reliable certifications of compliance with the standards.

 

AGI governance should be based on the principle of subsidiarity; a potential UN agency on AGI should only intervene to the extent that citizens, civil society, and states are unable to regulate themselves. The UN AGI Agency would be a facilitator and supporter rather than a central authority.

 

The model should be composed of a governing board of government and non-government representatives, with a Technical Advisory Council of technical experts from around the world, an Industry Advisory Council representing the software-developer sector, and an Independent Assessment Board of AI evaluators distributed around the world.

 

It should monitor the development of AGI, issue warnings and indications for regulations, but it should not prevent research and development.

 

There is no need to create two divisions in a UN AI Agency; it is sufficient to establish a single division responsible for AGI global governance, staffed with both experts on cutting-edge AGI development and experts in narrow artificial intelligence. In this way, any issues can be coordinated internally.

 

The Agency should be a system of subject committees: strategic planning, governance, ethics, data, security, privacy, monitoring, maintenance, development, usage, outcomes, legal framework, management, quality, storage, training, and communication, all integrated by international organizations/institutions, with public consultations to collect feedback from users and stakeholders before important actions are implemented.

 

I like the ISO model. Certification by an international body. The "mark" of approval would be highly valued and sought after, and something that investors, company boards and government decision makers would require.

 

Encourage international cooperation through economic and diplomatic incentives.

 

No country should have veto power in the UN AI Agency. Majority rule with time limits on rules; not rules with a forever clause.

            

Membership in the governance model should be democratic, with the power over data and regulation distributed uniformly among the member states and their continental government representations. Blockchain and quantum computing systems should be considered in the global governance system.

 

Managing AGI damage must focus on limiting the resources needed to generate it (just as restricting uranium enrichment does for the nuclear threat).

 

Have Institutional Review Boards consider ethical supply chains, environmental impact, the rich-poor gap, data privacy, the cultivation of ethical culture, moral courage, and other societal challenges to produce a LEED-like evaluation, rating, and certification.

 

A global governing body and global rules for design, delivery, and ethics, with the human always at the center. A governance group involving humans and non-humans from a range of different disciplines would form the core group in a globally connected hub. I would also ensure that end users (non-technical representatives and young people) are part of the body to bring diversity.

 

Develop protocols for risk assessment and mitigation, including addressing potential existential risks associated with AGI development. Encourage global research collaboration to share insights, best practices, and methodologies for safe and ethical AGI development.

 

I believe more in private competition than in public regulation. Besides, who regulates the regulators?

 

The model should allow for free access to AGI to prevent future social divides.

 

I do not think there is a workable global governance model that can be implemented until AGI is a fully mature technology and the known dangers and failure modes are well worked out. In the short term, the best that can be done is to track the technology and attempt certification strategies.

 

AGI will likely emerge first from unregulated military research and disseminate from there; hence, there will not be an effective UN AGI governance agency.

 

 

 


 

Appendix

 

Regional Demographics

 

Region              Percentage
Europe              38.56
North America       21.56
Latin America       18.63
Asia                17.32
Africa               2.96
Other                0.97

 

 

 

 

 


 

Phase 2 Real-Time Delphi Participants

 

 

Tom Abeles 

Sagacity Inc. Minneapolis, USA

 

Patricio Garces Almeida 

Universidad Andina Simón Bolívar

Quito, Ecuador

 

Kamal Aly

Cairo University

Cairo, Egypt

 

Giulio Ammannato

IACP

Bagno a Ripoli, Italy

 

Romeo Pérez Antón 

University of the Republic

Montevideo, Uruguay

 

Yul Anderson

African American Future Society, Conyers, USA

 

Rodolfo Angeles 

Francisco Morazán,

Ministry of Security, Tegucigalpa, Honduras

 

Juan Carlos Araya

AR Energía

Santiago, Chile

 

Claudio Antonini 

Consultant

Milford, USA

 

Javier Armentia

Planetario de Pamplona

Pamplona, Spain

 

 

Margarita Arroyo

Consultant

Mexico City, DF, Mexico

 

Guillermina Baena 

Universidad Nacional Autónoma de México Mexico City, Mexico

 

Mohsen Bahrami 

Amirkabir University of Technology

Tehran, Iran

 

Ying Bai

Capital Inst of Science

Beijing, China

 

Olatz Arrieta Baiges

LKS NEXT 

Derio, Spain

 

Miguel Baroja 

LKS

San Sebastian, Spain

 

Mara Di Berardo

The Millennium Project 

Campli, Italy

 

Massimo Bernaschi

National Research Council,

Rome, Italy

 

Hassan Bashiri

Hamedan University of Technology

Hamedan, Iran

 

Sam Douglas-Bate

ForgeFront International 

Bath, UK

 

Wang Bin

Hunan University

Changsha City, China

 

Catherine Bocskor

International Lawyer 

Clarksville, USA

 

Zvezdelin Borisov 

Sofia Tech Park

Sofia, Bulgaria

 

Rodrigo Borras
Veta Verde S.A. de C.V.

Mexico City, DF, Mexico

 

David Bray

Stimson Center

Washington, DC, USA

 

Marcello Bressan

CESAR

Recife, Brazil

 

Dirk Bruere

Surface Measurement Systems

Bedford, UK

 

Dennis Bushnell 

NASA (Ret.) Hampton, USA

 

Carlos Caifa

Singularity University

Milano, Italy

 

Yuri Calleo

Univ. College Dublin

Venafro, Italy

 

Jorge Cataldo

OpenPBL Inc. 

São Paulo, Brazil

Ramon Chaves

Federal University of Rio de Janeiro, Brazil

 

Xiaomin Che

Guoyi Electronic Technologies

Chengdu, China

 

Antonio Chella

Palermo University Palermo, Italy

 

Epaminondas Christophilopoulos

MOMus

Thessaloniki, Greece

 

Jose Cordeiro

Humanity+

Caracas, Venezuela

 

Marina Cortes

Institute of Astrophysics and space sciences

Lisbon, Portugal

 

Claudio Costa

NTT DATA

London, UK

 

Vilas Dhar

Patrick J. McGovern Foundation, Miami, USA

 

Mila Diaz

LKS Next Group

San Juan, Spain

Bob Dick

Interchange

Chapel Hill, Australia

 

Todor Dimitrov
Rakovski National Defense College

Sofia, Bulgaria

Dimiter Dobrev 

Sofia, Bulgaria

 

Dexter Docherty

OECD 

Paris, France

 

Gabriela Dutrenit

Metropolitan Autonomous University

Mexico City, DF, Mexico

 

Paul Epping

Exponential

's Hertogenbosch, Netherlands

 

Carolina Facioni

National Institute of Statistics (ISTAT)

Rome, Italy

 

Daniel Faggella

Emerj Artificial Intelligence Research

Weston, USA

 

Elissa Farrow 

Brisbane, Australia

 

Hefeng Tong

China Ct. Globalization 

Beijing, China

 

Alexandre Contar Fernandes 

São Paulo, Brazil

Saverio Fidecicchi

Singularity University

Florence, Italy

 

Javier Fornari 

Santa Rafaela, Argentina

 

 

Bartosz Frackowiak

Warsaw Biennale

Warsaw, Poland

 

Karl Friðriksson

Icelandic Centre of Future Studies Hvammstangi, Iceland

 

Salvatore Gaglio

D'Annunzio University

Palermo, Italy

 

Vyara Gancheva
Bulgarian Academy of Sciences, Sofia, Bulgaria

 

Nadezhda Gaponenko

Russian Academy of Science, Moscow, Russia

 

Jorge Gatica 

National Academy of Political and Strategic Studies, Santiago, Chile

 

Jerome C. Glenn

The Millennium Project

Washington, DC, USA

 

Linda MacDonald Glenn 

Center for Applied Values, Marina, USA

 

Jesús Sánchez Gómez 

Las Rozas de Madrid, Spain

 

Ted Gordon

The Millennium Project

Old Lyme, USA

 

Boris Grozdanoff 

Sofia, Bulgaria

Carlos Guadián 

Open University Cantella, Spain

 

William Halal

Tech Cast

Washington, DC, USA

 

Lance Hampton

Dept. of Defense

Alexandria, USA

 

Aharon Hauptman

Zvi Meitar Institute

Tel Aviv, Israel

 

Sirkka Heinonen

Finland Futures Research Center 

Helsinki, Finland

 

Lucio Henao

Proseres Prospectiva Estrategica

Medellin, Colombia

 

Matthias Honegger 

Zurich, Switzerland

 

Craig Hubley 

Consultant

LaHave, Canada

 

Jan Hurwitch 

Fundación Ética Visionaria

San Jose, Costa Rica

 

Reyhan Huseynova

Azerbaijan Future Society, Baku, Azerbaijan

 

 

 

 

Carlos Piazza Iaria

Piazza Consultoria

São Paulo, Brazil

 

Abulgasem Issa

Libyan Authority for Science, Tripoli, Libya

 

Kevin Jimenez

Universidad Nacional de Loja, Loja, Ecuador

 

Zhouying Jin

Beijing Academy of Soft Technology, Beijing, China

 

Christopher Jones 

Transnormal Institute

Santa Fe, USA

 

Carlos Jose 

Arequipa, Peru

 

Scott Joy

Democracy onAir

Fairfax, USA

 

Kyungsoo Kang 

Seoul, South Korea

 

Georgi Karamanev 

Digital Journalist

Sofia, Bulgaria

 

Nikolaos Kastrinos

European Commission (ret.), Brussels, Belgium

 

Dennis Kautz 

Stuttgart, Germany

 

Sunkwan Kim 

Yongin, South Korea

 

 

Duk Ki Kim 

Seoul, South Korea

 

Demian Kim 

Seoul, South Korea

 

Young Gon Kim

South Korea

 

Yun Cheol Koo 

Seoul, South Korea

 

Svetlana Knoll

Bulgarian Academy of Sciences, Sofia, Bulgaria

 

Norbert Kolos

4CF, Warsaw, Poland

 

Osmo Kuusi

Finland Futures Research Center 

Turku, Finland

 

John Laprise 

Palatine, USA

 

Gema Leon 

National Institute of Statistics

Aguascalientes, Mexico

 

Huabin Li 

Beijing Academy of Soft Technology

Beijing, China

 

Zhipeng Li 

Lloyds International

Beijing, China

 

Andreas Ligtvoet 

Consultant

Utrecht, Netherlands

 

Yuri Lima

Laboratório do Futuro

Rio de Janeiro, Brazil

 

Jun Liu

Electronic Technologies 

Chengdu, China

 

Jesús López Lobo

TECNALIA, Basque Research, Bilbao, Spain

 

Praniti Maini

Banner Health

Phoenix, USA

 

Shekar Manickavasagam 

Citicorp

Bangalore, India

 

Iva Manova

Bulgarian Academy of Science, Sofia, Bulgaria

 

Mario Mantovani

Manageritalia

Milano, Italy

 

Alberto Manuel

Microsoft, Amsterdam

The Netherlands

 

Milan Maric

Axians Montenegro

Podgorica, Montenegro

 

Mario Bolzan 

University of Padova

Venice, Italy

 

Beatriz Elena Plata Martinez 

La Plata, Argentina

 

Elena Fernández Martínez 

León, Spain

 

Jesús Alberto Velasco Mata
Nodo Mexicano, El Proyecto del Milenio

Mexico City, Mexico

 

Monica Mastrantonio 

Montrose, UK

 

Jennifer McClure 

JEM

Oconomowoc, USA

 

Patricia McLagan

Consultant 

Washington, DC, USA

 

John Meagher

Consultant 

Canton, USA

 

Alberto Merino

LKS NEXT 

Mondragon, Spain

 

Czeslaw Mesjasz 

Krakow University of Economics

Kraków, Poland

 

Martin Mihailov

Purdue University

Sofia, Bulgaria

 

Mihail Mihailov 

Theoremus Company Sofia, Bulgaria

 

Claudio Huepe Minoletti

Minister of Energy (ret.)

Universidad Diego Portales, Santiago, Chile

 

Lee Anderson Mottern 

Consultant

San Antonio, USA

 

Victor Vahidi Motti

Alternative Planetary Futures Institute Washington, DC, USA

 

Gabriel Mukobi

Stanford University

Camas, USA

 

Leopold Mureithi 

University of Nairobi Nairobi, Kenya

 

Deyanira Murga 

Cerberus Risk Advisory Group, Washington, DC, USA

 

Art Murray

Applied Knowledge Sci.

Boyce, USA

 

José Sánchez-Narvaez 

Universidad Nacional Agraria La Molina

Lima, Peru

 

Youssef Nassef

United Nations System Staff College 

Bonn, Germany

 

Fabrice Teugia Nguiffo 

RIDING-UP

Yaoundé, Cameroon

 

Kacper Nosarzewski

4CF, Warsaw, Poland

 

 

 

Carlos Ocampo 

Universidad del Valle Cali, Colombia

 

Myoung Suk Oh 

Seoul, South Korea

 

Carlos Ojeda 

Academia Nacional de Estudios Políticos y Estratégicos

Santiago, Chile

 

Anna Sigurborg Olafsdottir 

Icelandic Parliament Reykjavík, Iceland

 

Concepcion Olavarrieta

El Proyecto del Milenio Mexico City, DF, Mexico

 

Alexandre Oliveira 

Cebralog

Campinas, Brazil

 

Yadira Ornelas

Tec de Monterrey 

Monterrey, Mexico

 

Fernando Ortega 

Instituto de Gobierno y de Gestión Pública Lima, Peru

 

Wonkyug Park 

Seoul, South Korea

 

Youngsook Park

UN Future Forum 

Seoul, South Korea

 

Roberto Paura

Italian Inst for the Future

Napoli, Italy

 

Juan Antonio Perteguer 

Universidad Politécnica Pozuelo de Alarcon, Spain

 

Heramb Podar

Center for AI and Digital Policy, Mumbai, India

 

Gulio Prisco

Consultant

Hungary

 

Hristo Prodanov
University of World Economics

Sofia, Bulgaria

 

Borja Pulido

Prospektiker

Gipuzkoa, Spain

 

David Rabinowitz 

Corvallis, USA

 

Luis Ragno

Centro de Estudios Prospectivos

Mendoza, Argentina

 

Nicolas Balcom Raleigh

Foresight European Network

Turku, Finland

 

Marcelo Ramirez 

Universidad de Chile Santiago, Chile

 

Craig Ramlal

University of the West Indies, San Fernando, Trinidad & Tobago

 

 

 

Ben Reid

Consultant

New Zealand

 

Lindalia Reis

Hacking Rio

Rio de Janeiro, Brazil

 

Andrea Renda

Center for European Policy Analysis 

Brussels, Belgium

 

Ivan Montoya-Restrepo 

Universidad Nacional de Colombia

Medellín, Colombia

 

Saphia Richou

Prospective Network 

Paris, France

 

Andrea Rigoni

Deloitte            

Milan, Italy

 

Eduardo Riveros

Topwow LLC

Santiago, Chile

 

John Rodman

MITRE corporation 

Silver Spring, USA

 

Juan José Rodríguez

LKS, San Sebastian

Spain

 

Carlos William Mera Rodriguez

Universidad Nacional Abierta y a Distancia

Bogota, Colombia

 

 

Mauricio Valdes Rodríguez

Inst of Public Administration

Texcoco, Mexico

 

Mattia Rossi

Disegnare il Futuro

Chiuduno, Italy

 

Jan Ruijgrok 

ChangeVision

Scherpenzeel, Netherlands

 

Stuart Russell 

University of California, Berkeley
Berkeley, USA

 

Leonardo Salinas
Universidad Mayor de Chile, Santiago, Chile

 

Alison Sander 

Cambridge, USA

 

Danilena Kapralova-Scherner

International Lawyer

Berlin, Germany

 

Sabastian Schmidt 

London, UK

 

Christian Schoon

Future Impacts

Cologne, Germany

 

Karl Schroeder 

Science Fiction Author
Toronto, Canada

 

Wendy Schultz

Jigsaw Foresight

Oxford, UK

 

Rocco Scolozzi

Skopìa Anticipation Services and University of Trento, Avio, Italy

 

Jim Sebesta

Sebesta Consulting

Centennial, USA

 

Yuri Serbolov

Proyecto del Milenio

Mexico City, DF, Mexico

 

Lee Seunghoon 

Seoul, South Korea

 

Richard Silberglitt

RAND

Silver Spring, USA

 

Amalie Sinclair

Lifeboat Foundation 

Santa Cruz, USA

 

Niroshan Sivathasan

Elohim Technology

Sydney, Australia

 

Cintia Smith

Municipality of Monterrey, Mexico

 

Sari Söderlund

Finland Futures Research Center

Naantali, Finland

 

John Spady 

Seattle, USA

 

George R. Stein

Pedagog.AI

São Paulo, Brazil

 

 

 

Petro Sukhorolskyi 

Lviv Polytechnic National University, Lviv, Ukraine

 

Ufuk Tarhan

M-GEN Future Planning Center, Istanbul, Turkey

 

Amos Taylor

Finland Futures Research Center

Helsinki, Finland

 

Ramon Tejeiro 

Escuela Europea de Gerencia, Boadilla del Monte, Spain

 

Claudio Telmon

Italian Association for Information Security

San Giuliano, Italy

 

Zhoung Tiejun

Academy of Artificial Intelligence, Peking University, Beijing, China

 

Spasimir Trenchev

Science Fiction author

Sofia, Bulgaria

 

Stuart Umpleby
International Academy for Systems and Cybernetic Sciences

Arlington, USA

 

Giovanni Vannini 

Rome, Italy

 

Ashok Vaseashta 

International Clean Water Institute

Manassas, USA

 

Javier Vitale

National University of Cuyo

Mendoza, Argentina

 

Martin Waehlisch 

Berlin, Germany

 

Chenyang Wang

University of Science and Technology

Hefei, China

 

Jing Wang

China Biodiversity

Beijing, China

 

Zhe Wang

University of Science and Technology

Hefei, China

 

Song Xueqian 

Beijing, China

 

Wensupu Yang

Association of Professional Futurists

London, UK

 

Stirling Westrup

Useful.com

Montreal, Canada

 

Vivian Yang 

Beijing, China

 

Rabia Yasmeen

Euromonitor International

Dubai, UAE

 

Kevin Jae Young Yoon 

Seoul, South Korea

 

Yang Yudong

Beijing, China

 

Diego Paul Zaldumbide 

Center for Research and Specialized Studies Quito, Ecuador

 

Tong Zhang

University of Illinois

Champaign, USA

 

Zhao Zheng 

Beijing, China

 

Dagang Zhou

Beijing Academy of Soft Technology  

Shanghai, China

 

Aleksander Zidansek

Jožef Stefan Institute

Ljubljana, Slovenia

 

Alan Ziegler

Brain Preservation Foundation 

Pittsboro, USA

 

Simone Di Zio

University G. D’Annunzio 

Chieti, Italy

 

Steve Zlate 

New Milford, USA

 

Ibon Zugasti

Prospektiker

Gipuzkoa, Spain

 

 

 

 