EA - Interviews with 97 AI Researchers: Quantitative Analysis by Maheen Shermohammed

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interviews with 97 AI Researchers: Quantitative Analysis, published by Maheen Shermohammed on February 2, 2023 on The Effective Altruism Forum.

TLDR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions of the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. Following up after 5-6 months, 51% reported that the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. We describe our primary results and some implications for field-building below. Check out the full report (interactive graph version), a complementary writeup describing whether we can predict a researcher's interest in alignment, and our results below!

[Link to post on LessWrong]

Overview

This report (interactive graph version) is a quantitative analysis of 97 interviews conducted in February-March 2022 with machine learning researchers, who were asked about their perceptions of artificial intelligence (AI) now and in the future, with particular focus on risks from advanced AI systems. Of the interviewees, 92 were selected from NeurIPS or ICML 2021 submissions, and 5 were informally recommended experts. For each interview, a transcript was generated, and common responses were identified and tagged to support quantitative analysis. The transcripts, as well as a qualitative walkthrough of common perspectives, are available at Interviews.

Several core questions were asked in these interviews:

- When advanced AI (~AGI) would be developed (note that this term was imprecisely defined in the interviews)
- A probe about the alignment problem: "What do you think of the argument 'highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous'?"
- A probe about instrumental incentives: "What do you think about the argument: 'highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous'?"
- Whether interviewees were interested in working on AI alignment, and why or why not
- Whether interviewees had heard of AI safety or AI alignment

Findings Summary

Some key findings from our primary questions of interest:

- Most participants (75%) said at some point in the conversation that they thought humanity would achieve advanced AI (imprecisely labeled "AGI" for the rest of this summary) eventually, but their timelines to AGI varied. Within this group:
  - 32% thought it would happen in 0-50 years
  - 40% thought 50-200 years
  - 18% thought 200+ years
  - 28% were quite uncertain, reporting a very wide range.
  (These sum to more than 100% because several people endorsed multiple timelines over the course of the conversation; see the brief sketch after the findings list.) (Source)
- Among participants who thought humanity would never develop AGI (22%), the most commonly cited reason was that they couldn't see AGI happening based on current progress in AI. (Source)
- Participants were fairly split on whether they thought the alignment problem argument was valid.
  Some common reasons for disagreement were (Source):
  - A set of responses that included the idea that AI alignment problems would be solved over the normal course of AI development (caveat: this was a very heterogeneous tag).
  - Pointing out that humans have alignment problems too (so the potential risk of the AI alignment problem is capped, in some sense, by how bad alignment problems are for humans).
  - AI systems will be tested (and humans will catch issues and implement safeguards before systems are rolled out in the real world).
  - The objective function will not be designed in a way that causes the alignment problem / dangerous consequences of the alignment problem to arise.
  - Perfe...
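
A note on the arithmetic: the per-tag percentages above are computed over all participants, and a single participant's conversation can receive more than one timeline tag, which is why the figures can sum past 100%. The toy Python sketch below illustrates this counting scheme with made-up data and tag names; it is only an assumed sketch of the method's shape, not the authors' actual analysis code.

```python
# Minimal, hypothetical sketch of tag-based percentages over participants.
# Because a participant can carry multiple tags, percentages can exceed 100%.
from collections import Counter

# Toy data: each participant maps to the set of timeline tags their transcript received.
participant_tags = {
    "p1": {"0-50 years"},
    "p2": {"50-200 years", "wide range"},
    "p3": {"0-50 years", "50-200 years"},
    "p4": {"200+ years"},
}

n = len(participant_tags)
counts = Counter(tag for tags in participant_tags.values() for tag in tags)
percentages = {tag: round(100 * count / n) for tag, count in counts.items()}

print(percentages)                        # each tag's share of the 4 participants
print("sum:", sum(percentages.values()))  # 150: p2 and p3 each carry two tags
```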
