EA - Responses to the Rival AI Deployment Problem: the importance of a pre-deployment agreement by HaydnBelfield
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Responses to the Rival AI Deployment Problem: the importance of a pre-deployment agreement, published by HaydnBelfield on September 23, 2022 on The Effective Altruism Forum.

Introduction: The rival AI deployment problem

Imagine an actor faced with highly convincing evidence that, with high probability (over 75%), a rival actor will be capable of deploying advanced AI within two years. Assume that they are concerned that such deployment might threaten their values or interests. What could the first actor do? Let us call this the ‘rival AI deployment problem’.

Three responses present themselves: acquiescence, an agreement before deployment, and the threat of coercive action.

Acquiescence is inaction: acceptance that the rival actor will deploy. It does not risk conflict, but it does risk unilateral deployment, and therefore suboptimal safety precautions, misuse, or value lock-in.

An agreement before deployment (such as a treaty between states) would be an agreement on when and how advanced AI could be developed and deployed: for example, requirements on alignment and safety tests, and restrictions on uses/goals. We can think of this as a ‘Short Reflection’: a negotiation on what uses/goals major states can agree to give advanced AI. This avoids unilateral deployment and conflict, but it may be difficult for rival actors to agree, and any agreement faces the credible commitment problem of sufficiently reassuring the actors that the agreement is being followed.

Threat of coercive action involves threatening the rival actor with setbacks (such as state sanctions or cyberattacks) to delay or deter the development program. It is unilaterally achievable, but risks unintended escalation and conflict.

All three responses have positives and negatives. However, I will suggest that a pre-deployment agreement may be the least-bad option.

The rival AI deployment problem can be thought of as the flipside of (or an addendum to) what Karnofsky and Muehlhauser call the ‘AI deployment problem’: “How do we hope an AI lab - or government - would handle various hypothetical situations in which they are nearing the development of transformative AI?”. Similarly, OpenAI committed in its Charter to “stop competing with and start assisting” any project that “comes close to building” advanced AI, for example one with “a better-than-even chance of success in the next two years”. The Short Reflection can be thought of as an addendum to the Long Reflection suggested by MacAskill and Ord.

Four assumptions

I make four assumptions.

First, I roughly assume a ‘classic’ scenario of discontinuous deployment of a singular AGI system, of the type discussed in Life 3.0, Superintelligence and Yudkowsky’s writings. Personally, a more continuous Christiano-style take-off seems more plausible to me, and a more distributed Drexler-style Comprehensive AI Services scenario seems preferable. But the discontinuous, singular scenario makes the tensions sharper and clearer, so that is what I will use.

Second, I roughly assume that states are the key players, as opposed to sustained academic or corporate control over an advanced AI development and/or deployment project. Personally, state control of this strategically important technology/project seems more plausible to me. In any case, state control again makes the tensions sharper and clearer.
Third, I distinguish between development and deployment. By ‘deployment’ I mean something like ‘use in a way that affects the world materially, economically, or politically’. This includes both ‘starting a training run that will likely result in advanced AI’ and ‘releasing some system from a closed-off environment or implementing its recommendations’.

Fourth, I assume that some states may be concerned about deployment by a rival state. They might not necessarily be concerned. Almo...
