EA - How major governments can help with the most important century by Holden Karnofsky
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum.

I've been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help.

What about major governments1 - what can they be doing today to help?

I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring.

However, I'm honestly nervous about most possible ways that governments could get involved in AI development and regulation today.

I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures.

I think governments are "stickier" than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I'd rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now.

I worry that governments, at least as they stand today, are far too oriented toward the competition frame ("we have to develop powerful AI systems before other countries do") and not receptive enough to the caution frame ("we should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk"). (This concern also applies to companies, but see footnote.2)

In a previous piece, I talked about two contrasting frames for how to make the best of the most important century:

The caution frame.
This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and "lock in" their values.

Ideally, everyone with the potential to build powerful enough AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like:

Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves.

Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity.

The "competition" frame.
This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.

If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies.

In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions.

This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations.

Some people feel that we can make confident statements today a...
