EA - ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting by Froolow
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting, published by Froolow on October 18, 2022 on The Effective Altruism Forum.

1 - Summary

This is an entry into the Future Fund AI Worldview contest. The headline figure from this essay is that I calculate the best estimate of the risk of catastrophe due to out-of-control AGI to be approximately 1.6%. However, the whole point of the essay is that “means are misleading” when dealing with conditional probabilities which have uncertainty spanning multiple orders of magnitude (like AI Risk). My preferred presentation of the results is the diagram below, showing that it is more probable than not that we live in a world where the risk of catastrophe due to out-of-control AGI is
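To see why means can mislead here, consider a minimal Monte Carlo sketch of the kind of calculation the essay describes. This is not Froolow's actual model: the five-step chain and the log-uniform bounds (1% to 90%) below are purely illustrative assumptions. The point it demonstrates is that when overall risk is a product of conditional probabilities, each uncertain across orders of magnitude, the mean is dragged upward by a few rare high-risk worlds, so most sampled worlds carry a risk well below the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the essay's actual parameters):
# overall risk = product of 5 conditional probabilities, each drawn
# log-uniformly between 1% and 90% to mimic order-of-magnitude uncertainty.
n_samples = 100_000
n_steps = 5

log_lo, log_hi = np.log(0.01), np.log(0.9)
probs = np.exp(rng.uniform(log_lo, log_hi, size=(n_samples, n_steps)))

# Overall risk in each sampled "world" is the product along the chain.
risk = probs.prod(axis=1)

print(f"mean risk:      {risk.mean():.4%}")
print(f"median risk:    {np.median(risk):.4%}")
print(f"P(risk < mean): {(risk < risk.mean()).mean():.1%}")
```

Under these assumptions the mean comes out roughly an order of magnitude above the median, and the large majority of sampled worlds have a risk below the mean, which is the sense in which a single headline mean (like the 1.6% above) understates how skewed the underlying distribution is.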
