EA - The discount rate is not zero by Thomaaas

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The discount rate is not zero, published by Thomaaas on September 3, 2022 on The Effective Altruism Forum.

(Note: I privately submitted this essay to the red-teaming contest before the deadline passed; I am now cross-posting it here. Also, this is my first ever post, so please be nice.)

Summary

Longtermists believe that future people matter, that there could be a lot of them, and that they are disenfranchised. They argue that a life in the distant future has the same moral worth as a life today. This implies that analyses which discount the future unjustifiably overlook the welfare of potentially hundreds of billions of future people, if not many more. Given the relationship between longtermism and views about existential risk, it is often noted that future lives should in fact be discounted somewhat: not for time preference, but for the likelihood of existing (i.e., the discount rate equals the catastrophe rate). I argue that the long-term discount rate is both positive and inelastic, due to 1) the lingering nature of present threats, 2) our ongoing ability to generate new threats, and 3) continuously lowering barriers to entry.

This has two major implications. First, we can only address near-term existential risks. Applying a long-term discount rate in line with the long-term catastrophe rate suggests, by my calculations, that the expected length of human existence is another 8,200 years (and another trillion people). This is significantly less than commonly cited estimates of our vast potential. Second, I argue that applying longtermist principles consistently means counting the descendants of each individual whose life is saved in the present. A non-zero discount rate allows us to calculate the expected number of a person's descendants; I estimate that 1 life saved today affects an additional 93 people over the course of humanity's expected existence. Both claims imply that x-risk reduction is overweighted relative to interventions such as global health and poverty reduction (but I am NOT arguing that x-risks are unimportant).

Discounting & longtermism

Will MacAskill summarised the longtermist ideology in 3 key points: future people matter (morally), there are (in expectation) vast numbers of future people, and future people are utterly disenfranchised. Future people are disenfranchised in the sense that they cannot voice their opinions on matters that affect them greatly, but another way in which they are directly discriminated against is in the application of discount rates.

Discounting makes sense in economics, because inflation (or the opportunity to earn interest) can make money received earlier more valuable than money obtained later. This preference for earlier money is called "time preference", and its strength is expressed by whatever discount rate is applied. While this makes sense for cashflows, human welfare is worth the same regardless of when it is experienced. Tyler Cowen and Derek Parfit first argued this point; nevertheless, a "social" discount rate (derived from the "social rate of time preference") remains widely accepted and applied.

Discounting is particularly important for longtermism because the discount applied each year compounds over time, so its effect grows exponentially and can lead to radical conclusions over very long horizons. For example, consider the welfare of 1 million people alive 1,500 years in the future. Applying a mere 1% discount rate implies that the welfare of this entire population is worth less than one-third of the value of a single person alive today. Lower discount rates only delay this distortion: Tarsney (2020) notes that in the long run, any positive discount rate "eventually wins". It is fair to say that people in the distant future are "utterly disenfranchised".
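A minimal sketch of the arithmetic behind these two figures, assuming standard exponential discounting and (for the summary's 8,200-year figure) a constant annual catastrophe rate; the numbers are the essay's, but the compounding convention and the constant-hazard model are my assumptions, not necessarily the author's own calculation:

```python
# Check the 1,500-year example: each year divides value by (1 + r).
discount_rate = 0.01        # a "mere" 1% annual discount rate
years = 1500
population = 1_000_000

discount_factor = 1 / (1 + discount_rate) ** years
present_value = population * discount_factor
print(f"discount factor after {years} years: {discount_factor:.2e}")  # ~3.3e-07
print(f"present value of {population:,} lives: {present_value:.2f}")  # ~0.33, under 1/3 of one life

# If extinction risk were a constant annual probability p, survival time
# would be geometric with expectation 1/p, so an expected 8,200 further
# years corresponds to p of roughly 0.012% per year (an assumed model).
expected_years = 8_200
implied_annual_risk = 1 / expected_years
print(f"implied annual catastrophe rate: {implied_annual_risk:.4%}")  # ~0.0122%
```

Whether one compounds as 1/(1+r)^t or (1-r)^t barely matters at this horizon; both shrink exponentially and leave the million-person population worth less than one-third of a single present life.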
Existential risk

Longtermism implies that we should optimize our present actions to maximize the chance that fu...
