AXRP - the AI X-risk Research Podcast
A podcast by Daniel Filan
59 Episodes
18 - Concept Extrapolation with Stuart Armstrong
Published: 2022-09-03
17 - Training for Very High Reliability with Daniel Ziegler
Published: 2022-08-21
16 - Preparing for Debate AI with Geoffrey Irving
Published: 2022-07-01
15 - Natural Abstractions with John Wentworth
Published: 2022-05-23
14 - Infra-Bayesian Physicalism with Vanessa Kosoy
Published: 2022-04-05
13 - First Principles of AGI Safety with Richard Ngo
Published: 2022-03-31
12 - AI Existential Risk with Paul Christiano
Published: 2021-12-02
11 - Attainable Utility and Power with Alex Turner
Published: 2021-09-25
10 - AI's Future and Impacts with Katja Grace
Published: 2021-07-23
9 - Finite Factored Sets with Scott Garrabrant
Published: 2021-06-24
8 - Assistance Games with Dylan Hadfield-Menell
Published: 2021-06-08
7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra
Published: 2021-05-28
7 - Side Effects with Victoria Krakovna
Published: 2021-05-14
6 - Debate and Imitative Generalization with Beth Barnes
Published: 2021-04-08
5 - Infra-Bayesianism with Vanessa Kosoy
Published: 2021-03-10
4 - Risks from Learned Optimization with Evan Hubinger
Published: 2021-02-17
3 - Negotiable Reinforcement Learning with Andrew Critch
Published: 2020-12-11
2 - Learning Human Biases with Rohin Shah
Published: 2020-12-11
1 - Adversarial Policies with Adam Gleave
Published: 2020-12-11
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
