The Inside View
A podcast by Michaël Trazzi
54 Episodes
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI
Published: 2023-07-16
Eric Michaud on scaling, grokking and quantum interpretability
Published: 2023-07-12
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory
Published: 2023-07-06
Clarifying and predicting AGI by Richard Ngo
Published: 2023-05-09
Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety
Published: 2023-05-06
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines
Published: 2023-05-04
Christoph Schuhmann on Open Source AI, Misuse and Existential risk
Published: 2023-05-01
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Published: 2023-04-29
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Published: 2023-01-17
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Published: 2023-01-12
David Krueger–Coordination, Alignment, Academia
Published: 2023-01-07
Ethan Caballero–Broken Neural Scaling Laws
Published: 2022-11-03
Irina Rish–AGI, Scaling and Alignment
Published: 2022-10-18
Shahar Avin–Intelligence Rising, AI Governance
Published: 2022-09-23
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Published: 2022-09-16
Markus Anderljung–AI Policy
Published: 2022-09-09
Alex Lawsen—Forecasting AI Progress
Published: 2022-09-06
Robert Long–Artificial Sentience
Published: 2022-08-28
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Published: 2022-08-24
Robert Miles–Youtube, AI Progress and Doom
Published: 2022-08-19
The goal of this podcast is to create a place where people discuss their inside views about existential risk from AI.
