Generally Intelligent
A podcast by Kanjun Qiu
37 Episodes
Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology
Published: 2022-02-28
Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity
Published: 2021-12-21
Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory
Published: 2021-10-15
Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement
Published: 2021-09-24
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning
Published: 2021-09-10
Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement
Published: 2021-06-18
Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI
Published: 2021-05-20
Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI
Published: 2021-05-12
Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization
Published: 2021-04-02
Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations
Published: 2021-03-27
Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models
Published: 2021-03-18
Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions
Published: 2021-03-05
Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding
Published: 2021-02-24
Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning
Published: 2021-02-17
Episode 03: Cinjon Resnick, NYU, on activity and scene understanding
Published: 2021-02-01
Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process
Published: 2021-01-07
Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems
Published: 2020-12-15
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
