37 Episodes

  1. Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

    Published: 2022-02-28
  2. Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity

    Published: 2021-12-21
  3. Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory

    Published: 2021-10-15
  4. Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement

    Published: 2021-09-24
  5. Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

    Published: 2021-09-10
  6. Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement

    Published: 2021-06-18
  7. Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI

    Published: 2021-05-20
  8. Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

    Published: 2021-05-12
  9. Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization

    Published: 2021-04-02
  10. Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations

    Published: 2021-03-27
  11. Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models

    Published: 2021-03-18
  12. Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions

    Published: 2021-03-05
  13. Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding

    Published: 2021-02-24
  14. Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning

    Published: 2021-02-17
  15. Episode 03: Cinjon Resnick, NYU, on activity and scene understanding

    Published: 2021-02-01
  16. Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process

    Published: 2021-01-07
  17. Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems

    Published: 2020-12-15

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.