AI Safety Fundamentals: Alignment

A podcast by BlueDot Impact

83 Episodes

  1. Constitutional AI: Harmlessness from AI Feedback

    Published: 2024-07-19
  2. Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Published: 2024-07-19
  3. Illustrating Reinforcement Learning from Human Feedback (RLHF)

    Published: 2024-07-19
  4. Chinchilla’s Wild Implications

    Published: 2024-06-17
  5. Deep Double Descent

    Published: 2024-06-17
  6. Intro to Brain-Like-AGI Safety

    Published: 2024-06-17
  7. Eliciting Latent Knowledge

    Published: 2024-06-17
  8. Toy Models of Superposition

    Published: 2024-06-17
  9. Least-To-Most Prompting Enables Complex Reasoning in Large Language Models

    Published: 2024-06-17
  10. Discovering Latent Knowledge in Language Models Without Supervision

    Published: 2024-06-17
  11. ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation

    Published: 2024-06-17
  12. Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions

    Published: 2024-06-17
  13. Imitative Generalisation (AKA ‘Learning the Prior’)

    Published: 2024-06-17
  14. An Investigation of Model-Free Planning

    Published: 2024-06-17
  15. Low-Stakes Alignment

    Published: 2024-06-17
  16. Gradient Hacking: Definitions and Examples

    Published: 2024-06-17
  17. Empirical Findings Generalize Surprisingly Far

    Published: 2024-06-17
  18. Compute Trends Across Three Eras of Machine Learning

    Published: 2024-06-13
  19. Worst-Case Thinking in AI Alignment

    Published: 2024-05-29
  20. Public by Default: How We Manage Information Visibility at Get on Board

    Published: 2024-05-12


Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment
