AI Safety Fundamentals: Alignment

A podcast by BlueDot Impact

83 Episodes

  1. Is Power-Seeking AI an Existential Risk?

    Published: 2023-05-13
  2. Where I Agree and Disagree with Eliezer

    Published: 2023-05-13
  3. Supervising Strong Learners by Amplifying Weak Experts

    Published: 2023-05-13
  4. Measuring Progress on Scalable Oversight for Large Language Models

    Published: 2023-05-13
  5. Least-To-Most Prompting Enables Complex Reasoning in Large Language Models

    Published: 2023-05-13
  6. Summarizing Books With Human Feedback

    Published: 2023-05-13
  7. Takeaways From Our Robust Injury Classifier Project [Redwood Research]

    Published: 2023-05-13
  8. Red Teaming Language Models With Language Models

    Published: 2023-05-13
  9. High-Stakes Alignment via Adversarial Training [Redwood Research Report]

    Published: 2023-05-13
  10. AI Safety via Debate

    Published: 2023-05-13
  11. Robust Feature-Level Adversaries Are Interpretability Tools

    Published: 2023-05-13
  12. Introduction to Logical Decision Theory for Computer Scientists

    Published: 2023-05-13
  13. Debate Update: Obfuscated Arguments Problem

    Published: 2023-05-13
  14. Discovering Latent Knowledge in Language Models Without Supervision

    Published: 2023-05-13
  15. Feature Visualization

    Published: 2023-05-13
  16. Toy Models of Superposition

    Published: 2023-05-13
  17. Understanding Intermediate Layers Using Linear Classifier Probes

    Published: 2023-05-13
  18. Acquisition of Chess Knowledge in AlphaZero

    Published: 2023-05-13
  19. Careers in Alignment

    Published: 2023-05-13
  20. Embedded Agents

    Published: 2023-05-13

Page 4 of 5

Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment
