AI Safety Fundamentals: Alignment
A podcast by BlueDot Impact
83 Episodes
-
Is Power-Seeking AI an Existential Risk?
Published: 2023-05-13 -
Where I Agree and Disagree with Eliezer
Published: 2023-05-13 -
Supervising Strong Learners by Amplifying Weak Experts
Published: 2023-05-13 -
Measuring Progress on Scalable Oversight for Large Language Models
Published: 2023-05-13 -
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 2023-05-13 -
Summarizing Books With Human Feedback
Published: 2023-05-13 -
Takeaways From Our Robust Injury Classifier Project [Redwood Research]
Published: 2023-05-13 -
Red Teaming Language Models With Language Models
Published: 2023-05-13 -
High-Stakes Alignment via Adversarial Training [Redwood Research Report]
Published: 2023-05-13 -
AI Safety via Debate
Published: 2023-05-13 -
Robust Feature-Level Adversaries Are Interpretability Tools
Published: 2023-05-13 -
Introduction to Logical Decision Theory for Computer Scientists
Published: 2023-05-13 -
Debate Update: Obfuscated Arguments Problem
Published: 2023-05-13 -
Discovering Latent Knowledge in Language Models Without Supervision
Published: 2023-05-13 -
Feature Visualization
Published: 2023-05-13 -
Toy Models of Superposition
Published: 2023-05-13 -
Understanding Intermediate Layers Using Linear Classifier Probes
Published: 2023-05-13 -
Acquisition of Chess Knowledge in AlphaZero
Published: 2023-05-13 -
Careers in Alignment
Published: 2023-05-13 -
Embedded Agents
Published: 2023-05-13
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment