EA - Robert Long on Why You Might Want To Care About Artificial Sentience by Michaël Trazzi

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Robert Long on Why You Might Want To Care About Artificial Sentience, published by Michaël Trazzi on August 28, 2022 on The Effective Altruism Forum.

I talked to Robert Long, research fellow at the Future of Humanity Institute, working at the intersection of the philosophy of AI safety and AI consciousness. Robert did his PhD at NYU, advised by David Chalmers. We talk about the recent LaMDA controversy about the sentience of large language models (see Robert's summary), the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird. Below are some highlighted quotes from our conversation (available on Youtube, Spotify, Google Podcasts, Apple Podcasts). For the full context for each of these quotes, you can find the accompanying transcript.

Why Artificial Sentience Is A Pressing Issue

Things May Get Really Weird In The Near Future

“Things could get just very weird as people interact more with very charismatic AI systems that, whether or not they are sentient, will give the very strong impression to people that they are. I think some evidence that we will have a lot of people concerned about this is maybe just the fact that Blake Lemoine happened. He wasn’t interacting with the world’s most charismatic AI system. And because of the scaling hypothesis, these things are only going to get better and better at conversation.”

“If scale is all you need, I think it’s going to be a very weird decade. And one way it’s going to be weird, I think, is going to be a lot more confusion and interest and dynamics around AI sentience and the perceptions of AI sentience.”

Why illusionists about consciousness still have to answer hard questions about AI welfare

“One reason I wrote that post is just to say okay, well here’s what a version of the question is. And I’d also like to encourage people, including listeners to this podcast, if they get off board with any of those assumptions, then ask, okay, what are the questions we would have to answer about this? If you think AI couldn’t possibly be conscious, definitely come up with really good reasons for thinking that, because that would be very important. And it would also be very bad to be wrong about that. If you think consciousness doesn’t exist, then you presumably still think that desires exist or pain exists. So even though you’re an illusionist, let’s come up with a theory of what those things look like.”

On The Asymmetry of Pain & Pleasure

“One thing is that pain and pleasure seem to be, in some sense, asymmetrical. It’s not really that you can say all of the same things about pain as you can say about pleasure, but just kind of reversed. Pain, at least in creatures like us, seems to be able to be a lot more intense than pleasure, a lot more easily at least. It’s just much easier to hurt very badly than it is to feel extremely intense pleasure. And pain also seems to capture our attention a lot more strongly than pleasure does: pain has this quality of you have to pay attention to this right now that it seems harder for pleasure to have.
So it might be that to explain pain and pleasure we need to explain a lot more complicated things about motivation and attention and things like that.”

The Sign Switching Argument

“One thing that Brian Tomasik has talked about, and I think he got this from someone else, is what you could call the sign switching argument. Which is that you can train an RL agent with positive rewards and then zero for when it messes up, or shift things down and train it with negative rewards. You can train things in exactly the same way while shifting around the sign of the reward signal. And if you imagined an agent that flinches, or it says "ouch" or things like that, it'd be kind of weird if you were...
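To make the mechanics behind that argument concrete, here is a minimal sketch (my own illustration, not something from the conversation), assuming a toy three-state continuing MDP solved by value iteration in NumPy: shifting every reward down by a constant, say from a +1/0 "reward" scheme to a 0/-1 "punishment" scheme, changes all the Q-values by the same amount, so the agent ends up with the same greedy policy, and hence the same outward behaviour, under either sign convention.

```python
import numpy as np

# Hypothetical toy MDP for illustration: 3 states, 2 actions, deterministic
# transitions, discounted continuing setting. Not from the podcast.
n_states, n_actions, gamma = 3, 2, 0.9

# transition[s, a] = next state reached by taking action a in state s
transition = np.array([[1, 2],
                       [0, 2],
                       [0, 1]])

# "Positive" scheme: +1 for the good action in each state, 0 otherwise.
reward_pos = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 0.0]])

# "Negative" scheme: the same rewards shifted down by 1 (0 good, -1 bad).
reward_neg = reward_pos - 1.0

def value_iteration(reward, iters=500):
    """Standard value iteration; returns Q-values and the greedy policy."""
    q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        v = q.max(axis=1)                   # best attainable value per state
        q = reward + gamma * v[transition]  # one-step Bellman backup
    return q, q.argmax(axis=1)

q_pos, policy_pos = value_iteration(reward_pos)
q_neg, policy_neg = value_iteration(reward_neg)

print("greedy policy with +1/0 rewards :", policy_pos)
print("greedy policy with 0/-1 rewards :", policy_neg)
# Q-values differ by the constant 1 / (1 - gamma), but the behaviour is identical.
print("same behaviour under both schemes?", np.array_equal(policy_pos, policy_neg))
```

The exact cancellation relies on the discounted, continuing setup assumed here; in episodic tasks with variable episode lengths, a constant reward offset can change what the agent does, which is why the sign-switching point is usually stated with some care.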
