If digital minds could suffer, how would we ever know? (Article)
“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.
Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.
But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:
- We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
- It’s possible the AI systems we will create can’t or won’t have moral status. If so, it could be a huge mistake to worry about the welfare of digital minds, and doing so might even contribute to an AI-related catastrophe.
And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more come to believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know whether efforts to control AI might lead to extreme suffering.
We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.
This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.
You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.
Chapters:
- Introduction (00:00:00)
- Understanding the moral status of digital minds (00:00:58)
- Summary (00:03:31)
- Our overall view (00:04:22)
- Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
- Clearing up common misconceptions (00:12:16)
- Creating digital minds could go very badly - or very well (00:14:13)
- Dangers for digital minds (00:14:41)
- Dangers for humans (00:16:13)
- Other dangers (00:17:42)
- Things could also go well (00:18:32)
- We don't know how to assess the moral status of AI systems (00:19:49)
- There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
- Many plausible theories of consciousness could include digital minds (00:24:16)
- The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
- We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
- The scale of this issue might be enormous (00:36:08)
- Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
- Summing up so far (00:52:22)
- Arguments against the moral status of digital minds as a pressing problem (00:53:25)
- Two key cruxes (00:53:31)
- Maybe this problem is intractable (00:54:16)
- Maybe this issue will be solved by default (00:58:19)
- Isn't risk from AI more important than the risks to AIs? (01:00:45)
- Maybe current AI progress will stall (01:02:36)
- Isn't this just too crazy? (01:03:54)
- What can you do to help? (01:05:10)
- Important considerations if you work on this problem (01:13:00)