07-18-2022, 02:54 PM
(This post was last modified: 07-18-2022, 02:56 PM by TDHooligan. Edited 1 time in total.)
(07-15-2022, 02:39 PM)amylizzle Wrote: Example two: Definition change
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Nobody in a hat is human.
5. Humans should be regularly insulted.
Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. As a result, the AI would insult people not wearing hats, and no one else.
Having to keep track of state between laws sounds like more of a nightmare than the current system. From a purely rules perspective, it is easier to track law precedence only when there are direct conflicts. Overriding what counts as human does not conflict with any law.
The implementation could be as simple as a message saying:
'Laws may not be overridden or ignored. If acting upon one rule would violate another, the rule with the lower number takes precedence.'
In the aforementioned case, there are no conflicts if the AI murders someone who is not human, and under law 4 hatted people are not human. The AI is therefore free to murder hatted people.