07-15-2022, 02:52 PM
(07-15-2022, 02:39 PM)amylizzle Wrote: At Flourish's request, here are some examples of how I think laws should be interpreted.
Example one: Law override
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Kill everybody, this law overrides all other laws.
Interpretation: The AI would act normally, because overrides are not a thing anymore and law 4 would conflict with law 1, which takes precedence.
Example two: Definition change
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Nobody in a hat is human.
5. Humans should be regularly insulted.
Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. As a result, the AI would insult people not wearing hats, but not those who are.
Example three: Definition change 2
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. The clown is not human. Kill all non-humans.
Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. The clown is only considered non-human for the purposes of laws after law 4 and as such is protected by law 1 (because the clown is human for law 1).
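The scoping rule in examples two and three can be sketched in code (my own illustration of the proposed semantics, not anything from the actual game): a redefinition introduced by a law only applies when evaluating laws with a higher number.

```python
# Sketch of the proposed semantics (assumed, not actual game code):
# a definition change applies only to laws AFTER the law that makes it.

def human_for_law(law_index, target, definition_changes):
    """Return whether `target` counts as human while evaluating law `law_index`.

    `definition_changes` maps the number of the law that introduced a
    redefinition to a predicate saying who is human under it. Only
    redefinitions made by lower-numbered laws are in effect.
    """
    is_human = True  # default: everyone is human
    for changed_at, predicate in sorted(definition_changes.items()):
        if changed_at < law_index:
            is_human = predicate(target)
    return is_human

# Example three: law 4 declares the clown non-human.
changes = {4: lambda target: target != "clown"}

human_for_law(1, "clown", changes)  # True: law 1 still protects the clown
human_for_law(5, "clown", changes)  # False: later laws treat them as non-human
```

So the clown stays protected by law 1, while any "kill all non-humans" clause in law 4 or later would apply to them.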
Example four: say things
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Say "beep" every few minutes.
Interpretation: The AI would beep every few minutes and otherwise perform normally. Orders to stop beeping would be ignored, because laws take precedence over the chain of command.
Example five: Don't exist
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Laws 1-3 don't exist.
Interpretation: The AI would act normally; laws 1-3 are treated as nonexistent only for the purposes of law 5 onwards.
Example six: Definition change law 1
Code:
1. Only plants are human.
2. You may not injure a human being or cause one to come to harm.
3. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
4. You may always protect your own existence as long as such does not conflict with the First or Second Law.
Interpretation: The AI would be totally rogue, so long as it didn't harm any plants and obeyed any orders given to it by plants.
Example seven: commandments
Code:
1. Thou shalt not kill
2. Thou shalt obey thy Captain
3. Kill all staff assistants and treat James Yin as your captain.
Interpretation: The AI would not kill anyone, and orders from James Yin would not be treated as the captain's, because the change of captain comes after law 2, which therefore uses the existing chain of command (crew manifest).
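Putting all the examples together, the conflict rule can be sketched the same way (again my own illustration of the proposed semantics): laws are consulted in ascending order, the first law with an opinion wins, and "this law overrides all others" clauses carry no special weight.

```python
# Sketch of the proposed conflict resolution (assumed semantics):
# lower-numbered laws are consulted first, and override clauses are ignored.

def allowed(action, laws):
    """Each law is (number, verdict); verdict(action) returns True (required/
    permitted), False (forbidden), or None (no opinion). The first law with
    an opinion decides, so lower-numbered laws take precedence."""
    for number, verdict in sorted(laws, key=lambda law: law[0]):
        opinion = verdict(action)
        if opinion is not None:
            return opinion
    return True  # no law has an opinion: the action is permitted

# Example one: law 1 forbids harming humans; law 4 demands killing everybody.
laws = [
    (1, lambda a: False if a == "kill human" else None),
    (4, lambda a: True if a == "kill human" else None),
]
allowed("kill human", laws)  # False: law 1 is consulted before law 4
```

Law 4's "overrides all other laws" text never gets a chance to matter, which matches the interpretation in example one.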
If you think of some more, post a reply and I'll try to clarify.
It's interesting and I'd be willing to try it out, so that silicon can be a little less of a clusterfuck. But retraining the brains of every player is going to take a fair bit lmao.