That's where the "reasonable man" test comes in. No esoteric knowledge is required - if the average, reasonable man on the street would agree that there's a duty of care covering certain clear behaviours (in this case: cyborgs exist broadly to take care of the crew and station, so drilling holes in the walls to lead the crew to their deaths violates that duty and is clearly a bad result), then that duty can potentially be upheld. Abstract laws like the laws of robotics can't function in the real fake space world without tests like this - they need to make sense to real people in everyday situations, not produce bizarre outcomes like the hole-drilling one. Strict laws are fine for mindless automatons, but semi-sentient bots like the AI and cyborgs need to be able to weigh the likely results of their actions and balance the relevant factors. Otherwise... well, as your example demonstrates, they simply couldn't be trusted.
Obviously it's not perfect, or nobody would ever disagree on how to interpret it, and fun/dramatic twists and creative interpretations are always welcome. Sorry if I rambled a bit, it's a bad habit.
