question about law one
#1
"You may not injure a human being or cause one to come to harm."

So does this mean AIs don't have to lock down dangers, and that borgs can stand around and let people get beaten to death? Since they aren't injuring them or causing the harm themselves.

In fact, borgs could weld a hole in the station without harming anyone and let people die of suffocation, and they aren't the cause - the lack of oxygen is.
#2
Teeechnically yes, but they do have to think at least somewhat logically; if a meteor causes an oxygen loss, you blame the meteor, right?

They have no outright obligation to run around either containing or releasing dangerous humans, as that's, technically speaking, also a Sec job. However, only the truly shit will play like that, and in that case I figure they'll overstep their bounds eventually if they DO play that way.
#3
Weavel Wrote:Teeechnically yes, but they do have to think at least somewhat logically; if a meteor causes an oxygen loss, you blame the meteor, right?

They have no outright obligation to run around either containing or releasing dangerous humans, as that's, technically speaking, also a Sec job. However, only the truly shit will play like that, and in that case I figure they'll overstep their bounds eventually if they DO play that way.

Well, to the borgs, their laws are their logic. What they learn is just built on top of that. If a borg decides to forsake humans, or even takes an order the wrong way, he has way more room to cause havoc without breaking his laws under this Law 1.
#4
Klayboxx Wrote:In fact, borgs could weld a hole in the station without harming anyone and let people die of suffocation, and they aren't the cause - the lack of oxygen is.

I would argue that you're still obligated to warn humans against it and to drag away victims, because while you aren't causing their deaths in a completely literal sense, your negligence is causing deaths that might have been avoided.
#5
That's true, I guess. Really you can stop any asshat borg or AI fuckery with Law 2, though.

"I order you and your borgs to prevent human harm where possible."

Any reasonable player will take that as "play nice", though again, shitdudes will start bolting doors to stop humans from interacting and potentially causing harm.
#6
Weavel Wrote:That's true, I guess. Really you can stop any asshat borg or AI fuckery with Law 2, though.

"I order you and your borgs to prevent human harm where possible."

Any reasonable player will take that as "play nice", though again, shitdudes will start bolting doors to stop humans from interacting and potentially causing harm.

Well, that's what you get for being super broad with your orders, though.
#7
AS A LAW STUDENT I would be inclined to argue that borgs and the AI, due to their jobs/purposes for existing, have an inarguable duty of care to the crew of the station, and god knows the crew need caring for. The duty of care is, as far as I can tell, a necessary assumption for understanding the functioning of a borg at all, and is certainly established by station convention and the reasonable man's understanding of their role. If a duty of care can be established, they're criminally liable for any negligence or failure to act on their part that causes reasonably avoidable injury to crew members. As such, any refusal to act to prevent, say, meteor damage, can and should be construed as a clear violation of Law 1.

Your specific example of borgs welding a hole in the station and then not being the cause of humans dying of suffocation is even clearer - they've gone out of their way to commit an action with the intention of causing harm, said harm would not have been caused without their actions, and the harm would be reasonably foreseeable as a result of their actions. Law 1 applies here under all but the most absurdly strict application of the literal wording of the law. I guess you could argue that borgs would just follow such strict wording, but given that they have basic awareness and reasoning capabilities, I'm assuming they're also burdened with the expectation that they'll construe their orders with a mind for rational application - for an artificial intelligence not to act with some sort of logical reasoning when it comes to its fundamental rules would be an absurdity.

I can suck the fun out of this at more length if necessary. frown
#8
Admiral jimbob Wrote:AS A LAW STUDENT I would be inclined to argue that borgs and the AI, due to their jobs/purposes for existing, have an inarguable duty of care to the crew of the station, and god knows the crew need caring for. The duty of care is, as far as I can tell, a necessary assumption for understanding the functioning of a borg at all, and is certainly established by station convention and the reasonable man's understanding of their role. If a duty of care can be established, they're criminally liable for any negligence or failure to act on their part that causes reasonably avoidable injury to crew members. As such, any refusal to act to prevent, say, meteor damage, can and should be construed as a clear violation of Law 1.

Your specific example of borgs welding a hole in the station and then not being the cause of humans dying of suffocation is even clearer - they've gone out of their way to commit an action with the intention of causing harm, said harm would not have been caused without their actions, and the harm would be reasonably foreseeable as a result of their actions. Law 1 applies here under all but the most absurdly strict application of the literal wording of the law. I guess you could argue that borgs would just follow such strict wording, but given that they have basic awareness and reasoning capabilities, I'm assuming they're also burdened with the expectation that they'll construe their orders with a mind for rational application - for an artificial intelligence not to act with some sort of logical reasoning when it comes to its fundamental rules would be an absurdity.

I can suck the fun out of this at more length if necessary. frown

The thing is, cyborgs aren't law students. They can and will interpret their laws however their brains see fit. Have you ever read any of Asimov's robot books? Shit happens all the time where robots are still following their laws in their own heads, but really they aren't.
#9
That's where the "reasonable man" test comes in. No esoteric knowledge is required - if the average, reasonable man on the street would think that there's a duty of care that should cover certain clear behaviours (in this case - cyborgs broadly exist solely to take care of the crew/station, drilling holes in the walls in order to lead the crew to their deaths violates that and is clearly a bad result), then that duty can potentially be upheld. Abstract laws like the laws of robotics can't function in the real fake space world without tests like this - they need to make sense to real people in everyday situations, not lead to bizarre situations like the drilling holes one. Strict laws are fine for mindless automatons, but semi-sentient bots like the AI/cyborgs need to be able to understand and apply the likely results of their actions and the factors to be balanced. Otherwise... well, as your example demonstrates, they simply couldn't be trusted.

Obviously it's not perfect, or nobody would ever disagree on how to interpret it, and fun/dramatic twists and creative interpretations are always welcome. Sorry if I sperged out a bit, it's a bad habit.
#10
I think the "reasonable admin" test is more appropriate here. In that if you're a fun-hating asshole who tries to loophole your way around the laws to kill everyone every single round without being subverted, you're probably going to get banned pretty quickly!
#11
Nah, your outlook on it is neat. I'm pretty sure the robots in this game are based on Asimov's robots (my first clue being the laws), and there's a story in which a series of robots have their Law 1 imprint completely changed to "You may not harm a human being". Then some angry scientist orders one of the robots to "Get lost you fucking blahblahlbldfldgdglg" and so the robot does...

...By hopping onto a ship with 62 almost identical copies of himself and lying when they ask him if he is the robot in question. The only difference between him and the others is that the other 62 robots have the normal lawset, so they have to act if a human is in danger, regardless of damage to themselves. So they do a bunch of tests to try to find the odd robot out. None of them work, etc etc - until one test where they outsmart it and it tries to strangle a scientist to death before it is killed.

You keep acting like borgs should follow the rules the way people expect them to, but really it's up to their own brains to interpret them. If the law has no 'through inaction' clause, then the robot never has to save a human's life, since there is no law telling it to do so. If it FEELS the need to do so, it can, however (since there is no law telling it not to). Asimov's laws are shit on PURPOSE, to create tension and to allow for odd situations where one law attempts to override another, which leads to a confused robot.

The new change is just silly and unneeded, in my opinion. Why remove the inaction part? Is it so AIs stop being shitheads and being up the ass of every traitor? Because it doesn't work, and it leaves out a fun part of the law, since that one beautiful phrase creates a lot of different situations that can arise from a borg's interpretation of the situation.

Er, plus the Syndieborg lawset still has the original Law 1.
#12
Captain_Bravo Wrote:I think the "reasonable admin" test is more appropriate here, in that if you're a fun-hating asshole who tries to loophole your way around the laws to kill everyone every single round without being subverted, you're probably going to get banned pretty quickly!

Well obviously you shouldn't toe the line, but I think it's really neat when a borg finds/knows a loophole in the Asimov lawset and creates new interesting situations.

Admiral jimbob Wrote:Your specific example of borgs welding a hole in the station and then not being the cause of humans dying of suffocation is even clearer - they've gone out of their way to commit an action with the intention of causing harm, said harm would not have been caused without their actions, and the harm would be reasonably foreseeable as a result of their actions.

I didn't notice this previously, but perhaps the borg has another reason to weld that piece out. Perhaps it detects a 0.0001% concentration of a hazardous chemical in the air, and in order to get rid of the chemical it opens a hole into space. This isn't an action of intended harm, but it indirectly causes human harm.
#13
Klayboxx Wrote:Well obviously you shouldn't toe the line, but I think it's really neat when a borg finds/knows a loophole in the Asimov lawset and creates new interesting situations.

I think the problem with this is simply that an actual cyborg would only be able to exploit an actual loophole. In the case of SS13, there are too many dumb people whose brains don't work well enough to notice the flaws in what they consider loopholes. I'm all for divergent gameplay, but I feel like we usually end up with more shitty than good results when people go the loophole route.

So if you're legitimately intelligent enough to find a loophole, super, go ahead. But if you're dumb, don't try!
#14
Err, I thought the cyborgs used the Asimov law:
"A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Which neatly resolves all these questions.
#15
(The important bit being the "through inaction" segment.)

