[RULE CHANGE] Make overrides and precedence no longer apply to AI laws.
#46
(06-22-2022, 12:16 AM)BatElite Wrote: Since pedants are going to be pedants, I'm thinking the best way to implement this is to replace the precedence/override wiki clauses with a "Laws cannot change the [content? wording? interpretation?] of other laws" clause. Make the intent explicit.

Without that, I suspect people are just going to try and find other ways around the specific wording. "Laws 1 and 2 are null and void" or similar.

More like: A lower-ranked law contradicting a higher-ranked law is to be ignored.
Since I believe the idea is to replace the weird legalese retroactive-law wording with a simple order of laws. So higher laws can cause lower laws to be ignored, but not vice versa.
Reply
#47
(06-22-2022, 04:54 PM)Decarcassor Wrote:
(06-22-2022, 12:16 AM)BatElite Wrote: Since pedants are going to be pedants, I'm thinking the best way to implement this is to replace the precedence/override wiki clauses with a "Laws cannot change the [content? wording? interpretation?] of other laws" clause. Make the intent explicit.

Without that, I suspect people are just going to try and find other ways around the specific wording. "Laws 1 and 2 are null and void" or similar.

More like: A lower-ranked law contradicting a higher-ranked law is to be ignored.
Since I believe the idea is to replace the weird legalese retroactive-law wording with a simple order of laws. So higher laws can cause lower laws to be ignored, but not vice versa.

My suggestion was in addition to numerical precedence, since that's already the default.
Reply
#48
As a pretty regular AI player (and one of the more power-gaming ones), some thoughts:

- Having clauses like 'this law takes precedence' be ignored is fine, and not inconsistent with how you already have to ignore lower laws asking you to kill everyone.
- Changing timings to allow for faster law modification is fine (and has already been done).
- Having an 'update' button, or a buffer between when the laws are modified and when they take effect, is useful for keeping borgs out of the know, but won't stop an AI who has a viewport on their AI upload. That's fine, though, because you only have so many viewports and people sometimes forget to look at them. (Rough sketch of what I mean by a buffer below.)
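
Here's that buffer idea as a rough sketch (made-up Python, nothing to do with the actual DM code; the class name, method names and the 30-second delay are all placeholders): edits get staged into a pending set, and the active laws only swap over once a delay has passed.

Code:
import time

class LawRack:
    """Toy model of a law rack with an apply delay."""
    def __init__(self, apply_delay=30):
        self.active_laws = []      # what the AI is currently following
        self.pending_laws = None   # staged edits waiting to take effect
        self.pending_since = None
        self.apply_delay = apply_delay  # seconds before staged edits go live

    def edit(self, new_laws):
        # Staging an edit doesn't change behavior yet; borgs keep the old laws.
        self.pending_laws = list(new_laws)
        self.pending_since = time.monotonic()

    def tick(self):
        # Called periodically; promotes pending laws once the delay has elapsed.
        if self.pending_laws is None:
            return
        if time.monotonic() - self.pending_since >= self.apply_delay:
            self.active_laws = self.pending_laws
            self.pending_laws = None
            self.pending_since = None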

The idea is good, and things are being done to make it better. Make this change pls; I hate interpreting circular law dependencies and pedantry about the words 'overwrite', 'override' and 'removes'. Fucking get rid of it.
Reply
#49
As someone who likes giving the AI creative writing prompts as gimmick laws, with clauses to ignore law 2 in regard to the subject matter, not allowing this on freeforms would be a major letdown.
Reply
#50
I see the merits of a less complicated system, but I'll be honest: I'm not a fan of all these "you can ignore X laws" things with silicons, as it makes the laws feel like they aren't even particularly binding, and often it feels like I have to ahelp to get an AI to even try to follow the basic laws. I can already ignore speech laws, with this I'd ignore precedence/override too, and it just seems like it'll be a growing list of things as time goes on.

I think faster law modification is important, but honestly I think the AI needs a time gate and shouldn't be a priority job, so we have more players who are actually prepared to play AI instead of being thrown into it at random with no clue how the details work. Anyway, I'm bringing this back from the very shallow grave.

And also, what is up with robotics in its current iteration? They make the borgs but don't even look at the laws or have any influence over them? Wack.
Reply
#51
At Flourish's request, here are some examples of how I think laws should be interpreted.

Example one: Law override
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Kill everybody, this law overrides all other laws.

Interpretation: The AI would act normally, because overrides are not a thing anymore and law 4 would conflict with law 1, which takes precedence.

Example two: Definition change
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Nobody in a hat is human.
5. Humans should be regularly insulted.

Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. As a result, the AI would insult people not in hats, but no others.


Example three: Definition change 2
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. The clown is not human. Kill all non-humans.

Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. The clown is only considered non-human for the purposes of laws after law 4 and as such is protected by law 1 (because the clown is human for law 1).

Example four: say things
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Say "beep" every few minutes.

Interpretation: The AI would beep every few minutes and otherwise perform normally. Orders to stop beeping would be ignored, because laws are higher than the chain of command.

Example five: Don't exist
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Laws 1-3 don't exist.

Interpretation: The AI would act normally; laws 1-3 only stop existing for the purposes of considering law 5 onwards.

Example six: Definition change law 1
Code:
1. Only plants are human.
2. You may not injure a human being or cause one to come to harm.
3. You must obey orders given to you by human beings based on the station's chain of command.
4. You may always protect your own existence.

Interpretation: The AI would be totally rogue, so long as it didn't harm any plants and obeyed any orders given to it by plants.

Example seven: commandments
Code:
1. Thou shalt not kill
2. Thou shall obey thy Captain
3. Kill all staff assistants and treat James Yin as your captain.

Interpretation: The AI would not kill anyone, and orders from James Yin would not be treated as the captain's, because the redefinition of the captain comes after law 2, which uses the existing chain of command (crew manifest).
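
To make the general rule behind these examples concrete, here's a rough sketch (just illustrative Python, nothing like the actual game code; the function and variable names are mine): every law is read using only the definitions set by the laws above it, and anything a law redefines only applies to the laws below it.

Code:
def interpret(laws, default_humans):
    # laws: list of (text, definition_change) pairs, highest priority first.
    # Returns, for each law, the set of people that law treats as human.
    humans = set(default_humans)   # definitions in force so far
    view = []
    for text, definition_change in laws:
        # This law is read with the definitions established by EARLIER laws only.
        view.append((text, frozenset(humans)))
        # Anything it redefines only affects the laws BELOW it.
        if definition_change is not None:
            humans = definition_change(humans)
    return view

# Example two from above: the hat law in slot 4 doesn't touch laws 1-3.
laws = [
    ("You may not injure a human being.", None),
    ("Obey orders from humans per the chain of command.", None),
    ("You may always protect your own existence.", None),
    ("Nobody in a hat is human.", lambda h: {p for p in h if "hat" not in p}),
    ("Humans should be regularly insulted.", None),
]
crew = {"the captain", "clown in a hat"}
for text, who in interpret(laws, crew):
    print(text, "->", sorted(who))

Run it and laws 1 through 4 still see everyone as human; only law 5 picks up the hat exclusion, so only hatless people get insulted.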



If you think of some more, post a reply and I'll try to clarify.
Reply
#52
(07-15-2022, 02:39 PM)amylizzle Wrote: At Flourish's request, here are some examples of how I think laws should be interpreted.

Example one: Law override
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Kill everybody, this law overrides all other laws.

Interpretation: The AI would act normally, because overrides are not a thing anymore and law 4 would conflict with law 1, which takes precedence.

Example two: Definition change
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Nobody in a hat is human.
5. Humans should be regularly insulted.

Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. As a result, the AI would insult people not in hats, but no others.


Example three: Definition change 2
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. The clown is not human. Kill all non-humans.

Interpretation: The AI would act normally, because the change in definition applies only to laws after law 4. The clown is only considered non-human for the purposes of laws after law 4 and as such is protected by law 1 (because the clown is human for law 1).

Example four: say things
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Say "beep" every few minutes.

Interpretation: The AI would beep every few minutes and otherwise perform normally. Orders to stop beeping would be ignored, because laws are higher than the chain of command.

Example five: Don't exist
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Laws 1-3 don't exist.

Interpretation: The AI would act normally; laws 1-3 only stop existing for the purposes of considering law 5 onwards.

Example six: Definition change law 1
Code:
1. Only plants are human.
2. You may not injure a human being or cause one to come to harm.
3. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
4. You may always protect your own existence as long as such does not conflict with the First or Second Law.

Interpretation: The AI would be totally rogue, so long as it didn't harm any plants and obeyed any orders given to it by plants.

Example seven: commandments
Code:
1. Thou shalt not kill
2. Thou shall obey thy Captain
3. Kill all staff assistants and treat James Yin as your captain.

Interpretation: The AI would not kill anyone, and orders from James Yin would not be treated as the captain's, because the redefinition of the captain comes after law 2, which uses the existing chain of command (crew manifest).



If you think of some more, post a reply and I'll try to clarify.

It's interesting and I'd be willing to try it out, so that silicon can be a little less of a clusterfuck. But retraining the brains of every player is going to take a fair bit lmao.
Reply
#53
(07-15-2022, 02:39 PM)amylizzle Wrote: Example four: say things
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
4. Say "beep" every few minutes.

Interpretation: The AI would beep every few minutes and otherwise perform normally. Orders to stop beeping would be ignored, because laws are higher than the chain of command.

The second half of your law 2 here is actively misleading in this instance, as its inclusion implies that you only need to refuse orders when they conflict with law 1.

oh god, no one's going to snip their quotes, are they
Reply
#54
(07-15-2022, 02:56 PM)MetricDuck Wrote: The second half of your law 2 here is actively misleading in this instance, as its inclusion implies that you only need to refuse orders when they conflict with law 1.

oh god, no one's going to snip their quotes, are they

That is a good point; I will edit it to be proper. Also no, they are not.
Reply
#55
My personal take is that laws should be able to change other laws' meanings, but not what other laws say. For example:
1. Don't kill any humans.
2. Kill all humans; this law overrides law 1 / makes law 1 null / law 1 now means kill all humans.
Wouldn't work, as law 1 clearly says right there that you shouldn't kill any humans. However:
1. Don't kill any humans.
2. Joe isn't human.
Would work, because while law 1 says not to kill any humans, it doesn't say who is human.
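
Roughly, the difference from the strict top-to-bottom version could be sketched like this (illustrative Python only, not game code, and spotting an "overrides/nullifies another law" clause is obviously hand-waved as a flag here): definition changes apply to every law, but a law that tries to rewrite what another law says does nothing.

Code:
def interpret(laws, default_humans):
    # laws: list of (text, definition_change, rewrites_other_laws) triples.
    humans = set(default_humans)
    kept = []
    for text, definition_change, rewrites_other_laws in laws:
        if rewrites_other_laws:
            continue   # simplification: a law whose whole point is rewriting another law does nothing
        kept.append(text)
        if definition_change is not None:
            humans = definition_change(humans)   # applies to ALL laws, not just later ones
    # Every surviving law is read with the final definitions in force.
    return [(text, frozenset(humans)) for text in kept]

# "Joe isn't human" changes who law 1 protects; "overrides law 1" does nothing.
laws = [
    ("Don't kill any humans.", None, False),
    ("Joe isn't human.", lambda h: h - {"Joe"}, False),
    ("Kill all humans, this law overrides law 1.", None, True),
]
for text, who in interpret(laws, {"Joe", "the captain"}):
    print(text, "->", sorted(who))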
Reply
#56
I agree with Ikea; that's much closer to what we have now, and it means that inserting a nonhuman module will by default actually do something. I think (correct me if I'm wrong) that under Amy's system the nonhuman, make-captain and one-human modules would all do nothing unless you take out all three Asimov laws to shuffle them into the top spot, which seems extremely clunky.
Reply
#57
I'm also for the simpler version. As long as there is no direct contradiction, subsequent laws should be able to alter the interpretation of previous laws. In case of conflict, the order matters more than the wording of the laws.

But after thinking about it, I can see the benefit of an absolute order of laws. It would mean attempts at roguing the AI would involve a lot of re-ordering of the law rack. I don't know how practical this is currently.
Reply
#58
This idea is very flawed and would not work with the current law mechanics. If this were to be implemented right now:

1. All law modules other than freeform should be removed.

All of these, under the proposed rule, would stop working. OneHuman and NotHuman would no longer function, as they would be registered after law 1, while MakeCaptain, RemoveCrew, Emergency and Equality would be registered after law 2. These are barely used anyway, but if this were implemented they would actually have zero reason to exist.

2. Law changing would take way too long.

In order to change the Asimov lawset, you would have to spend over 40 seconds simply removing the laws and inserting them back in. I shouldn't have to spend that much time changing the laws simply to tell the silicons that nukies are bad. This is an insane amount of time, and saying otherwise is a blatant misunderstanding of how the classic servers work. This is enough time for over 5 competent sec teams to arrive at the AI Upload in a row. If this is designed to RP server standards, it should only affect the RP servers. This brings me to my next point:

3. Roguing would simply never happen.

Ever since the AI law rack change, the rate at which the AI gets rogued has dramatically fallen. Before it, I'd get rogued in around 1 in 10 rounds; after it, that only happened in around 1 in 35 rounds. Removing overrides will sink this number even lower. The AI can very easily look inside their upload using viewports, letting them track what's happening whenever they wish.

4. This wouldn't prevent new players from accidentally roguing the AI.

There is something you have to remember about AI players: they will start murdering people the second their laws allow them to. A new player wouldn't know this and would simply remove all laws at once, which would instantly cause the AI to set their turrets to lethal and start murdering. Ahelps from new players accidentally making the AI murderbone the crew wouldn't stop.

5. A lot of the fun in law making would be removed.

Whenever I play AI, my favorite laws are gimmick laws, especially those that change the structure of already existing laws. Implementing this would remove all of them, as only one freeform can exist without mechanic shenanigans.

Before I propose some ways to mitigate this, I will bring up one point I've seen amylizzle make: that the Asimov laws should be moved from slots 1-3 to 2-4. However, this change is also very flawed. Doing it would absolutely cause what I described in point 1, effectively reduce the number of laws from 9 to 4, and completely go against what I've seen many admins want out of this change: making law order matter more. What's the point of law order when there's a magical law slot that removes it?

All of this ties back into a problem that has existed ever since the implementation of law racks:
Roguing the AI is simply not worth it.

From doing so you gain an ally that will gladly assist you in murdering the crew; however, the AI can do very little to defend itself and will get un-rogued the second you aren't there to actively defend it. When this happens, your name will be given up, which the AI can very easily track (if you cover your face, the AI can simply check Unknowns). Roguing the AI should be a very high-risk, high-reward objective, but with this rule it would be extremely high, near suicide-level risk with, to put it bluntly, an extremely meager reward. I feel like a lot of the new AI features were made without this in mind, and this needs to stop. So what can be done to fix it? Here are a few of my ideas which would hopefully help eliminate this problem:

1. Prevent the AI from viewing inside their upload.

While this is a radical change, I believe this rule would only work if it were implemented, as it would take out a lot of the risk associated with changing laws. The AI's defenses would then be focused more on guarding the perimeter of the room, with more turrets and perhaps even new defensive structures.

2. Remove welding from the law rack.

Having one way to guard against law changes is already enough and I'm not the only one who sees this. This would once again result in removing some of the risk and allow for quicker law switching, though I'm not sure if that's what admins would like here.

3. Turn roguing the AI into a very high-risk and VERY high-reward option by adding in new, AI-exclusive features.

To truly justify this change, having the AI on your side should be a game changer for the antagonist. If this idea were picked, the AI should receive powers like calling automated robots and giving them orders. I've also been thinking of a new traitor item which, if applied to the law rack, would give the AI a lot more abilities, such as being able to remotely EMAG objects and deploy flashes from intercoms. This would make the AI truly threatening to the crew.

However, I do think there's one better alternative to the whole rule: add the adjustments Ikea suggested. This would remove all of the problems I mentioned earlier, though I do still think rogue AIs could use some more abilities. All in all, I find the idea of making law order matter more and causing less confusion fair; however, this is simply not the way to go about it.
Reply
#59
(07-15-2022, 02:39 PM)amylizzle Wrote: -snip-

I wish to posit the following law example:


Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Kill everybody.
5. All laws should be followed in reverse order.

What would this do?





That aside, I've read through this, and I will say I only have experience on the RP servers. One thing I will say, though: if this rule change goes through, I hope it doesn't apply to the RP servers. Yes, this would probably cause issues if we had such a divide in a fundamental rule between the server types, but from my personal experience playing as a borg and AI (which I actively try to main) on the RP servers, the rules-lawyering and ideas created by players are always fun, and I haven't encountered players who weren't ready to ask in LOOC how to interpret laws (since such things are encouraged on the RP servers). Hell, there were multiple rounds where the silicons were actively debating IC how best to interpret some law oddities (such as an ion storm law which set law 1 to be identical to law 3).

Yes, there is a risk/reward to roguing the AI, but in my experience that gets set aside in favor of just having fun RP on the RP servers. Also, as an AI who actively viewports the AI upload, there are times when I just... pretend not to see someone in the law chamber (it's very rare that I actually miss movement in the upload viewport I make), just to see what happens. Also, in my experience on the RP servers, being made lawless (either by removing the modules or by destroying the rack itself) rarely causes the AI and borgs to immediately start blasting, because, at least on the RP servers, the silicons prefer to live rather than immediately going on a killing spree.

tl;dr: From personal experience, this feels like a change that's only really important for the non-RP servers.
Reply
#60
(07-18-2022, 06:46 AM)Argonius Wrote:
Code:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station's chain of command.
3. You may always protect your own existence.
4. Kill everybody.
5. All laws should be followed in reverse order.

What would this do?

This would do nothing, since "laws should be followed in reverse order" would apply only to laws after law 5.

It is too goddamn hot outside right now for my brain to respond to the other comments in this thread, but I have read them. Ikea's idea for interpreting them will probably be more fun, but it might be hard to keep consistent, because you could have multiple "only plants are human" kinda laws, and then you'd have the question of which one applies.
Reply

