02-19-2020, 08:44 AM
There was some talk on Discord about tools to use against the AI. One thing mentioned was a camera-loop tool that would hide what was going on in an area. Someone may post about that in more detail, but it got me thinking about other AI-interference tools.
Some sci-fi tropes involve AIs using virtual interfaces and representations to manipulate their digital/physical environment. What about a tool like a disk that, when used in a DWAINE terminal connected to the AI mainframe (a sadly underused option for communicating with the AI semi-anonymously), delivers a data packet that interferes with the AI? This wouldn't show up in the AI's laws, but maybe another diagnostic approach could reveal it, or an ominous message when observing the AI upload computer
- "It seems like some kind of process is taking an inordinate amount of CPU cycles." as an example.
Some ideas could include -
A VR environment/representation that the AI is forced into and must navigate to get out of. Essentially a prison with interactive obstacles, disabling the AI for a period of time until it escapes.
A malicious virus that scrambles the AI's ability to recognize names/faces. It could force the AI to have the same sort of vision a ghost drone has, or mix up which tracking target you jump to when you click a name (rough sketch after this list). This may or may not be limited to a set amount of time.
Hijacking the AI's systems - maybe a multi-piece tool that hijacks the AI's control over doors etc. You upload it to the AI, then once it's there you can, with an obvious tool in your hand, click things within range that the AI can manipulate, framing the AI for whatever happens.
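On the tracking scramble mentioned above, here's a rough sketch of what the virus could do under the hood (again, the lookup table, names, and function are invented for illustration, not real game code): shuffle the AI's name-to-target lookup so clicking a name may follow the wrong person.
[code]
# Hypothetical sketch only: the lookup table and function name are invented to
# illustrate the idea, not real game code.
import random

def scramble_tracking(targets):
    """Shuffle the name-to-target lookup so clicking a name can follow the wrong person."""
    names = list(targets)
    shuffled_targets = list(targets.values())
    random.shuffle(shuffled_targets)
    # Note: a plain shuffle can occasionally leave some names pointing at the
    # right person; a stricter version would force a full derangement.
    return dict(zip(names, shuffled_targets))

tracking = {"Jones": "mob_jones", "Smith": "mob_smith", "Flores": "mob_flores"}
tracking = scramble_tracking(tracking)
print(tracking)  # e.g. clicking "Jones" might now follow Smith instead
[/code]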
Any other ideas come to mind? Any thoughts on the approaches described here?