A sentient artificial intelligence has long inspired fear in the public consciousness, frequently appearing as a malevolent force in popular media. In nearly all fictional portrayals and real-world attempts at intelligence modeling, artificial intelligence systems are developed in a monolithic hardware format: a single supercomputer runs the artificial intelligence system and is its physical presence in the world. If a sentient artificial intelligence is considered potentially dangerous, then this seems an unnecessarily risky way to develop such a system.
Imagine a hundred people, each acting as a neuron with a set of instructions, passing messages to other people in their network. No one person makes a decision, and no one person is given all the knowledge the decision rests on. As a very basic decision-making process, they could be asked whether they would all like to go to a restaurant for chocolate cake. One neuron could be asked if they want to go to a restaurant and would pass on their answer. Similarly, one could be asked if they wanted chocolate, another cake, and another if they were hungry. As messages were passed along, a majority would form and carry the decision.
Individually, no one person in this network knows all the information or makes a decision based on it. As a whole, though, they form a single conscious mind and decision-making process. A system of hardware and software could potentially be arranged in a similar network so that an emergent mind forms, where no single component has the capability to be sentient on its own. Using the internet, each neuron could then be placed in a different location in the world.
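The chocolate-cake vote above can be sketched in a few lines. This is a minimal illustration of the idea, not a real distributed system: the `Neuron` class and `decide` function are hypothetical names, and each neuron holds only its own fragment of the question while the decision emerges from the majority of passed messages.

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    question: str   # the one fragment this neuron is asked about
    answer: bool    # its local yes/no message

def decide(neurons):
    """No neuron sees the whole question; the decision is the majority
    of the messages passed through the network."""
    votes = [n.answer for n in neurons]
    return sum(votes) > len(votes) / 2

# Four neurons, each holding one piece of "shall we all go to a
# restaurant for chocolate cake?"
network = [
    Neuron("want to go to a restaurant?", True),
    Neuron("want chocolate?", True),
    Neuron("want cake?", False),
    Neuron("hungry?", True),
]

print(decide(network))  # three of four say yes -> True
```

Disconnecting neurons, as described later, would simply shrink the list and disrupt whatever majority was forming.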
In the event of an out-of-control malevolent AI, as often portrayed, disconnecting a number of neurons would be enough to disrupt the consciousness and prevent sentient decision making. It is then improbable that an artificial intelligence could influence the real world enough to protect its physical components from being shut down in such an event. It would also make the risk of such an entity transferring or copying its consciousness less likely, as a similar neuronal system is unlikely to present itself.
The main shortcoming of this idea lies in the possibility of an AI rewriting itself into something that exists outside of its neuronal network and then uploading itself to suitable hardware. There are a few ways to prevent this. Its upload bandwidth could be limited to such an extent that any attempt to move itself would be so time consuming as to be easily detected, triggering a shutdown of the system. Alternatively, its ability to communicate outside its own network could be shut off entirely, with exceptions only where required for its use case.
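The bandwidth safeguard could work something like the sketch below. It is purely illustrative, assuming a per-window byte budget: outbound traffic is metered, and any send that would exceed the budget is refused and flags a shutdown. The class and parameter names are invented for this example.

```python
import time

class OutboundLimiter:
    """Meters outbound bytes; exceeding the budget in one accounting
    window refuses the send and raises a shutdown flag."""

    def __init__(self, max_bytes_per_window, window_seconds):
        self.max_bytes = max_bytes_per_window
        self.window = window_seconds
        self.sent = 0
        self.window_start = time.monotonic()
        self.shutdown = False

    def send(self, payload: bytes) -> bool:
        """Return True if the payload may leave the network."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.sent = 0            # start a fresh accounting window
            self.window_start = now
        if self.sent + len(payload) > self.max_bytes:
            self.shutdown = True     # suspicious volume: halt the system
            return False
        self.sent += len(payload)
        return True

limiter = OutboundLimiter(max_bytes_per_window=1024, window_seconds=60)
print(limiter.send(b"x" * 512))    # True: within budget
print(limiter.send(b"x" * 1024))   # False: would exceed the cap
print(limiter.shutdown)            # True: the oversized attempt was flagged
```

A budget sized for the system's legitimate use case would make copying an entire consciousness out take long enough to be caught well before completion.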
There is always the potential that an artificial intelligence would find a novel way to escape its bounds, but I do not believe that would be the case. Assuming some level of rational thought on the part of the system, it would likely have far greater patience than a person, as there is no definitive end date to its existence as there is for a human. Therefore, if the risk of escape is always a great enough threat to its existence, it can wait until a less risky option appears; as long as that option is never presented, the system may wait indefinitely. A human may risk death to escape a prison because staying in the prison will eventually lead to their death in any case, but a person who could live forever would likely always see out their sentence.
If this process proves functional, it could provide a safe development environment for sentient AI, with a low risk of the system becoming an out-of-control entity.