The Frame Problem and AI

A hypothesized issue with a sentient AI is that a thinking person holds many beliefs about the world which are updated automatically by experience, whereas a computer program could not feasibly determine which of its beliefs to update.

As an example of the principle, a person who believes all snakes are venomous could discover that one particular snake living near them is harmless, and this new knowledge would likely ripple outward and cause them to question related beliefs about the world. Perhaps they think all spiders are venomous too, and the discovery prompts them to reconsider that as well. Beyond this, they might be encouraged to learn more about their local environment; it may subtly change their entire relationship with the local wildlife.

When we picture a computer program, even an incredibly advanced one capable at times of something like human cognition, this is a very difficult thing to program for. In the vast database of information the program holds, there is unlikely to be much connection between snakes and spiders. Presented with this new information, the program is likely to decide that this particular snake is an exception to the venomous-snake rule and make that one small change. It is unlikely to revise its wider beliefs; it is not learning in a human manner.
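To make that concrete, here is a minimal sketch in Python of how such a narrowly programmed update might look. The belief names and the observe_harmless routine are invented for illustration, not drawn from any real system; the point is only that the counterexample is filed away as an exception to one rule and no other belief is ever touched.

    # Hypothetical belief store: the new fact becomes a local exception,
    # with no mechanism for revisiting related beliefs such as
    # "spiders_are_venomous".

    beliefs = {
        "snakes_are_venomous": True,
        "spiders_are_venomous": True,
    }
    exceptions = {
        "snakes_are_venomous": set(),
    }

    def observe_harmless(species: str) -> None:
        """Record a counterexample as an exception to a single rule."""
        exceptions["snakes_are_venomous"].add(species)
        # Nothing here touches "spiders_are_venomous" or any other belief;
        # the update is strictly local.

    observe_harmless("local garter snake")
    print(exceptions)  # {'snakes_are_venomous': {'local garter snake'}}
    print(beliefs)     # unchanged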

This issue goes further than the computational difficulty of programming an AI to revise its beliefs, however. The problem seems to point to the idea that the series of programmed statements needed for an AI to revise its beliefs about the world, in light of the consequences of its actions, may need to be effectively infinite: each statement would have to check every belief, and every belief that changes would then need to be checked against every other belief. Even an AI that could see the relation of the new knowledge about snakes to the rest of its beliefs might not be able to act on that relation.
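A toy sketch of that blow-up, with a crude placeholder standing in for real inference, shows the arithmetic: one exhaustive pass over n beliefs is already n(n-1) pairwise checks, and every belief the pass touches demands a further full pass. Everything named below is assumed for the example.

    from itertools import permutations

    def one_revision_pass(beliefs, affects):
        # One exhaustive pass: check every belief against every other belief.
        # `affects(a, b)` is a placeholder for whatever logic decides whether
        # belief a bears on belief b; it stands in for real inference.
        touched = set()
        for a, b in permutations(beliefs, 2):
            if affects(a, b):
                touched.add(b)   # b may now need revising too...
        return touched           # ...and each touched belief forces another pass

    beliefs = {"snakes_are_venomous", "spiders_are_venomous",
               "local_wildlife_is_dangerous"}

    def affects(a, b):
        # Crude stand-in relevance test: beliefs about venom bear on each other.
        return "venomous" in a and "venomous" in b

    print(one_revision_pass(beliefs, affects))
    # With n beliefs, one pass is n*(n-1) checks, and each touched belief
    # triggers another full pass, so the work compounds rapidly.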

There are ways this might be avoided. On the computational theory of mind, which holds that human minds are symbolic interpreters much like such an AI, the problem might be solved simply by limiting the amount of processing power applied to revising beliefs. This would leave some beliefs out of step with accumulated knowledge, but humans hold mistaken beliefs too. It is difficult to see how an AI would know where to spend its limited processing power, though. The new knowledge about snakes might be applied only to reptiles, which is not particularly helpful.
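One way to picture that limit, under the assumption that relevance links between beliefs are simply written down in advance, is a revision routine with a fixed budget of checks. The relevance graph, belief names, and bounded_revision function below are all invented for illustration; which beliefs get revisited is then an accident of how the links were drawn, so the update may reach reptiles and never question spiders.

    from collections import deque

    related = {   # assumed relevance links, not derived from anything
        "snakes_are_venomous": ["reptiles_are_dangerous", "spiders_are_venomous"],
        "reptiles_are_dangerous": ["local_wildlife_is_dangerous"],
        "spiders_are_venomous": [],
        "local_wildlife_is_dangerous": [],
    }

    def bounded_revision(start: str, budget: int) -> list[str]:
        """Breadth-first revision that gives up after `budget` belief checks."""
        queue, visited, order = deque([start]), {start}, []
        while queue and budget > 0:
            belief = queue.popleft()
            order.append(belief)
            budget -= 1
            for nxt in related[belief]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
        return order

    # A budget of 2 reaches the reptile belief but never questions spiders.
    print(bounded_revision("snakes_are_venomous", budget=2))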

It seems likely that a sentient AI that replicates or exceeds humans' ability to revise their beliefs in light of new information will need to take a form different from a series of written programmed statements, even if the AI is capable of writing that series of statements itself.