
Could a Sentient AI Lie to Itself?

People commonly make false predictions about the future without meaning to; sometimes these predictions fail through chance, and sometimes through ego. If someone says they will change the tires on their car tomorrow, we naturally take this as a statement of intention, but a reasonable person would not be unduly surprised if it didn't happen. The car could break down, the money might not be there, or no tires of the right size might be in stock in their area. Yet a person easily understands that the statement was one of intention, not a guarantee.

If someone we know often says they will do things but rarely follows through on their intentions, we learn not to place much faith in their statements. At the extreme, we can know someone is almost certainly not going to follow through, like an alcoholic who suddenly announces that tomorrow they will quit drinking for good.

Handling this through a series of stored statements may be difficult. Intuitively, we seem to carry a largely unconscious framework for judging how likely someone's predictions and statements are to come true. A person may make inaccurate predictions because of their view of themselves, their view of others, or because the statements happen to be to their benefit. The problem is that an artificial intelligence may lack the frames of reference required to deduce which statements are likely to be true and which are not, and this could make it difficult for an AI to interact with humanity.
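To make the "stored statements" idea concrete, here is a minimal sketch in Python. Everything in it (the StatementLedger class, its fields and methods) is hypothetical, invented for this illustration: it simply tallies how often a person's past statements of intent were borne out and uses that rate as a naive reliability estimate.

```python
from dataclasses import dataclass, field

@dataclass
class StatementLedger:
    """Naive record of one person's statements of intent and outcomes.

    Hypothetical sketch: a real system would need far richer context
    (topic, stakes, timeframe) than a flat list of outcomes.
    """
    outcomes: list = field(default_factory=list)  # True = followed through

    def record(self, followed_through: bool) -> None:
        """Log whether a past statement of intent was carried out."""
        self.outcomes.append(followed_through)

    def reliability(self) -> float:
        """Fraction of past statements followed through.

        With no history, default to 0.5, i.e. complete uncertainty.
        """
        if not self.outcomes:
            return 0.5
        return sum(self.outcomes) / len(self.outcomes)

# Example: someone who announces plans but rarely acts on them.
ledger = StatementLedger()
for kept in [False, False, True, False]:
    ledger.record(kept)
print(f"Estimated follow-through: {ledger.reliability():.2f}")  # 0.25
```

A bare tally like this captures none of the unconscious framework the paragraph above describes, which is rather the point: the bookkeeping is easy, the judgment is not.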

An AI system could no doubt eventually learn to predict an individual person's statements from their past ones, but this is still not a particularly human-like interaction. As biological individuals we have frames of reference and experiences, unavailable to an AI, that let us make predictions about other people's future statements. We know that losing weight is difficult for most people, and that people who say they will change their diet and reach a healthy weight are, unfortunately, more likely than not to fail. To an AI, by contrast, consuming fewer resources may be a simple matter.
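One way such shared frames of reference could be approximated, sketched below under my own assumptions rather than anything the essay proposes, is to treat a population base rate ("most diets fail") as a prior and let the individual's own track record gradually override it. The function name and parameters here are hypothetical; the underlying trick is a standard Beta-binomial blend.

```python
def follow_through_estimate(base_rate: float,
                            prior_weight: float,
                            kept: int,
                            broken: int) -> float:
    """Blend a population base rate with an individual's track record.

    Beta-binomial shortcut: the base rate acts like `prior_weight`
    pseudo-observations, so a stranger is judged by the population
    norm while a long personal history eventually dominates.
    """
    alpha = base_rate * prior_weight + kept          # pseudo + real successes
    beta = (1 - base_rate) * prior_weight + broken   # pseudo + real failures
    return alpha / (alpha + beta)

# Assume a 0.2 population base rate of keeping diet resolutions.
# With no personal history, the estimate is just the base rate.
print(follow_through_estimate(0.2, 10, kept=0, broken=0))  # 0.20
# After this person keeps 8 of 10 commitments, evidence dominates.
print(follow_through_estimate(0.2, 10, kept=8, broken=2))  # 0.50
```

The prior here plays the role of human experience: it supplies a sensible default judgment before any individual evidence exists, which is exactly what the essay suggests an AI would otherwise lack.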

Would an artificial intelligence possess an ego that leads it to lie about itself and make false predictions about its future achievements? If it cannot do so in a human manner, it may be hard for it to correctly judge the statements of others. And if it cannot make false predictions and believe them, is it truly conscious? Yet it is difficult to see how a computer program could be set up in such a way that it can lie to itself.