A sentient Artificial Intelligence may need to examine its free will and its past decisions, in a way that resembles human reflection, to be effective.
If we look at a simple AI, like one programmed to play chess, it has no need to review its historical actions beyond learning to play chess better. A more complex situation, however, requires a far more contextual look at historical behavior. An AI-driven freight truck tasked with achieving a fast average delivery time may need to take risks to reach that goal. Perhaps it will skip a fuel stop to take advantage of a break in traffic, or push back a scheduled service to make a delivery run before the snow hits.
An AI that cannot take risks will not be nearly as effective as one that can, so it is likely that AIs will eventually be programmed to do so. An AI that cannot seize opportunities will not be able to plan its future in the most effective way.
Such an AI will need to learn which risks to take as things go wrong, though. Neither the probability that something goes wrong nor the cost of a problem can be entirely predicted in advance. As problems are encountered, the fuzzy predictions the AI used to make its initial decisions become better known through experience. The AI may come to regret past risks and develop an aversion to those with particularly harmful outcomes, while experience may show other risks to be far safer to take.
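The loop described above can be sketched in a few lines. This is a minimal illustration, not any real system's implementation: the names (`RiskEstimate`, `should_take_risk`) are hypothetical, and it assumes a simple Beta-distribution model where each observed outcome sharpens an initially fuzzy failure-probability estimate.

```python
from dataclasses import dataclass


@dataclass
class RiskEstimate:
    """Tracks experience with one kind of risk (e.g. skipping a fuel stop)."""

    # Beta prior pseudo-counts: one imagined failure and one imagined
    # success, so the AI starts out maximally uncertain (p = 0.5).
    failures: int = 1
    successes: int = 1

    @property
    def failure_prob(self) -> float:
        # Posterior mean of the Beta model: the "fuzzy prediction"
        # becomes better known as real outcomes accumulate.
        return self.failures / (self.failures + self.successes)

    def record(self, failed: bool) -> None:
        # Fold one observed outcome into the estimate.
        if failed:
            self.failures += 1
        else:
            self.successes += 1


def should_take_risk(est: RiskEstimate, benefit: float, failure_cost: float) -> bool:
    """Take the risk only when the expected gain outweighs the expected loss."""
    p = est.failure_prob
    return (1 - p) * benefit > p * failure_cost
```

After a run of harmless outcomes the estimated failure probability falls and the same risk starts to look worth taking; after harmful ones it rises, producing exactly the learned aversion described above.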