The Chinese Room is a thought experiment that concludes that a computer cannot possess a mind, regardless of how intelligently it behaves.
In this thought experiment, first imagine a chatbot that takes in Chinese-language input and responds with Chinese-language output. The chatbot is so good at this that it can convince a Chinese speaker that it is in fact a live human Chinese speaker. But the chatbot is a computer program, and like all computer programs it can be reduced to a series of instructions.
Now imagine the same setup, except that instead of a chatbot receiving the input, a non-Chinese-speaking man in a well-stocked office receives the Chinese input. He follows, by hand, the written instructions that make up the chatbot program, which allows him to produce the appropriate Chinese characters in response. He too is able to carry on a convincing Chinese conversation without having any knowledge of Chinese.
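To make the "series of instructions" idea concrete, here is a toy sketch (my own illustration, not Searle's formulation): a trivially simple "chatbot" reduced to a lookup table of rules. The specific rules and replies are invented for the example; the point is only that a person who understands none of the symbols could follow these same instructions by hand and still produce fluent-looking output.

```python
# A rule book mapping incoming symbol strings to outgoing symbol strings.
# The man in the room needs no understanding of what either side means.
RULES = {
    "你好": "你好！很高兴认识你。",        # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗？": "当然会。",          # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    """Match the incoming symbols against the rule book and copy out the reply."""
    # Unrecognized input gets a stock response: "Please say that again."
    return RULES.get(symbols, "请再说一遍。")

print(room("你好"))  # a fluent-looking reply, produced with zero understanding
```

A real chatbot's instructions are vastly more elaborate than a two-entry table, but the thought experiment's premise is that the difference is one of scale, not of kind: in principle every step could still be followed manually.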
The conclusion is that since the man can conduct the conversation without understanding a word of Chinese, the Chinese chatbot likewise does not have what can be considered a mind.
The problem I have with this argument is that I hold that a conscious mind, and something that cannot be distinguished from a conscious mind, can safely be considered the same thing. Suppose I give you a diamond but tell you that it is not a diamond, and yet no possible test could show it is not a diamond and it will behave like a diamond in every circumstance. The rational position is to conclude that it is in fact a diamond, and that the mistake lies with my incorrect insistence that it is not one.
The issue with the Chinese Room example is that it also trades on a consequence people are naturally reluctant to accept: that a series of instructions could constitute consciousness, and that possibly their own uniqueness could be distilled into such a thing. If a series of instructions were capable of perfectly mirroring conscious thought, then I would consider it to be, indeed, conscious thought.