Category : Consciousness.
Note that the source used for this blog is: Searle, J., Minds, Brains and Science: The 1984 Reith Lectures, London: Penguin. (For beginners, I recommend pages 28-41.)
Terminology: What do we mean by “thinking,” and what exactly is a computer?
With the advancement of technology over the last century, devices such as smartphones, computers and televisions have become a fundamental component of everyday life. But one question, often depicted in the media through robots, is whether a computer could ever think.
While this debate does not commit to the controversial claim that computers can gain genuine consciousness, there remain valid, widely debated arguments as to whether a computer could express genuine understanding of a given conversation, or even think independently.
Before I begin discussing this topic, it is worthwhile defining the terminology that will be used in this article. A computer, in this context, is a correctly programmed machine that produces an output in accordance with its rules when presented with an input. It is important to note that, at bottom, all any laptop does is manipulate strings of 1s and 0s according to rules; a machine that can do this in full generality is Turing complete. (For more depth on this, I would recommend Computerphile’s “Turing Machines Explained” on YouTube.)
Because of that, any computer you use is nothing more than 1s and 0s manipulated according to rules, in such a way as to be Turing complete.
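To make this concrete, here is a minimal sketch (in Python, using a made-up toy rule table, not any real machine’s program) of the kind of device being described: a machine that does nothing but rewrite 1s and 0s on a tape according to fixed rules.

```python
# A minimal Turing-machine-style sketch. The rule table below is a toy
# example (it flips every bit on the tape); the point is that the machine
# only ever looks up a rule and rewrites a symbol.
def run_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        # Each rule maps (state, symbol) -> (new symbol, move, new state)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy rule table: in state "start", flip the bit and move right.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_machine("1011", flip_rules))  # -> "0100"
```

However sophisticated the rule table becomes, the machine is still only doing this: shuffling symbols around.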
By “thinking,” I am referring to artificial intelligence. The style of artificial intelligence we are concerned with here is strong artificial intelligence: the claim that an appropriately programmed computer amounts to, or results in, “consciousness.” By this, I mean genuine understanding of the output it produces.
So the argument that computers can think states that any appropriately programmed computer has genuine understanding, or the ability to think.
Such a claim is widely disputed among both contemporary and traditional philosophers, and this article aims to consider both the challenges to, and defences of, the claim that a computer could express genuine understanding.
Arguments against the proposal of thinking computers.
In philosophy, there are two types of argument: deductive and inductive. The argument in support of thinking computers (i.e. an appropriately programmed computer has genuine understanding) is inductive, meaning that even if the premise (the computer is appropriately programmed) is true, the conclusion (it has genuine understanding) could still be false.
Because of this, the majority of challenges aim to provide a counterexample: a case where the premise is true but the conclusion is false. They do this by giving examples of appropriately programmed computers that do not have genuine understanding. Doing so disproves the claim that all appropriately programmed computers can “think.”
Challenge 1: J. Searle’s Chinese Room experiment (note this is the most common discussion on the topic, and it is certainly worth reading a primary source to gain a deeper understanding.)
Searle constructed a counterexample of the kind mentioned above in the following way:
The Chinese Room Thought Experiment against Strong AI.
Imagine you are in a closed box, or room, with a rule book of instructions. You have no understanding of Chinese; the only language you know is English.
Now imagine that people from outside the room start sending symbols into the room, which, unknown to you, are questions in Chinese. Having received these symbols, you refer to the rule book, which states that upon receiving symbol x, you should send symbol y out of the room, this being the correct answer in Chinese to question x.
So, from the outside, it appears to the Chinese speaker inserting questions into the room that they are having a conversation in Chinese with a computer, given that every time they insert a question, they receive an appropriate response. But the person inside does not understand Chinese at any stage of the experiment; they are simply following instructions. Thus the room passes the Turing test (its responses are indistinguishable from those of a genuine Chinese speaker) while failing to exhibit genuine understanding.
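The room’s rule book can be sketched as nothing more than a lookup table. In the Python sketch below, the symbol names are placeholders standing in for Chinese characters; the point is that the program maps inputs to outputs without any representation of meaning.

```python
# A sketch of the Chinese Room as a lookup table. "symbol_A" etc. are
# placeholders for Chinese characters; no step below involves meaning.
RULE_BOOK = {
    "symbol_A": "symbol_X",  # "on receiving symbol A, pass out symbol X"
    "symbol_B": "symbol_Y",
}

def chinese_room(incoming_symbol):
    # Pure symbol manipulation: look up the input, emit the output.
    return RULE_BOOK.get(incoming_symbol, "symbol_unknown")

print(chinese_room("symbol_A"))  # -> "symbol_X"
```

Nothing in this program knows what any symbol means, yet from the outside its answers look appropriate, which is exactly Searle’s point.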
Here is a more contemporary example. Imagine greeting Siri, an AI assistant programmed to answer queries such as “Hi Siri.” The computer receives the input and follows a “rule book” which states that, on receiving this command, it should respond: “Hi Jon.”
Just as Chinese is foreign to the man in the room, the English language is foreign to it. So at no stage does your iPhone express any genuine understanding of the conversation taking place. All it does is follow its code.
Searle further strengthens his experiment by introducing two kinds of understanding: semantic and syntactic. Syntactic understanding concerns features that can be grasped in isolation, such as the shape of a word or its colour. Semantic understanding, on the other hand, concerns what CANNOT be grasped in isolation: for example, the linguistic understanding of a word requires a basic understanding of the English language. This is the kind of understanding needed for genuine understanding.
So Searle holds that a computer has a syntactic grasp of the words used in our conversation with it, but fails to meet the requirement for semantic understanding. Because of this, it cannot express genuine understanding and thus does not, as such, “think.”
Searle finishes his argument by pointing out that an appropriately programmed computer would have no more information or resources than the man does. So if the man does not understand the conversation, then neither would a computer.
This challenge appears to successfully provide a counterexample: an appropriately programmed computer that does not have genuine understanding. As such, we may conclude that the argument is unsound; an appropriately programmed computer DOES NOT guarantee genuine understanding.
For readers advanced in philosophy, I strongly recommend reading Leibniz’s Mill (as featured in section 17 of his Monadology) for another challenge to thinking computers.
Overall, the challenge above attempts to disprove the possibility of thinking computers. But is it successful in its refutation?
Arguments for the defence of thinking computers.
Arguments for thinking computers generally aim to reject Searle’s claim that an appropriately programmed computer may lack genuine understanding.
Response 1: My defence – a concern with refutations of strong AI.
My main concern with the many challenges to thinking computers stems from the terminology used in the original argument, which states: “any appropriately programmed computer results in strong AI.”
The counterexample challenge presented by Searle (as mentioned above) runs as follows:
Premise 1: All appropriately programmed computers result in strong artificial intelligence.
Premise 2: There exists an appropriately programmed computer, presented in the Chinese room, that does not result in strong AI.
Premise 3: Following P2, not all appropriately programmed computers result in strong AI.
Conclusion 1: A contradiction between Premise 1 and Premise 3.
Conclusion 2: So not all appropriately programmed computers result in strong AI. (Resolving the contradiction.)
My main concern with this challenge is that Premise 2 is subject to numerous issues. To me, “appropriately programmed” is an ambiguous term, and responses to the argument can exploit this. Premise 1 states that “all appropriately programmed computers result in strong artificial intelligence.”
So, a response to the challenge above could validly argue that Premise 2 is false: there does not exist an appropriately programmed computer that lacks strong AI. Searle tried to give an example of one through the Chinese room, but defenders could simply reply that, since the Chinese room fails to result in strong AI, it cannot be appropriately programmed.
From the perspective of a pro-computer thinker, the response would go something like this (for those with a deeper understanding, this argument is valid because it adopts modus tollens):
P1. If a computer is appropriately programmed, then it has strong AI. (The standard argument for strong AI.)
P2. The Chinese room does NOT have strong AI.
C. So it is NOT appropriately programmed.
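For readers who enjoy formal logic, the inference pattern above is modus tollens, which can be stated as a one-line sketch in Lean (with P standing for “the computer is appropriately programmed” and Q for “the computer has strong AI”):

```lean
-- Modus tollens: from (P → Q) and ¬Q, infer ¬P.
-- Here P := "the computer is appropriately programmed"
-- and  Q := "the computer has strong AI".
theorem modus_tollens {P Q : Prop} (h : P → Q) (hnq : ¬Q) : ¬P :=
  fun hp => hnq (h hp)
```

The proof is just the observation that if P held, Q would follow, contradicting ¬Q.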
If, following this argument, the Chinese room is not an appropriately programmed computer, then Searle’s argument collapses, because his Premise 2 is false, and thus the original argument holds true.
Does such a response seem satisfactory to you? Or do you think Searle’s argument still holds? I do admit that the reasoning I have used above is widely debated, as Premise 1 of my response is controversial (I have merely adopted the standpoint that might be taken by a pro-computer thinker). I have attempted to respond to one critique, namely Searle’s argument, which I laid out as a reductio ad absurdum; but whether the premise is true remains open to discussion.
Nonetheless, I have shown how someone in favour of strong AI may respond to Searle’s criticism: by closely defining what they mean by “appropriately programmed” (for example, holding that, by definition, anything that is appropriately programmed must have strong AI; and as the Chinese room does not, it is not appropriately programmed.)
Response 2 : The Robot Reply, and how it shows that the Chinese Room was not appropriately programmed.
The Robot Reply is one of many responses to Searle’s Chinese room, and was adopted by the likes of Tim Crane.
Similar to my response above, this reply accepts that the Chinese room successfully shows that a computer of that sort cannot think, but it remains committed to the possibility of thinking computers. This is because it likewise denies that the example is an appropriately programmed computer, so the example poses no problem for the view.
The reply asks us to consider how a child gains an understanding of words: by seeing and experiencing things first hand. The Robot Reply states that a computer can gain genuine understanding in the same way.
For example, if you programme the word “football” into an isolated computer (such as the one in the Chinese room), it cannot have semantic understanding. But, like a child, put the computer in a robot’s body with a camera and sensors, and it will be able to gain context for words and attach meaning to them.
By doing this, the computer would be able to achieve semantic understanding, as it would no longer encounter the word “football” in isolation. Instead, it would have seen footballs through its camera and sensors and gained context: a football’s purpose, use and meaning.
This seems to illustrate my original response. On this account, the Chinese room does not represent an appropriately programmed computer, due to it being in isolation. If the computer were instead appropriately programmed, it would be able to genuinely think and achieve strong AI.
So this proposal seems to indicate that it is possible for a computer to think, even if we have not yet built one that is appropriately programmed (by giving it sensors, wheels and a robot’s body, for example).
Do you think Searle’s argument reflects that of an appropriately programmed computer, and so refutes the claim of Strong AI?
Do you accept the Robot Reply?
Do you think Strong AI is possible in a different way than that mentioned in this article?
If computers with genuine understanding exist, then what does this mean for the future?
Thank you for taking the time to engage with me,
Feel free to leave the answers to my questions below in the “comments section,” or tweet me : @theapeironblog