
Chinese Room

Article
 
This column should only be modified by the corresponding editor.
- For discussion about any aspect of this article, please use the comments section at the bottom of the page.
- Any document or link considered of interest for this article is welcome.
 
 Editor: name, e-mail
 Incorporated contributions:
 Usage domain:
 Type:
 French: Chambre chinoise
 German: Chinesisches Zimmer
 
[Guidelines for the editor
1) This text between brackets must be substituted by the article approved by the editor.
2) The upper metadata box must be updated (entries integrated in the current wording of the article; usage domain(s) of the voice, particularly the ones currently treated in the article; type -concept, metaphor, theory, theorem, principle, discipline, resource, problem-; equivalent terms in French and German).
3) For the bibliographic references the normalized method author-year will be applied. E.g.
...As stated by Bateson (1973)...
...As proven (Turing 1936)...
..."is requisite to make an image?" (Peirce 1867: p. 5)..
The referred documents must be compiled in the reference section following the exemplified normalized format.
4) If the article is long (>1 p.) it should be subdivided into numbered sections (including an initial summary section)]
 
References
  • AUTHOR, N. (year). “article title”. Magazine, Vol. xx, pp. yy–zz.
  • AUTHOR, N. (year). Book title. Edition place: editor.
  • AUTHOR, N. (year). Web page title. [Online]. Edition place: Responsible organization. <page url>. [Consulted: dd/mm/yy].
Entries

New entry. To make a new entry: (1) the user must be identified as an authorized user (to this end, the "sign in" link at the page bottom left can be followed). (2) After being identified, press the "edit page" button at the upper right corner. (3) Once in edit mode, substitute -under this blue paragraph- "name" by the authors' names, "date" by the date on which the text is entered, and the following line by the proposed text. At the bottom of the entry, the references used in the proposed text must be given using the normalized format. (4) To finish, press the "save" button at the upper right corner.
The entry will be reviewed by the editor and -at least- another peer, and subsequently incorporated into the article if selected.

Author's Name (dd/mm/yyyy)

[Substitute this paragraph with your entry]


Entries under work

Hainsch, David (21. Nov 2018, within the course "Odyssey of Philosophy and Information", facilitated by J.M.Díaz at HM)

[NOTE OF THE FACILITATOR: 
(1) The comments of the facilitator will be edited using this style: brackets, 8 pt, colour change. These will be introduced in between your own text to discuss and further co-elaborate the content. Whenever the authors consider the issue addressed, they can simply remove the comment.
(2) Simple corrections, corresponding to quite obvious missteps or misalignment with the editorial style guidelines, are made directly, marking the involved characters in red in order to let the authors know what was changed. The authors can turn them back to black if they agree.]

Abstract

The Chinese Room argument by John Searle (1980) is a thought experiment meant to show that syntactic and logical capabilities do not automatically result in a semantic understanding of the matter at hand. It contradicts the numerous claims made at the time that Artificial Intelligence (AI) might soon be able to understand language, claims inferred from the rapid advances in programming during the 1970s. Beyond that, Searle argued that the very definition of "strong" AI is implausible.

1.    Historical Overview

John Searle's thought experiment dubbed "the Chinese Room", presented in his paper "Minds, Brains, and Programs" published in 1980, is part of an argument against the proposition that a machine (a brief and general definition may be found in the voice on The Turing Test, §1), given sufficient instructions to process input and return the right output, would in turn possess a mind.

Several thought experiments have compared a mechanical way of operating, or rather of computing, with that of a human, arguing that mechanical computing does not give rise to thought in the way the brain's operations do.

The earliest example was Gottfried Leibniz's Mill. Leibniz imagined a mechanical machine constructed so as to be able to perceive, feel and think. If one were to increase the size of this machine to the point at which a human could enter it, just as one enters a mill, one would only see moving parts interacting with one another; not once would a thought or a perception be seen or identified being transported. Through this analogy Leibniz concluded that mechanics alone cannot create mental states, such as thought, emotion and perception.

Later, in 1948, Alan Turing proposed his Paper Machine. This is, simply put, a set of instructions written in English intended to allow someone - call him A - to play chess without even knowing that he is doing so. Every situation on the board has a unique, abstract ID, as does every change of the situation that A could decide upon. How input is managed and output enacted is irrelevant, as long as A is not the one responsible for these tasks. Now, by simply associating the abstract input phrases with the corresponding output phrases, A would be making a move. Looked at from the outside, A would be playing chess, yet A would have no idea what his actions translate into. This setup bears a remarkable resemblance to that of the Chinese Room.
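The purely syntactic character of the Paper Machine can be illustrated with a minimal sketch (not part of Turing's report; the state IDs, move IDs and rule table below are invented for illustration). The player merely maps one meaningless label to another:

# Hypothetical sketch: a "paper machine" player that maps abstract
# board-state IDs to abstract move IDs by pure lookup. The operator
# applying it needs no idea what the labels stand for.

RULES = {
    "S17": "M04",  # e.g. "state after the opponent's opening move" -> "reply labelled M04"
    "S18": "M11",
    "S19": "M02",
}

def next_move(state_id: str) -> str:
    """Return the move the instruction set prescribes for this state label."""
    return RULES.get(state_id, "M00")  # fallback move if the state is not listed

print(next_move("S17"))  # A "plays chess" without knowing what S17 or M04 denote

Whether the lookup is carried out by hand, as Turing intended, or by a computer changes nothing about the relation between the input and output labels.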

The latest predecessor to Searle's argument was that of the Chinese Nation, which also goes by the Chinese Gym or the China Brain. It assumes that, if a mental state is merely the product of the interactions between neurons, then any mental state could be replicated by the Chinese nation behaving like a brain: every Chinese citizen takes the place of one neuron, and every signal exchanged between neurons is represented by a phone call between two citizens. If the entire population of China placed calls in exactly the way signals are causally transmitted in the brain during a certain emotion, then, theoretically, the nation of China would feel that exact emotion. And therein lies the problem: how could the nation of China, an entity that is neither intelligent nor even alive, feel an emotion? This is meant to show that there is another factor in the creation of mental states beyond the causality of internal processes.

2.    The Experiment

Searle's argument focuses on showing the gap between syntactic processing and semantic understanding.

The setup is as follows. A native English speaker, who speaks not a word of Chinese, is locked in a room. In this room he is provided with a booklet that acts as an instruction set, a complete database of Chinese characters, and the utensils to write them with. Native Chinese speakers on the outside now start writing down questions in Chinese - on a piece of paper slid under the door, for example - to ask the person inside; the people on the outside never get to meet the person locked inside. This is the basic setup of the Imitation Game used in the Turing test. The person inside takes each question and uses the booklet to determine which characters he has to return, simply by following the syntactic rules the booklet gives. He never understands a word of what he reads or writes; he simply answers what the instruction booklet tells him to, with respect to the characters he received beforehand. If the instruction set is extensive and well enough written, he will return enough correct answers to convince the people on the outside that he is to be considered intelligent and that he does, in fact, speak Chinese; he will therefore pass the Turing test. Despite this, the person inside does not understand Chinese at all, yet passes the test. And the same goes for any machine, as it only syntactically determines its output with respect to the input received beforehand.
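The room's rule-following can be sketched in a few lines of code (purely illustrative and not from Searle's paper; the questions, replies and rule table are invented). The operator only matches incoming character strings against patterns and copies back the prescribed reply:

# Hypothetical sketch of the rule booklet: incoming Chinese text is matched
# against a pattern and the prescribed reply is copied back. No meaning is
# attached to either side at any point.

RULE_BOOKLET = {
    "你叫什么名字？": "我没有名字。",   # asks for a name; the operator does not know that
    "你会说中文吗？": "当然会。",       # asks about fluency; nor that the reply claims it
}

def answer(question: str) -> str:
    """Follow the booklet: look up the symbols received, return the symbols prescribed."""
    return RULE_BOOKLET.get(question, "请再说一遍。")  # fallback: "please say it again"

print(answer("你会说中文吗？"))  # convincing output, zero understanding inside the room

A realistic instruction set would need vastly more rules and some way of tracking context, but enlarging it only adds more syntax; nothing in the mechanism introduces semantics.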

3.    Conclusion

And herein lies the point Searle formulated: the fact that a computer can follow an instruction set and, by dialoguing in any given language, convince a human that it is intelligent does not automatically justify the assumption that the computer semantically understands said language in the way a person fluent in it does.

In 2010, Searle publicly restated his conclusion "that the implementation of the computer program is not by itself sufficient for consciousness or intentionality" (Cole 2015). The ability to compute does not entail the capability to understand semantically and to form outputs with an intention in mind. A computer's way of operating is purely syntactic, and if any other being were to behave like it, that being would not thereby start to understand semantically; the same instruction set performed by a human will not result in an understanding of Chinese. In conclusion, understanding requires more than the same syntactic process in a different physical manifestation.

And thus Searle contradicted the claim that so-called "strong" AI is able to understand semantics solely by following an appropriate program. The concept of "weak" AI, a form of AI that only replicates intelligent behaviour and does not understand semantics, is not touched by Searle's experiment; he only argues that the concept of strong AI cannot be true. Any program is a set of syntactic rules which a machine follows, and according to the concept of strong AI there exists a program whose execution allows the machine to understand the semantics behind its actions, even though the machine merely inherits the programming to enact those actions. The Chinese Room argument, however, shows that semantics cannot be reached by mere syntactic behaviour; thus the claim of strong AI is proven wrong. The only remaining classification of intelligence available is into "weak" and "true".

4.    Addendum

Analysing the conclusion, one can readily determine that for a machine to possess a mind and form thoughts requires more than syntactic programming, no matter how extensive and complete that programming may be. Since a difference in physical manifestation gives the same result, this may just as well be applied to any natural intelligence: if intelligence is to be formed, the being has to be able to do more than perceive and then follow and apply logical instructions regarding the syntax of what is perceived. What more has to be present to allow "true" intelligence lies beyond the scope of this experiment.

References

  • SEARLE, J. R. (1980). “Minds, Brains, and Programs”. Behavioral and Brain Sciences, Vol. 3, pp. 417–424.
  • COLE, D. (2015). The Chinese Room Argument (The Stanford Encyclopedia of Philosophy). [Online]. Stanford, CA: Metaphysics Research Lab, Stanford University. <https://plato.stanford.edu/entries/chinese-room/>.

Incorporated entries

Whenever an entry is integrated into the article (left column), the corresponding entry is reflected in this section.

  
Comments