
Artificial Intelligence

This column should only be modified by the corresponding editor.
- For discussion about any aspect of this article, please use the comments section at the page bottom.
- Any document or link considered of interest for this article is welcome.
name e-mail
 Incorporated contributions

 Usage domain



[Guidelines for the editor
1) This text between brackets must be substituted by the article approved by the editor.
2) The upper box of metadata must be updated (entries, integrated in the current wording of the article; usage domain(s) of the voice, particularly the ones currently treated in the article; type -concept, metaphor, theory, theorem, principle, discipline, resource, problem-; equivalent terms in French and German).
3) For the bibliographic references the normalized method author-year will be applied. E.g.
...As stated by Bateson (1973)...
...As proven (Turing 1936)...
..."is requisite to make an image?" (Peirce 1867: p. 5)..
The referred documents must be compiled in the reference section following the exemplified normalized format.
4) If the article is long (>1 p) it should be subdivided into numbered sections (including an initial summary section)]
  • AUTHOR, N. (year). “article title”. Magazine, Vol. xx, pp. yy–zz.
  • AUTHOR, N. (year). Book title. Edition place: editor.
  • AUTHOR, N. (year). Web page title. [Online]. Edition place: Responsible organism. <page url>. [Consulted: consulting dd/mm/yy].
New entry. For doing a new entry: (1) the user must be identified as an authorized user (to this end, the "sign in" link at the page bottom left can be followed). (2) After being identified, press the "edit page" button at the upper right corner. (3) Being in edition mode, substitute -under this blue paragraph- "name" by the authors' names, "date" by the date in which the text is entered; and the following line by the proposed text. At the bottom of the entry, the references -used in the proposed text- must be given using the normalized format. (4) To finish, press the "save" button at the upper right corner.
The entry will be reviewed by the editor and -at least- another peer, and subsequently articulated in the article if elected.

Author's Name (dd/mm/yyyy)

[Substitute this paragraph with your entry]

Entries under work
Hainsch, David (05. Dec 2018, within the course "Odyssey of Philosophy and Information", facilitated by J.M. Díaz at HM)

(1) The comments of the facilitator will be edited using this style, brackets, 8 pt, color change. These will be introduced in between your own text to discuss and further co-elaborate the content. Whenever the authors consider the issue to have been addressed, they can simply remove the comment.
(2) Simple corrections, corresponding to quite obvious missteps or misalignment with editorial style guidelines, are directly corrected, marking the involved characters in red in order to let the author know what was changed. The authors can turn it into black if they agree.]

NOTE of the AUTHOR (in interaction with the facilitator and colleagues): these are edited using this style, no-brackets, 8 pt, this color. 

[GENERAL COMMENT ON THE REVIEW (6/12/2018): You have done a good job linking together concepts that are discussed within the glossariumBITri and that are very relevant to the understanding of information and its relation to knowledge. Besides some missteps, the entry is well written and you have used relevant references which are more or less well referenced.
I have entered into direct conversation with your proposition through a number of comments, with the intention of enhancing your entry and simply advancing the understanding of the subject. Regarding the general structure of the entry, I would say your abstract is rather an introduction to the topic. An abstract should recap all you have covered in the entry very briefly; therefore it rarely contains any quote.
There is something I have missed: I think one should start by discussing what intelligence is, but this is maybe a quite broad issue worth deepening in a separate voice/article. This is something we certainly have to do within the glossariumBITri, but for the time being I would at least start with a short clarification. In a sense you do that through the inquiry into the difference between natural and artificial intelligence, but maybe if you re-structure the entry and provide a new abstract you could do it there, since it is always (more or less) in the background of your text... Don't you think so?]


Artificial Intelligence (AI) denotes a class of entities - most commonly machines - that behave in a manner usually typical of intelligent creatures, such as humans or possible beings of higher intellect. As expected, the definition of AI relies heavily on the definition of intelligence itself.

One could describe intelligence less as a state of mind and more as a classification, referring to a collection of abilities a mind has. An intelligent mind is generally able to form thoughts and intentions, and is capable of ®cognition, semantic understanding and scientific inference (further significant abilities are feasible).

Therefore AI is divided into at least two categories: weak AI, meaning any machine that behaves intelligently but is in reality only following its programming; and true intelligence, meaning a machine that is truly intelligent and therefore capable of forming thoughts, intentions, etc., unlike its inferior sibling.

Moreover, it should always be possible to relocate a truly intelligent mind into another body while maintaining its legitimacy. Thus any true intelligence is, with regard to its legitimacy, independent of its body.

1. Introduction to the Notion

To make analogies and thought experiments in this article easier, I shall refer to artificial intelligence as AI and to its counterpart, natural intelligence, as NI. What exactly shall be defined as NI will partly be the subject of this entry.

The term Artificial Intelligence is most commonly used in a dual sense. As Techopedia states in "Artificial Intelligence (AI)" (n.d.), it is:

"an area of computer science that emphasizes the creation of intelligent machines that work and react like humans."

The Stanford Encyclopedia of Philosophy defines the term AI as follows:

“Artificial Intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals) and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).” (Bringsjord & Govindarajulu, 2018)

The Encyclopaedia Britannica claims that

"…, the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." (Copeland, 2018)

plays a major part in the definition of AI. Copeland goes on to say that

“The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.” (Copeland, 2018)

Taking in all three of the cited entries on AI, it should strike one as quite obvious that in every case AI is the endeavour to create a machine, something artificial, with the ability to behave animal- or human-like. This definition, however, begs the question: where does X-like begin?

I believe that the Stanford Encyclopedia’s definition comes closest to a general answer to this very ambiguous question. It relies on the perception of the being itself; instead of trying to objectify, it leans towards personalising. "Alike" does not have to mean sharing every trait a human possesses. Humans are complicated creatures by themselves already, and classifying what exactly makes something human worsens the matter. Instead, once one would say "it feels so human", we can consider it to be human-like (the same goes for the aspect of animal-like). This definition makes it easy to decide what counts as Artificial Intelligence and what doesn't.

Yet it is not the purpose of science to be democratically approved; its purpose is to define sovereign facts and concepts.

Whether one personally considers an artificial entity intelligent should not be of significance when determining if it truly is. So what should be of significance?

Before we start defining the traits a machine has to show to be considered intelligent with near absolute certainty, we should classify which types of AI we have at hand. Unlike NI, which has to be able to sustain itself on its own, as it would cease to live if it did not, AI can be completely dependent on outside forces allowing its existence. A program does not necessarily have to provide itself with power, internet access and a comfortable server unit to run on; instead, its owner takes the wheel on these responsibilities. It can simply exist to compute by following the instruction set instilled in it. It does not have to understand the meaning behind 1+1=2, or the operation:

    if isinstance(input_text, str):
        return "Hello"


It merely has to process the information in accordance with the instructions. It does not necessarily understand what differentiates 1 from 2 except for the assigned mathematical equation and binary identification serial. Take a human on the other hand, and one will quickly realise that he/she knows, or at the very least grasps, the concept that twice means double the amount of one - understanding a mathematical correlation in concept, thus knowing that one plus another one equals double the amount of one: two. A human cannot properly process what he/she does not at least begin to understand. I shall use Searle’s Chinese Room experiment to expand further on this.

2. Levels of Classification of AIs

For an in-depth explanation of the Chinese Room experiment please refer to its dedicated entry (®Chinese Room). In short, John Searle argues that a syntactical way of operating and processing information will never result in a semantic understanding of the matter at hand. Searle compares a machine executing a program to a human doing the same things by following the same instruction set.
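Searle's setup can be caricatured in a few lines of code - a hypothetical sketch, not any particular implementation, in which the rulebook entries and replies are invented for illustration:

```python
# A purely syntactic 'Chinese Room': input symbols are matched against a
# rulebook and mapped to output symbols. Neither the program nor its
# operator needs to know what any symbol means.
RULEBOOK = {
    "你好":   "你好，很高兴认识你",   # rule: this shape maps to that shape
    "你是谁": "我是一个房间",
}

def chinese_room(symbols: str) -> str:
    # Pure shape-matching: a fluent-looking reply is produced without
    # any semantic understanding of the conversation.
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好"))   # a plausible Chinese reply, zero understanding
```

The operator executing this lookup, human or CPU, processes the conversation flawlessly while understanding none of it, which is exactly Searle's point.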

Although the human clearly does not understand the semantics of the input he receives and the output he returns, he still has to understand how to read the instructions, understand what he just read, and execute the instructions the way he interprets them. Nowhere in this chain is the human like a mindless machine. He/she clearly understands the semantics of the instructions; otherwise, how could they be executed? And that is where our ability to apply analogies hits a breaking point. To execute, we must first know what is asked of us. Yet a computer does not understand what it executes. Therefore a machine is fundamentally "dumber than" a human, and to determine whether it has a mind is to determine whether it understands its instructions.

Searle argued that the concept of "strong" AI cannot possibly exist. Weak AI, however, is very much possible and already among us. Weak AI is any program that seems intelligent by mimicking intelligent behaviour. Never does it truly understand the semantics of its actions and inputs. It can most certainly react to its inputs and do various things, but these capabilities have to be given by its programming. It cannot do what it was never told how to do. And it will not understand what it does.

Video games have employed weak AIs since the early 2000s, and they have gotten quite good at having "smart" AIs. As Mark Brown[1], a former writer for video game news outlets such as Eurogamer and Polygon, states in the episode 'What Makes Good AI' of his online web series 'Game Maker's Toolkit', "good" AI possesses the following characteristics:

1. "Good AI is predictable" (Brown, 2018). Feeding an AI a certain input should allow the user to predict the general behaviour of the AI. Its reactions should be reasonable and logical, not random. A certain level of order and logic has to be present.

2. "Good AI can interact with the game’s systems" (Brown, 2018). Taking away the factor of the system being a game, it can generally be truthfully stated that any intelligent being should be able to interact with its surroundings. It can act and react, just as the environment will react to the entity's actions. If it could not react to anything it is fed, how could one determine its intelligence by evaluating its reactions?

3. "Good AI has its own goals" (Brown, 2018). It should have goals that do not depend on a user being present or interacting with it; instead, it should work to achieve its own predefined goals in the environment it is currently in. This should be able to happen without any observation.

([1] Mark Brown's work is fit for use in a philosophical discussion, seeing that the strongest representation of weak AI happens through video games integrating it in so-called 'non-player characters'. I therefore present him as a citable professional on the subject of video games, the matter I use as an exemplary environment for weak AI. For legal matters, I do have permission from Mr Brown to reference him.)

With consideration to the points Mark Brown makes, and the fact that video-game AIs are all weak AIs, it should become clear that whatever makes video game AI "good" also applies to weak AIs deployed in other fields. I do hesitate to use 'good' in a general manner, since "good" here measures how well the AI serves a video game. We can, however, easily substitute "good" with "weak" if we look at the general purpose of video game AI: to make decisions and take actions that make it seem as if the character in focus is behaving like its real-life counterpart. So a virtual tiger should stalk and then pounce on its prey, not run from its prey in fear. A virtual human should respond to being called out, not stand silently or wander off without ever having noticed anything.

Video game AI is made to mimic intelligence. Mimicking intelligence is exactly what a weak AI does. Thus we can easily extract Mr Brown's statements and transform them.

Weak AI should be relatively predictable in its general reaction to a known input - that is, it should react in a logical manner - be able to interact with the environment it is located in, and have goals that do not depend upon a user being present. But it does all this whilst not understanding what it is doing or why it is doing it. [I am not completely sure of the predictability feature. A characteristic of human behaviour is actually a bounded non-predictability, which is by the way mimicked by weak AI. One of the good early examples is called Alice, a chat-bot you probably know, created in the 1980s; it incorporated a limited non-predictability, especially when ambiguity and uncertainty are involved. Cortana or Siri also include it.] I realise how this is confusing and can easily be misunderstood; I am referring to an even more generalised concept of reaction. Though humans can be "unpredictable", if fed information their reaction will not be completely random. Asking someone "what is your name?" will most commonly not result in them saying "Boy, do I love tomatoes". So one can predict that a human will react with this principle in mind: if asked "what is your name?", a human will respond with something connected to "name". Similarly, a weak AI will react in a certain way, or one of many possible ways, to an input - one that seems logical. A completely random reaction would feel even less intelligent than a predefined and inevitable, but logical, reaction. If what I mean to convey with this comment is conveyed through my addition in the sentence, simply delete this comment, thx. [I understand what you mean, but we need to rely on what we say and how it can be understood. Predictability is quite clear, but your addition makes your point much clearer. I'm still not sure about using predictable; maybe you could say "predictable to a certain degree" or "relatively predictable". Furthermore I'm not sure about the grammaticality of the phrase "as in react in a logical manner".]
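The properties just discussed - bounded predictability, interaction with the environment, and goals pursued without observation - can be sketched as a minimal rule-based non-player character. This is purely a hypothetical illustration; the class, stimuli and reactions are invented:

```python
import random

class WeakNPC:
    """A minimal rule-based 'weak AI': it mimics intelligent behaviour
    without any semantic understanding of what it does."""

    def __init__(self):
        self.position = 0
        self.goal = 10  # its own predefined goal: reach position 10

    def react(self, stimulus):
        # Relatively predictable: a known stimulus maps to a small set of
        # plausible reactions; bounded randomness picks among them, so the
        # behaviour is logical but not fully determined.
        reactions = {
            "called_out": ["turn_towards_caller", "answer"],
            "threatened": ["flee", "hide"],
        }
        return random.choice(reactions.get(stimulus, ["idle"]))

    def pursue_goal(self):
        # Own goals: it advances towards its goal whether or not any
        # user is present or observing.
        if self.position < self.goal:
            self.position += 1
        return self.position

npc = WeakNPC()
npc.react("called_out")  # one of the logical reactions, never random noise
npc.pursue_goal()        # moves one step towards its goal
```

A virtual tiger or villager built this way feels "smart" to the player, yet every behaviour is enumerated in advance; nothing is understood.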

3. What is weak AI?

As stated earlier in §2, Searle denied the existence of strong AI - an AI that can understand the semantics of its environment solely through syntactic programming and logical computation - with his Chinese Room experiment. The basic setup of the Chinese Room experiment is the ®Turing Test: an interrogator questions two subjects, one of which is a machine, the other a human. The machine has to convince the interrogator that it is human. If it succeeds, it can be assumed that it may be intelligent and thereby be considered an AI. [This is the core, but how is it demonstrated? I'm not sure whether understanding the instructions or being able to demonstrate a theorem is enough (enough for what?). The latter can be done starting from proper definitions and axioms, and using a calculus. However, finding out patterns, making definitions, proposing axioms, making metaphors to express what is not defined in the language, etc. are actions that cannot be performed without understanding the contents, the meanings.] I'm sorry, I don't see what you're pointing at in this paragraph that is not properly expanded upon.
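The test protocol can itself be sketched in code - a toy illustration with an invented judge heuristic and canned answers, not a serious implementation of the Turing Test:

```python
class CannedMachine:
    """A weak-AI subject: purely canned, syntactic responses."""
    def answer(self, question):
        canned = {"What is your name?": "I'm Alan."}
        return canned.get(question, "Hmm, let me think.")

def imitation_game(judge, subject, questions):
    # The judge sees only the transcript; if it cannot reliably pick
    # out the machine, the machine passes.
    transcript = [(q, subject.answer(q)) for q in questions]
    return judge(transcript)

def naive_judge(transcript):
    # Toy heuristic: repeated identical answers look machine-like.
    answers = [a for _, a in transcript]
    return "machine" if len(set(answers)) < len(answers) else "undecided"

verdict = imitation_game(naive_judge, CannedMachine(),
                         ["What is your name?", "Where are you?", "Why?"])
# verdict == "machine": two questions hit the same canned fallback
```

The point of the sketch is that the judgement rests entirely on observable conversation, which is exactly why the test cannot distinguish syntactic mimicry from semantic understanding.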

This leaves two kinds of AI to be plausible: the category that is able to understand the semantics of its environment, and the category that is not. The latter is what we consider weak AI: a machine that, through observation of its behaviour, is suspected to possibly be intelligent, but is in fact not capable of semantic understanding. It merely replicates intelligent behaviour – a child pretending to be doing its taxes by copying everything that its father, a schooled accountant, is doing.

4. From weak to true AI

Since Searle argued strong AI to be impossible, the leap from weak to strong now turns into a leap from weak to true artificial intelligence.

An AI that can do what a strong AI was supposedly able to do, such as understand semantically, should theoretically be able to exist. There is no reason to believe that such an intelligent machine cannot exist. But, as Searle showed, its way of operating must exceed syntactic and logical computing. So what capability distinguishes it from weak AI?

Intentionality. As Searle stated, a weak AI does not act upon intention, as the mere “implementation of the computer program is not by itself sufficient for consciousness or intentionality“ (Cole, 2015). The next higher AI, therefore, will inevitably possess a consciousness and intentions, as this is what separates it from its inferior siblings. But to act upon intention, one has to form intentions first. And to possess a consciousness is synonymous with possessing a mind, a psyche and a soul, as the voice on ®Mind states.

Thus we can form the following statement, with fair certainty of it being true:

A true Artificial Intelligence possesses a mind, a psyche, a soul and a consciousness, manifesting in its ability to form intentions and act upon them, to understand semantically, to interact with its environment, to react in a determinably logical way to a predetermined input, and to pursue its own goals without anyone observing.

It has to be able to uncover the meaning of something presented to it, given that the presented matter is intellectually appropriate[2]. Its capability of inference and understanding of concepts, as well as its ability to apply said understanding, even across contexts, has to be tested and proven to exist.

[2] Whether a matter is appropriate to the intellect of the AI is rather hard to determine, as we ourselves do not understand all of the concepts that exist. In that sense, to make this voice more understandable, I propose one may read it like this: it has to be able to uncover the meaning of something presented to it, given that we are, or were already, able to uncover the meaning of it. If this is the case, the intellectual ability of the AI can be considered similar or equal to that of a human.

5. The unacknowledgeable true AI

As I stated in my paragraph on what weak AI is, Searle’s denial of strong AI left us with the categorisation into AI capable of semantic understanding and AI incapable of it. Searle's experiment is set up in a manner that necessitates the capability to communicate. Now I make two assumptions:

Understanding events and the semantics behind their relations

A) will always allow the being to give forth the acquired understanding through communication.

B) can happen internally and does not entail the ability to convey the understanding through an inter-entity language.

A question that pops up is whether understanding something conceptually will always result in the being giving this concept a defined term or name, thereby creating its own language if necessary. This, however, shall not be discussed in this voice, as it more closely belongs to the fields of research of both linguistics and psychology. [Indeed, the recurrent use of metaphors is a sign of the necessity to go beyond the given words and concepts. Actually a metaphor combines known concepts in a relation that transcends the meaning of both; for example, "a noisy solitude" conveys a new understanding that cannot be given by the meaning of noisy or solitude alone... The interesting point here is that we can convey it. Other people can understand very well what the metaphor means. Usually art deals with new means of expression to convey something our previous communication means didn't allow, though it also involves emotion.]

Further, we are to ask whether there could be an intelligent being incapable of communication - similar to Turing's observation that a being subject to the Lucas-Penrose constraint may falsely fail the Imitation Game (Turing, 1950, §6 (3)). I propose that an intelligent being incapable of communication would be categorised as ‘not intelligent‘ simply because with current methods it cannot be tested on its capability of presenting semantic understanding. If such an entity exists, it should consequently be possible to recreate it as a machine - a machine able to understand like a true AI, yet working in such a way that no one could ever truthfully classify it, simply because there are no suitable testing methods yet.

Objectively we should consider such an entity truly intelligent, yet we still have to search for ways to test such entities. Tests that allow different forms of communication, or different ways of determining the existence of something crucial to intelligence, could be applied to non-humans. Knowing such universal testing methodologies, we could easily apply them to any entity.

6. The intimate relation between true AI and NI

As I stated in my paragraph ‘Levels of Classification of AIs‘, a human that follows instructions like a machine still understands the semantics of the instructions and therefore cannot be considered a weak AI, even if we were to extract his/her mind, transform it into a processor-compatible language - comparable to C++ or Unicode - and place it in a mechanical body. Now let us reverse the extraction and relocation of intelligence. A true AI‘s mind/soul/psyche/consciousness is extracted, transformed into a brain-compatible language - something for which I sadly cannot even name a comparison - and placed in a biological brain. Since it is a true AI, it should be indistinguishable from a human, or rather a true NI, when undergoing the Turing Test.

The interrogator will not be able to differentiate with certainty the true AI in a natural body - let us pretend a human body - from the true NI, a human mind which has grown up in his/her body. If reversed, the interrogator will not be able to tell the true NI in a machine from the true AI which has never left the machine. In both cases, the relocated true intelligence has to adjust to its new body. So having to adjust to a new environment, and experiencing difficulties due to this, applies to both true NI and true AI, and can therefore be considered irrelevant when determining whether an intelligence is natural or artificial.

The question I mean to ask and also propose an answer for is the following: What makes an intelligence artificial or natural?

If truly intelligent minds can swap bodies and still be considered true, can we even try to classify it further?

7. Hypothesis about the bodily classification of true intelligence

One argument that may be made is that an intelligent mind's origin determines whether it is artificial or natural. A mind made by someone else, or in a machine is artificial, a mind that grew up in a biological body is natural. Now if I may, what is the process of raising a child?

Its environment is feeding it information, shaping it and nurturing it. A child which receives no input - a child growing up in a pitch-black room, with no stimuli whatsoever - will not magically turn into a fully evolved and intelligent human. Its level of intelligence will never truly go beyond what it was when initially locked into this 'darkroom'. Similarly, a computer with the most basic of instructions - like a child with the innate instruction to e.g. cry when in need of something - will never evolve into a true AI. [Here you are adopting the tabula rasa position (there is nothing in the intellect that was not first put in the senses). However, this is not generally accepted. You defined above intelligence as a capacity to interact with the environment... What if it is able to do so when moved to a heterogeneous environment where there are objects to collide with, to pick up, etc.? Besides that, if the original room has gravitation and a ground, it may be able to learn to walk, so it would interact without being told.] I did consider gravity and its effect a sensation, thereby a stimulus. Should I clarify 'darkroom' as a room of absolute nothingness? After all, if there is no stimulus or anything to interact with, the child will not have anything to learn with, meaning it will not learn anything, just as a computer program that can neither create its own memories nor receive new instruction sets. Both will not evolve simply by existing; both need influence from without to grow. [As you state it now it is clearer. However, if there's absolutely no stimulus, there is no environment for that being. Even a system (of any kind) cannot be defined without its environment. Here you are proposing a thought experiment that is maybe transcending what the self is.
I know this has traditionally been put as independent from the environment, but this is in my view incorrect and a fundamental misstep of Modernity, consecrated by Descartes' "I think therefore I am". Actually, he thinks because he is there in the first place; and existence is an ontological and observable property of being, referred to its ability to interact with reality. In my view, thinking and even being is necessarily linked to interaction.]

The stimuli provided by an environment are comparable to the programming a computer receives. Thus we are all made by something else, meaning the only part of the earlier-mentioned argument left standing is the bodily origin: being in a machine or in a brain, respectively, determines whether you are an AI or an NI. With this I present my answer to the question of what makes an intelligence artificial or natural:

The initial corpus of the intelligence determines whether it is natural or artificial. Its true-ness, however, is not dependent on the body, meaning that any truly intelligent minds are interchangeable. If a true intelligence is observed/interrogated, its "artificiality" is undeterminable until its original corpus is revealed. Up until that happens, it has to be assumed that the intelligence is both artificial and natural at the same time (exactly like the dead-alive duality of Schrödinger's cat). This duality, however, only exists for the observer, since a present intelligence has had to be born somehow, meaning that its original corpus has already existed, and consequently its "artificiality" is already determined. [Do you mean that a mind needs a kind of "container" in which it can grow up, but that the mind in itself is kind of independent from the "container"?] Yes, every mind has to be created somewhere, meaning that a mind that exists must have already been born; and if it was already born, then it was born either in an artificial body or a natural body, meaning that its classification as artificial or natural has already happened, just unbeknownst to us, until we discover it ourselves. Like Schrödinger's cat: as long as we don't open the box, the cat is both alive and dead, but physically the cat has to be either dead or alive; we just don't know which it is. [The curious thing about Schrödinger's cat, and even quantum mechanics, is that it is physically undefined until we make the observation, and, of course, that is completely weird to the very concept of being dead. It is not a question of ignorance, but a question of indeterminacy of reality itself.] It is merely unknown to us until we discover the origin. The duality is merely an illusion.
[Here, are you referring to the duality you stated at the beginning between machines and humans?] No, I am referring to the duality of being both artificial and natural at the same time. [That's what I meant: machine = artificial, human = natural. Your distinction is more general.]

8. Importance for other scientific fields of expertise

Research on AI has had a profound impact on other scientific concepts and questions. The notion that only mindless machines can exist was abandoned as early as the 20th century. Now, more than ever before, we are able to attempt to replicate true intelligence as programs. And through understanding how AIs function - which psychological processes and concepts have to be transformed to be executed by a machine, and how to define them - we are able to understand intricately how intelligence itself works and what it consists of. Only once we know how to build something can we absolutely know what it is built with.


Incorporated entries

Whenever an entry is integrated in the article (left column) the corresponding entry is reflected in this section.