
Artificial Intelligence

Article
 
This column should only be modified by the corresponding editor.
- For discussion about any aspect of this article, please use the comments section at the bottom of the page.
- Any document or link considered of interest for this article is welcome.
 
 Editor
name e-mail
 Incorporated contributions

 Usage domain

 Type

 French

 German
 
[Guidelines for the editor
1) This text between brackets must be replaced by the article approved by the editor.
2) The upper box of metadata must be kept up to date (entries integrated in the current wording of the article; usage domain(s) of the voice, particularly the ones currently treated in the article; type -concept, metaphor, theory, theorem, principle, discipline, resource, problem-; equivalent terms in French and German).
3) For the bibliographic references the normalized author-year method will be applied. E.g.
...As stated by Bateson (1973)...
...As proven (Turing 1936)...
..."is requisite to make an image?" (Peirce 1867: p. 5)...
The referred documents must be compiled in the reference section following the exemplified normalized format.
4) If the article is long (>1 p.) it should be subdivided into numbered sections (including an initial summary section)]
 
References
  • AUTHOR, N. (year). “article title”. Magazine, Vol. xx, pp. yy–zz.
  • AUTHOR, N. (year). Book title. Edition place: editor.
  • AUTHOR, N. (year). Web page title. [Online]. Edition place: Responsible organism. <page url>. [Consulted: consulting dd/mm/yy].
Entries
New entry. To make a new entry: (1) the user must be identified as an authorized user (to this end, the "sign in" link at the bottom left of the page can be followed). (2) After being identified, press the "edit page" button at the upper right corner. (3) In editing mode, substitute -under this blue paragraph- "name" with the authors' names, "date" with the date on which the text is entered, and the following line with the proposed text. At the bottom of the entry, the references used in the proposed text must be given in the normalized format. (4) To finish, press the "save" button at the upper right corner.
The entry will be reviewed by the editor and -at least- another peer, and subsequently incorporated into the article if accepted.

Author's name (dd/mm/yyyy)

[To be substituted by the author with the text of the corresponding entry]


Entries under work
Hainsch, David (05. Dec 2018, within the course "Odyssey of Philosophy and Information", facilitated by J.M. Díaz at HM)

[NOTE OF THE FACILITATOR: 
(1) The comments of the facilitator are edited using this style: brackets, 8 pt, colour change. They are introduced in between your own text to discuss and further co-elaborate the content. Whenever the authors consider an issue to have been addressed, they can simply remove the comment.
(2) Simple corrections, corresponding to quite obvious missteps or misalignment with editorial style guidelines, are made directly, marking the involved characters in red in order to let the author know what was changed. The authors can turn the text black if they agree.]

NOTE of the AUTHOR (in interaction with the facilitator and colleagues): these are edited using this style: no brackets, 8 pt, this color.

[GENERAL COMMENT ON THE REVIEW (6/12/2018): You have done a good job linking together concepts that are discussed within the glossariumBITri and that are very relevant to the understanding of information and its relation to knowledge. Apart from some missteps, the entry is well written, and you have used relevant references which are, for the most part, correctly cited.
I have entered into direct conversation with your proposition through a number of comments, with the intention of enhancing your entry and simply advancing the understanding of the subject. Regarding the general structure of the entry, I would say your abstract is rather an introduction to the topic. An abstract should briefly recap everything you have covered in the entry. Therefore it rarely contains any quotes.
There is something I have missed: I think one should start by discussing what intelligence is in the first place, but this is maybe quite a broad issue worth deepening in a separate voice/article. This is something we certainly have to do within the glossariumBITri, but for the time being I would at least start with a short clarification. In a sense you do that in a certain way through the inquiry into the difference between natural and artificial intelligence, but maybe if you re-structure the entry, providing a new abstract, you could do it there, since it is always (more or less) in the background of your text... Don't you think so?]

Abstract

Artificial Intelligence (AI) is a class of entities, most commonly machines, behaving in a manner usually typical of intelligent creatures such as humans, or of possible beings of higher intellect. As expected, the definition of AI relies heavily on the definition of intelligence itself.

One could describe intelligence less as a state of mind and more as a classification, referring to a collection of abilities a mind has. An intelligent mind is generally able to form thoughts and intentions, and is capable of ®cognition, semantic understanding and scientific inference (further significant abilities are conceivable).

Therefore AI is divided into at least two categories: weak AI, meaning any machine that behaves intelligently but is in reality only following its programming, and true AI, meaning a machine that is truly intelligent and therefore capable of forming thoughts, intentions, etc., unlike its inferior sibling.

Moreover, it is, or at the very least should be, possible to relocate a truly intelligent mind into another body while maintaining its legitimacy. Thus any true intelligence is, with respect to its legitimacy, independent of its body.

1. Introduction to the Notion

To make analogies and thought experiments in this article easier, I shall refer to artificial intelligence as AI and to its counterpart, natural intelligence, as NI. What exactly shall be defined as NI will be partly the subject of this entry.

The term Artificial Intelligence is most commonly used in a dual sense. As Techopedia states in "Artificial Intelligence (AI)" (n.d.), it is:

"an area of computer science that emphasizes the creation of intelligent machines that work and react like humans."

The Stanford Encyclopedia of Philosophy defines the term AI as follows:

"Artificial Intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals) and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons)." (Bringsjord & Govindarajulu, 2018)

The Encyclopaedia Britannica claims that

"…, the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." (Copeland, 2018)

plays a major part in the definition of AI. Copeland goes on to say that

"The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience." (Copeland, 2018)

Considering all three of the cited entries on AI, it should strike one as quite obvious that in all cases it is the endeavour to create a machine, something artificial with the ability to behave animal- or human-like. This definition, however, raises the question: where does X-like begin?

I believe that the Stanford Encyclopedia's definition comes closest to a general answer to this very ambiguous question. It relies on the perception of the being itself; instead of trying to objectify, it leans towards personalising. "Alike" does not have to mean sharing every trait a human possesses. Humans are complicated creatures by themselves already, and classifying what exactly makes something human worsens the matter. Instead, once one would say "it feels so human", we can consider it to be human-like (the same goes for the aspect of animal-like). This definition makes it easy to decide what counts as Artificial Intelligence and what doesn't.

Yet it is not the purpose of science to be democratically approved; its purpose is to define sovereign facts and concepts.

Whether one personally considers an artificial entity intelligent should not be of significance when determining whether it truly is. So what should be of significance?

Before we start defining the traits a machine has to show to be considered intelligent with near absolute certainty, we should classify which types of AI we have at hand. Unlike NI, which has to be able to sustain itself on its own, as it would cease to live if it did not, AI can be completely dependent on outside forces allowing its existence. A program does not necessarily have to provide itself with power, internet access and a comfortable server unit to run on; instead, its owner takes the wheel on these responsibilities. It can simply exist to compute, following the instruction set instilled in it. It does not have to understand the meaning behind 1+1=2, or the operation:

def respond(user_input):
    # Return a canned greeting for any text input; the program
    # manipulates symbols without understanding them.
    if isinstance(user_input, str):
        return "Hello"

It merely has to process the information in a way that accords with the instructions. It does not necessarily understand what differentiates 1 from 2 except for the assigned mathematical equation and binary identification serial. Take a human, on the other hand, and one will quickly realise that he/she knows, or at the very least grasps, the concept that twice means double the amount of one, understanding a mathematical correlation in concept, and thus knowing that one plus another one equals double the amount of one - two. A human cannot properly process what he/she does not at least begin to understand. I shall use Searle's Chinese Room experiment to expand further on this.

2. Levels of Classification of AIs

For an in-depth explanation of the Chinese Room Experiment please refer to its dedicated entry (®Chinese Room). In short, John Searle argues that a syntactical way of operating on and processing information will never result in a semantic understanding of the matter at hand. Searle compares his machine executing a program to a human doing the same things by following the same instruction set.

Although the human clearly does not understand the semantics of the input he/she receives and the output he/she returns, he/she still has to understand how to read the instructions, understand what was just read, and execute the instructions the way he/she interprets them. Nowhere in this chain is the human like a mindless machine. He/she clearly understands the semantics of the instructions; otherwise, how could they be executed? And that is where our ability to apply analogies hits a breaking point. To execute, we must first know what is asked of us. Yet a computer does not understand what it executes. Therefore a machine is fundamentally "dumber than" a human, and to determine whether it has a mind is to determine whether it understands its instructions.

Searle proved that the concept of "strong" AI cannot possibly exist. Weak AI, however, is very much possible and already among us. Weak AI is any program that seems intelligent by mimicking intelligent behaviour. Never does it truly understand the semantics of its actions and inputs. It can most certainly react to its inputs and do various things, but these capabilities have to be given by its programming. It cannot do what it was never told how to do. And it will not understand what it does.
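To make this concrete, consider a minimal sketch of the kind of pattern matching that lets a weak AI mimic conversation (a hypothetical illustration in Python; all rules and replies are invented, not any real chatbot's code):

import re

# A weak AI in miniature: canned patterns mapped to canned replies.
# The program matches character sequences; it attaches no meaning to them.
RULES = [
    (re.compile(r"\bhello\b", re.I), "Hello! How are you?"),
    (re.compile(r"\bname\b", re.I), "My name is Bot."),
    (re.compile(r"\bweather\b", re.I), "I hear it is sunny today."),
]

def reply(utterance):
    for pattern, response in RULES:
        if pattern.search(utterance):
            return response
    return "Tell me more."  # fallback that keeps up the illusion of a listener

print(reply("What is your name?"))  # -> "My name is Bot."

The program can hold up a short conversation, yet nothing in it represents what a name or the weather is; it does exactly what its instruction set prescribes, and nothing more.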

Video games have employed weak AIs since the early 2000s, and they have become quite good at presenting "smart" AIs. As Mark Brown[1], a former writer for video game news outlets such as Eurogamer and Polygon, states in his episode 'What makes good AI', part of his online web series 'Game Maker's Toolkit', "good" AI possesses the following characteristics:

1."Good AI is predictable" (Brown, 2018). Feeding an AI a certain input should allow the user to predict the general behaviour of the AI. Its reactions should be reasonable and logical, not random. A certain level of order and logic has to be present.

2. "Good AI can interact with the game’s systems" (Brown, 2018). Taking away the factor of the system being a game, it can generally be truthfully stated that any intelligent being should be able to interact with its surroundings. It can act and react, just as the environment will react to the entities actions. If it could not react to anything it is fed, how can one determine its intelligence by evaluating its reactions.

3."Good AI has its own goals“ (Brown, 2018). It should have goals that do not depend on a user being present or in an interaction with; instead, it should work to achieve its own predefined goals in the environment it is currently in. Such should be able to happen without any observation.

([1] Mark Brown's work is fit for use in a philosophical discussion, seeing that the strongest representation of weak AI happens through video games integrating it as so-called 'non-player characters'. I therefore present him as a citable professional on the subject of video games, the matter I use as an exemplary environment for weak AI. For legal matters, I do have permission from Mr Brown to reference him.)

With consideration of the points Mark Brown makes, and the fact that video-game AIs are all weak AIs, it should become clear that whatever makes video game AI "good" also applies to weak AIs deployed in other fields. I hesitate to use 'good' in a general manner, since here 'good' refers to how well the AI serves a video game. We can, however, easily substitute "good" with "weak" if we look at the general purpose of video game AI: to make decisions and take actions that make it seem like the character in focus is behaving like its real-life counterpart. So a virtual tiger should stalk and then pounce on its prey, not run from its prey in fear. A virtual human should respond to being called out, not stand silently or wander off without ever having noticed anything.

Video game AI is made to mimic intelligence. Mimicking intelligence is exactly what a weak AI does. Thus we can easily extract Mr Brown's statements and transform them.

Weak AI should be predictable in its general reaction to a known input, as in react in a logical manner, be able to interact with the environment it is located in, and have goals that do not depend upon a user being present. But it does all this whilst not understanding what it is doing and why it is doing it. [I am not completely sure about the predictability feature. A characteristic of human behaviour is actually a bounded non-predictability, which is, by the way, mimicked by weak AI. One of the good early examples is called Alice, a chat-bot you probably know; created in the 1980s, it incorporated a limited non-predictability, especially when ambiguity and uncertainty are involved. Cortana or Siri also include it.] I realise how this is confusing and can easily be misunderstood; I am referring to an even more generalised concept of reaction. Though humans can be "unpredictable", if fed information their reaction will not be completely random. Asking someone "what is your name?" will most commonly not result in them saying "Boy, do I love tomatoes". So one can predict that a human will react with this principle in mind: if asked "what is your name?", a human will respond with something connected to "name". Similarly a weak AI will react in a certain way, or in one of many possible ways, to an input, one that seems logical. A completely random reaction would feel even less intelligent than a predefined and inevitable, but logical, reaction. If what I mean to convey with this comment is conveyed through my addition in the sentence, simply delete this comment, thx. [I understand what you mean, but we need to rely on what we say and how it can be understood. Predictability is quite clear, but your addition makes your point much clearer. I'm still not sure about using predictable; maybe you could say "predictable to a certain degree" or "relatively predictable". Furthermore I'm not sure about the grammaticality of the phrase "as in react in a logical manner".]

3. What is weak AI?

As stated earlier in §2, Searle denied the existence of strong AI - an AI that can understand the semantics of its environment solely through syntactic programming and logical computation - with his Chinese Room Experiment. The basic setup of the Chinese Room Experiment is the ®Turing Test: an interrogator questions two subjects, one of which is a machine, the other a human. The machine has to convince the interrogator that it is human. If it succeeds in doing that, it can be assumed that it may be intelligent and can thereby be considered an AI. [This is the core, but how is it demonstrated? I am not sure whether understanding the instructions or being able to demonstrate a theorem is enough (enough for what?). The latter can be done starting from proper definitions and axioms, and using a calculus. However, finding out patterns, making definitions, proposing axioms, making metaphors to express what is not defined in the language, etc., are actions that cannot be performed without understanding the contents, the meanings.] I'm sorry, I don't see what you're pointing at in this paragraph that is not properly expanded upon.

This leaves two kinds of AI as plausible: the category that is able to understand the semantics of its environment, and the category that is not, the latter of which is what we consider weak AI.

A weak AI is a machine that, through observation of its behaviour, is suspected to possibly be intelligent, but is in fact not capable of semantic understanding. It merely replicates intelligent behaviour - a child pretending to do its taxes by copying everything that its father, a schooled accountant, is doing.

4. From weak to true AI

Since Searle proved strong AI to be impossible, the leap from weak to strong now turns into a leap from weak to true artificial intelligence.

An AI that can do what a strong AI was supposedly able to do, such as understand semantically, should theoretically be able to exist. There is no reason to believe that such an intelligent machine cannot exist. But as Searle proved, its way of operating must exceed syntactic and logical computing. So what capability distinguishes it from weak AI?

Intentionality. As Searle stated, a weak AI does not act upon intention, as the mere "implementation of the computer program is not by itself sufficient for consciousness or intentionality" (Cole, 2015). The next higher AI, therefore, will inevitably possess a consciousness and intentions, as this is what separates it from its inferior siblings. But to act upon intention, one has to form intentions first. And to possess a consciousness is synonymous with possessing a mind, a psyche and a soul, as the voice on ®Mind states.

Thus we can form the following statement, with fair certainty of it being true:

A true Artificial Intelligence possesses a mind, a psyche, a soul and a consciousness, manifesting in its ability to form intentions and act upon them, to understand semantically, to interact with its environment, to react in a determinably logical way to a given input, and to pursue its own goals without anyone observing.

It has to be able to uncover the meaning of something presented to it, given that the presented matter is intellectually appropriate [2]. Its capability of inference and understanding of concepts, as well as its ability to apply said understanding, even across contexts, has to be tested and proven to exist.

[2] Whether a matter is appropriate to the intellect of the AI is rather hard to determine, as we ourselves do not understand all of the concepts that exist. In that sense, to make this voice more understandable, I propose one may read it like this: it has to be able to uncover the meaning of something presented to it, given that we are, or were already, able to uncover that meaning ourselves. If this is the case, the intellectual ability of the AI can be considered similar or equal to that of a human.

5. The unacknowledgeable true AI

As I stated in my paragraph on what weak AI is, Searle's denial of strong AI left us with the categorisation into AI capable of semantic understanding and AI incapable of it. Searle's experiment is set up in a manner that necessitates the capability to communicate. Now I shall make two assumptions:

Understanding events and the semantics behind their relations

A) will always allow the being to convey the acquired understanding through communication.

B) can happen internally and does not by itself produce the ability to convey the understanding through an inter-entity language.

A question that pops up is whether understanding something conceptually will always result in the being giving this concept a defined term or name, thereby creating its own language if necessary. This, however, shall not be discussed in this voice, as it belongs more closely to the fields of research on both linguistics and psychology. [Indeed, the recurrent use of metaphors is a sign of the necessity to go beyond the given words and concepts. Actually, a metaphor combines known concepts in a relation that transcends the meaning of both; for example, "a noisy solitude" conveys a new understanding that cannot be given by the meaning of noisy or solitude alone... The interesting point here is that we can convey it. Other people can understand very well what the metaphor means. Usually art deals with new means of expression to convey something our previous communication means didn't allow, though it also involves emotion.]

Further, we are to ask whether there could be an intelligent being incapable of communication, similar to Turing's proposition that a being that suffers from the Lucas-Penrose constraint may falsely fail the Imitation Game (Turing, 1950, §6 (3)). I propose that an intelligent being incapable of communication would be categorised as 'not intelligent' simply because, with current methods, it cannot be tested on its capability of presenting semantic understanding. If such an entity exists, it should consequently be possible to recreate it as a machine - a machine able to understand like a true AI, yet working in such a way that no one could ever truthfully classify it, simply because there are no suitable testing methods yet.

Objectively we should consider such an entity truly intelligent, yet we still have to search for ways to test such entities. Tests that allow different forms of communication, or different ways of determining the existence of something crucial to intelligence, can be applied to non-humans. Knowing such universal testing methodologies, we could easily apply them to any entity.

6. The intimate relation between true AI and NI

As I stated in my paragraph 'Levels of Classification of AIs', a human that follows instructions like a machine still understands the semantics of the instructions, and therefore cannot be considered a weak AI even if we were to extract his/her mind, transform it into a processor-compatible language - comparable to C++ or Unicode - and place it in a mechanical body. Now let us reverse the extraction and relocation of intelligence: a true AI's mind/soul/psyche/consciousness is extracted, transformed into a brain-compatible language - something for which I sadly cannot even name a comparable language - and placed in a biological brain. Since it is a true AI, it should be indistinguishable from a human, or rather a true NI, when undergoing the Turing Test.

The interrogator will not be able to reliably differentiate the true AI in a natural body - let us say a human body - from the true NI, a human mind which has grown up in its body. If reversed, the interrogator will not be able to tell the true NI in a machine from the true AI which has never left the machine. In both cases, the relocated true intelligence has to adjust to its new body. Having to adjust to a new environment, and experiencing difficulties in doing so, clearly applies to both true NI and true AI; it can therefore be considered irrelevant when determining whether an intelligence is natural or artificial.

The question I mean to ask and also propose an answer for is the following: What makes an intelligence artificial or natural?

If truly intelligent minds can swap bodies and still be considered true, can we even try to classify them further?

7. Hypothesis about the bodily classification of true intelligence

One argument that may be made is that an intelligent mind's origin determines whether it is artificial or natural: a mind made by someone else, or in a machine, is artificial; a mind that grew up in a biological body is natural. Now, if I may ask, what is the process of raising a child?

Its environment feeds it information, shaping it and nurturing it. A child which receives no input - a child growing up in a pitch-black room, with no stimuli whatsoever - will not magically turn into a fully evolved and intelligent human. Its level of intelligence will never truly go beyond what it was when it was initially locked into this 'dark room'. Similarly, a computer with only the most basic instructions, such as a child with the innate instruction to e.g. cry when in need of something, will never evolve into a true AI. [Here you are adopting the tabula rasa position (there is nothing in the intellect that was not first in the senses). However, this is not generally accepted. You defined intelligence above as a capacity to interact with the environment... What if it is able to do so when moved to a heterogeneous environment where there are objects to collide with, to pick up, etc.? Besides, if the original room has gravitation and a ground, it may be able to learn to walk, so it would interact without being told.] I did consider gravity and the sensation of its effect a stimulus. Should I clarify 'dark room' as a room of absolute nothingness? After all, if there is no stimulus or anything to interact with, the child will not have anything to learn from, meaning it will not learn anything, just like a computer program that can neither create its own memories nor receive new instruction sets. Neither will evolve simply by existing; both need influence from without to grow. [As you state it now it is clearer. However, if there is absolutely no stimulus, there is no environment for that being. Even a system (of any kind) cannot be defined without its environment. Here you are proposing a thought experiment that is maybe transcending what the self is. I know this has traditionally been put as independent from the environment, but this is in my view incorrect and a fundamental misstep of Modernity, consecrated by Descartes' "I think therefore I am". Actually, he thinks because he is there in the first place; and existence is an ontological and observable property of a being, referring to its ability to interact with reality. In my view, thinking and even being is necessarily linked to interaction.]

Stimuli provided by an environment are comparable to the programming a computer receives. Thus we are all made by something else, meaning the only part of the earlier-mentioned argument left standing is bodily origin: being in a machine or in a brain respectively determines whether you are an AI or an NI. With this I present my answer to the question of what makes an intelligence artificial or natural:

The initial corpus of the intelligence determines whether it is natural or artificial. Its true-ness, however, is not dependent on the body, meaning that truly intelligent minds are interchangeable. If a true intelligence is observed/interrogated, its "artificiality" is undeterminable until its original corpus is revealed. Until that happens, it has to be assumed that the intelligence is both artificial and natural at the same time (exactly like the dead-alive duality of Schrödinger's cat). This duality, however, only exists for the observer, since a present intelligence has had to be born somehow, meaning that its original corpus has already existed and, consequently, its "artificiality" is already determined. [Do you mean that a mind needs a kind of "container" in which it can grow up, but that the mind in itself is kind of independent from the "container"?] Yes; every mind has to be created somewhere, meaning that a mind that exists must already have been born, and if it was already born then it was born in either an artificial body or a natural body, meaning that its classification as artificial or natural has already happened, just unbeknownst to us until we discover it ourselves. Like Schrödinger's cat: as long as we don't open the box, the cat is both alive and dead, but physically the cat has to be either dead or alive; we just don't know which it is. [The curious thing about Schrödinger's cat, and even quantum mechanics, is that it is physically undefined until we make the observation, and, of course, that is completely weird with respect to the very concept of being dead. It is not a question of ignorance, but a question of indeterminacy of reality itself.] It is merely unknown to us until we discover the origin. The duality is merely an illusion. [Here, are you referring to the duality you stated at the beginning between machines and humans?] No, I am referring to the duality of being both artificial and natural at the same time. [That's what I meant: machine=artificial, human=natural. Your distinction is more general.]

8. Importance for other scientific fields of expertise

Research on AI has left crater-worthy impacts on other scientific concepts and questions. In that manner, the notion that only mindless machines can exist was abandoned as early as the 20th century. Now, more than ever before, we are able to replicate intelligent behaviour in programs. And through understanding how AIs function - which psychological processes and concepts have to be transformed to be executed by a machine, and how to define them - we are able to understand intricately how intelligence itself works and what it consists of. Only once we know how to build something can we fully know what it is built from.

References

  • Bringsjord, S. and Govindarajulu, N.S. (2018). "Artificial Intelligence". Stanford Encyclopedia of Philosophy. Fall 2018 Edition. [Online]. Retrieved from: https://plato.stanford.edu/archives/fall2018/entries/artificial-intelligence/.
  • Brown, M. (2018). "What makes good AI". Game Maker's Toolkit [online video series].
  • Cole, D. (2015). "The Chinese Room Argument". The Stanford Encyclopedia of Philosophy. [Online].
  • Copeland, B.J. (2018). "Artificial intelligence". Encyclopaedia Britannica. [Online].
  • Techopedia (n.d.). "Artificial Intelligence (AI)". [Online].
  • Turing, A.M. (1950). "Computing Machinery and Intelligence". Mind, Vol. 59(236), pp. 433-460.

Höhing, Nils (27. Apr. 2019, within the course "A Journey through Philosophy and Information", facilitated by J.M. Díaz at HM)

[NOTE OF THE FACILITATOR: 
(1) The comments of the facilitator are edited using this style: brackets, 8 pt, colour change. They are introduced in between your own text to discuss and further co-elaborate the content. Whenever the authors consider an issue to have been addressed, they can simply remove the comment.
(2) Simple corrections, corresponding to quite obvious missteps or misalignment with editorial style guidelines, are made directly, marking the involved characters in red in order to let the author know what was changed. The authors can turn the text black if they agree.]

AUTHOR's NOTES (in interaction with the facilitator and colleagues): these are edited using this style: no brackets, 8 pt, this color.

[GENERAL COMMENT regarding the initial content (30/4/2019): So far you have started well. I think what you plan to cover is interesting and offers good integration with other entries. Remember that Lea is going to deal with information ethics questions, particularly those related to intercultural issues. In addition, the EC's report on IT ethical challenges, available on the website, recaps a number of open questions worth considering. I wish you an enjoyable development of the entry.]

[(13/5/2019): So far you have established links to other gB articles, but no external reference is given, which is also important as support of what you are adding in here.] 

(26.05.19) Yes, I have added the references below and will also insert them into the text later. What do you think of the content of the entry so far?

[(12/6/2019): Sorry for my late reply. I had a congress at Berkeley I've been co-organising in the past months. By the way, artificial vs natural intelligence was a big topic of the conference. So far I see it going well. After a closer inspection I may recommend moving some of the contents, or repeating them in other pages.]

Abstract
This entry tries to briefly touch on most of the ethical implications of Artificial Intelligence in order to build a basis for further development of concepts. Concepts introduced are consciousness, privacy, recommender systems, advisory systems and ethical Artificial General Intelligence. In addition, the ethics of autonomous agents, particularly self-driving cars, and the societal implications of widespread AI use are further elaborated upon.
[I understand this is just a space left for the introduction of the abstract when the article is ready.] That's correct; at least, the final abstract will be established when the article is ready. [Nevertheless, if you already have a clue of what you want to cover, this is also a good place to outline your entry. This is an advisable way to proceed: when it is done, at the beginning the abstract is like a kind of road-map to be followed, but subsequently it is modified to refer to the pathway you actually followed. This spiral way is indeed very practical and useful]

Introduction to AI ethics
The progressive introduction of AI systems has the potential of widespread disruption. It therefore poses many challenging ethical questions that will be discussed in this entry. For the definition of AI I am relying on the entry above, which defines it as
"the endeavour to create a machine, something artificial with the ability to behave animal- or human-like". But it is also worth noting that the borders of intelligence (which is included in the ability to perform human-like behavior) tend to shift as soon as an AI conquers a new field. Chess was once believed to be the epitome of intelligence, but lost this status as soon as IBM's AI "Deep Blue" managed to beat the world champion. (Bostrom & Yudkowsky, 2011, p. 3)

AI ethics is strongly connected to Information Ethics, Roboethics and also Intercultural Information Ethics [More than a subfield, I would rather say it has strong connections and overlappings with...]. The intercultural aspect has not been taken very seriously, but it increases in relevance as AI algorithms develop racial biases (e.g. in judicial advisory systems, facial recognition, object detection and many more).

In order to grasp the significance of intercultural collaboration in all ethics, let's examine the dawn of documented ethics:
The first discussion of ethics recorded by historiographers took place in ancient Athens. Due to the city's multiculturality, debates on the moral legitimacy of actions were required to unite the diverse community. A modern ethical principle derived from this phenomenon is the concept of the extended subject, which values different viewpoints and perspectives and thus helps with sustainable change.

Nowadays the ethical implications of automation rise in significance, since computers are more frequently used to make decisions instead of simply being used as tools. Previous computer ethics must therefore be updated.
The bulk of this entry will be concerned with technology that is currently regarded as AI (weak AI), while the last section will tackle Artificial General Intelligence (as defined above).
Please consider that despite the recent advances in machine learning we are still very far from achieving Artificial General Intelligence, so I will address contemporary weak-AI problems for the most part.

Consciousness
To understand what differentiates an Artificial General Intelligence (AGI) from weak AI, let's first establish another important distinction. As pointed out by Chalmers in "The Conscious Mind", consciousness is a state of qualitative experiences (Chalmers, 1995), while intelligence is concerned with the processing of information. The theory of information processing has seen major advancements, but there is no scientific consensus on how consciousness arises.

An AGI could therefore potentially be conscious, but it might just as well not be. In contrast, weak AI is generally regarded as unconscious because, on a technical level, current AI systems are just functions that are repeatedly modified until they output the desired values. This fundamentally implies that weak AI is no ethical subject. Nonetheless, the actions of weak AI systems are of ethical concern. (Bostrom & Yudkowsky, 2011, p. 7)

Privacy
Due to the inherently data-driven nature of machine learning, it requires Big Data technology. This implies the necessity of clearly stated consent agreements and of security measures to prevent potential misuse. According to the European Data Protection Supervisor, core principles of ethical data processing are necessity, proportionality, fairness, data minimization, purpose limitation, consent and transparency. Especially noteworthy is the fact that a citizen's identity can possibly be inferred from 'anonymized' datasets (Buttarelli, 2015, p. 6), which means we should use data in a highly responsible way. There is also a site on general Privacy issues.
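The re-identification risk can be made concrete with a small sketch that measures the k-anonymity of a dataset: the size of the smallest group of records sharing the same combination of quasi-identifiers. A value of 1 means some 'anonymized' record is unique and thus potentially re-identifiable (all records and field names below are invented for illustration):

from collections import Counter

def k_anonymity(records, quasi_identifiers):
    # Group records by their combination of quasi-identifier values;
    # the dataset is k-anonymous for the smallest group size k.
    groups = Counter(tuple(rec[q] for q in quasi_identifiers) for rec in records)
    return min(groups.values())

# Names are removed, yet ZIP code + birth year + gender may still
# single a person out.
records = [
    {"zip": "80335", "birth_year": 1990, "gender": "f", "diagnosis": "A"},
    {"zip": "80335", "birth_year": 1990, "gender": "f", "diagnosis": "B"},
    {"zip": "81675", "birth_year": 1955, "gender": "m", "diagnosis": "C"},
]
print(k_anonymity(records, ["zip", "birth_year", "gender"]))  # -> 1: re-identifiable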

Recommender systems
Well-known recommender systems include YouTube's recommendation algorithm and Facebook's newsfeed generator.
Google could also be considered a recommender system, since it promotes certain query results while hiding others.
These algorithms optimize for company interests; for YouTube and Facebook that is total time spent on their site. An unintended downside is the promotion of radical content that keeps the user emotionally engaged and leads to longer use time. By constantly nudging the user in that particular direction, recommender systems can additionally encroach on the user's personal autonomy. (Milano et al., p. 10)
As an increasing number of people read or watch the news online, newsfeeds generated by recommender systems can spread misinformation among millions, with unpredictable consequences. It is often suggested that human curators should be employed to supervise the generation process, since fake-news detection is still a tough task for computers.
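The objective described above can be caricatured in a few lines (a deliberately simplified sketch; all item names and numbers are invented, and a real system would use a learned model instead of a fixed table):

# An engagement-optimizing recommender in miniature: rank purely by
# predicted time-on-site; nothing in the objective distinguishes
# radicalizing from benign content.
predicted_engagement = {
    "cooking tutorial": 3.1,     # expected minutes of watch time
    "balanced news recap": 2.4,
    "outrage clip": 7.8,         # emotionally charged content engages longer
}

def recommend(candidates, k=2):
    return sorted(candidates, key=predicted_engagement.get, reverse=True)[:k]

print(recommend(list(predicted_engagement)))
# -> ['outrage clip', 'cooking tutorial']: the objective, not malice, promotes it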

Advisory systems
Advisory systems are meant to support human experts in decision making. They are being used in medicine (especially radiology), the judiciary, predictive policing and many other areas.
While potentially improving processes such as finding malignant tissue or catching criminals, this technology must be taken with a grain of salt. Its high accuracy in trials is tempting, but there are several downsides to be considered:

-The lack of interpretability poses a big dilemma. Fundamentally, machine learning/AI systems are given a goal and must figure out how to solve the problem themselves. This makes them powerful but also intrinsically hard to interpret. It remains both their biggest strength and their biggest weakness at the same time. Is it ethical to rely on such systems for decision-making without having a precise understanding of them? The European Commission's High-Level Expert Group on AI therefore demands that companies "define explanation methods of the AI system" (EC: HLEG on AI, 2018) in order to improve auditability in critical contexts.
 
-Domain experts tend to put a lot of trust in algorithms. After monitoring a well-functioning system for a while, they stop being careful. Since modern AI is very brittle, that poses a big threat: these machines make seemingly random mistakes that could easily be identified as such by a human. AI robustness research is still in its infancy, although it is a precondition for ethical and predictable AI.

-Advisory systems also exhibit a tendency to develop biases, just like humans. One technology already in use in the United States is risk assessment at court. It maps the defendant's profile to a risk score that determines the severity of the sentence. Because it was trained on historical data, the system detects features such as low income or skin tone and attributes a higher risk to people falling into those categories. For risk assessment systems to function properly, it is vital to differentiate between historical correlation and actual causation. (Hao, 2019)
In the same way that we have to point out prejudices in humans, we have to criticize them in all applications of AI. To enable this, transparency and public testing are vital. Specific biases can be addressed by adapting distributions in the training data, but they can never be completely eliminated: no matter how representative of the real world the training data is, machine learning will pick up on incidentally existing patterns, resulting in bias.
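One concrete way to make such biases visible is to compare a risk model's error rates across demographic groups; systematically unequal false positive rates are the statistical signature of the problem described above. A minimal audit sketch (all data invented):

# Each record: (predicted_high_risk, actually_reoffended, group).
def false_positive_rate(examples):
    # Share of people who did NOT reoffend but were flagged high-risk.
    flags = [pred for pred, actual, _ in examples if not actual]
    return sum(flags) / len(flags) if flags else 0.0

def audit_by_group(examples):
    groups = {g for _, _, g in examples}
    return {g: false_positive_rate([e for e in examples if e[2] == g])
            for g in groups}

data = [
    (True, False, "group A"), (False, False, "group A"),
    (True, False, "group B"), (True, False, "group B"), (False, False, "group B"),
]
print(audit_by_group(data))  # unequal rates across groups indicate bias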

Autonomous agents
Robots have long followed precisely crafted and scientifically validated routines. However, for complex actions traditional algorithms are hardly feasible. More specifically, modern approaches like neural nets (a.k.a. connectionism) are rhizomatic and non-linear replacements for the older hierarchical models.
They enable the building of autonomous agents that differ from older machines in their capability to make weighty decisions. (Wallach & Allen, 2010)

Autonomous agents such as self-driving cars pose tough ethical questions, for example determining responsibility for harm done by the machine: should the car manufacturer be held accountable for accidents caused by the autonomous vehicle?

The current state of the art in driverless cars is far from implementing concise ethical principles, apart from braking in hazardous situations. Nonetheless, debating in advance the ethical principles highly advanced autonomous cars should obey can guide research in the right direction. Also, these "decisions need to be made well before AVs become a global market" (Bonnefon et al., 2016). While it is impossible for humans to make elaborate ethical decisions in the moment of an accident, we should aim to reach super-human performance in autonomous agents in the long run.

Concrete preferable behavior, however, must be specified in advance and should be debated publicly. (Bonnefon et al., 2016)
Utilitarian cars would, for example, sacrifice their passengers if that prevents greater harm to others. According to Bonnefon et al. (2016), people find utilitarian decision making in driverless cars appealing, but still would not buy such a vehicle themselves. At this point legal regulators could enforce the use of such ethical cars to optimize for the best total outcome. However, this regulation might hinder the adoption of the new and safer technology by reducing the incentive to buy a new car.

Predictability is at the core of ethical AI because it provides people with a stable environment, which is vital for the free development of the individual. (Bostrom & Yudkowsky, 2011, p. 2) Verifying systems as safe requires the development of objective criteria for construction and systematic testing.
Interpretability and predictability, however, turn out to be tough requirements due to the nature of current machine learning.
Deep Blue, for example, reached such strong gameplay because it was built such that "the moves would tend to steer the future of the game board into outcomes in the "winning" region as defined by the chess rules". (Bostrom & Yudkowsky, 2011, p. 4)
Sacrificing the ability to specify local behavior made it possible to reach the highly complex goal of winning the game.
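The principle can be illustrated with a toy game-tree search (a minimal Nim-like sketch, not Deep Blue's actual code): only the value of final states is specified, and the search steers play toward the "winning" region without any locally prescribed moves:

# Toy game: players alternately take 1 or 2 sticks; taking the last stick wins.
def minimax(sticks, maximizing):
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - take, not maximizing)
              for take in (1, 2) if take <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # No move is prescribed locally; the choice emerges from the goal alone.
    return max((take for take in (1, 2) if take <= sticks),
               key=lambda take: minimax(sticks - take, False))

print(best_move(4))  # -> 1: leaves the opponent a losing position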

Other remaining difficulties are the misclassification of circumstances and uncertainty about decision outcomes, as well as blame assignment. Realistically, uncertainty about decision outcomes applies to all the concepts mentioned. Prediction is typically implemented as a probability estimation. However, it must also be taken into account that human decisions don't always fulfil objective criteria either, and can end in poor results. One might argue that these machines should be used as soon as they perform better than human experts. From an ethical perspective, understanding the intrinsic reasoning of AI systems definitely has a high priority, and there are already efforts to make AIs give reasons for their decisions.

Another difficulty is revealed by experimental studies: humans hold robots to different ethical standards than they do their fellow citizens in similar circumstances. (Malle et al., 2015) This might seem odd at first, but it is rationally justifiable. Robotic conduct, unlike human conduct, seems improvable and can, in various domains, reach much higher performance levels than human action. Higher potential then implies the necessity for more ethical action.
In opposition, following utilitarian reasoning, computerized decision making becomes viable as soon as its performance exceeds that of a human. Why should it be treated differently if a machine fails instead of a human? The constructor/programmer could be held accountable, but so could the owner.

To assess the permissibility of actions in the new context of AI, the application of established principles should be the first step:
The doctrine of double effect ethically justifies harm being invoked as a side-effect of promoting a good outcome. (McIntyre, 2019) Imagine an accident where an autonomous vehicle could save a group of pedestrians only by killing a single other one.
 
Although incredibly interesting, the well-known Trolley Problem (for a definition see Costa 1986 or Wikipedia) will not be discussed in depth here because of its highly hypothetical nature. It can only be applied to a minimal fraction of all accidents and is rather irrelevant in practice. (Renda, 2018, p. 2) Moreover, enabling cars to solve the trolley problem violates the predictability of autonomous vehicles. Arguably the most salient aspect of predictability for cars is staying in the lane. Swerving to other lanes or even onto the pavement causes uncertainty and fear.
Autonomous cars can prevent thousands of road fatalities without such complex reasoning about a problem that has not even been consensually solved by humans. The prioritization of such corner cases can hinder the deployment of self-driving cars, costing human lives that could have been saved even by primitive autonomous cars. For the far future, however, the Trolley Problem will require further addressing.
Resolving such dilemmata also requires highly accurate perception of the environment and exact knowledge about the world, such as how collisions work. From a utilitarian point of view, research should first focus on simple cars that can later be enhanced.

According to Renda, urban design can relieve the vehicles of many decisions. Pedestrian bridges and dedicated lanes for autonomous cars minimize interference with pedestrians. Following this train of thought, we can decide where and whether trolley-problem-like situations can occur. She establishes this as an ethical rule: "Policy decisions should give priority to alternatives that do not place robots or self-learning algorithms in a position to decide over human lives." (Renda, 2018, pp. 4-5) This principle preserves human control and is practically applicable: we are inexperienced with autonomous systems, but we do have expertise in building underpasses and bridges.


Societal Impact
"How can we ensure that the benefits of information technology are not only distributed equitably, but that they can also be used by the people to shape their own lives ?" is the core question of Intercultural Information Ethics. This effort stands in contrast to the aforementioned utilitarian principles since optimizing for the best global outcome typically establishes a minority of losers. 

Additionally, popular AI applications show disparate error rates among different user groups, which is typically attributed to the homogeneous scientific community building them. (West et al., 2019, pp. 10-11) Real-world examples are biases against African Americans in predictive policing methods as well as in judicial advisory systems, automated debt grants and many more. Even the image recognition software deployed in autonomous vehicles tends to discern people of color more poorly than white-skinned people, leading to a higher casualty risk for these demographics. (Wilson et al., 2019) This disservice to societal cohesiveness and personal freedom should be regulated by the government. Equal participation and fostering a diversity of thought through transdisciplinarity can furthermore increase ethical awareness. West et al. suggest that "tackling the challenges of bias within technical systems requires addressing workforce diversity" (West et al., 2019, p. 6) because misclassification and biases in AI aren't just a technical problem: "[T]hey can perpetuate existing forms of structural inequality even when working as intended." (West et al., 2019, p. 10) By uniting multiple perspectives in one team, unfair algorithms can be spotted more easily.

Another concern is mass unemployment facilitated by automation. In retrospect, automation has always led to a partial displacement of jobs, but it has never happened at the current rate. Rapid disruption might create a temporary unemployment crisis. In the long term, autonomous machines could generate formidable income for their owners, widening the economic and social gap. In this case a "robot tax" might be imperative to distribute wealth across all of society. If shared justly, the wealth of automation could enable us to live less job-centric lives. At first, it is estimated, AI will simply assist humans in repetitive tasks and thereby transform jobs instead of making them obsolete. In medicine, for example, advisory systems can aid doctors with diagnosis, freeing up more time for other tasks. (European Commission, 2018, p. 11)

It is the educational system's purpose to prepare children for the future labour market. This includes estimating which jobs will be expendable and which skills will be demanded by employers. Due to the rapid change in the job market, helping adults adapt to and adopt new technologies might become an additional task in the future.
On the one hand, advanced education for those losing their jobs will be required at a much larger scale than today.
On the other hand, compulsory ethics courses for those working in AI can further encourage responsible handling of their products, similar to mandatory food safety instruction in gastronomy. To quote Wiener, who today is seen as the founder of computer ethics: "The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible." (Wiener, 1954, pp. 57-58)

Presently, recommender systems, as explained above, shape our world views drastically. Their amplifying impact is most visible when extreme opinions are present. In Myanmar, animus against Muslims was spread and reinforced by platforms like Facebook, driving citizens toward a genocide. (Mozur, 2018) These harmful effects are unintentional, but they are a direct ramification of Facebook's algorithm. "However technological design decisions should not dictate our societal interactions and the structure of our communities, but rather should support our values and fundamental rights." (Buttarelli, 2015, p. 10)

Democracy is deeply linked to equality and liberty (Nafria, 2014, p. 49), both of which can potentially be violated by AI and thus must be protected. Biased algorithms deciding about debt grants could, for example, treat people of different ethnicities unequally and thereby limit their freedom of choice. In the long term such inequality "undermines the consolidation of democracy" (Houle, 2009, pp. 589-622) and diminishes interest in democratic political engagement. (Solt, 2008, pp. 48-60) (Nafria, 2014, p. 49) Countermeasures should be taken.

In times of change the legal system must adapt as well. Major changes will be needed to deal with giant multinational tech corporations in terms of fair taxation and adherence to ethical principles. We will also require mechanisms to deal with robot failures, as well as with unjust treatment by autonomous systems.
Even if such cases involve much uncertainty and probability, victims should be able to obtain redress.

Artificial General Intelligence
I argue that an AGI not exhibiting conscious behaviour falls into the same category as the weak AI algorithms deployed today, because only their actions are of ethical concern while they themselves are not ethically relevant subjects. They therefore deserve just the same ethical considerations.

Conscious General Intelligence, however, poses a plethora of novel challenges. Determining the ethical status of conscious AGI is particularly interesting, since it requires generalizing our core ethical beliefs and might possibly alter our ethical axioms.

To discover whether a conscious AI deserves a subset or all of human rights, one approach is to compare their features.
For example, consciousness can include the ability to suffer, so mistreating such an AI should be illegal for the same reasons that inflicting harm on humans is banned.
By looking at human characteristics we can also rule out irrelevant factors. Intelligence, for example, is non-essential for moral status, as can be derived from the fact that less intellectually capable humans possess the same moral value as everybody else. (Please be aware that this reasoning only works due to the axiomatic belief that all humans should have the same rights.)

Conclusion

The research on ethical AI is still sparse. If we want to see reliable and beneficial applications of machine learning, the discussion of ethics should be leading advancements instead of falling behind.

References
  • Bonnefon, J.-F., Shariff, A. and Rahwan, I. (2016). "The social dilemma of autonomous vehicles". Science, Vol. 352(6293), pp. 1573-1576. Retrieved from: https://arxiv.org/pdf/1510.03346.pdf. [Consulted: 20.05.2019]
  • Bostrom, N. and Yudkowsky, E. (2011). "The Ethics of Artificial Intelligence". Cambridge Handbook of Artificial Intelligence.
  • Bringsjord, S. and Govindarajulu, N.S. (2018). "Artificial Intelligence". Stanford Encyclopedia of Philosophy Archive. Fall 2018 Edition. [Online]. Retrieved from: https://plato.stanford.edu/archives/fall2018/entries/artificial-intelligence/#MoraAI. [Consulted: 02.06.2019]
  • Buttarelli, G. (2015). "Towards a new digital ethics: Data, dignity and technology". Opinion 4/2015, European Data Protection Supervisor.
  • Chalmers, D.J. (1995). "The Conscious Mind".
  • Costa, M.J. (1986). "The Trolley Problem Revisited". Southern Journal of Philosophy. 24(4). p.437-449.
  • European Commission. (2018). "Artificial Intelligence for Europe". EC COM 237. [Online]. Retrieved from: https://ec.europa.eu/transparency/regdoc/rep/1/2018/EN/COM-2018-237-F1-EN-MAIN-PART-1.PDF. [Consulted: 03.06.2019]
  • European Commission: High Level Expert Group on Artificial Intelligence. (2018). DRAFT Ethics Guidelines for Trustworthy AI Executive Summary. Coordinator: Smuha, N. [Online]. Retrieved from: https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai. [Consulted 08.06.2019]
  • Hao, K. (2019). "AI is sending people to jail - and getting it wrong". MIT Technology Review. [Online]. Retrieved from: https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/. [Consulted: 12.06.2019]
  • Houle, C. (2009). "Inequality and Democracy: Why Inequality Harms Consolidation but Does Not Affect Democratization". World Politics, 61(4), 589-622.
  • Malle, B.F., Scheutz, M., Arnold, T., Voiklis, J. and Cusimano, C. (2015). "Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents". Proceedings of the Tenth Annual International Conference on Human-Robot Interaction 2015. ACM. 117-124.
  • McIntyre, A. (2019). "Doctrine of Double Effect". the Stanford Encyclopedia of Philosophy. Spring 2019 Edition. Editor: Zalta, E.N. [Online]. Retrieved from: https://plato.stanford.edu/entries/double-effect/. [Consulted: 02.06.2019]
  • Milano, S., Taddeo, M. and Floridi, L. (n.d.). "Recommender Systems and their Ethical Challenges". [Online]. Retrieved from: https://philarchive.org/archive/MILRSA-3. [Consulted: 20.05.2019]
  • Mozur, P. (2018). "A Genocide Incited on Facebook, with Posts From Myanmar's Military". NY Times. [Online]. Retrieved from: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html. [Consulted: 12.06.2019]
  • Nafria, J.M.D. (2014). "Ethics at the age of information". Systema, Vol. 2 (Issue 1), 43-52.
  • Renda, A. (2018). "Ethics, algorithms and self-driving cars - a CSI of 'the trolley problem'". Policy Insights. CEPS. [Online]. Retrieved From: http://aei.pitt.edu/93153/1/PI_2018-02_Renda_TrolleyProblem_1.pdf. [Consulted: 03.06.2019]
  • Solt, F. (2008). "Economic inequality and democratic political engagement". American Journal of political science. 52(1). 48-60.
  • Wallach, W. and Allen, C. (2010). Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.
  • West, S.M., Whittaker, M. and Crawford, K. (2019). "Discriminating Systems: Gender, Race and Power in AI". AI Now Institute. Retrieved from: https://ainowinstitute.org/discriminatingsystems.html. [Consulted: 20.05.2019]
  • Wiener, N. (1954). The Human Use of Human Beings: Cybernetics and Society.
  • Wilson, B., Hoffman, J., Morgenstern, J. (2019). "Predictive Inequity in Object Detection". [Online]. Retrieved from: https://arxiv.org/pdf/1902.11097.pdf. [Consulted: 12.06.2019]

Ugne Baskutyte (04. Jun. 2019, within the course "A Journey through Philosophy and Information", facilitated by J.M. Díaz at HM)

[NOTE OF THE FACILITATOR: 
(1) The comments of the facilitator will be edited using this style, brackets, 8 pt, color change. These will be introduced in between your own text to discuss and further co-elaborate the content. Whenever the authors consider to have addressed the issue, they can simply remove the comment
(2) Simple corrections, corresponding to quite obvious missteps or disalignment with editorial style guidelines, are directly corrected, marking the involved characters in red in order to let the author know what was changed. The authors can turn it into black if they agree upon] 

AUTHOR's NOTES (in interaction with the facilitator and colleagues): these are edited using this style, no-brackets, 8 pt, this color. 

[GENERAL COMMENT regarding the initial content (12/6/2019): Dear Ugne,
As far as I see, the contribution is well oriented, though you should try to enter into more interaction with other related entries. First, the ones of your peers. You can use the commenting tool at the bottom of the page. But in addition there are other voices in the glossariumBITri with related topics. Consider, for instance, information ethics, roboethics, Turing test, Turing Halting Theorem, Incompleteness.
Regarding formats, you make a strange combination of two styles. Please use APA for in-text referencing and the bibliography list. That means, for example, indicating the page when you make a literal quote.
Cordial regards, JM]

Abstract

The purpose of this entry is to discuss the long-standing but still important and rapidly evolving phenomenon of artificial intelligence (AI). The entry starts with definitions of the terms intelligence and artificial intelligence; there is no standard definition of either. The positive and negative aspects of AI are then discussed, analyzing the necessity of interdisciplinary cooperation, the links between law and technology and the existing gap between them, and the influence on democracy, human rights, ethics and other potential issues. The conclusions regarding the impact of AI consider whether AI is beneficial and safe or, on the contrary, harmful and dangerous. The question is how to ensure progress while keeping the balance between friendly and unfriendly AI. Can a regulatory basis help here, or will regulation slow down AI progress? Could a future superintelligent AI really be dangerous for humanity?

Introduction

Game theory can be applied to almost any activity of our everyday life, not least because the theory involves mathematical models of behavior, strategies, learning and decision making. Any game, finite or infinite, has definitions, players, and specific rules defined in advance which need to be followed. Defining the terms used is a key to the success of any activity; therefore, to avoid misunderstanding, it should be done at the beginning.
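
To make this vocabulary concrete, here is a minimal sketch (a hypothetical example, not taken from any of the cited sources) of a finite two-player game in which players, strategies and payoffs are all defined in advance, and the best response to each opponent strategy can be computed from the rules alone:

# Minimal sketch of a finite two-player game (hypothetical example).
# The strategies and payoffs are fixed in advance, as the rules of the game.
PAYOFFS = {
    # (row_strategy, col_strategy): (row_payoff, col_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ["cooperate", "defect"]

def best_response(opponent_strategy, player):
    """Return the strategy that maximizes this player's payoff
    against a fixed opponent strategy."""
    def payoff(own):
        key = (own, opponent_strategy) if player == 0 else (opponent_strategy, own)
        return PAYOFFS[key][player]
    return max(STRATEGIES, key=payoff)

for s in STRATEGIES:
    print(f"vs {s}: row player's best response = {best_response(s, 0)}")

In this payoff table the best response is "defect" whatever the opponent does, which is exactly what makes the prisoner's dilemma a dilemma: once the rules are defined, they fully determine the strategic situation.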

Definitions

“Intelligence is not the ability to store information, but to know where to find it. The true sign of intelligence is not knowledge but imagination” (Albert Einstein).

A wide variety of definitions of intelligence and artificial intelligence exist, but there is still no standard definition of either (Legg & Hutter, 2006) [2].

“Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.” R. J. Sternberg quoted in [1].

The term “intelligence”

One definition of intelligence collected by Legg and Hutter [2]:

“Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

Mira (2008) indicates that intelligence “is used to describe an overall measurement of the quality of all cognitive processes of a living creature and its capacity to adapt to changes in the environment where other similar complex living creatures live” [3]. Mira approached the concept of AI both as a science of the natural and as knowledge engineering (KE) [3]. Intelligence as a science is linked to neurology and cognitive processes, whereas KE is related to problem-solving tasks and methods [3].

According to the Columbia Encyclopedia (sixth edition, 2006):

“Intelligence is the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations.”

Psychologists define intelligence as:

“Sensory capacity, capacity for perceptual recognition, quickness, range or flexibility of association, facility and imagination, span of attention, quickness or alertness in response” [2].

Legg and Hutter [2] aimed to identify the commonalities among the different definitions of intelligence and proposed the following definition:

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
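
In later work Legg and Hutter also turned this informal statement into a formal “universal intelligence” measure. As a rough sketch in their notation: an agent π is scored by the expected cumulative reward V_μ^π it achieves in each computable environment μ, with simpler environments (those of lower Kolmogorov complexity K(μ)) weighted more heavily:

    Υ(π) = Σ_{μ ∈ E} 2^(−K(μ)) · V_μ^π

On this measure an agent counts as more intelligent the better it performs across the whole space of environments E, not in any single one.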

Artificial intelligence

AI is focused on the development of machines capable of simulating mindful behavior. It is really hard to predict the speed of advances in AI technologies. Artificial intelligence is expanding very fast into various areas of our lives and, consequently, is triggering different kinds of reasoning, evaluations and fears. AI systems are able to work without human intervention and have the ability to learn and adapt to a changing environment.

“AI is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans, including speech recognition, learning, planning and problem solving” (Techopedia, 2019) [4].

“Intelligent systems are expected to work, and work well, in many different environments. Their property of intelligence allows them to maximize the probability of success even if full knowledge of the situation is not available. Functioning of intelligent systems cannot be considered separately from the environment and the concrete situation including the goal” (Legg & Hutter [2]).
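
As a toy illustration of this point (a hypothetical sketch, not drawn from the cited sources), the epsilon-greedy agent below never sees the true reward probabilities of its two actions; it estimates them from experience, keeps exploring, and lets old experience fade, so it adapts even when the environment changes midway:

import random

def run(steps=2000, epsilon=0.1, step_size=0.1):
    estimates = [0.0, 0.0]   # learned value of each action
    probs = [0.8, 0.2]       # true success probabilities (hidden from the agent)
    for t in range(steps):
        if t == steps // 2:
            probs = [0.2, 0.8]                         # the environment changes
        if random.random() < epsilon:
            action = random.randrange(2)               # explore
        else:
            action = estimates.index(max(estimates))   # exploit current knowledge
        reward = 1.0 if random.random() < probs[action] else 0.0
        # a constant step size lets old experience fade, enabling adaptation
        estimates[action] += step_size * (reward - estimates[action])
    return estimates

print(run())   # after the change, action 1 ends up with the higher estimate

The agent maximizes its probability of success without ever having full knowledge of the situation, which is the property the quoted definition emphasizes.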

It is typical for an individual to deny what he does not fully understand or to reject what he sees as a threat to his privacy, rights, welfare or safety. Miller (2019) [5] agrees that “many AI applications have limited take up, or are not appropriated at all, due to ethical concerns and a lack of trust on behalf of their users”.

Positive aspects of AI

While considering the positive impact of AI, the application of AI in health care, e.g. for diagnostics and the treatment of different diseases, is worth mentioning. A lot of companies also use AI technologies in their recruitment processes. Emerging advanced AI technologies can facilitate the development of new services for citizens and society and speed up access to information resources.

The rapid progress of computer systems, machine learning, big data and cloud computing, together with relatively high investments in intelligent systems, greatly increases the application of AI in a wide variety of industries, as well as in health care, education and the military. User-friendly collaborative robots are rather safe colleagues of human operators in the automotive and metalworking industries, and they serve as advanced office helpers and as assistants in complex surgeries, nursing and elderly care, space exploration, rescue operations, emergency accidents, etc.

“Partially autonomous and intelligent systems have been used in military technology since at least the Second World War, but advances in machine learning and Artificial Intelligence (AI) represent a turning point in the use of automation in warfare” (Allen, G., Chan, T., 2017) [7].

What is important when talking about AI in warfare or in other industries is that positive, friendly AI should have the higher priority. A rather telling example here may be the vision of Continental's autonomous robot delivery dogs, which possibly have their origin in Boston Dynamics' military robots [8].

All AI developments and applications should ensure a balance between artificial intelligence development and human control, aiming to reach friendly artificial intelligence [5].

Negative aspects of AI

AI systems which at first sight look user- and society-friendly may actually have a hidden direct or indirect negative impact on individuals and society, and not only due to accidental or even intentional programming mistakes or hardware malfunctions. A lot of outstanding AI inventions started as military innovations and were transferred to the public sector after some time. Defense and security are undoubtedly important, but what about the design and application of autonomous robots and AI systems which have the potential to cause injuries, to do harm by attacking targets, or even to inflict lethal harm on humans? AI may cause a large number of ethical issues [9], not only because of the loss of jobs, violations of the human rights to security and privacy, increased inequality and AI-enabled terrorism, but also due to transformations in social behavior and perception, difficulties in communication, weakened memorizing and analytical skills, and the resulting anxiety, addiction, loneliness and insecurity of the individual.

Shank and De Santi (2018) present a novel approach to real-world moral violations caused by AI, using real-world examples with and without a violation outcome.

“Evolving capabilities of the AI, sophisticated software makes more decisions that traditionally fall into human domains such as aesthetics, taste, subjective preferences, emotion, and even morality” (Shank & De Santi, 2018) [9].

Example from Shank and De Santi (2018): Google's image search engine AI software [9]:

“No violation outcome: The outcome is that people who did a Google image search for “gorilla” were shown pictures of gorillas in the results.

Violation outcome: The outcome is that people who did a Google image search for “gorilla” were shown a black woman's picture in the results” [9].
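
One common engineering response to such violation outcomes (sketched hypothetically below; Google's actual remedy reportedly consisted in suppressing the offending label altogether) is to gate sensitive, low-confidence labels before they ever reach users:

# Hypothetical sketch of a post-classification safeguard: sensitive labels
# are suppressed unless the classifier is very confident, trading some recall
# for a lower risk of a morally harmful mislabeling.
SENSITIVE_LABELS = {"gorilla", "chimpanzee"}   # assumed example set
CONFIDENCE_FLOOR = 0.99

def filter_labels(predictions):
    """predictions: list of (label, confidence) pairs from an image classifier."""
    safe = []
    for label, confidence in predictions:
        if label in SENSITIVE_LABELS and confidence < CONFIDENCE_FLOOR:
            continue   # withhold the label rather than risk a violation outcome
        safe.append((label, confidence))
    return safe

print(filter_labels([("gorilla", 0.72), ("outdoors", 0.95)]))
# -> [('outdoors', 0.95)]: the risky low-confidence label is dropped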

Risse (2018) considers in his publication the challenges AI generates for human rights:

“Any rights to security and privacy are potentially undermined not only through drones or robot soldiers, but also through increasing legibility and traceability of individuals in a world of electronically recorded human activities and presences” (Risse, 2018) [10].

“AI will drive a widening technological wedge into societies that leaves millions excluded, renders them redundant as market participants and thus might well undermine the point of their membership in political community” (Risse, 2018) [10].

AI has a negative impact when it is used by individuals, companies or governments aiming intentionally, for profit or other reasons, to filter or block media content or to spread disinformation through technology [11].

Marsden and Meyer (2019) analyzed the impact of AI disinformation initiatives on freedom of expression, media pluralism and democracy [11].

Hassabis, Kumaran, Summerfield and Botvinick (2017) considered the role of neuroscience in accelerating AI research; conversely, AI development will certainly have a progressive impact on the development of neuroscience [12]. To accelerate the progress and development of AI, collaboration between neuroscience researchers and AI algorithm developers is highly important.

“Distilling intelligence into an algorithmic construct and comparing it to the human brain might yield insights into some of the deepest and the most enduring mysteries of the mind, such as the nature of creativity, dreams, and perhaps one day, even consciousness” [12].

Fosch Villaronga, Kieseberg and Li (2018) discussed the privacy-related Right to Be Forgotten and its applicability to AI, considering not only legal but also technological aspects, as well as the importance of interdisciplinary research. Safeguarding human privacy and complying with data-deletion requirements may be technically rather difficult and can cause technical problems in AI environments, especially in complex, large-volume data systems. It is important to mind the gap between the law and the emerging advanced technologies of AI [13].

“For now, we can only conclude by stating that the AI and Right to Be Forgotten problem can hence be summed as: Humans forget, but machines remember” [13].
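
A tiny sketch (hypothetical, with made-up numbers) of why this is technically so: deleting a record from the stored dataset does not remove its influence from a model that was already trained on it; only retraining, or dedicated “machine unlearning” techniques, can do that.

# Hypothetical sketch: a trained model keeps the influence of deleted data.
# One-feature linear fit y ≈ w * x by least squares through the origin.
def fit(data):
    return sum(x * y for x, y in data) / sum(x * x for x, y in data)

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 100.0)]  # last row: one person's record
w_trained = fit(data)         # the deployed model, w ≈ 14.3

del data[-1]                  # the record is "forgotten" in storage...
w_after_delete = w_trained    # ...but the deployed model is unchanged
w_retrained = fit(data)       # only retraining removes the influence, w ≈ 2.0

print(w_trained, w_after_delete, w_retrained)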

According to Hawking (2016), “the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity” [14], and it is important to take wise actions and act responsibly.

Aiming to decrease the negative impact of AI, it is necessary not only to educate society but also to think about regulatory measures.

There are very few laws or regulations that address the challenges raised by AI, and no courts appear to have developed standards so far addressing who is legally responsible if an AI causes harm [6].

There is no doubt that the risks associated with emerging AI technologies cannot be ignored; they require safety-engineering solutions, global collaboration and the monitoring of AI initiatives (Makridakis, 2017) [15].

The issues related to AI implementation concern not only completely new systems. Information technology (IT) systems are already used in different sectors; therefore, it is of great importance to ensure the integration of AI into existing IT systems, as well as to develop strategies for improving the digital skills of potential users, as discussed by Kankanhalli et al. (2019) [16].

The wide expansion of AI systems sometimes causes fear among users and results in the spread of rather controversial ideas about smart intelligent systems replacing humans in decision making. Jarrahi (2018) shares the idea “of intelligence augmentation, which states, that AI systems should be designed with the intention of augmenting, not replacing human contributions” [17].
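
A minimal sketch of this augmentation pattern (all function names are hypothetical): the AI component ranks the options and narrows them down, while the actual decision remains with the human.

# Hypothetical sketch of "intelligence augmentation": the system suggests,
# the human decides.
def rank_options(options, score):
    """AI side: sort candidate decisions by a model-provided score."""
    return sorted(options, key=score, reverse=True)

def decide(options, score, ask_human):
    """Human side: the top-ranked suggestions are shown, not enacted."""
    shortlist = rank_options(options, score)[:3]
    return ask_human(shortlist)   # the human picks, or rejects, a suggestion

candidates = ["hire A", "hire B", "hire C", "hire D"]
scores = {"hire A": 0.6, "hire B": 0.9, "hire C": 0.4, "hire D": 0.7}
choice = decide(candidates, scores.get, ask_human=lambda s: s[0])  # stub human for demo
print(choice)   # "hire B": suggested by the model, confirmed by the (stub) human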

Turchin (2019) introduces in his research “the notion of dangerous AI, which is powerful enough to create global catastrophic risks (GCR)” and indicates the need to prepare adequate safety measures [18].

A similar concern about “super intelligent AI coming up in a few decades, bringing with it significant risks for humanity” is expressed by Müller and Bostrom (2016) [19]. Education strategies, a monitored regulatory basis and interdisciplinary cooperation among researchers should help to resolve the future safety issues of superintelligent AI.

Conclusions

(1) Regulatory legal supervision is required to monitor future advanced developments of AI.
(2) Regulations should not stop AI progress, but rather maintain awareness and balance.
(3) For AI development and application to succeed, it is important to define digital education strategies for users.
(4) AI does not have a completely positive or a completely negative impact on society. Engineers and other creators should remember in their activities the ethics of the most important statement in any professional code or oath: "First, do no harm" (Latin: Primum non nocere).

References

  1. Gregory, R. L. (2004). The Oxford Companion to the Mind. Oxford: Oxford University Press, 1024 pp.
  2. Legg, S. and Hutter, M. (2006). A Collection of Definitions of Intelligence. [Online]. Retrieved from: http://www.vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
  3. Mira, J. (2008). Symbols versus connections: 50 years of artificial intelligence. Neurocomputing, Vol. 71, Issues 4–6, pp. 671–680.
  4. Techopedia. (2019). Artificial Intelligence (AI). [Online]. Retrieved from: https://www.techopedia.com/definition/190/artificial-intelligence-ai
  5. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, Vol. 267, pp. 1–38.
  6. Gurkaynak, G., Yilmaz, I. and Haksever, G. (2016). Stifling artificial intelligence: Human perils. Computer Law & Security Review, Vol. 32, pp. 749–758.
  7. Allen, G. and Chan, T. (2017). Artificial Intelligence and National Security. Cambridge: Belfer Center for Science and International Affairs, Harvard Kennedy School, July 2017, 120 pp. Retrieved from: https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf
  8. Continental robot delivery dog demo at CES 2019. [Online]. Retrieved from: https://youtu.be/nD_jjnIi_S0
  9. Shank, D. B. and De Santi, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, Vol. 86, pp. 401–411.
  10. Risse, M. (2018). Human Rights and Artificial Intelligence: An Urgently Needed Agenda. Cambridge: Harvard Kennedy School, May 2018, 17 pp.
  11. Marsden, C. and Meyer, T. (2019). Regulating disinformation with artificial intelligence: Effects of disinformation initiatives on freedom of expression and media pluralism. European Parliamentary Research Service, March 2019, 94 pp. Retrieved from: http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf
  12. Hassabis, D., Kumaran, D., Summerfield, C. and Botvinick, M. (2017). Neuroscience-Inspired Artificial Intelligence. Neuron, Vol. 95, Issue 2, pp. 245–258. https://doi.org/10.1016/j.neuron.2017.06.011
  13. Fosch Villaronga, E., Kieseberg, P. and Li, T. (2018). Humans forget, machines remember: Artificial intelligence and the Right to Be Forgotten. Computer Law & Security Review, Vol. 34, pp. 304–313.
  14. Hawking, S. (2016). Will AI kill or save humankind? [Online]. BBC News. Retrieved from: https://www.bbc.com/news/av/technology-37713942/stephen-hawking-warns-of-dangerous-ai
  15. Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, Vol. 90, pp. 46–60.
  16. Kankanhalli, A., Charalabidis, Y. and Mellouli, S. (2019). IoT and AI for Smart Government: A Research Agenda. Government Information Quarterly, Vol. 36, Issue 2, pp. 304–309. https://doi.org/10.1016/j.giq.2019.02.003
  17. Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human–AI symbiosis in organizational decision making. Business Horizons, Vol. 61, pp. 577–586.
  18. Turchin, A. (2019). Assessing the future plausibility of catastrophically dangerous AI. Futures, Vol. 107, pp. 45–58.
  19. Müller, V. C. and Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence, pp. 555–572.

Incorporated entries

Whenever an entry is integrated in the article (left column) the corresponding entry is reflected in this section.

  

Comments