Talk:Philosophy of artificial intelligence



WikiProject Philosophy: Science / Mind / Contemporary (Unassessed, High-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has been rated as High-importance on the project's importance scale.
Associated task forces: Philosophy of science, Philosophy of mind, Contemporary philosophy

"A major influence in the AI ethics dialogue was Isaac Asimov who fictitiously created Three Laws of Robotics to govern artificial intelligent systems." I've removed "fictitiously." While the Three Laws of Robotics were created for a fictitious universe, Asimov really did create them. It might be appropriate to somehow add that he developed them for his science fiction books. goaway110 21:57, 22 June 2006 (UTC)[reply]

I don't see why "Ethical issues of AI" should be an independent encyclopedia entry. The lexicographic lemma here is surely "Artificial Intelligence". --Fasten 14:41, 7 October 2005 (UTC)[reply]

I disagree, Fasten. I think the main AI article should briefly mention ethical issues and we should keep this as a separate article; the subject can be extended much further to include uses of AI (in wars, in saving people from dangerous conditions, in working under unhealthy circumstances such as mining), AI as a possible future Technological singularity (i.e., what will happen if AI eventually becomes more intelligent and capable than humans), deeper discussions about the possibility of sensations (qualia) and consciousness in AI, some comments on what will happen if AI becomes widespread in future society with behavior, appearance and activities very similar to ours, and issues such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?" Rend 01:29, 17 October 2005 (UTC)[reply]

The first question I would have regarding the ethics of AI would be whether it is possible for a machine to be capable of consciousness. This is obviously a very difficult question given the fact that no human being can really be capable of knowing anyone's internal existence other than his own. Hell, maybe computers really do have consciousness. But if they do, they would be the only ones who would know this for certain, since it is difficult to ask a computer if it exists without programming it to say it exists beforehand. I believe animals have consciousness and are capable of feeling emotions even though they cannot tell us this. Also, would the fact that a computer wasn't capable of consciousness, much less emotions, mean that it should not be protected and given rights? This may seem improper, but I can't help but bring to mind the Terri Schiavo case. It is very possible that she was fully conscious and fully capable of emotions even though she was in a permanently catatonic state.207.157.121.50 12:49, 25 October 2005 (UTC)[reply]

That might get difficult without OR. I changed the merge suggestion from "Artificial Intelligence" to "Artificial intelligence (philosophy)", which is referred to by the Portal:Artificial_intelligence --Fasten 13:51, 19 October 2005 (UTC)[reply]

The subject can be extended much further to include:

  • Use of AI in wars
  • Use of AI in conditions hazardous to humans (saving people from fire, drowning, poisoned or radioactive areas)
  • Use of AI in human activities (doing human work, AI failing, substituting for human jobs, doing unhealthy or dangerous work (e.g. mining), what will happen if AI gets better than us in most of our work activities, what AI will not be able to do (at least in the near future))
  • AI as a possible future Technological singularity: what will happen if AI eventually becomes more intelligent and capable than humans, able to produce even more intelligent AIs, possibly to a level that we won't be able to understand
  • Deeper discussions about the possibility of sensations (qualia) and consciousness in AI
  • Some comments on what will happen if AI becomes widespread in future society with behavior, appearance and activities very similar to or even better than ours (could this bring problems about machine "treatment"? I mean, could we still throw them away as if they were simply an expensive toy, if they become better than us in all our practical activities?)
  • As AI usage and presence become greater and more widespread, should we discuss issues such as "should AI have rights and obligations?" and "does it make sense to create laws for AI beings to obey?"

I ask anyone who has references and content to include them properly. Rend 23:11, 21 October 2005 (UTC)[reply]

Just following up on some of Rend's questions...

  • Is it ethical for a person to own an AI "being"? Would an AI being necessarily "prefer" to be unowned?
  • A computer is owned by whoever owns the factory that makes it (until the factory sells the computer to a person) -- is the same true of an AI being?
  • If an AI being is unowned, and it builds another AI being, then does the first AI being own the second one?
  • Are the interests of human society served by incorporating unowned AI beings into it? Would humans in such a society be at a competitive disadvantage?
  • Would the collective wisdom of AI beings come to the conclusion that humans are but one of many forms of life on the planet, and therefore humans don't deserve any more special treatment than, say, mice? Or lichen?

Whichever way these questions are answered, more questions lie ahead. For example, if we say it isn't ethical for a person to own an AI being, then can or should society as a whole constrain the behavior of AI beings through ownership or through "laws of robotics"? If we are able to predict that the behavior of AI beings will not be readily channeled to the exclusive benefit of humans, then is there a "window of opportunity" to constrain their behavior before it gets "out of hand" (from human society's point of view)?

A survey of current philosophical thought on questions such as these (and the slippery slope issues surrounding them) would be very helpful here.—GraemeMcRaetalk 05:55, 3 November 2005 (UTC)[reply]

Maudlin

The article might benefit from a discussion of Maudlin's "Olympia" argument. 1Z 00:32, 12 January 2007 (UTC)[reply]

The Real Debate.

This article should contain more discussion of the serious academic debates about the possibility/impossibility of artificial intelligence, including such critics as John Lucas, Hubert Dreyfus, Joseph Weizenbaum and Terry Winograd, and such defenders as Daniel Dennett, Marvin Minsky, Hans Moravec and Ray Kurzweil. John Searle is the only person of this caliber who is discussed.

In my view, issues derived from science fiction are far less important than these. Perhaps they should be discussed on a page about artificial intelligence in science fiction. Is there such a page? CharlesGillingham 11:02, 26 June 2007 (UTC)[reply]

Yes. -- Schaefer (talk) 12:59, 26 June 2007 (UTC)[reply]
Some text could be moved to Artificial intelligence in fiction. Critics can be listed here, but maybe a discussion of the debate belongs in Strong AI vs. Weak AI? --moxon 15:20, 12 July 2007 (UTC)[reply]

Some interesting stuff re Turing Test and Marvin Minsky and List of open problems in computer science

I cc'd this over from the Talk:Turing machine page:

> Turing's paper that proposes his Turing Test:

Turing, A.M. (1950) "Computing Machinery and Intelligence" Mind, 59, 433-460. At http://www.loebner.net/Prizef/TuringArticle.html

"Can machines think?" Turing asks. In §6 he discusses 9 objections, then in his §7 admits he has " no convincing arguments of a positive nature to support my views." He supposes that an introduction of randomness in a learning machine. His "Contrary Views on the Main Question":

  • (1) The Theological Objection
  • (2) The "Heads in the Sand" Objection
  • (3) The Mathematical Objection
  • (4) The Argument from Consciousness
  • (5) Arguments from Various Disabilities
  • (6) Lady Lovelace's Objection
  • (7) Argument from Continuity in the Nervous System [i.e. it is not a discrete-state machine]
  • (8) The Argument from Informality of Behavior
  • (9) The Argument from Extrasensory Perception [apparently Turing believed that "the statistical evidence, at least for telepathy, is overwhelming"]

re Marvin Minsky: I was reading the above comment describing him as a supporter of AI, which I was unaware of. (The ones I do know about are Dennett and his zombies -- of "we are all zombies" fame -- and Searle.) Then I was reading Minsky's 1967 book and I saw this:

"ARTIFICAL INTELLIGENCE"
"The author considers "thinking" to be within the scope of effective computation, and wishes to warn the reader against subtly defective arguments that suggest that the difference beween minds and machines can solve the unsolvable. There is no evidence for this. In fact, there couldn't be -- how could you decide whether a given (physical) machine computes a noncomputable number? Feigenbaum and Feldman [1963] is a collection of source papers in the field of programming computers that behave intelligently." (Minsky 1967:299)
  • Marvin Minsky, 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. ISBN: none. Library of Congress Card No. 67-12342.

I have so many wiki-projects going that I shouldn't undertake anything here. I'll add stuff here as I run into it. (My interest is "consciousness" as opposed to "AI" which I think is a separable topic.) But on the other hand, I have something going on at the List of open problems in computer science article (see the talk page) -- I'd like to enter "Artificial Intelligence" into the article ... any help there would be appreciated. wvbaileyWvbailey 02:28, 2 October 2007 (UTC)[reply]

List of open problems in computer science: Artificial Intelligence

Here it is, as far as I got:

Source:

In the article "Prospects for Mathematical Logic in the Twenty-First Century", Sam Buss suggests a "three-fold view of proof theory" (his Table 1, p. 9) that includes in column 1, "Constructive analysis of second-order and stronger theories", in column 2, "Central problem is P vs. NP and related questions", and in column 3, "Central problem is the "AI" problem of developing "true" artificial intelligence" (Buss, Kechris, Pillay, Shore 2000:4).

"I wish to avoid philosophical issues about consciousness, self-awareness and what it means to have a soul, etc., and instead seek a purely operational approach to articial intelligence. Thus, I define artificial intelligence as being constructed systems which can reason and interact both syntactically and semantically. To stress the last word in the last sentence, I mean that a true artifical intelligence system should be able to take the meaning of statements into account, or at least act as if it takes the meaning into account." (Buss on p. 4-5)

He goes on to mention the use of neural nets (i.e. analog-like computation that seems not to use logic -- I don't agree with him here: logic is used in the simulations of neural nets -- but that's the point -- this stuff is open). Moreover, I am not sure that Buss can eliminate "consciousness" from the discussion. Or is consciousness a necessary ingredient for an AI?
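On the point that simulations of neural nets still run on ordinary logic, here is a minimal illustrative sketch (my own toy example, not anything from Buss): a single threshold "neuron" computed with plain deterministic arithmetic on a digital machine.

    # Toy sketch (hypothetical example): a single threshold "neuron"
    # simulated with ordinary deterministic arithmetic -- the simulation
    # itself is just logic and arithmetic on a digital computer.

    def step(x):
        # Threshold activation: a purely logical yes/no decision.
        return 1 if x >= 0 else 0

    def neuron(inputs, weights, bias):
        # Weighted sum followed by a threshold -- nothing analog here.
        total = sum(w * x for w, x in zip(weights, inputs))
        return step(total + bias)

    # A unit hand-wired to behave like logical AND.
    weights, bias = [1.0, 1.0], -1.5
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron([a, b], weights, bias))

Whether such a simulation settles anything about consciousness is, of course, exactly the open question.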

Description:

Mary Shelley's Frankenstein and some of the stories of Edgar Allan Poe (e.g. The Tell-Tale Heart) opened the question. Also Lady Lovelace [??] Since the 1950s the use of the Turing Test has been a measure of success or failure of a purported AI. But is this a fair test? [quote here?] (Turing, Alan, 1950, Computing Machinery and Intelligence, Mind, 59, 433-460. http://www.loebner.net/Prizef/TuringArticle.html)
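For readers unfamiliar with the mechanics of the test, a minimal sketch of the imitation-game protocol (my own schematic framing, not code from Turing's paper): a judge exchanges questions with two hidden respondents and must decide which one is the machine.

    # Schematic sketch (hypothetical framing) of the imitation game:
    # a judge questions two hidden respondents, "A" and "B", and must
    # guess which one is the machine.
    import random

    def imitation_game(ask_judge, human_reply, machine_reply, judge_guess, rounds=5):
        # Randomly hide the machine behind label "A" or "B".
        labels = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            labels = {"A": machine_reply, "B": human_reply}
        transcript = []
        for _ in range(rounds):
            question = ask_judge(transcript)
            answers = {lab: reply(question) for lab, reply in labels.items()}
            transcript.append((question, answers))
        guess = judge_guess(transcript)                      # "A" or "B"
        truth = "A" if labels["A"] is machine_reply else "B"
        return guess == truth                                # was the judge right?

    # Toy run with canned behaviour, just to exercise the skeleton.
    print(imitation_game(
        ask_judge=lambda t: "What is 2 + 2?",
        human_reply=lambda q: "Four, I think.",
        machine_reply=lambda q: "4",
        judge_guess=lambda t: "B",
        rounds=1,
    ))

The machine "passes" to the extent that judges do no better than chance at identifying it; whether passing is a fair criterion for intelligence is the question raised above.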

A problem statement requires both a definition of "intelligence" and a decision as to whether, and if so how much, to fold "consciousness" into the debate.

> Philosophers of Mind call an intelligence without a mind a zombie (cf. Dennett, Daniel 1991, Consciousness Explained, Little, Brown and Company, Boston, ISBN 0-316-180066-1 (pb)):

"A philospher's zombie, you will recall, is behaviorally indistinguishable from a normal human being, but is not conscious. There is nothing it is like to be a zombie; it just seems that way to observers (including itself, as we saw in the previous chapter)". (italics added or emphasis) (Dennett loc cit:405)

Can an artificial, mindless zombie truly be an AI? No, says Searle:

"Information processing is typically in the mind of an observer . . . the addition in the calculator is not intrinsic to the circuit, the addition in me is intrinsic to my mental life.... Therefore, we cannot explain the notion of consciouness in terms of information processing and symbol manipulations" (Searle 2002:34). "Nothing is intrinsically computational. computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon" (Searle 2002:17).

Yes, says Dennett:

"There is another way to address the possibility of zombies, and in some regards I think it is more satisfying. Are zombies possible? They're not just possible, they're actual. We're all zombies [Footnote 6 warns not to quote out of context here]. Nobody is conscious -- not in the systematically mysterious way that supports such doctrines as epiphenomenalism!"((Dennett 1991:406)

> Gandy 1980 throws around the word "free will". For him it seems to be an undefined concept, interpreted by some (Sieg?) to mean something on the order of "randomness put to work in an effectively-infinite computational environment", as opposed to "deterministic" or "nondeterministic", both in a finite computational environment (e.g. a computer).

> Gödel's quote: "...the term "finite procedure" ... is understood to mean "mechanical procedure" ... [the] concept of a formal system whose essence it is that reasoning is completely replaced by mechanical operations on formulas ... [but] the results mentioned in this postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics." (Gödel 1964 in Undecidable:72)

Importance:

> AC (artificial consciousness, an AI with a feeling mind) would be no less than an upheaval in human affairs

> AI as helper or scourge or both (robot warriors)

> Philosophy: nature of "man", "man versus machine", how would man's world change with AIs (robots)? Would it be a good or an evil act to create a conscious AI? What would it be like to be an AI? (cf. Nagel, Thomas 1974, "What Is It Like to Be a Bat?", Philosophical Review 83:435-50. Reprinted on p. 219ff in Chalmers, David J. 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York, ISBN 0-19-514581-X.)

> Law: If conscious, does the AI have rights? What would be those rights?

Current Conjecture:

An AI is feasible/possible and will appear within this century.

This outline is just throwing down stuff. Suggestions are welcome. wvbaileyWvbailey 16:13, 6 September 2007 (UTC)[reply]

cc'd from Talk:List of open problems in computer science. wvbailey Wvbailey 02:41, 2 October 2007 (UTC)[reply]