Talk:Inferno Peak and Talk:Artificial general intelligence: Difference between pages

== Requirements of strong AI ==
The first sentence of 'Requirements of strong AI': The Turing test is not a ''definition'' of intelligence, it is a ''test'' of intelligence. --[[Special:Contributions/62.16.187.35|62.16.187.35]] ([[User talk:62.16.187.35|talk]]) 19:45, 12 June 2008 (UTC)

Hello,
I have a problem with these requirements.
For example 'be able to move and manipulate objects (robotics)'. So you can't have Strong AI if it is not a robot? Strong AI cannot exist inside a computer, by definition?
A paralyzed human is still considered intelligent. Or maybe this article means that IF Strong AI has a physical form, it then has the intelligence to use it? Or something..

It also has to see? It can't be intelligent without vision? So all blind people in the world fail this test? I'm fine with it having to perceive something, but not "especially see". Right under these requirements it reads "Together these skills provide a working definition of what intelligence is, ....".

Maybe I'm just not understanding Strong AI correctly.
[[Special:Contributions/88.114.252.161|88.114.252.161]] ([[User talk:88.114.252.161|talk]]) 19:33, 2 October 2008 (UTC)

== Computer implementation of brain ==

A few points worth adding

(1) The parallel vs speed issue is a red herring, because computers can be designed to operate in parallel, in the same way as the brain. Given a sufficiently large number of transistors, one could create a simulation of a brain which simulated all neurons in parallel and operated incredibly fast compared to a human brain. (The number of transistors would be vast, however.)
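The "update every neuron at once" idea can be sketched in a few lines. This is only a toy illustration of synchronous parallel-style updates (the network size, random weights, and tanh update rule are arbitrary assumptions, not any neuron model proposed here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                      # hypothetical neuron count
weights = rng.normal(0, 0.1, size=(n, n))     # assumed dense synaptic weights
state = rng.random(n)                         # current activation of each neuron

def step(state):
    """Advance ALL neurons in one synchronized tick, as dedicated
    parallel hardware could, rather than looping over them serially."""
    return np.tanh(weights @ state)           # every neuron updated at once

for _ in range(10):
    state = step(state)

print(state.shape)
```

Vectorized array code like this is itself just a serial stand-in; the point of the comment is that nothing prevents mapping each neuron's update onto its own piece of hardware.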

(2) If one accepts that it is possible to simulate the operation of a single human cell to a high degree of accuracy, then one is forced to accept that it is possible in principle to create a strong AI via simulation of a human from conception, at a cellular level.

(3) Though the computing power required for such a simulation would be huge, it is likely that a highly detailed model would not be required, and the full simulation would need to be done only once, since the resulting artificial person could be copied/probed/optimized as required. This makes the possibility somewhat more feasible. It might take a million-processor supercomputer 20 years to generate a simulated brain, but it might then be possible to reduce the complexity required by a factor of a million.
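For what it's worth, the scale of that claim can be put in rough numbers. Every figure below is an order-of-magnitude assumption (commonly cited ballpark counts for neurons and synapses, an assumed update rate, and an assumed per-CPU throughput), not a measurement:

```python
# Back-of-envelope feasibility arithmetic; all inputs are assumptions.
neurons = 1e11          # ~10^11 neurons in a human brain (common estimate)
synapses = 1e15         # ~10^14-10^15 synapses (upper common estimate)
updates_per_sec = 1e3   # assumed updates per synapse per simulated second

ops_needed = synapses * updates_per_sec   # ~1e18 synapse-updates/s for real time

cpus = 1e6              # the million-CPU grid from point (4)
ops_per_cpu = 1e9       # ~10^9 useful updates/s per CPU (assumed)
grid_capacity = cpus * ops_per_cpu        # ~1e15 updates/s

slowdown = ops_needed / grid_capacity
print(slowdown)   # 1000.0 -> such a grid runs ~1000x slower than real time
```

Under these assumptions a million-CPU grid falls about three orders of magnitude short of real time, which is exactly why the comment's point about simplifying the model after a first successful run matters.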

(4) Using a distributed.net/SETI@home-style architecture, a million-CPU grid supercomputer isn't as unlikely as it might seem.
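The work-unit pattern behind distributed.net and SETI@home is simple to sketch. Everything below is a hypothetical stand-in: threads play the role of volunteer machines, and squaring a number stands in for the real computation:

```python
from queue import Queue, Empty
from threading import Thread

# A coordinator queue hands out independent work units; each worker
# thread models one volunteer machine reporting its result back.
work_units = Queue()
for unit_id in range(100):
    work_units.put(unit_id)

results = {}  # unit id -> result (dict writes are atomic under CPython's GIL)

def worker():
    while True:
        try:
            unit = work_units.get_nowait()
        except Empty:
            return                      # no work left; this volunteer goes idle
        results[unit] = unit * unit     # stand-in for the real computation

workers = [Thread(target=worker) for _ in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(len(results))
```

The design only works because the units are independent, which is the open question for brain simulation: neighbouring neurons exchange state every tick, so the problem does not decompose as cleanly as SETI@home's signal chunks.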

[[User:Pogsquog|Pog]] 15:00, 25 July 2007 (UTC)

== Including "Artificial Intelligence System" ==
I recommend the project [[Artificial Intelligence System]] be mentioned under 2.3 (Simulated Human Brain Model). [[Blue Brain]] is mentioned instead, but I believe this project is more directed toward a Strong AI outcome and better suited for this article. [[User:CarsynG1979|CarsynG1979]] ([[User talk:CarsynG1979|talk]]) 21:45, 10 March 2008 (UTC)CarsynG1979

:I've been thinking about collecting all this material into the article [[artificial brain]] (see my notes on [[Talk:Artificial brain]]), and shortening the section in this article to two or three paragraphs that just mention the names of the projects underway and focus on making it very clear what this means to [[strong AI]] specifically. ---- [[User:CharlesGillingham|CharlesGillingham]] ([[User talk:CharlesGillingham|talk]]) 18:39, 11 March 2008 (UTC)

== Incorrect Reference: ==

{{Harvtxt|Haikonen|2004}} does not link to a valid reference. Is this the same as the {{Harvtxt|Haikonen|2003}} above? [[User:HatlessAtlas|HatlessAtless]] ([[User talk:HatlessAtlas|talk]]) 15:41, 30 April 2008 (UTC)

== Archive ==

I have archived all the comments that refer to parts of the article that no longer exist. ---- [[User:CharlesGillingham|CharlesGillingham]] ([[User talk:CharlesGillingham|talk]]) 02:01, 17 May 2008 (UTC)

== Intelligence is not, does not require "seeing". ==

Under the requirements of Strong AI, there is listed:

"perceive, and especially see;",

but many intelligent people are visually impaired or blind. This requirement must be wrong, no?

[[User:Natushabi|Natushabi]] ([[User talk:Natushabi|talk]]) 08:18, 2 June 2008 (UTC)



== Threshold to Sentience ==

You know, there's a complete school of thought that does not agree with this term ("AI") and insists on separation of the term AI (which they feel describes any computerized machine, including your very own pocket calculator) from the term Digital Sentience (which is the self-aware machine).

The reasoning is that machine intelligence not only exists, it surpassed human intelligence long ago (for example, your pocket calculator can make complex calculations faster and far more efficiently than you, probably), so it is a mistake to look at intelligence as the threshold to self-awareness.
Sentience, the reasoning goes, does not come from intelligence but from having feelings. It is widely agreed that simple creatures with limited intelligence (for example, cows) do have an awareness of themselves, through their feelings, their wants and their needs. Those may be the result of biologically encoded signals in the brain, yet anyone who has seen a cow showing affection to one cow and hostility to another would be able to relate.

Ergo, the reasoning continues, the self-aware machine would evolve from feelings rather than intelligence, making the cruel '''heartless''' AIs of movies like the Terminator and the Matrix into nothing more than a modern form of the mythological [[Golem]]. The self-aware machine would feel compassion, love, hate, fear and just about any other feeling we feel. Advancements in cybernetic sensors could equip this Digital Sentience with a body capable of the full rainbow of sensory experiences we can experience, leading to Digi units that would marvel at the taste of cheese, despise the smoke of cigars and, when the time is right, even have orgasms....

Stacking "Digital Sentience" into "Strong AI" is a mistake IMO

--[[User:Moonshadow Rogue|Moonshadow Rogue]] ([[User talk:Moonshadow Rogue|talk]]) 14:54, 27 June 2008 (UTC)

:I think these are interesting issues, and they are, IMHO, poorly covered in Wikipedia. What's needed are ''sources'', which I don't have. It ''is'' clear to me that the usages of "sentience", "Strong AI" and "self-awareness" overlap, and so it makes sense to attempt to disentangle them here. This article, so far, makes a weak attempt to disentangle "intelligence" (as AI researchers understand it) from these other concepts. What's needed is a better treatment of "sentience", and this requires sources which take the concept seriously. ---- [[User:CharlesGillingham|CharlesGillingham]] ([[User talk:CharlesGillingham|talk]]) 19:08, 5 July 2008 (UTC)

==Image copyright problem with Image:RIKEN MDGRAPE-3.jpg==
The image [[:Image:RIKEN MDGRAPE-3.jpg]] is used in this article under a claim of [[WP:NFC|fair use]], but it does not have an adequate explanation for why it meets the [[WP:NFCC|requirements for such images]] when used here. In particular, for each page the image is used on, it must have an [[Wikipedia:Non-free use rationale guideline|explanation]] linking to that page which explains why it needs to be used on that page. Please check

:* That there is a [[Wikipedia:Non-free use rationale guideline|non-free use rationale]] on the image's description page for the use in this article.
:* That this article is linked to from the image description page.

This is an automated notice by [[User:FairuseBot|FairuseBot]]. For assistance on the image use policy, see [[Wikipedia:Media copyright questions]]. --02:52, 2 October 2008 (UTC)

=="Important topic"==
I'm concerned about the line "an important topic for anyone interested in the future." I'm not sure whether the problem is the word "anyone," or "future." Either way, this sentence needs to be made more specific. At this time, I do not personally have a suggestion for a change. --[[Special:Contributions/65.183.151.105|65.183.151.105]] ([[User talk:65.183.151.105|talk]]) 03:58, 13 October 2008 (UTC)
