Seed AI

from Wikipedia, the free encyclopedia

Seed AI (from "seed", as in seed grain) is a concept developed by Eliezer Yudkowsky: a self-learning artificial intelligence (AI) that improves and extends itself through recursion. According to the concept, the AI must be able to adapt and improve its own program code in such a way that the next generation can make further improvements that would have been impossible in the previous versions. The "seed" is sown by the first generation, so to speak, and the next generation harvests the fruits in order to improve them again and replant them. This cycle continues until, at some point, human intelligence is reached and then surpassed in a subsequent cycle.

Today's compilers represent a greatly simplified form of this kind of intelligence. They optimize their own program code for efficiency and speed, but this type of improvement is by no means sufficient to achieve the open-ended recursive self-improvement of a Seed AI. Existing compilers can examine and improve their own code, but only once, not recursively: the resulting compiler runs faster, but is not better than its predecessor. Development ends there; the new version never discovers new optimization methods and cannot improve its own program code further. A true Seed AI would have to understand the purpose and design of its own program code in order to create a next generation with improved intelligence.
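The distinction can be illustrated with a deliberately simplistic sketch (a toy model, not a real AI; the function names and the doubling rule are illustrative assumptions). A compiler-style optimizer applies a fixed improvement once; in the seed-AI picture, each generation also improves the improvement step itself, so gains compound across generations:

```python
def one_shot_optimize(capability, gain=1.0):
    """Compiler analogy: a single, fixed optimization pass.

    The improvement 'gain' itself never changes, so progress
    stops after one application.
    """
    return capability + gain


def seed_ai_cycle(capability, gain, generations):
    """Seed-AI analogy (toy model): each generation applies its
    improvement AND improves the improver, so the next generation
    makes a larger improvement than the last.
    """
    for _ in range(generations):
        capability += gain  # this generation's improvement
        gain *= 2           # next generation improves better (assumed rule)
    return capability


# One-shot optimization: 1.0 -> 2.0, then development ends.
compiled = one_shot_optimize(1.0)

# Recursive self-improvement: 1 + (1 + 2 + 4 + 8 + 16) = 32.0.
recursive = seed_ai_cycle(1.0, gain=1.0, generations=5)
```

The doubling of `gain` stands in for the claim that each generation can make improvements that were impossible for the previous one; real recursive self-improvement, if achievable at all, would of course not follow such a simple rule.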

Such self-optimizing artificial intelligence does not yet exist, but some research groups have dedicated themselves to the goal of creating a Seed AI. The Singularity Institute for Artificial Intelligence and the Artificial General Intelligence Research Institute are the best known of these.

In futurology, the view is held that a Seed AI could lead to a state in which technological development proceeds so quickly that a person of average intelligence can no longer follow it.
