Re: AI big wins

Doug Bailey
Mon, 28 Sep 1998 10:39:05 -0400

Tangential to this discussion is a scenario I considered recently. Since, presumably, an SI should be able to modify its hardware and software to maximize its information processing and storage abilities, the optimal track would seem to be something along the following lines:

After digesting all of HI knowledge and assessing it with its own heuristics, the SI would then have to decide what to do with itself. No doubt it would identify several paths of inquiry from its analysis of HI knowledge. It might initially attempt to determine which path of inquiry would prove most rewarding (i.e., in terms of assisting it with the other paths). Eventually, the SI would realize that the area offering the greatest leverage is determining how to increase its own information processing and storage abilities.

The SI would then embark on a period of self-modification. The hope would be to develop a positive feedback loop in which each enhancement of its processing abilities lets it find the next modification more quickly. Along the way, this process would let the SI accumulate knowledge about the physical universe. At some point the SI may be able to reliably determine how much time it has left, i.e., how long the universe will last, since the answer to that query sets an upper bound that would figure importantly in its evaluation of tasks.

Even with its monstrous processing abilities, the SI might set for itself herculean objectives that would still entail eons of calculation and data analysis. Events such as entropy increase, the cooling of the universe, and proton decay may become important, since they might constrain the SI's ability to complete certain inquiries. The SI might develop the capability to initiate universes on its own, either exactly like our own or optimized universes geared towards its inquiries.

It's uncertain whether this "Self-Modification Period" would ever end, since only two conditions would appear to lead the SI to abandon its modification efforts: (1) the SI reaches a processing ceiling that it determines it cannot improve upon with its current knowledge base [at this point the SI would embark on other inquiries not related to modification, and might find some piece of knowledge that would allow it to return to its modification efforts]; or (2) the SI, unable to create designer universes (or unable to create ones durable enough), realizes that researching and implementing the next modification would require more time than it has, at which point it would begin its non-modification inquiries, rifling through them at amazing speeds [again, the possibility exists that some of these inquiries might give the SI a way to circumvent the time constraints it faced before].
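For concreteness, the feedback loop and its two stopping conditions can be sketched as a toy simulation. Everything here is a hypothetical illustration — the quantities (processing power, research cost, improvement factor, time budget) are made-up stand-ins, not a claim about how an actual SI would work:

```python
# Toy sketch of the "Self-Modification Period" described above.
# All parameters are hypothetical illustrations chosen for this example.

def self_modification_period(processing_power, time_remaining,
                             research_cost, improvement_factor):
    """Run successive self-modifications until one of the two
    stopping conditions holds; return the reason for stopping."""
    while True:
        # Researching the next modification gets faster as processing
        # power grows -- the positive feedback loop.
        time_needed = research_cost / processing_power

        if improvement_factor <= 1.0:
            # Condition (1): a processing ceiling -- no further gain is
            # reachable with the current knowledge base.
            return "ceiling reached: pursue other inquiries"
        if time_needed > time_remaining:
            # Condition (2): the next modification would take longer
            # than the time the universe has left.
            return "time exhausted: pursue non-modification inquiries"

        time_remaining -= time_needed
        processing_power *= improvement_factor
        # Assume diminishing returns: each cycle's gain is smaller,
        # so the loop drifts toward the ceiling of condition (1).
        improvement_factor = 1.0 + (improvement_factor - 1.0) * 0.5
```

Either branch can win depending on the parameters, which mirrors the point above: the loop ends only at a knowledge ceiling or a time wall, and new knowledge gained during the "other inquiries" phase could, in principle, restart it by changing those parameters.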

It seems to me that, whatever form this self-modification period takes, an SI would gravitate towards it in the course of its natural self-evolution.

Doug Bailey