What version of AI do you have in mind?
There's a fascinating essay by Vernor Vinge ("The Coming Technological
Singularity", Whole Earth Review, Winter 1993 issue) in which he draws a
distinction between "weakly superhuman" AI and "strongly superhuman" AI.
In a nutshell: a weakly superhuman AI (WSAI) is qualitatively
human-equivalent. It may run faster than us (cramming a month of thinking
into a few minutes), but it's still not fundamentally better at thinking
than a human being with equivalent information resources and subjective time.
A strongly superhuman AI (SSAI) is a different kettle of fish: it compares to
us -- or to a weakly superhuman AI -- the way we compare to a dog or a squirrel.
Vinge postulates that if AI is possible, what we get to start with is a
weakly superhuman AI. However, by adding resources we can enable the
WSAI to do a lot more thinking in a given period of time than we can --
so if it is _possible_ for human-equivalents to give rise to something
qualitatively superior, then the WSAI will probably generate an SSAI
(and do so rather faster than human onlookers expect).
An interesting point is that Vinge bases his idea of the singularity on
the immediate consequences of an SSAI emerging within our light cone. We
simply _can't_ guess at such an entity's motivations or interests. It
might be an omnibenevolent Tipler-style god-wannabe, or it might be a
Terminator-style kill-em-all Skynet; if you ask me, both
contingencies have roughly equal (and low) probability.
One possibility that occurs to me is that there's an obvious trade-off
between expansion in space and expansion in time. If you want to get lots of
thinking done (but a finite amount -- this isn't a rehash of the old debate
over closed/open universes), do you prefer to expand into a large spherical
region of space, or to stay tightly focussed, run slower, and occupy a long
tubular slice of spacetime? Hmm...
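(A toy sketch of that trade-off, in Python -- the element density, clock
rate, and the "coherent thought needs a light-speed round trip" rule are all
my own invented assumptions, not anything from Vinge. Pack processing
elements at a fixed density; then a big sphere and a compact long-lived
configuration can do the same total raw computing, but the compact one gets
vastly more serial depth of globally coherent thought.)

    # Toy model -- my own illustration, nothing from Vinge's essay.
    from math import pi

    RHO = 1e30   # processing elements per cubic light-second (arbitrary)
    F   = 1e9    # ops per second per element (arbitrary)

    def profile(radius_ls, lifetime_s):
        """Return (total ops, number of globally coherent serial steps)."""
        volume = (4.0 / 3.0) * pi * radius_ls ** 3        # cubic light-seconds
        total_ops = RHO * volume * F * lifetime_s         # raw computing done
        coherent_steps = lifetime_s / (2.0 * radius_ls)   # round-trip limited
        return total_ops, coherent_steps

    # Volume * lifetime is equal in both cases, so raw ops come out the same,
    # but the compact, long-lived "tube" gets enormously more serial depth.
    big_sphere = profile(radius_ls=1e3, lifetime_s=1e6)    # large, short-lived
    long_tube  = profile(radius_ls=1e0, lifetime_s=1e15)   # compact, long-lived

    print("big sphere:  ops=%.3g  coherent steps=%.3g" % big_sphere)
    print("long tube :  ops=%.3g  coherent steps=%.3g" % long_tube)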
-- Charlie