in my opinion, the desire to create ai isn't really what drives ai forward. what i see instead is a civilization sufficiently advanced to begin automating useful tasks. each step along the way yields an immediate economic payoff, and that payoff is what drives the creation of machines that _behave_ intelligently. the functionalist extrapolation — that machines behaving sufficiently intelligently will also become sentient — is really a kind of accident. my guess is that any sufficiently advanced civilization will almost necessarily develop an intelligence greater than itself.