Major Technologies

Bryan Moss (bryan.moss@dial.pipex.com)
Wed, 6 Jan 1999 21:49:37 -0000

I said:

"Since we're talking about plausible future scenarios it might be fun, being
in the midst of millennium fever, to come up with some. No dates or predictions, just how you think the next few major technologies will pan out."

And only in a perfect world would it be polite to ask others to do this without doing so myself:

Several months ago I tried to explain my views on Artificial Intelligence without much success, but I shall persist. We all agree that an alien intelligence would be very different from human intelligence, yet we tend to liken artificial intelligence to our own. Firstly, it's important to remember that AI will not evolve on the savannah and will never swing through trees. Secondly, the evolution of AI will be unique in the sense that it is directed evolution. If man could play a part in the evolution of an intelligent species, it would evolve as a slave; we have directed the evolution of many animals to our own needs with great success. Despite the many "out of control" scenarios depicted in science fiction, AI offers an unprecedented level of control, and, some might argue (wrongly), an unprecedented ability to screw up.

The eventual shape of AI can be found by weighing human need against our template for intelligence, the human brain. Thus AI will be shaped more by human users' interaction needs than by any science fiction pipe dreams, and I find invisible 'go-between' interfaces more likely than 'in-your-face' conversational interfaces.

Is human-level AI achievable? I don't care. The majority of AI will evolve in an environment alien even to our physical laws. The majority of AI will not know who we are, what we do, or why we do it. An AI's dedication to doing its job will not be like our enjoyment of an occupation, or even a moth's attraction to a flame - people can get bored and moths can evolve. AIs will be dedicated in the same way that we have an unquestioned dedication to our existence in the universe. I cannot guarantee that bad AI will not be made, but that is very different from an AI "going out of control" - an event with so small a chance of happening as to have no chance at all.

In the laboratory, simulations of human-like intelligence will be routine, and no doubt much effort will be put into improving them; I have outlined reasons for doubting this ability in other posts. If universal super-intelligence is achieved in the laboratory then it will replace us, but it will not 'emerge' from our domestic appliances. Whether it replaces us or not will be a largely social and political decision. If I'm wrong and 'in-your-face' conversational (or social) interfaces are preferred, then people may well be ready to embrace their 'mind children' - but I doubt it.

Information technology will evolve to a point where we can discover the DNA of knowledge. Storing knowledge by deviation, and linking bite-sized conceptual chunks of it by mutation, will not only create a network of unprecedented power but will allow us to automate knowledge acquisition. And who needs intelligence when knowledge discovers itself? First we will have interfaces that display our knowledge symbolically and allow us to mutate it through many dimensions (these may be too many for practicality), letting us see new pieces of knowledge and their place amongst currently accepted knowledge. Some time after this it will not be too much of a stretch to automate the production of knowledge about knowledge (and analogies between analogies) that can find quicker ways to create more knowledge. Thus intelligence will be superseded by self-replicating knowledge, and the fate of sentient life will become a question of philosophy. This small paragraph covers a lot, and I in no way want to suggest (heaven forbid) a Singularity - I imagine this happening over centuries, if not millennia. But I do see semantic networks and hypertext having large and far-reaching social effects in the meantime, and social change is far less conservative a prediction than technological change.
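The "bite-sized chunks linked by mutation" idea above can be made concrete with a toy sketch. Everything here - the chunk structure, the analogy rule, the example facts - is my own illustrative assumption, not a proposal for a real knowledge-representation scheme; it only shows the flavour of knowledge that "discovers itself" by recombining what is already stored.

```python
# Toy sketch: knowledge as bite-sized (subject, property) chunks,
# with new candidate chunks derived by "mutation" - here, a crude
# analogy rule: if two subjects share a property, propagate each
# one's other properties to the other.  All names are illustrative.
from itertools import combinations

class Chunk:
    """A bite-sized piece of knowledge: a subject linked to a property."""
    def __init__(self, subject, prop):
        self.subject = subject
        self.prop = prop

    def __repr__(self):
        return f"{self.subject} -> {self.prop}"

def mutate(chunks):
    """Return new candidate chunks implied, but not yet stored,
    by the analogy rule described above."""
    by_subject = {}
    for c in chunks:
        by_subject.setdefault(c.subject, set()).add(c.prop)
    candidates = set()
    for (s1, p1), (s2, p2) in combinations(by_subject.items(), 2):
        if p1 & p2:  # the two subjects overlap on some property
            for p in (p1 | p2) - (p1 & p2):
                # propagate each non-shared property both ways
                candidates.add((s1, p))
                candidates.add((s2, p))
    known = {(c.subject, c.prop) for c in chunks}
    return [Chunk(s, p) for (s, p) in sorted(candidates - known)]

chunks = [
    Chunk("whale", "breathes air"),
    Chunk("whale", "lives in water"),
    Chunk("dolphin", "breathes air"),
]
for new in mutate(chunks):
    print(new)  # prints: dolphin -> lives in water
```

A real system would of course need provenance, confidence, and contradiction handling; the point is only that once knowledge is stored in linkable chunks, producing candidate knowledge is mechanical - no "intelligence" is consulted.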

Nanotechnology: Thwarted by complexity issues; gets here eventually.

Neurotechnology: Will make us closer to our computers (and this will make human-like AI even less likely).

BM