When I am in a room and the light goes out, I still know how the room is laid out; I can still maneuver around the room, walk somewhere, and pick something up.
I have a model of the room in my mind.
I then use my senses to continuously update the model of the world that I hold in my mind. When I cannot see, I listen or feel and get a somewhat less complete model than I do with vision. But the model is still there.
I suspect that bats use their sonar for a similar purpose. They have bad eyesight but good hearing. Their model of the world could be just as good as mine. In some cases it is no doubt better, as I can practically never hit a fly in the air, which they do all the time for food.
Furthermore, I can imagine what will happen if I walk through the room and stumble over the table. Therefore I am able to choose a different path through the room.
I can simulate a scenario in my mind, playing it out in my mind's model. Therefore I can think and plan ahead.
I suspect that the idea of having a constantly updated model of the world in the mind is a significant one when it comes to AI.
When I cross that idea with the fact that I have a hard time beating the bots in Quake 3, which run around in a model of a world, and the fact that it is already possible to build 3D objects from two cameras to model a 3D world, I can imagine that it might somehow be possible to get real robots to move around in the real world by combining these abilities:
a modeler that turns sensor input into a world model, and a simulator that tries out different moves in the model before doing them in the real world.
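To make the idea concrete, here is a rough Python sketch of the loop I am imagining (a made-up 2D grid world with made-up names, purely for illustration, not any real robot or library):

# Toy illustration of the "modeler + simulator" loop described above.
CANDIDATE_MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

def update_model(model, sensor_readings):
    """Modeler: fold new sensor readings into the internal world model."""
    for (x, y), occupied in sensor_readings.items():
        model[(x, y)] = occupied  # True = obstacle (e.g. the table)
    return model

def simulate_move(position, move):
    """Simulator: predict where a move would land us, using only the model."""
    x, y = position
    dx, dy = move
    return (x + dx, y + dy)

def choose_move(model, position, goal):
    """Try each candidate move in the model; pick the safe one closest to the goal."""
    best_move, best_distance = None, float("inf")
    for move in CANDIDATE_MOVES:
        predicted = simulate_move(position, move)
        if model.get(predicted, False):   # the model says we would hit something,
            continue                      # so we never try that move in the real world
        distance = abs(predicted[0] - goal[0]) + abs(predicted[1] - goal[1])
        if distance < best_distance:
            best_move, best_distance = move, distance
    return best_move

# One cycle: sense, update the model, plan in the model, then act.
world_model = {}
world_model = update_model(world_model, {(1, 0): True})        # sensors report a table at (1, 0)
print(choose_move(world_model, position=(0, 0), goal=(3, 0)))  # prints (0, 1): step around it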
Does anybody know of research material along these lines?
Regards
#------------------------------------------------------------------------
# Max M Rasmussen, New Media Director http://www.normik.dk Denmark
# Private mailto:maxmcorp.worldonline.dk