If the enhanced senses are the result of near-term enhancements such as
wearable computers/sensors, people will need to train themselves to multitask between
multiple screens. The best solution at this level of enhancement is a sort of synesthesia: presenting the new data as a visual, audio, or tactile signal, since we already have a fair amount of bandwidth
on those sensory channels. Please see the 'cyborg in search of community' thread
for a more in-depth discussion of this level of enhancement.
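The synesthesia idea above amounts to sensory substitution: routing a novel data stream onto a channel the brain already handles well. A minimal sketch of the mapping step, assuming a hypothetical sensor with a known value range (the function name and ranges here are illustrative, not from any real wearable API):

```python
# Sensory-substitution sketch: map a hypothetical sensor reading
# onto an existing channel -- here, an audible pitch in Hz.
# All names and ranges are illustrative assumptions.

def reading_to_pitch(reading, lo=0.0, hi=100.0,
                     pitch_lo=220.0, pitch_hi=880.0):
    """Linearly map a sensor value onto an audible frequency band."""
    # Clamp so out-of-range readings still produce an audible tone.
    clamped = max(lo, min(hi, reading))
    fraction = (clamped - lo) / (hi - lo)
    return pitch_lo + fraction * (pitch_hi - pitch_lo)

# A mid-range reading lands in the middle of the pitch band.
print(reading_to_pitch(50.0))  # 550.0
```

The point is only that the translation layer is cheap; the hard part, as discussed above, is the training the user needs to make the resulting signal feel like a sense rather than a readout.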
In the somewhat longer term, involving implanted sensors, the need would
be for some onboard processor interfaced to the nervous system to
provide the bandwidth needed; this would be in addition to the sort of
training held over from wearables.
In the longer term, involving enhancements from genetic redesign,
there are two levels of difficulty:
Those born with the enhancements (designer offspring) will probably just adapt to their new sense modes without thinking about it, like learning to speak, walk, etc. The plasticity of the brain at that stage of life should be all they need.
Those who are upgraded later in life (us)
will probably still require an extra processor to handle the increased
bandwidth. It will probably also be possible to return the brain to a state of plasticity like that of a newborn, enabling a similar learning experience, but I wonder whether that increased plasticity would preserve the older connections which, to some extent at least, constitute our identities. Perhaps the old patterns could be backed up onto the implanted processor (didn't Greg Egan
do a couple of stories along these lines?).
Either way, the subjective experience of a being with such an inherently
different sensorium is a bit difficult to imagine. To
be given a new sense as an adult without an upgrade in processing power
would produce problems akin to those experienced by the fellow in 'At
First Sight' (blind from birth, sight restored as an adult; extreme difficulty
adjusting led the real-life person whose story this was to retreat into a hysterical blindness).