At 09:27 AM 5/20/99 -0500, Billy Brown wrote:
>> To increase your range of hearing, you could easily shift higher
>> frequencies down into the audible range with a computerized
>> hearing aid.
>That doesn't accomplish the same result. If you take all the data from a
>broad-ranged hearing sense and squash it down into the normal human range,
>you will inevitably lose a lot of the information. You won't be able to
>hear the actual pitch of anything beyond the normal range, you won't be able
>to listen to all those different pitches at the same time, and you won't be
>able to hear patterns (like music) that occur across large frequency ranges.
I think you are a bit off-base with respect to the technique.

First of all, the correct way to do this is *not* to compress the spectrum to fit within the human range; with regard to spectrum compression, the problems mentioned above are valid. The correct technique is to overlay our natural frequency window with shifted windows from other parts of the spectrum. Audio spectrum overlays are a lot more processing intensive, but they are much more effective at representing information outside the human range, and do so with little or no loss of information.
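To make the windowing idea concrete, here is a rough sketch (in Python, with band and shift values I picked purely for illustration) of overlaying one ultrasonic window onto the audible band. A real system would use proper filters and single-sideband modulation rather than raw FFT bin moves:

```python
import numpy as np

def overlay_band(signal, fs, band_lo, band_hi, shift_hz):
    """Shift the [band_lo, band_hi] Hz band of `signal` down by
    `shift_hz` and overlay it on the original audible signal.
    A sketch only: the band edges and shift are illustrative."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_shift = int(round(shift_hz * n / fs))
    idx = np.nonzero((freqs >= band_lo) & (freqs <= band_hi))[0]
    shifted = np.zeros_like(spec)
    dest = idx - bin_shift
    keep = dest >= 0                      # drop bins that would go below 0 Hz
    shifted[dest[keep]] = spec[idx[keep]] # move the window down intact
    # Overlay: audible original plus the shifted window
    return np.fft.irfft(spec + shifted, n)
```

Run a 30 kHz tone plus a 1 kHz tone through this with a 28 kHz shift and the ultrasonic content comes out overlaid at 2 kHz while the 1 kHz tone passes through untouched. That is the point: separation is preserved, nothing is squashed.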
Even within the normal audio range, the difference between the amount of information in the signal and the amount we can perceive in a given frequency range is quite large, and it grows the closer you get to the ends of our audio range. This frequency-dependent information loss is a primary reason spectrum compression doesn't work. Windowing maintains the normal frequency separation, but creates a "busier" environment within the normal range, which is something our brains are adept at handling. Our auditory cortex works much better with good frequency separation and a busy spectrum than with a quiet spectrum and poor frequency separation. The thing to remember is that the human ability to separate sounds is a logarithmic function of frequency, which goes a long way toward explaining why spectrum compression doesn't work very well.
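To put a number on that logarithmic behavior: the just-noticeable difference in pitch is (very roughly, above a few hundred Hz) a fixed fraction of the frequency, so the count of distinguishable steps in a band depends on its frequency *ratio*, not its width. A quick sketch, where the 0.5% JND figure is a round illustrative assumption:

```python
import math

def log_steps(f_lo, f_hi, jnd_ratio=0.005):
    """Approximate number of just-noticeable pitch steps between
    f_lo and f_hi, assuming a constant relative JND of about 0.5%
    (an illustrative figure, not a measured constant)."""
    return math.log(f_hi / f_lo) / math.log(1 + jnd_ratio)
```

Two equally wide 1 kHz slices come out very differently: the 1-2 kHz octave holds on the order of 140 distinguishable steps, while 15-16 kHz holds about 13. Squash a wide spectrum into the top of our range and most of it lands where we can resolve almost nothing.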
There are signal processing algorithms that let us shift frequencies outside our normal range into the audio range with very little loss of information. With the signal processing hardware available today, the information lost in processing can be pushed below even the smallest perceptual loss. The two biggest problems with this kind of augmentation are 1) it increases the amount of signal we have to process in our fixed perception space, and 2) it misrepresents the actual frequency of the sound (e.g. you hear a 32 kHz tone as a 4 kHz tone).
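For the curious, one standard low-loss way to do the shift itself is single-sideband modulation: form the analytic signal, multiply by a complex exponential, and every component moves down by the same offset with no mirror images. A numpy sketch using a block FFT to build the analytic signal (a real-time device would use a Hilbert FIR filter instead):

```python
import numpy as np

def ssb_shift(signal, fs, shift_hz):
    """Single-sideband frequency shift: every component moves down
    by shift_hz, unlike plain ring modulation, which also creates
    unwanted mirror images. Block-FFT sketch, not real-time code."""
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)                 # analytic-signal weights:
    h[0] = 1.0                      # keep DC,
    h[1:n // 2] = 2.0               # double positive frequencies,
    h[n // 2] = 1.0                 # keep Nyquist, zero the rest
    analytic = np.fft.ifft(spec * h)
    t = np.arange(n) / fs
    return (analytic * np.exp(-2j * np.pi * shift_hz * t)).real
```

With a 28 kHz shift, a 32 kHz tone comes out as a clean 4 kHz tone with nothing left at the original frequency, which is exactly the "hear a 32 kHz tone as a 4 kHz tone" case above.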
The first probably won't be much of a problem most of the time; our brains are already adept at dealing with noisy audio environments. The second is a little more of a problem. The best solution I can think of is to add an unusual or unnatural harmonic to shifted signals (e.g. by "wave shaping") that lets the listener associate a sound with a frequency range. If well done, it would allow the listener to differentiate signals without consciously "hearing" the added harmonics. Many audio harmonics act as something of a meta-signal to the brain: they describe the nature of the sound (the "feeling"), but we do not consciously perceive the harmonics themselves.
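As a crude stand-in for the wave-shaping idea (the real thing would shape the signal itself so the cue rides on its own harmonics), here is the simplest possible version: mix in a quiet marker tone whose pitch encodes which window the sound was shifted from. The marker frequency and level here are arbitrary choices for the example, not anything standardized:

```python
import numpy as np

def tag_shifted(signal, fs, marker_hz, level=0.05):
    """Mix a low-level marker tone into a down-shifted signal so the
    listener can learn "this coloration means ultrasonic source".
    marker_hz and level are illustrative assumptions."""
    t = np.arange(len(signal)) / fs
    return signal + level * np.sin(2 * np.pi * marker_hz * t)
```

At -26 dB or so, the marker should register as a subtle coloration rather than a separate tone, which is the desired effect: a cue you learn to feel rather than consciously hear.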