The year is 2030. A large Japanese electronics company has just come out with a new television: the UltraStarNavTV. And this television is so advanced that it generates light at wavelengths from 400 nm all the way down to 10 nm. And the marketing flyers read as follows: “With our new LCD projection technology, we can now display colored light in the spectrum beyond blue, beyond violet… fully into the ultraviolet range of light. An image so realistic… it’s unreal.”
Tens of thousands of early adopters run out and purchase the UltraStarNavTV. They write glowing and breathtaking reviews about the new light technology. “I can totally see a difference,” they will say. “The blues and the deeper colors are much more vivid with my UltraStarNavTV. There’s less aliasing because of oversampling with the higher frequencies. I don’t see myself ever going back to lower colors.”
The first problem relates to the source content that we see on our new TV. Unless every single phase in the content production pipeline accurately captured and processed ultraviolet — meaning the cameras used to shoot the film, the film processing pipeline, the visual effects — unless all that stuff took into account the ultraviolet spectrum — there’s no way that the purchaser of the UltraStarNavTV would be able to see ultraviolet… because it doesn’t exist in the source content.
And the second problem is even more fundamental. Human beings can’t actually perceive light in the ultraviolet spectrum. The human eye’s sensitivity drops off at around 390 nm; any wavelength shorter than that is not perceptible to the normally functioning human eye. But that doesn’t matter. There are certain people who believe that the more they pay, and the bigger the numbers are, the better the result is… perceptual theory be damned.
And that’s even assuming they could see ultraviolet light.
Which they can’t, if their eyeballs are functioning correctly.
Enough of the future; let’s talk about today.
I’ve seen a recent trend in audio recording where a number of engineers are mastering audio at very high sample rates. More and more, I’m seeing 96 kHz and 192 kHz WAV files being processed, with 64-bit floating point accuracy.
There are solid psychoacoustic reasons why, in 1980, 44.1 kHz and 16 bits were chosen for the Sony/Philips Red Book (CD-Audio) standard. No human being can hear frequencies greater than about 20 kHz. Twice that is 40 kHz, so 44.1 kHz comfortably exceeds the Nyquist rate required to represent every human-perceivable sound. You can entertain dolphins, bats and dogs by recording frequencies higher than that. But it’s all wasted on us bipeds.
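The arithmetic is simple enough to sketch in a few lines of throwaway Python. (The 25 kHz tone below is a made-up example, chosen only to show what happens to content above the Nyquist limit.)

```python
# Nyquist: to capture a signal whose highest component is f_max,
# you must sample at more than 2 * f_max.
f_max = 20_000               # rough upper limit of human hearing, in Hz
nyquist_rate = 2 * f_max
print(nyquist_rate)          # 40000 -- so 44.1 kHz covers it with room to spare

# A tone above the Nyquist limit doesn't disappear; it aliases (folds)
# back down into the audible band.
fs = 44_100                  # CD sample rate
f_tone = 25_000              # hypothetical ultrasonic tone
alias = abs(f_tone - fs * round(f_tone / fs))
print(alias)                 # 19100 -- the 25 kHz tone folds down to 19.1 kHz
```

That folding is why recording chains low-pass filter before sampling: ultrasonic garbage that sneaks in doesn’t stay ultrasonic.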
Now the dynamic range of 16 bits is about 96 dB. That’s considerably less than the theoretical dynamic range of human hearing, which is around 140 dB, but keep in mind that the top of that range is really, dangerously, awfully loud, and therefore rarely occurs in practice. (The dynamic range of a typical concert is 80 or 90 dB.)
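The 96 dB figure falls straight out of the bit depth: each bit doubles the number of quantization levels, and each doubling buys about 6.02 dB. A hand-rolled sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # 2**bits quantization levels; 20 * log10(2) is ~6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 -- the CD's range
print(round(dynamic_range_db(24), 1))   # 144.5 -- already past the ear's ~140 dB
```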
Yes, on the Internet there are many experts who claim to be able to hear, given their own unreplicatable experiments with no control group, frequencies higher than 44.1 kHz sampling can represent. But in blind listening tests, even experts consistently fail to distinguish 44.1 kHz, 16-bit audio from higher sample rates and bit depths.
But let’s say you’re committed to doing all your mixing and mastering and effects of your content at 192 kHz, despite the fact that audio receivers are not capable of reproducing those audio frequencies, and humans are incapable of perceiving content at those frequencies. Even imagining that you had a receiver that operated at those frequencies (which you don’t) and even imagining that human beings could perceive audio at those frequencies (which they can’t)… there is still a reason why that 192 kHz content would not contain any frequencies above 20 kHz.
And here’s the reason:
That there is the frequency response of the Shure SM-57 microphone. You see there at the right end of the chart, where the response wiggles and drops off around 15 kHz? That’s showing you that, even if your entire audio pipeline is set up to record and process sounds above 20 kHz… well, this particular microphone will not respond to those high frequencies.
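To put a rough number on what that rolloff does to ultrasonic content, here’s a toy model: a fourth-order low-pass with a 15 kHz corner. To be clear, this is an invented stand-in, not the SM-57’s actual measured response, but it shows how fast level falls once you’re past the corner:

```python
import math

def rolloff_db(f_hz: float, cutoff_hz: float = 15_000, order: int = 4) -> float:
    # Butterworth-style magnitude response: attenuation in dB at f_hz.
    return -10 * math.log10(1 + (f_hz / cutoff_hz) ** (2 * order))

for f in (10_000, 20_000, 40_000, 80_000):
    print(f, round(rolloff_db(f), 1))
# 10 kHz is barely touched (-0.2 dB), but 80 kHz is down about 58 dB --
# effectively gone before it ever reaches your pristine 192 kHz pipeline.
```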
So even if you got yourself an ultrasonic microphone and recorded 80 kHz signals and kept the signal chain pristine all the way through your DAW, and found a receiver that passed the high frequencies without cutting them and found speakers that accurately rendered the ultrasonics… you’d still need to have ears that are capable of perceiving those ultrasonic frequencies.
Which you don’t.
So do yourself a favor, and save the disk space, by mastering all your audio files at 24-bit integer or 32-bit floating point, at either 44.1 kHz or 48 kHz. That’s more dynamic range and frequency range than you’ll ever need.
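The disk-space savings are easy to quantify for uncompressed stereo PCM (a back-of-envelope sketch; real projects also carry stems and session files):

```python
def mb_per_minute(sample_rate_hz: int, bits: int, channels: int = 2) -> float:
    # Uncompressed PCM: bytes/second = rate * bytes-per-sample * channels.
    bytes_per_second = sample_rate_hz * (bits // 8) * channels
    return bytes_per_second * 60 / 1_000_000

print(round(mb_per_minute(44_100, 16), 1))    # 10.6 MB/min -- CD quality
print(round(mb_per_minute(192_000, 64), 1))   # 184.3 MB/min -- ~17x the storage
```

Seventeen times the storage, for content no microphone captured and no ear will hear.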