July 1, 2010
Audio Evolution:
1950-2010
The trigger
Recently, while reorganizing my library, I
perused an article in the September 1983 issue of
The Abso!ute Sound
(Vol.8 No.31), "The Threat of the Compact Disc to the Sound of
Music," by renowned mastering engineer Douglas Sax, of Sheffield Lab
Recordings. Overwhelmed by curiosity and nostalgia, I resurrected a
blue T-shirt I’d bought at the 1984 Winter Consumer Electronics
Show, in
Las Vegas.
Sax’s commendable crusade and its catchy slogan, "Stop Digital
Madness," had received sustained support from skeptical audiophiles.
In concluding his piece, Sax wrote, "For me, all digital attempts
thus far have been a failure. I simply cannot enjoy music that has
been digitally processed, and the enjoyment of music in the home is
the sole reason we have a high-fidelity industry. I support analog
recording because it works.
"It is a time-proven process that contains musical information which
is accessible to all and which has a resolution that allows the
listener to continually discover hidden nuances as he improves the
abilities of his home playback system."
The words "thus far" in that first sentence, reinforced by his description of analog as
"a time-proven process," led me to conclude that Sax, like most audiophiles,
would eagerly anticipate further research and development into the optimization of
digital sound, as audiophiles continued the quest to hear more and more, until the
resolving capabilities of home playback systems approached their horizontal
asymptote: live musicians performing in real spaces.
Sax’s article was published at a point in
audio’s history halfway between the birth of stereophonic high
fidelity and the current level of refinement of analog-to-digital
conversion techniques. In the 27 years since that article’s
publication, what sort of evolution has taken place? How much more
are we
hearing today at home, and how far away does utopia remain?
But first: Where and when did stereo begin?
Alan Dower Blumlein
British
electronics engineer Alan Dower Blumlein was born June 29, 1903. In
1931, while working for EMI, he invented stereo, an advance
regarded as so revolutionary and ahead of its time that it was
shelved. Blumlein then focused on television, and was on the design
team that produced the first modern TV system, adopted by the
British Broadcasting Corporation in 1937.
At the beginning of World
War II, Blumlein switched to telecommunications, and by 1942 was a
key member of a top-secret team designing a revolutionary radar
system that would enable the Royal Air Force to bomb from above the
clouds. However, the flight of a Halifax bomber outfitted with the system
ended in a crash near the Welsh border on June 7, 1942, killing Blumlein and ten others.
According to his widow and son, Blumlein struggled with spelling,
and could not read until age 12. But he was clearly a genius who,
according to one distinguished colleague, "had he lived would have
been considered the Faraday of our age." Blumlein is remembered for
his technique of recording with a coincident pair of microphones, to
overcome some of the inherent deficiencies of spaced pairs of mikes.
The digital blueprint
We are all aware that a plethora of
high-resolution formats exists today. However, in 1983, the
limitations of early digital sound frustrated such engineering gurus
as Doug Sax, Keith O. Johnson, Jan-Eric Persson, Jack Renner, John
Eargle, and their colleagues worldwide. By that year a new technology,
then about a year old, had arrived. In it, a beam of coherent near-infrared
laser light read a spiral of data beneath the transparent surface of a shiny
disc of polycarbonate. This 120mm-diameter compact disc (CD) rotated at a
speed that continuously varied from about 500rpm at the innermost tracks to
200rpm at the outer edge, in playback devices known simply as CD players.
In contrast to an LP, the laser would begin its tracking of the data
spiral near the center of the disc, moving outward along a single continuous
spiral toward the disc’s outer edge. A first-generation CD
accommodated up to 74 minutes of playing time, almost twice what was
stored in the two sides of the average LP.
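To put those 74 minutes into data terms, the raw audio payload of a CD follows directly from its fixed parameters. The short Python sketch below is purely illustrative, using the standard 44.1kHz, 16-bit, two-channel figures:

```python
# Back-of-the-envelope arithmetic for CD audio capacity.
SAMPLE_RATE_HZ = 44_100      # samples per second, per channel
BITS_PER_SAMPLE = 16         # word length
CHANNELS = 2                 # stereo
PLAY_TIME_S = 74 * 60        # 74 minutes, in seconds

bytes_per_second = SAMPLE_RATE_HZ * (BITS_PER_SAMPLE // 8) * CHANNELS
total_bytes = bytes_per_second * PLAY_TIME_S

print(f"Data rate: {bytes_per_second * 8 / 1000:.1f} kbit/s")   # ~1411.2 kbit/s
print(f"74 minutes of audio: {total_bytes / 1e6:.0f} MB")       # ~783 MB
```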
The laser made sense of the data by
discriminating between the pits and lands
encoded on the medium. A resultant
bitstream of pulse-code modulated (PCM)
digital data, converted into electrical signals, was amplified and
reproduced as sound by a pair of loudspeakers. It was as simple as
that. Or was it?
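For the curious, here is a minimal, hypothetical sketch of what that PCM bitstream amounts to once decoded: interleaved 16-bit samples that a DAC scales into voltages. The byte layout shown (little-endian, left/right interleaved) is how ripped CD audio is commonly stored; the actual on-disc channel coding (EFM, CIRC error correction) is far more involved.

```python
import struct

def decode_pcm_frames(raw: bytes):
    """Unpack interleaved 16-bit little-endian stereo PCM into float pairs."""
    frames = []
    for offset in range(0, len(raw) - 3, 4):              # 4 bytes per stereo frame
        left, right = struct.unpack_from("<hh", raw, offset)
        frames.append((left / 32768.0, right / 32768.0))  # scale to [-1.0, 1.0)
    return frames

# Two stereo frames of example data: silence, then near-full-scale left and right.
example = struct.pack("<hhhh", 0, 0, 32000, -32000)
print(decode_pcm_frames(example))
```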
Late analog vs. early digital
Few
doubted that digital formats offered potentially significant
advantages over their analog counterparts. Everyone wanted to hear
pristine recordings with lower noise floors, wider dynamic range,
higher fidelity, and no surface noise.
In addition, mastering engineers preferred the
many obvious conveniences of digital editing on computers. This
technique was considerably friendlier and far more precise than the
razor-blade
process used in the editing of analog tapes.
The digital restoration and archiving of analog recordings would
require substantially less storage space. Prolonged shelf life,
coupled with ease of handling and retrieval, would be
guaranteed. However, the (in)famous marketing slogan of Philips and
Sony, coinventors of the CD -- "pure, perfect sound forever" -- was
certainly inappropriate.
In common parlance, the problem with first-generation digital sound
comes down to number crunching. Because digital encoding samples the
waveform at discrete instants and quantizes each sample to a finite
number of levels, it is always an approximation of the continuous
analog signal. Evidently, it was not commonly realized or accepted in
those early days that a sampling rate of 44.1kHz at a resolution of
16 bits was woefully inadequate for optimal conversion of sound waves
into binary code.
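To make that number crunching concrete, the following sketch, my own illustration rather than anything tied to a particular converter, quantizes a 1kHz tone to 16-bit steps and reports the worst rounding error; every sample is forced onto one of 65,536 levels, which is exactly the approximation at issue.

```python
import math

BITS = 16
FS = 44_100                           # CD sampling rate, Hz
STEP = 2.0 / (2 ** BITS)              # quantization step for a +/-1.0 full-scale signal

def quantize(x: float) -> float:
    """Round a sample to the nearest 16-bit level (no dither)."""
    return round(x / STEP) * STEP

# Sample roughly one cycle of a 1kHz tone and measure the largest rounding error.
samples = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS // 1000)]
worst_error = max(abs(s - quantize(s)) for s in samples)

print(f"Quantization step:    {STEP:.2e}")
print(f"Worst rounding error: {worst_error:.2e}   (at most half a step)")
```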
At that time, it was impossible to digitally
encode all or even a substantial portion of the information
contained in analog signals, especially the signals of audiophile
recordings. The latter sounded special
because the best of them were recorded
in real time, direct to two tracks,
and pressed on virgin vinyl from half-speed masters.
High-resolution recordings include enough
information to reproduce, on playback, subtly defined soundstages.
Ambient information gives listeners spatial cues, revealing
nuances that make such recordings sound eerily similar to live microphone
feeds. This ambience is usually several decibels lower in level
than the music, and must be teased out of the recorded source by an
audiophile playback system.
The psychoacoustic determination of the size of the space, the depth
of the image, timbre, and the placement of instruments and musicians
is essential to good listening. Such information must be accurately
captured on whatever storage medium is used. True fidelity of
reproduction would be obtained only when digital formats could sample
the live sound more often and with finer amplitude steps; in other
words, when they could record at a higher level of resolution.
Those with golden ears were obviously alarmed by digital’s sterile
sound, coupled with the unpleasant residual artifacts and distortions
superimposed on the music. Their verdict: digital had been
insufficiently field-tested.
The vital ambient information described above was stored in the
least significant bits of the analog-to-digital conversion process.
However, this data was inadvertently truncated, similar to how
numerical values are rounded off in mathematics.
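A toy example of that truncation, deliberately exaggerated to make the loss visible: mix a loud tone with a much quieter "ambience" component, throw away the lower bits, and the quiet component largely vanishes. (Proper dithering before word-length reduction is the standard remedy; it is not shown here.)

```python
import math

BITS_KEPT = 8          # exaggerated: keep only the top 8 of 16 bits to make the loss obvious
FULL_SCALE = 32768     # 16-bit signed full scale

def truncate(sample_16bit: int) -> int:
    """Discard the least significant bits, as a careless word-length reduction would."""
    shift = 16 - BITS_KEPT
    return (sample_16bit >> shift) << shift

# One sample of a loud 1kHz tone plus a very quiet 'ambience' tone about 60dB down.
n = 50
music = int(0.5 * FULL_SCALE * math.sin(2 * math.pi * 1000 * n / 44100))
ambience = int(0.0005 * FULL_SCALE * math.sin(2 * math.pi * 3000 * n / 44100))
mixed = music + ambience

print(f"Ambience contribution before truncation: {mixed - music}")
print(f"Ambience contribution after truncation:  {truncate(mixed) - truncate(music)}")
```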
As a result, the "perfect sound forever"
campaign set off loud alarm bells and unprecedented pother. It may
never be known if Sony and Philips ever actually believed in their
slogan.
Audio evolution and revolution
After World War II, several innovations created favorable conditions
for substantial improvements in audio recording and playback
quality. Technology developed for the military and aerospace
industry gradually filtered down into commercial products. This was
an era in which vacuum-tube electronics, long-playing microgroove
vinyl records, open-reel tape recorders, and multiway loudspeaker
systems reigned supreme.
The term high
fidelity, or
hi-fi for short,
was universally used to describe equipment, records, and tapes
intended to provide faithful sound reproduction in the home. Even in
those early days, knowledgeable consumers paid detailed attention to
technical specifications. Some preferred to assemble sound systems
comprising individual components. Hence markets were created for
separate turntables, tuners, preamplifiers, and power amplifiers to
drive separate loudspeakers. All-in-one systems secured an even
larger market share.
Names such as Marantz, McIntosh, Bozak,
Thorens, and Wharfedale became well known. The evolution of the
transistor into an electronic device for use in hi-fi gear
instigated heated debate because it sounded, and measured, different.
It seems that the respective intrinsic characteristics of
tubes vs. solid-state (i.e.,
transistor) devices will forever remain an emotionally subjective
and divisive issue.
Audio magazines published feature articles about these developments,
along with reviews of new products, and consumers began to depend
heavily on them for guidance in making purchases.
Well-known publications such as Wireless
World, High
Fidelity,
Stereo Review, and
Audio
were eventually superseded at the very pinnacle of high-end audio by
newer competitors both in print and online. Audio shows, such as the
summer and winter CESes, and specialized events like
Munich’s
High End and Canada’s
Festival Son & Image grew in popularity.
Magazines flourished through advertising and online sales. The birth
and growth of the Internet brought forth tremendous innovation in
publishing online, and publications such as those produced by the
SoundStage! Network began to prosper. Consumers now expect new
products from manufacturers at least once per year. This expectation
has propelled the industry.
Meanwhile, engineers, scientists, and marketing executives were
feverishly at work. More precise high-resolution digital formats
evolved, all of them based on higher sampling rates and longer word
lengths, to capture, or at least interpolate, and store the
information that first-generation digital converters had missed.
Everyone was excited, and there were encouraging prospects for a
bright future. The optimization of digital know-how would mean goodbye
forever to analog’s intrinsic problems of restricted dynamic range,
surface noise, pops, wow, and flutter.
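The arithmetic behind "higher sampling rates and longer word lengths" is straightforward: the sampling rate sets the highest frequency that can be represented (half the rate, per Nyquist), and each additional bit of word length adds about 6dB of dynamic range. A short sketch using the textbook formulas:

```python
def nyquist_bandwidth_khz(sample_rate_hz: int) -> float:
    """Highest representable audio frequency, in kHz (half the sampling rate)."""
    return sample_rate_hz / 2 / 1000

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal quantizer for a full-scale sine."""
    return 6.02 * bits + 1.76

for rate, bits in [(44_100, 16), (96_000, 24), (176_400, 24), (192_000, 24)]:
    print(f"{rate / 1000:>6.1f}kHz / {bits}-bit: "
          f"bandwidth {nyquist_bandwidth_khz(rate):5.1f}kHz, "
          f"dynamic range {dynamic_range_db(bits):5.1f}dB")
```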
The future
Analog-to-digital
conversion systems such as Direct Stream Digital (DSD), High
Definition Compatible Digital (HDCD), and Greater Ambience
Information Network (GAIN) have gradually gained niche status. They
are now readily available if you know where to look, to the delight
of discerning audiophiles. But audiophiles continue to be restless.
HRx is a trade name for
high-resolution audio WAV files on DVD-R data discs. Developed by
Keith Johnson for the independent label Reference Recordings, it has
been commercially available since 2007. HRx delivers exact
bit-for-bit copies of Johnson’s HDCD master recordings, made with a
24-bit word length at a sampling rate of 176.4kHz. It
is already being acclaimed by many as the ultimate in fidelity for
two-channel sound.
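The jump in raw data from CD to HRx is easy to quantify. The comparison below simply assumes two-channel PCM at the stated word lengths and sampling rates:

```python
def stereo_data_rate_mbps(sample_rate_hz: int, bits: int) -> float:
    """Raw two-channel PCM data rate in megabits per second."""
    return sample_rate_hz * bits * 2 / 1e6

cd = stereo_data_rate_mbps(44_100, 16)       # CD: 44.1kHz / 16-bit
hrx = stereo_data_rate_mbps(176_400, 24)     # HRx: 176.4kHz / 24-bit

print(f"CD:  {cd:.2f} Mbit/s")                                   # ~1.41 Mbit/s
print(f"HRx: {hrx:.2f} Mbit/s  ({hrx / cd:.0f}x the CD rate)")   # ~8.47 Mbit/s
```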
Audio’s digital revolution seems to be finally
finding a sound
footing. In addition, significant advancements in signal processing
have made it possible to synthesize a reasonably good approximation
of the sound of a concert hall at home. Room-correction techniques
have become commonplace and reasonably priced.
Ultramodern recordings and playback systems can
deliver the full 96dB of dynamic range of a symphony orchestra to
your sweet
spot. Thousands of audio-related patents
have been granted to engineers, scientists, inventors, and
innovators all over the world. The industry is sustained by pioneers
driven relentlessly by passion, unselfish dedication, and a burning
desire to hear more.
They have made sterling contributions to the fabulous and fiercely
competitive domain affectionately and reverently known as high-end
audio.
I
am honored to have met and talked with the likes of Peter Perreaux,
Keith Johnson, Bob Carver, Ralf Ballmann, Marcel Riendeau, Michael
Pflaumer, Noel Lee, and Andrew Payor. They all have one thing in
common: They are driven, not by the almighty dollar, but by their
love of music and pursuit of excellence.
However, analog is still very much alive, as
evidenced in the recent and continuing resurgence of LPs, and the
turntables, tonearms, and cartridges to play them with. Very soon,
Andrew Payor intends to launch his Rockport Technologies System V
Sirius turntable, with its patented tangential-tracking, air-bearing
tonearm and colossal price tag. One thing is certain: Many members
of the high end’s so-called lunatic
fringe anxiously await the arrival of
this engineering/architectural masterpiece.
On the digital side, Berkeley Audio Design’s Alpha DAC is one of the
latest toys to have attracted the attention of reviewer and consumer
alike. Members of its design team, the inventors of HDCD, claim that
"Unequaled interpolation technology upsamples 44.1kHz CDs to produce
almost 176.4kHz quality and produces superb fidelity at all sampling
rates from 32kHz to 192kHz."
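Berkeley’s actual interpolation filters are proprietary, but the general idea of upsampling 44.1kHz material by a factor of four can be sketched: insert zeros between the original samples, then low-pass filter so the new samples fall on a smooth curve between the old ones. The short filter below is only a stand-in for illustration; real designs use far longer, carefully optimized filters.

```python
def upsample_4x(samples: list[float]) -> list[float]:
    """Zero-stuff by 4, then smooth with a short (toy) low-pass FIR filter."""
    # Step 1: insert three zeros after every original sample (44.1kHz -> 176.4kHz grid).
    stuffed = []
    for s in samples:
        stuffed.extend([s, 0.0, 0.0, 0.0])

    # Step 2: crude low-pass FIR; a real interpolator would use a long windowed-sinc filter.
    taps = [0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]
    gain = 4.0 / sum(taps)            # restore amplitude lost to zero-stuffing
    out = []
    for i in range(len(stuffed)):
        acc = 0.0
        for j, t in enumerate(taps):
            k = i - j + len(taps) // 2
            if 0 <= k < len(stuffed):
                acc += t * stuffed[k]
        out.append(acc * gain)
    return out

# The new samples trace a smooth ramp between the original values.
print(upsample_4x([0.0, 1.0, 0.0, -1.0])[:8])
```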
Conclusion
The output quality of high-end audio recordings is approaching a
threshold. The industry has evolved from its preferred format of
retrieving information stored in the grooves of long-playing vinyl
records using a phonographic pickup stylus. It is now beginning to
move on from music stored in the pits and lands of polycarbonate CDs
retrieved using a laser beam. Playback of high-resolution encoded
data files from hard disks running on computerized music servers is
now commonplace. What else will avid listeners request? Time will
tell. Perhaps Doug Sax’s famous slogan should be modified: "There’s
No Stopping Digital Madness."
Dedicated to my
grandson, Matthew Sandiford.
. . . Simeon Louis Sandiford
simeons@soundstagenetwork.com