The distortion of sound (and its real culprits)


The Distortion of Sound. The above documentary has seen over three million views on YouTube since its 2014 release. It was made in association with Harman Kardon as a way to raise awareness of the evils of compression, citing MP3s and streaming music as the main culprits.

Unfortunately, like all great pieces of propaganda, it offers a few grains of truth that are then exploited as a way to justify its highly dubious conclusion. Several of the statements made by the industry experts featured in this film are somewhat laughable.

Chris Ludwig, a chief engineer at Harman, describes MP3s as “dangerous”. Mixer/producer Andrew Scheps gives a description of how MP3s actually work that is clearly more psychotic than psycho-acoustic. Greg Timbers of JBL describes vinyl as audio nirvana, and then goes on to explain how MP3s remove all the sonic nuances from a recording.

The coup de grâce, however, is how the film demonstrates the deleterious effects of compression on digital music. If you don’t want to watch the whole thing, then start right here and watch the waveform go by.

Question: does the compressed version of Lianne La Havas’ sample recording sound worse because a) it has been shot through an MP3 algorithm or b) because of the overzealous application of dynamic range compression (DRC)?

We owe it to ourselves as audiophiles to talk about the word “compression” with a little more sophistication than the usual rhetoric of “MP3 bad, FLAC good”.


Let’s start with the word “compression” itself. In the video, it’s shorthand for “data compression,” a field of information theory that has been around for decades and has single-handedly revolutionized the way we store and retrieve data.

Data compression’s first and foremost goal is to reduce the size of the original signal. Information theorists classify data compression algorithms as being lossy or lossless, but did you know that the main difference between the two lies in how data is classified?

In the lossless world, data is reduced by analyzing its statistical redundancy and then exploiting that redundancy to shrink its size. Over on the lossy side of things, data may be classified as unnecessary for reconstructing the original signal, which incidentally is why lossy encoding schemes are not considered “bit-perfect”: some information is discarded during the encoding process.
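The lossless side is easy to see with an off-the-shelf codec. Here is a minimal Python sketch using the standard library’s zlib module (a DEFLATE implementation), chosen purely for illustration:

```python
import zlib

# A highly redundant message: lossless coders thrive on repetition.
original = b"the quick brown fox jumps over the lazy dog " * 100

compressed = zlib.compress(original)    # exploit statistical redundancy
restored = zlib.decompress(compressed)  # recover every single bit

print(len(original), "->", len(compressed))  # dramatic size reduction
print(restored == original)                  # True: bit-perfect round trip
```

No bits are lost in either direction; the savings come entirely from how predictable the input is.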

Although lossy encoding schemes throw away bits, it doesn’t necessarily follow that their output quality is worse than that of their lossless counterparts. Much depends on the type and application of the data we are trying to encode.

For example, let’s say our target data is text, and we want to encode a document of plain text in order to reduce its size to save on storage. Would we use a lossless or lossy encoding scheme?

You might answer ‘lossless’ because if random letters get tossed from the original document (thanks Scheps!) the message becomes incomprehensible. But what if we apply an encoding scheme that analyzes a piece of text, measures the whitespace between each word and the paragraph boundaries, and reduces their size by a smidge? Let’s call it the “Lazy English Major” encoding scheme. Guess what? LEM is lossy. And also guess what? Depending on how aggressively we set LEM to reduce whitespace, we may not even notice the textual claustrophobia it creates. For the majority of readers, LEM yields a 100% acceptable reproduction of the original signal despite being lossy.
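LEM is of course made up, but a toy version is easy to sketch in Python: collapse runs of spaces and excess blank lines while keeping every word intact.

```python
import re

def lem_encode(text: str) -> str:
    """Toy 'Lazy English Major' codec: squeeze whitespace, keep the words.

    Lossy: the original spacing can never be recovered, but no letters
    are lost, so the message itself survives.
    """
    text = re.sub(r"\n{3,}", "\n\n", text)  # trim excess blank lines
    return re.sub(r"[ \t]+", " ", text)     # collapse runs of spaces/tabs

doc = "Compression    is   not\n\n\n\n\none    thing."
print(lem_encode(doc))                   # readable, if claustrophobic
print(len(lem_encode(doc)) < len(doc))   # True: it really is smaller
```

Decoding is impossible by design; the scheme bets that nobody will miss what it threw away.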

Side note: I’d wager that if this LEM codec does indeed take off, audiophiles will spend countless hours online arguing over how LEM documents are far inferior to their lossless counterparts and that if anyone enjoys reading a LEM document, they are clearly missing out on all the hidden beauty that whitespace has to offer. Count on it.

Audiophiles might not find text data compression schemes all that exciting. How about audio? If we encode a piece of audio, would we rather it be lossy or lossless? As an upstanding audiophile, you holler “lossless”!

Not so fast. What if the audio we are encoding is cellphone speech? Would we really want our voice to be encoded to DSD before being transmitted over the air? If you do, then you probably also prefer all of your voicemails to be pressed directly to vinyl once a month. Obviously, a lossy encoding scheme makes a lot more sense since the spectral content of speech is extremely limited and the medium in which it gets transported is generally unreliable. Once again, lossy FTW.
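The arithmetic makes the case by itself. Using the standard published figures (DSD64 runs at a 2.8224 MHz sample rate with 1 bit per sample per channel; the AMR narrowband speech codec’s top mode is 12.2 kbps):

```python
# Back-of-the-envelope bitrates: why nobody phones home in DSD.
dsd64_bps = 2_822_400   # DSD64: 2.8224 MHz sampling x 1 bit, per channel
amr_nb_bps = 12_200     # AMR-NB speech codec at its 12.2 kbps mode

ratio = dsd64_bps / amr_nb_bps
print(f"DSD64 needs ~{ratio:.0f}x the bandwidth of a phone codec")
```

Over two hundred times the bandwidth to carry a voice that tops out around 4 kHz anyway.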


But what about music? Would we use a lossy or lossless encoding scheme to store and play back our favorite piece of music?

Any audiophile worth his/her salt will opt for a lossless encoding scheme like FLAC (or ALAC). But which sounds worse, Metallica’s Death Magnetic as a low-bitrate MP3 or in FLAC? What about the Red Hot Chili Peppers’ most recent release, I’m With You? Heck, feel free to compare the 24bit/96kHz high-res version to an equivalent Spotify stream.

I’ll help you out here a little: both albums have been subjected to significant dynamic range (DR) compression. Usually an artistic decision, DR compression is where the difference between the quietest and loudest sounds on a recording has been significantly reduced. The smaller the difference, the lower the DR score. Death Magnetic scores DR3 whilst I’m With You returns DR4.
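To make those DR scores a little more concrete: the published numbers come from the TT Dynamic Range Meter, which analyzes windowed blocks per channel. The function below is only a crude stand-in (crest factor: peak level over RMS level, in dB), but it shows the idea of what gets measured:

```python
import math

def crude_dr(samples):
    """Peak-to-RMS ratio in dB: a rough proxy for dynamic range."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A quiet bed with loud transients vs. a wall of near-full-scale square wave.
dynamic = [0.05 * math.sin(i / 5) for i in range(1000)] + [0.9, -0.9] * 50
crushed = [0.9 if math.sin(i / 5) >= 0 else -0.9 for i in range(1000)]

print(round(crude_dr(dynamic), 1))  # big gap between peaks and the average
print(round(crude_dr(crushed), 1))  # ~0 dB: peak and average are identical
```

When every sample sits near full scale, peak and average collapse together and the score heads toward zero, which is exactly what DR3 and DR4 are telling you.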

For all of you folks out there who have patiently been waiting to remind me that you can indeed tell the difference between MP3s and FLAC (thank you very much), hold that thought just a little longer.

Assuming a quiet environment and thousands of dollars of audiophile gear, is lossy data compression even audible with so much dynamic range compression applied to the recording itself? Feel free to start your race to the bottom in the comments section.

The sad truth is that no matter what format or streaming provider you choose, both of these records will come out sounding very poor indeed. The reason? High fidelity was not a chief concern during the recording process; volume was.


For these two albums, any gains in fidelity we might’ve otherwise heard from a strict adherence to a “bit-perfect” lossless encoding scheme are all but lost. Forever. No format, lossless or otherwise, is going to change that. Which raises the question: is dynamic range compression lossy or lossless?

Moreover, is the process of recording music lossy or lossless?

Almost all recordings see some sort of processing applied to them, whether that equates to heavy amounts of DRC or throwing a mix through a number of digital processors. Depending on what the engineer is trying to accomplish, not all bits will make it to the final cut.

Instead of having our data compression algorithm make the call on what bits are necessary and what bits aren’t, an engineer is now making those choices. Manually. The recording and mixing process, in the loosest sense, is lossy.

An album’s fidelity has a lot more to do with the decisions made by recording and mastering engineers than with any psychoacoustic or linear predictive coding codec used to carry the recording from the studio to our hi-fi system or smartphone’s D/A converter. Don’t hate the player, hate the game!

Hopefully by now, you have a slightly more refined view of the whole lossy versus lossless debate. Yes, I prefer FLAC. Yes, you should archive all your data in FLAC. And yes, most of the time, it doesn’t even matter that much since, in the end, a lot of those bits were ruined with respect to high fidelity before you ever had a crack at them. That’s why I believe we audiophiles spend way too much time infighting over formats and delivery mechanisms when our real focus should be on the true culprit behind the degradation of fidelity in digital music: how those bits were made in the first place. Think about it.

You can read more of Alex’s thoughts on audio over at his own Metal-Fi.

Written by Alex M-Fi


Alex is co-founder and Chief Editor of Metal-Fi, a website dedicated to the head-banging audiophile. His blood type is Type O Negative. Alex derives his income from writing software.


  1. Can’t really argue any of this. That’s why for my mobile phone listening I use MP3s. Can’t appreciate the difference between them and lossless outside in an urban environment.

  2. I’ve struggled with this and the HiRes movement. Is there any easy way to know how an album was recorded so that, as a consumer, I can decide whether to buy an album at 16/44, 24/96, etc.? I’ve heard some at 24/96 that sounded no better than their 16/44 counterparts.

  3. Ben: If you subscribe to Qobuz or Tidal Hifi for 16/44.1 streaming and run their stream through player software like PureMusic, you can set the player’s display to show recorded dynamic range. This brings up a dancing meter that shows you either 20 – 30dB of action, between quiet and loud passages; or hardly any because everything happens between -8dB and 00dB (full blast for a measly 8dB or less). It’s fair to call the latter dynamically strangulated.

    With experience, one also learns which labels and artists tend to care more about the sound and are less heavy-handed in post-production. Jacques Loussier, Thierry Titi Robin, Renaud Garcia-Fons and artists of that calibre nearly always are recorded well because that’s what they insist on and pay for. Whether they’re the type of artist you want to listen to… that’s another matter entirely. But a quick Qobuz, Tidal & Co. check with a PureMusic type software player will certainly let you know whether an album you’re considering buying is congealed loudness muck or has some proper recorded dynamic range to offer.

  4. at the risk of sounding like a total prick who took the author’s bait, I’d still note that white space in visual representations is equivalent to silence in audio. Therefore, the example given in the opinion demonstrates exactly the case of ‘dynamic compression’ in its visual form. Which is why ‘videophiles’ who design fonts, text layouts or good displays for either computers or electronic book readers DO care about such things and consider them to be essential for the final product’s quality. Rightly so, in my opinion.
    In general, this touches upon an interesting topic. How much distortion in our thinking and talking about audio do we introduce by using analogies with visual perception? At some point, a great semiotician – and after all, both video and audio artefacts are symbolic systems – Roman Jakobson wrote an interesting essay about how categorially different the two are.

  5. Thank you guys for bringing attention to this problem. As a Metallica fan I’d like to poke Lars Ulrich in the eye every time I hear him defend the production of DM. I’d like him to offer a coherent explanation as to why the CD version of DM has a DR of 3, but the Guitar Hero version had a DR value of 12. I know the answer, of course, and it has to do with how the majority of music is consumed by the buying public. If an artist or label perceives that the album is going to end up on an iPod or other like player, they will heavily compress it so that it doesn’t sound any less loud than whatever else might be there. The same motives are now in place with streaming services. This is why if you look up most albums the vinyl version has a higher DR than the digital version. I know that the physics of vinyl place limits on how much DRC can be applied, but I also believe that the expected form of consumption plays a role. In any event, this is a big problem for people like me who were more than happy with the Redbook format.

    • Not to totally thread-crap here, but we plan to cover the new Metallica remasters in detail. Stay tuned and thanks for sharing!

    • You need to be careful when looking at DR measurements done on vinyl that’s been recorded to PC. Vinyl has a number of limitations on the *peak* volume level, in terms of the lathe being able to cut the initial master disk for reproduction, the physical limitations of the grooves themselves so that they don’t cause the needle to jump, and the fact that the louder a record is, the shorter each side has to be.

      So while vinyl can’t simply ride maximum volume like CDs and all other digital files can, there is *NO minimum* of dynamic range that can be cut to vinyl. If you have a DR3 album and you want to cut that unchanged, you can absolutely do that. You just can’t cut it at 0dBFS (the loudest possible volume). All you have to do is drop the overall level, which doesn’t affect DR in any way. This is basically the same thing as turning your volume knob down. The sound is quieter, but DR is completely unaffected. This is how vinyl cutting engineers routinely work with the terrible, crushed CD masters that they are given to cut to vinyl.

      So why does a DR5 CD measure as DR10 on vinyl when it’s obvious that the same master was used for both versions? The answer is that recording a vinyl record on to a computer requires an A-to-D process, and the resulting digital waveform created from this recording will appear different, and usually much more dynamic, than the equivalent factory-pressed CD. The sound you actually hear will be the same limited, crushed sound as the CD; the waveform just *looks* more dynamic. Unfortunately the DR meter software doesn’t have ears; all it can do is measure the waveform and give a falsely inflated result.

      The Guitar Hero version of Death Magnetic was made from the raw tracks for each instrument, before they were even mixed for the album. According to the mastering engineers who worked on that album, the mixes they were given were already completely crushed dynamically, so a simple remastering wouldn’t fix anything because the mixes were already no good. The only way to fix the album would be a complete remix from the original multi-tracks, which is basically what the Guitar Hero version is.

      As to why modern albums are made that way, the usual excuses are either that louder albums sell better (they don’t) or that louder albums sound better on low end audio equipment (they don’t) or that louder albums are better for people listening in noisy environments like buses and subways (they aren’t). While it’s certainly possible that a lot of engineers and executives still believe that nonsense, I think in a lot of cases the bands, engineers, and labels just want their music to sound like everybody else’s. This is known as being “competitive.” Since everybody else’s music is already stupidly loud, your music has to be the same way if you want to keep up, and the self perpetuating cycle of Loudness Wars continues.

      • Thanks for the info. 90% of my vinyl collection are AAA jazz reissues. There wasn’t much info available on the Metallica remasters, so I just took a chance and bought the vinyl.

    • Oh yes – that’s a good point! That petition is getting quite a bit of media traction right now. Well worth signing.

  6. Alex, great article.
    In your last paragraph you narrow it down to digital music. DRC has been used forever. Actually, it is mandatory for most types of content to be released on analog formats like vinyl or tape without noise reduction.
    I would also add that in the analog domain, each generation or stage of the process, from recording to mixing to mastering to final release, is inevitably a lossy process.
    Best regards.

  7. Tiago, that is very true, no doubt about it. It’s just that the vast majority of records produced are in the digital domain.

  8. Alex, thanks for your article. It’s very interesting and edifying.
    Please allow me, however, to note the difference between the non-bit-perfectness introduced on purpose (i.e., by sound engineers during mixing, mastering, etc., as agreed with the artist/producer) and the damage all the lossy media formats cause in order to reduce the original amount of information stored/sent. One can enjoy (or not, like me) DRC and/or other sound effects, but we should keep in mind these were meant to be delivered to the listener. Further discarding of information (MP3, AAC, DTS, AC3, etc.) can only make the listening experience even worse by adding harmonic components, quantization noise, etc., non-existent in the source.