
The [COMPRESSED] History of Mastering

Artwork provided by Michael Zhang.

This episode was written and produced by Casey Emmerling.

Join us on a musical journey from the Golden Age of analog mastering to the digital methods of today. We’ll find out why the music industry became obsessed with loudness, and learn how the digital era transformed the way that music sounds. Featuring Greg Milner and Ian Shepherd.

MUSIC FEATURED IN THIS EPISODE

Isn't it Strange by Spirit City
Stand Up by Soldier Story
Lonely Light Instrumental by Andrew Judah
Who We Are by Chad Lawson
No Limits Instrumental by Royal Deluxe
Crush by Makeup and Vanity Set
Rocket Instrumental by Royal Deluxe
Light Blue by UTAH
Love is Ours Instrumental by Dansu
Shake This Feeling Instrumental by Kaptan
Wrongthink by Watermark High
Rocket Instrumental by Johnny Stimson
Lola Instrumental by Riley and the Roxies
Quail and Robot Convo by Sound of Picture

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, and hosted by Dallas Taylor.

Follow the show on Twitter & Facebook.

Become a monthly contributor at 20k.org/donate.

If you know what this week's mystery sound is, tell us at mystery.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out Ian Shepherd’s podcast The Mastering Show.

Check out Greg Milner’s book, Perfecting Sound Forever.

Consolidate your credit card debt today and get an additional interest rate discount at lightstream.com/20k.

Go to forhims.com/20k for your $5 complete hair kit.

Check out SONOS at sonos.com.


You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

[music in]

Even those of us who know next to nothing about the music industry probably have some idea what mixing is. For instance, we all know mixing involves some sort of leveling—like how loud [SFX] or quiet [SFX] you want something to be. It also involves panning—whether you want an instrument or vocal part to be on the left [SFX], the right [SFX], or somewhere in the middle. And while you might use some effects while recording, a lot of other effects get added during the mixing phase. Maybe you want to add some reverb to the vocals [SFX], double-track to give it a little more oomph [SFX], or autotune those sweet vocals [SFX].

While working on a song, a mixing engineer will make a ton of decisions like these, both big and small. But after being mixed, songs go through a whole other process before they get released. This stage is much harder to explain, and while it’s definitely more subtle than mixing, it still ends up having a huge impact on the final sound. This process is called “mastering,” and even inside the music industry, it’s considered something of a dark art—something that only a small group of elite specialists know how to do.

[music out]

Greg: Mastering is the final step in making a commercial recording.

That’s Greg Milner. He’s written about music and technology for publications like Slate, Wired, Rolling Stone and The New York Times.

Greg: It's taking the fully mixed recording and essentially making it absolutely pristine and correct to actually make it into something that people will listen to or buy. In the old days, before digital technology, the mastering engineer was the person who would literally make the physical master from what the recordings would be stamped from.

Ian: Back in the day that would've been a vinyl master, then cassette, then CD, and these days for digital files.

This is Ian Shepherd.

Ian: I'm a mastering engineer. I have a podcast called The Mastering Show, and I run the Production Advice website.

Ian says mastering isn’t just about preparing music for public consumption.

Ian: It's also an opportunity to get the music to sound the best that it can be.

Ian: If it's a hard rock song [music in] maybe you want to bring even more aggression and density into the sound [music out]. Or, if it's a gentle ballad, [music in] maybe you want a lovely, soft, sweet, open sound… [music fade out]. So it's very much a collaboration between you and the artist.

[music in]

So how is mastering different from mixing?

Greg: Mixing is when you take all the individual tracks, the separate tracks that go into making a recording and you mix them together. I like to visualize it as if you had a lot of jars of different colored sand and you poured them all into one big jar, and you wanted to control how much of each color was there. You might pour a little of one, more of another, into the big jar, but then the sand would be in that jar permanently. You couldn't actually extract the different colors, so that's a finished recording. That's mixing. And then mastering is maybe taking that jar of sand and doing little things to it, maybe moving stuff around here and there, but it's already mixed. You're not doing any mixing when you're mastering. You're working with a fully mixed recording.

Ian: The other analogy is that mastering is like Photoshop for audio. So, we've all taken photographs, you know, on a mobile phone or a camera, and then maybe you have one that you actually want to print out or put on the wall. And you look at it, and actually you suddenly realize it's not quite as good as you thought it was. So, maybe you want to tweak the color balance, or enhance the contrast and the brightness, or maybe take out some red eye from a flash.

Ian: Mastering is the same thing for audio. So, you might adjust the equalization, which is the overall amount of bass [SFX] and treble [SFX] and mid range [SFX] in the sound, to get the tonal balance as good as it can be. You might want to adjust the balance of loudness and dynamics, which is like adjusting the contrast and the brightness in a picture. You might want to take out clicks [SFX] or thumps [SFX] or hiss [SFX] or buzz [SFX], and that's a bit like fixing red eye in a photograph.

[music out]

Ultimately, the mastering engineer is responsible for making an album sound cohesive, rather than just a random collection of songs.

Ian: Often, if you have a collection of recordings maybe from a bunch of different studios, and over quite a long length of time, it's a chance to balance those against each other, optimize the levels, the overall sound, to get the best possible results.

That includes deciding whether songs have gaps of silence between them, or whether they flow naturally into each other.

Ian: The final thing about mastering is to actually choose all of the starts and ends of the songs, and put them in sequence, and choose the gaps between them. And if you widen out the Photoshop analogy a little bit, that's maybe like doing a presentation of your images, maybe laying them out in a photo book or even a little exhibition, you know, and saying... what frame am I going to put this in? How am I going to light this? Should this be large, should it be small? All those kind of things.

Let’s compare the way a song sounds before and after it’s been mastered. Here’s a clip of the song “Closer” by Nine Inch Nails, before mastering:

[Music clip: Nine Inch Nails Closer Unmastered]

And here’s the mastered version:

[Music clip: Nine Inch Nails Closer Mastered]

Now, they both sound great, but the mastered version sounds fuller, clearer, and noticeably louder. It’s the same song, just...a little better. This shows how subtle the effects of mastering can be.

But mastering engineers don’t just work on new music. It’s also common for older albums to get remastered using newer technology.

Ian: The advantage is quite often you can go back to the original master tapes, you can make a clean transfer with the best possible equipment.

Ian: And the remastering is also an opportunity to maybe correct some faults.

For instance, Ian was once hired to restore and remaster a 1967 song called “Hush,” by the British songwriter Kris Ife. You may know Deep Purple’s version of the song, from a year later.

[Music clip: “Hush”]

Unfortunately, the original master tape of the track had been lost, so all Ian had to work with was an old vinyl 45. As you’ll hear, the record was in pretty bad shape. But through the magic of mastering, Ian managed to cut out the hiss and crackle. He also tweaked the EQ to make the song sound warmer and punchier. Here’s the original:

[Music clip: Remaster (first section)]

And here’s Ian’s remaster:

[Music clip: Remaster (second section)]

Ian: Sometimes, what was on the vinyl didn't sound as good as what was on the master tapes. And remastering is an opportunity to let people hear that. So that’s the ideal.

But the most controversial part of mastering has to do with loudness.

Ian: Part of the process of mastering is to get a great balance between the dynamics of the music and the loudness. So, the dynamics mean contrast in the music. So, in an orchestral score, you have pianissimo for the quietest moments [music in] and fortissimo for the loudest moments [music up]. And the same thing applies to a rock song [music in], for example. You want the introduction to be quiet and gentle, maybe, and then the verse and the chorus to get louder [music up], and you want the screaming guitar solo to really lift up in level to have the right emotional impact [music up].

[music out]

The natural difference between loud and soft sounds in music is referred to as dynamic range. The word “loudness” has a simpler definition: it works just like your volume knob. A mastering engineer will adjust the overall loudness of each song so they all play nicely together as an album, and you don’t have to reach for the volume knob on your sound system.
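To put rough numbers on those two ideas, here’s a minimal Python sketch using a made-up sine-wave “song.” Real loudness meters follow standards like ITU-R BS.1770, so treat this as an illustration of peak level, average level, and the gap between them, not a real meter.

```python
import numpy as np

def db(x):
    # Convert a linear amplitude to decibels.
    return 20 * np.log10(x)

# A toy "song": a quiet intro followed by a loud chorus.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
quiet = 0.05 * np.sin(2 * np.pi * 220 * t)
loud = 0.8 * np.sin(2 * np.pi * 220 * t)
song = np.concatenate([quiet, loud])

peak = np.max(np.abs(song))        # the loudest single sample
rms = np.sqrt(np.mean(song ** 2))  # the average level over time

print(f"peak: {db(peak):.1f} dBFS, average: {db(rms):.1f} dBFS")
print(f"gap between them: {db(peak / rms):.1f} dB")  # a crude dynamic range

# "Loudness" in the volume-knob sense is just a gain applied to everything:
louder = song * 10 ** (3 / 20)  # the whole song, turned up by 3 dB
```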

In the ‘70s and ‘80s, when vinyl was king and recording was all analog, songs could only be as loud as the equipment would allow.

The machines that physically cut music into vinyl records were especially fragile.

Greg: In an analog system... you're really limited.

Greg: So I think their mindset was a little bit different in the '70s and '80s. The mindset was that there is this limit beyond which we really can't go so we have to be very, very careful about the way we master these recordings.

As a result, music from this period tends to have a very high dynamic range. So, there’s a lot of contrast between the quietest parts of a song and the loudest.

Greg: So many things back then had a great dynamic range. You know, you listen to Abbey Road for example, “Here Comes the Sun.” If you really listen closely you can really hear the range.

Here’s the quietest part of “Here Comes the Sun:”

[Music clip: Here Comes the Sun (intro)]

And here’s the loudest part:

[Music clip: Here Comes the Sun (loud)]

Just to be clear, we didn’t adjust the volume at all between the two clips, that’s the exact dynamic range from the album.

Greg: But you know what? If you listen to a Black Sabbath song that came out about a year later, a lot of those actually have an even greater dynamic range.

The song “Black Sabbath,” from Black Sabbath’s first album, Black Sabbath, shows off its impressive dynamic range within the first minute. At the start, it’s extremely subdued, with nothing but the sounds of rainfall and church bells.

[Music clip: Black Sabbath (intro)]

Suddenly, the song erupts into a monstrous guitar riff.

[Music clip: Black Sabbath (main riff)]

The energy peaks in the final seconds.

[Music clip: Black Sabbath (end riff)]

If you grew up on...

[Music clip: Black Sabbath (actual end riff)]

...I always forget about that. Anyway, if you grew up on classic rock radio, then you’ve heard these songs many times, but you may never have realized how they were affected by mastering.

This also applies to all genres of music, from hip hop to classical. Nearly all music gets mastered before it is released.

If you’re a classic rock fan, you’re probably sick of the song “Stairway to Heaven,” but there’s no denying that the song is a powerful example of dynamic range.

[Music clip: Stairway (intro)]

Greg: There's a reason, I think, that “Stairway to Heaven” was so popular. There's several reasons, but one thing is it just has striking dynamic range…

[Music clip: Stairway (drum verse)]

Greg: You can tell by how rich the drums often sound. Drums and vocals are, I think, the things that benefit most from really strong dynamic range.

[Music clip: Stairway (outro)]

From start to finish, that’s a huge change. We’re not just talking about an increase in energy, but in actual volume. A lot of the most beloved music from this era is just like this.

Ian: Pink Floyd, Wish You Were Here is a classic audiophile album with amazing dynamics.

[Music clip: Wish You Were Here].

Greg: Then of course the Eagles, love 'em or hate 'em, those early Eagles records had really stunning dynamic range, especially when they were mastered on to the Greatest Hits album that became the biggest selling album of all time. There's just a spaciousness to those records.

Like in the song “Witchy Woman.”

[Music clip: Witchy Woman]

Greg: It was really kind of an embarrassment of riches in a way, but you could almost pick and choose, and chances are you'd be listening to something with strong dynamic range.

[music in]

But starting in the late ‘80s, the spread of digital technology caused seismic shifts in the music industry. For one thing, songs could be made louder than ever.

Ian: The new digital technology just allowed people to go even further, push the loudness higher and higher.

One of the main ways they did this was through dynamic range compression. Essentially, this type of compression clamps down the loudest parts of a track so they’re closer to the quiet parts, and once everything is evened out, you can boost the whole thing up. That way, the song stays closer to a maximum level the whole time, with less dynamic range from second to second, or minute to minute.
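If you’re curious what that clamping looks like in practice, here’s a deliberately simplified Python sketch: a hard-knee compressor with a threshold, a ratio, and makeup gain. Real compressors also smooth their gain changes with attack and release times, so this is an illustration of the idea, not studio-grade code.

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0, makeup_gain=1.6):
    out = signal.copy()
    over = np.abs(out) > threshold
    # Above the threshold, only 1/ratio of the excess level gets through.
    excess = np.abs(out[over]) - threshold
    out[over] = np.sign(out[over]) * (threshold + excess / ratio)
    return out * makeup_gain  # then boost everything back up

# A toy "song" with a quiet verse and a loud chorus:
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
verse = 0.2 * np.sin(2 * np.pi * 110 * t)
chorus = 0.9 * np.sin(2 * np.pi * 110 * t)
song = np.concatenate([verse, chorus])

squashed = compress(song)
# The chorus barely changes (peaks 0.9 -> 0.96), but the verse jumps from
# 0.2 to 0.32: less dynamic range, higher overall loudness.
```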

Of course, compressors weren’t invented in the ‘80s.

[music out]

Greg: Compression has been something that's been around at least since the advent of multitrack recording.

Ian: In fact, the reason that The Beatles got Abbey Road to buy the first Fairchild compressor, was to try and compete in terms of loudness with the music that was coming out of Motown.

[Music clip: You Can't Hurry Love]

Like this song, “You Can’t Hurry Love” by The Supremes.

[Music clip continued: You Can't Hurry Love]

But while analog compression had been around for decades, digital compression was a whole new ballgame.

Greg: With the advent of the compact disc it became easier to employ very, very harsh dynamic range compression to make things sound louder.

Ian: But there's also a limit in digital formats as well. There's this ceiling, basically, above which you can't go any higher because, at the end of the day, there is a number that is the largest number you can store in the digital format, and there are no numbers larger than that.

In other words, in a digital format, we can now make the volume max out riiiight before its absolute maximum possible level. With old analog tech, that ceiling was much fuzzier, so mastering engineers had to be much more conservative in their approach.
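The “largest number” Ian describes is easy to see in code. In 16-bit audio, for example, every sample is an integer between -32768 and 32767; push a signal past that ceiling and the peaks simply get clipped flat. A quick sketch:

```python
import numpy as np

CEILING = 32767  # the largest value a 16-bit sample can hold

samples = np.array([10000, 25000, 30000, -28000])
boosted = samples * 1.5  # try to make everything 1.5x louder
clipped = np.clip(boosted, -32768, CEILING).astype(np.int16)

print(boosted)  # [ 15000.  37500.  45000. -42000.]
print(clipped)  # [ 15000  32767  32767 -32768]  the peaks are flattened
```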

[music in]

Ian: I have a bit of a crazy analogy to explain this, which is, if you imagine that the music is a person on a trampoline. If they're in a big sports hall with high ceilings, they can bounce as high as they like, and there's no restriction. But if you then think about raising the floor of the room up towards the ceiling, for a while that's no problem, there's plenty of headroom for the person bouncing on the trampoline, or for the music. But as you get closer and closer to the ceiling, the person bouncing is going to have to maybe start ducking their head or curling over, and twisting and turning to avoid crashing into the ceiling. And exactly the same thing happens with music. For a while, you can lift the loudness up with no problems. But as you get towards that digital ceiling, the highest level that can be recorded, you have to start processing the audio, squashing the audio down into a smaller and smaller space to make it fit.

Ian: You can do that quite gently, which can be beneficial and help things sound glued together and dense and powerful. But if you go too far, it can dull things down, and they start to sound lifeless and weak.

And over the last minute or so, we’ve been slowly compressing Ian’s voice, my voice, and the music. So right now, what you’re hearing is super compressed. Can you tell? [music plays] ...and here it is back with much lighter compression... Ahhhhh [music clip without compression].

So why was the music industry so obsessed with loudness? If hyper-compression can degrade the sound quality of a song, why would an artist ever want it? And how did all of this affect the future of the music industry? That’s coming up, after the break.

[music out]

[MIDROLL]

[music in]

In the analog era of recorded music, songs were mastered to be very dynamic. This meant that there could be a lot of contrast between the quietest parts of a song, and the loudest. But once digital technology hit the scene, mastering engineers could make songs louder than ever before. To do so, they used extreme compression, which boosts the volume but reduces dynamic range. So why were artists so eager to make their songs louder?

[music out]

Ian: If I play you two identical pieces of audio, but one of them is just a fraction louder than the other, they will actually sound different to you, even though they're the same and the only difference is the loudness. So the louder one might sound like it's got a little bit more bass, a little bit more treble, and on the whole, people will tend to say that they think the louder one sounds better.

So let’s try it. Here’s a clip from the song “Juice” by Lizzo. Which one sounds better to you? This:

[Music clip: Juice (quiet)]

Or this?:

[Music clip: Juice (loud)]

You probably picked the second one, and if so, you’re not alone.

Ian: Even though the audio is identical…

Greg: Their initial reaction is often going to be, "Oh, the loud one sounds better. It's just fuller. It's, you know, coming out of the speakers."

Ian: And that means that, if you're producing any kind of audio where you want to catch people's attention, there's a benefit to being loud.

And music isn’t the only place where some people think louder is better. There’s one industry in particular where getting people’s attention matters more than anything else.

[SFX clip: Billy Mays: Hi, Billy Mays here for the Grip and Lift, when you need some extra help for those outdoor chores, it’s a must have!]

That’s right: commercials. And just like music, the volume of commercials used to be limited by analog equipment.

[SFX clip: Bounty Ad (60s): That’s why I switched to Bounty paper towels. They absorb faster than any other leading brand. Bounty is the quicker picker upper.]

But as technology improved, commercials kept getting louder and louder.

[SFX clip: Bounty Ad (00s): The quilted quicker picker upper, Bounty!]

Eventually, things got so bad that Congress had to be the noise police for the entire country. In 2010, Congress passed the CALM Act, which stands for Commercial Advertisement Loudness Mitigation. Under this law, ads are prohibited from being broadcast louder than TV shows.

It basically works by measuring loudness over time. TV shows are longer, so they have time for peaks and valleys in the volume. But TV ads are often just a block of maximum loudness for 30 seconds, so they can still feel a lot louder even though they’re technically at the same average level.

Greg: It's still at the same level, it's just that it's hitting those maximum peaks much more often than the TV show before it.
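Here’s a toy illustration of Greg’s point, with made-up numbers and naive averaging (the actual rules rely on the gated loudness measurement defined in ITU-R BS.1770):

```python
import numpy as np

# Made-up short-term loudness readings, one per second, in dB.
tv_show = np.array([-30, -28, -18, -32, -16, -29, -27, -20])  # peaks and valleys
ad = np.full(8, -25.0)                                        # a constant block

print(tv_show.mean(), ad.mean())  # both -25.0: the same average level...
# ...but the show dips as low as -32 between its peaks, while the ad never
# lets up, so the ad can still feel louder even though it passes the rule.
```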

Ian: We've actually seen a similar thing happen in music, where people have been using loudness to try and get music to stand out as well. On record, originally, and on the radio, and these days, on CD and online... and it's called the Loudness War, because it's basically a sonic arms race. Because people know that if they can be a little bit louder, maybe they'll stand out a little bit more, or sound a little bit better.

Greg: Imagine a jukebox in a crowded bar [SFX]. It’s set at some kind of master volume. If the song that comes before yours has been mastered to sound louder, naturally that’s where the volume is going to be set. [Music clip: Love Is Ours - Instrumental - by Dansu] When your song comes on, it’s gonna sound kind of weak and wimpy by comparison. Maybe you won’t even be able to hear it over the crowd noise. [Music clip: Shake This Feeling - Instrumental - by Kaptan] There was this thought that music really had to just jump out of the speakers and really attack you.

Greg: What's the Story Morning Glory by Oasis: really, really, really aggressively compressed…

[Music clip: Morning Glory - by Oasis]

But on the other hand…

Ian: By modern standards, Nevermind by Nirvana is quite a quiet record. But nobody ever complained that it didn’t sound loud enough, because they just crank it up.

[Music clip: Lithium by Nirvana]

Greg: And that's the thing. We have plenty of volume to go around. All we need to do with records if they're not as loud as we want is just turn up the volume.

Still, Nirvana’s Nevermind ended up being something of an outlier, as more and more artists opted for a loud, ultra-compressed sound.

Greg: While this was all going on, the same thing was happening in radio. Radio stations were facing the same sorts of problems. You want your radio station to pop out of the speakers, so that someone who turns to it on the dial is less likely to move on to someone else’s. So, you had this Loudness War in radio and this Loudness War in recordings, and it just combined to be this really crazy morass of loudness and compression.

Ian: Over time, the loudness levels just creep up, and creep up, and creep up.

By the end of the millennium, the Loudness War had spiraled out of control. Music was being hyper-compressed by mastering engineers, and again by radio broadcasters. Just when it seemed like things couldn’t get any worse, mp3s appeared, and music got compressed even more. This time, it was through a process called “data compression.”

Unlike dynamic range compression, which is applied while mixing and mastering, data compression happens when a recording is encoded from one digital format to another, like when you used to rip a CD onto a computer.

[SFX: Vintage CD tray SFX]

So let’s rewind to, say, 2001 [Show Me the Meaning of Being Lonely - dream sequence-y], and you want to get the music from your new Backstreet Boys CD onto your computer, and then put it on your mp3 player, or, most likely, share it on Napster. Of course, you shouldn’t be uploading other people’s music to the internet, but it’s 2001, and you don’t know that yet.

So you open a program that turns CDs into mp3s. But you probably don’t pay much attention to the settings. [SFX and Music out] And something most of us don’t realize is that when you turn those CDs into a bunch of mp3 files, you’re throwing away a huge amount of the actual sound of the music through data compression.

Greg: When MP3s came on the scene, they figured out that you could apply algorithms that would take out a huge amount of the music, and I'm talking like a gigantic amount of the music, because at any given moment there are certain frequencies that our ears are not going to hear because they're being overwhelmed by other frequencies.

[music in]

Ian: So I actually find it pretty impressive that lossy data compression works at all. When you think that often as much as 90% of the information is being discarded in order to get the file size down, it’s amazing that they sound as good as they do.
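The back-of-the-envelope arithmetic behind that figure, assuming uncompressed CD-quality audio and a once-typical 128 kbps mp3:

```python
# Bits per second of uncompressed CD audio vs. a 128 kbps mp3.
cd_bitrate = 44_100 * 16 * 2   # sample rate x bit depth x stereo channels
mp3_bitrate = 128_000          # a common mp3 encoding rate

print(cd_bitrate)                     # 1411200 bits per second
print(1 - mp3_bitrate / cd_bitrate)   # about 0.909, i.e. roughly 91% discarded
```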

At higher-quality settings, most people probably won’t notice any loss in sound quality. But when you compress the file down enough, the sound really starts to suffer.

Ian: So what you tend to get back has similar tonal balance to the original, you can hear all of the instruments, it still sounds like the same piece of music. But when you do a direct comparison, you’ll often find, if you’re listening in stereo, what used to sound wide and spacious and lush collapses down into the center of the stereo image. You get much less of that sense of space and depth, and everything sounds a bit claustrophobic, a bit constrained… And the other thing that you hear as the data rate goes down is extra mulch, to use a technical term. It’s just this kind of squelchy, scrunchy, slightly distorted quality to the sound.

We’ve actually been gradually compressing the data of this audio over the last minute or so. Here’s how it sounded when we started, [Back to normal, high bitrate] and here’s where we ended up [Back to low bitrate]. It’s one of those things that if you don’t know what’s happening, you can’t really pick it out. But when you compare the two, you can definitely hear the difference.

[music out]

Ian: It probably won’t leap out at you, but once you start to hear it, it’s quite distinctive. For me, it just makes things sound duller, less interesting, less involving. I’m less likely to be sucked into a recording, and lose myself in it. It’s much less likely to give me goosebumps.

[music in]

Data compression in audio is still a big issue today. When you stream music, or listen to a podcast, the audio file gets encoded down pretty heavily to save bandwidth. This does make sense up to a point, since higher-quality files do take longer to buffer. And of course, a lot of us pay by the gigabyte for our mobile data. On the other hand, internet speeds are faster than ever these days, and unlimited data plans are pretty common. You can stream 4K video from YouTube and Netflix, so why hasn’t audio caught up?

Unfortunately, audio still often gets treated like a second-class citizen compared to video, and the bar for what’s considered acceptable is significantly lower. Between over-compression at the mastering stage, and over-compression at the encoding stage, most of us have to put up with subpar sound all the time, whether we realize it or not.

[music out]

Ian: It’s quite interesting; because it’s such a subtle effect, if you didn’t do the comparison, you might never notice it. But I think it has quite a profound effect on the way that we feel when we listen to the music, and the way that we’re likely to keep on listening, or switch it off and do something else instead.

[music in]

Here at Twenty Thousand Hertz, we care about sound quality, and we think you do too. If you want to make the music you hear sound a little better, go into the settings of your music streaming app, and turn on “High Quality Streaming.” It’s not going to fix all of the issues we’ve talked about, but it does make a difference.

At this point, things seem pretty dire, but there are some signs of hope. While music has been getting pummeled by the Loudness War, some artists and mastering engineers have been fighting to keep dynamics alive. And while streaming services don’t have a great track record when it comes to sound quality, they might end up being the biggest game changers in the Loudness War. How?

We’ll find out next time.

[music out]

[music in]

Twenty Thousand Hertz is presented by Defacto Sound, a sound design team dedicated to making television, film and games sound insanely cool. Go listen at defactosound.com.

This episode was written and produced by Casey Emmerling and me, Dallas Taylor, with help from Sam Schneble. It was sound edited by Soren Begin, and sound designed and mixed by Nick Spradlin.

Special thanks to our guests Greg Milner and Ian Shepherd.

If you want to dive deeper into these subjects, be sure to check out Ian’s podcast, it’s called The Mastering Show. His website is called Production Advice. And check out Greg Milner’s book, Perfecting Sound Forever. You’ll find links in the show description.

The background music in this episode came from our friends at Musicbed. Visit musicbed.com to explore their huge library of awesome music.

What album captivates you with its amazing sound? I’d love to know. You can get in touch with me and the rest of the 20K team on Twitter, Facebook, or through our website at 20k.org. And if you enjoyed this episode, tell your friends and family. And be sure to support the artists you love by buying their music… preferably in high quality.

Thanks for listening.

[music out]
