May 17, 2020 at 12:46 am #46329
Andy Brown (Participant)
“So one thing I’ve learned about doing vinyl for KRVM is that when the entire vinyl track is more or less at the same level, normalizing to -6dB is fine.”
There, fixed it for you.
Don’t ever assume a workaround is good anywhere or anytime other than where and when you use it.
Not to be harsh, and I have read all the posts and kept quiet aside from my initial suggestion, but you brought us the issue. You have not identified the problem, let alone solved it.
Here’s what needs to be said: You build a program audio chain starting from the source material. If you start with compromised source material, you remain compromised even if everything else is perfect (which it never is).
If you normalize to -6 dB from the get-go, you are giving up half the amplitude you could have at that stage, which means your signal-to-noise ratio starts out 6 dB worse than it needs to be. This is not recommended. You want to build a program audio chain such that all the conversions back and forth between analog and digital, and any subsequent reformat through a digital processor or, e.g., an AAC STL/Barix pair, gets the best possible input at each stage along the way. Without knowing the exact program audio chain, including the sequence of events in pre-production, you will never identify your problem, root out the cause and make the correction (or even suggest the correction to someone who can implement it).
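To put a number on that (a back-of-the-envelope sketch in Python, not anything specific to this station’s chain): -6 dB is about half of full-scale amplitude, which leaves roughly one bit of the A/D converter’s resolution unused.

```python
import math

def db_to_amplitude_ratio(db):
    """Convert a level change in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

ratio = db_to_amplitude_ratio(-6.0)   # ~0.501: -6 dB is about half amplitude
bits_unused = -math.log2(ratio)       # ~1 bit of converter resolution unused

print(f"-6 dB = {ratio:.3f} of full scale; ~{bits_unused:.2f} bits unused")
```

Every unnecessary 6 dB of headroom throws away another bit, which is why you hand each stage the hottest clean signal you can.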
Think of it this way. If you hook up narrow pipes at the water pump and keep increasing the pipe diameter as you go to the faucet, you can never obtain the full flow that the pump/large pipes are capable of. When you hook the biggest pipes up at the pump and narrow the diameter down as you go along, you can maintain full flow rate throughout the network over many splits and long distance.
I’ve done my share of designing, constructing and proofing broadcast audio program chains, and troubleshooting and repairing them when they aren’t working correctly (or at all), both in the analog domain and in the hybrid (analog/digital) domain.
FYI: 320 kbps MP3s sound just fine when it’s all set up and done correctly. You’re misled if you think WAV files @ -6 dB will solve a problem like yours, and from what you say, the workflow isn’t set up for them anyway. All that does is change the symptoms.
Other than the -6 dB thing, you also shouldn’t use compression. It’s not the job of the source material dub/rip to do that. First of all, recordings are already highly compressed and processed before they get pressed into vinyl, replicated to CD or prepped for download. At the station, there is processing at the end of the program audio chain (before and/or after the STL) to shape the waveform in a way that’s best for the mode of transmission, be it analog or digital including but not limited to equalization, expansion, compression and limiting.
It sounds to me like you’re shooting yourself in the foot by normalizing at such a low level, processing an already-processed signal that’s headed not only for D/A-A/D ping-pong but also the main processing, and expecting it to sound good. It will not. Sure, you’re dancing around between “that one sounds OK” and “that’s awful,” because the various source tracks weren’t all created with the same amount of processing.
Needless to say, you can only get out what you put in.
You have to find out what they are doing (wrong), because otherwise you will never find a consistent solution. My guess is that no one there really knows the architecture of the program audio chain, and production methods have been devised to just work in a system that nobody has a full picture of. I’ve seen it before.
May 17, 2020 at 3:47 am #46336
I don’t disagree with anything you say Andy. But I have to work within the boundaries of their system as it seems to be set up.
I would never normalize to -6 dB for my own stuff; it makes no sense. But if that’s what they need, then that’s what I do. I also hate applying even mild compression to dynamic audio.
I have no control over anything that happens after I hand off the deliverables and I have only one opportunity a week to hear the results and then possibly make adjustments.
Just trying to help them fill 2 hours of airtime a week during lockdown and make it sound reasonably good via some trial and error.
May 17, 2020 at 11:04 am #46340
semoochie (Participant)
Would it be possible to set for peak levels and then “ride the gain” on playback, automatically, or would that just bring up the noise level?
May 17, 2020 at 2:11 pm #46341
Of the 30 tracks I made for the next show (which they typically assemble on Monday morning), I diddled around with 2 of them last night using the Compressor tool in Audacity. One of them turned out well. The other (Foo Fighters) is really a tough one. It’s OK, but if you listen carefully you can hear a level change in the middle. I’m still an amateur trying to figure out the advanced audio tools without making things worse instead of at least perceptibly better.
In the “old days” of film, it wasn’t uncommon for projectionists to “ride the gain” during a show.
May 17, 2020 at 2:13 pm #46342
Notalent (Participant)
You want to leave the gain alone and let the on-air processing take care of that. That’s what it is there for.
May 17, 2020 at 11:35 pm #46363
Since I have the source material, I might record some of it off the air to compare in terms of dynamic range, spectral content, etc. I can probably do both HD and analog. It’s consumer gear, not a lab, but it might be interesting anyway.
I submitted a WAV version of one of the files along with a note to please use it, if possible, instead of the MP3. We’re doing this via cloud sharing, so if they assemble the show Monday morning they’ll have the choice of MP3 or WAV for that one track. If “assemble” means re-encode, and they use the WAV file, it should be one less round of lossy compression.
May 19, 2020 at 8:35 am #46398
Dan Packard (Keymaster)
Don’t use mp3 for any kind of music content! It’s destructive, and as “Notalent” mentioned in an earlier post, you get the specter of cascading codecs, which degrades the sound even further.
In this day of cheap storage, record those old precious vinyl tunes in uncompressed WAV or lossless FLAC.
Simian automation accepts WAV files. I can’t recall about FLAC. If Simian doesn’t, get a better automation system like RadioDJ (it’s free, by the way).
As for normalizing, I usually use the 90% setting in Goldwave (and the similar setting in Audacity). If you do have to stoop so low as MP3s, use MP3Gain to normalize. It’s a cool tool that lets you mass-normalize a large batch of MP3 files in a reversible, non-destructive way using the file’s metadata.
May 19, 2020 at 12:56 pm #46407
Thanks for all the tips Dan and everyone else.
Last night’s show sounded pretty ok, best one so far. I captured the entire 2 hours in HD* at CD quality. When I have time I’ll do some comparisons.
And yes, storage is so cheap these days it seems kind of ridiculous to use lossy compression anywhere except perhaps on smartphones and such.
*Yet another round of lossy compression, but useful for evaluating dynamic range and spectral content.
May 19, 2020 at 3:34 pm #46408
nosignalallnoise (Participant)
Don’t use MP3 for any kind of music content to be broadcast on the air!
There, finished your sentence for you.
MPx codecs are perfectly acceptable for general-purpose use: headphone listening, or being pumped directly into a PA amp/speaker chain from a playback device. I mean, the “big two” music services have used moderate-bitrate MP2 in their disc products for years, but they also have mostly or entirely PCM production workflows and audio chains for their satellite broadcasts. They can get away with it. And in a lossy 70/40 mono PA chain in a noisy store or restaurant, or a quiet office, when properly EQ’d and companded for the specific playback environment, it’s mostly not noticeable anyway.
Generational quality loss is inevitable in hybrid analog/digital broadcast audio chains, as all the other guys have touched on, and since PCM is much more widely deployed (due to its age and ubiquity) than the much newer FLAC format, PCM is the format that should be used for broadcast content.
FLAC website, for those who don’t feel like redirecting through a Wikipedia page: https://xiph.org/flac/
Most of the popular acrimony against the MPEG-1 audio codecs, particularly in audiophool circles, comes from improperly encoded files (i.e. lower than 44100/320/stereo, or 160/mono), usually made by people using the default settings, which often specify a far-from-optimal psychoacoustic model (RTFM!!!!!), or using suboptimal or outdated encoders (cough, iTunes, cough). That’s why there’s so much shitty MP3 audio floating around the Internetz today. The thing is, encoding at the highest bitrate/quality settings takes time. It’s the whole “instant gratification” stereotype: for most people it’s quicker and easier to generate a file that sounds like shit than to wait a few extra seconds (or minutes) and generate files that are indistinguishable from the original.
It’s a notoriously easy format to screw up, but it doesn’t have to be. When done right it can really sound great.
May 19, 2020 at 4:53 pm #46409
tombrooks (Participant)
Interesting reading all the info on MP3 and quality… I volunteer at a local low-power station and just recently (after many years) changed my voice tracks to WAV files. Although they’re huge files, thanks to Google Docs I’m able to upload them and the station manager downloads them. It made a huge difference in the quality of sound (although maybe not in the content of what the announcer says… lol). WAV is definitely a better format than MP3… I can’t imagine how bad MP3 music would sound, needless to say…
May 19, 2020 at 7:08 pm #46415
Andy Brown (Participant)
It is worth understanding that most LPFMs and small nonprofit NCEs use Barix (or similar) codec boxes and the internet as an STL. These units usually work one of two ways: PCM or AAC. PCM is expensive because of the bandwidth it uses, so most use AAC, which is a compressed format like MP3. So this notion of audio purity is horseshit. “Oh, we play vinyl because it’s so good” or “We use WAV files” doesn’t matter, because all that goodness is lost in the program audio chain on the way to the transmitter.
Also, it’s more about the bit rate than the format.
Most folks cannot tell the difference between a high-rate MP3 and a WAV file, especially over the air. You need a sound room with expensive equipment and God’s own ears to hear the difference on most music.
Then all this talk about WAV brings up another caveat: 24-bit WAV is not 16-bit WAV. All CDs created to the Red Book standard (so they play in your player) eventually end up as 16-bit 44.1 kHz audio, whatever the intermediate file format (AIFF/AIF is common in mastering). They might be mastered at 24-bit 96 kHz, but they end up as 16-bit 44.1 kHz.
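For the curious: that 24-bit to 16-bit step in mastering is normally done with dither rather than plain truncation. Here’s a toy sketch of the general idea (TPDF dither; my own illustration, not any particular mastering tool’s code):

```python
import random

def reduce_24_to_16(sample_24bit):
    """Convert one signed 24-bit integer sample to 16 bits.
    Adding +/- 1 LSB of triangular (TPDF) dither before rounding turns
    what would be correlated quantization distortion into benign noise."""
    dither = random.random() - random.random()     # triangular PDF in (-1, 1)
    scaled = sample_24bit / 256 + dither           # 256 = 2**(24 - 16)
    return max(-32768, min(32767, round(scaled)))  # clamp to 16-bit range

peak_24 = 8_388_607                  # largest positive 24-bit sample
peak_16 = reduce_24_to_16(peak_24)   # 32767: pinned to 16-bit full scale
```

The point of the dither is that the rounding error stops tracking the signal; done right, the 16-bit result just has a slightly higher (and much more pleasant) noise floor.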
Then there is the bandwidth issue itself. LPFM is analog and doesn’t pass anything over 15 kHz. If you’re not broadcasting in digital HD1, a lot of the extra response between 15 kHz and your ears’ limit is not going to make it through the program audio chain. Even on HD Radio, if you have high-rate compressed material, and you’re listening on a small portable speaker or in the car, and assuming the entire process has been set up by people who know the entire picture, there is no audible difference.
Sure, the big boys can lease a fiber and send PCM to get their programming to the end of the line, so it makes sense to use WAV files when you have a big budget.
Most of the problems I have encountered since digital audio and video came around have little to do with the technology itself and more to do with unqualified operators who don’t understand what they are doing.
Not that I’m accusing anyone in this discussion of being unqualified. However, this has been an ongoing debate for years, and the data from listening surveys almost always suggest that without a super-quiet listening room and high-end equipment, the overwhelming majority of people report they can’t hear the difference on almost all the sample tracks. So do what you may, but if you are hearing a large improvement by switching from compressed digital to uncompressed digital on an analog FM (or even on a digital broadcast), the prep work was probably flawed. Look at all the trouble lastday is having because he doesn’t know how the system at his station was set up.
One more note for the real epicureans. Reed–Solomon codes are a group of error-correcting codes introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications: consumer technologies such as CDs, DVDs, Blu-ray discs and QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6. One thing that is often misstated about WAV, FLAC and AIF files is that because they are “uncompressed,” the audio is somehow untouched, unlike those evil, ugly, sonically inferior MP3 files. Not quite. WAV and AIF are raw PCM, but FLAC is actually compressed; it’s just lossless: redundant data is replaced with shorter code that reconstructs the original bits exactly on playback, so only one copy of the repeated information is really kept. And on a CD, the PCM passes through Reed–Solomon based error correction; when the errors exceed what the code can fix, the player interpolates, i.e. guesses. I may not be explaining this perfectly because it is pretty complicated and it’s been years since I studied it.
Bottom line: When Dan, our fearless leader, says “Don’t use mp3 for any kind of music content! It’s destructive,” he’s failing to mention that, odds are, this same “destruction” is taking place elsewhere in the program audio chain, so unfortunately I’m calling foul on what Dan says. Also, so-called uncompressed digital isn’t quite so pure anyway. When Tom, my old buddy from KMJK, says “huge difference” in quality, I’m thinking it’s more about what he wasn’t doing when creating those MP3 files in the first place, because there is not a “huge difference” to be had. Especially since I know his ears are almost as old as mine.
One more note for those curious about Audacity. It’s free and available for all platforms. Its native format is 32-bit float WAV (roughly 24 bits of audio precision plus an 8-bit exponent, which gives headroom for effects). It can, when equipped with all the plug-ins available, export to any format you want, including 24-bit WAV, but those files are huger than huge. You can also export to 16-bit WAV, which along with 16-bit AIF will be about 1.7 GB for a three-hour radio show. A 320 kbps MP3 of a three-hour show will be about 500 MB and sound just as good to 99% of the people on 99% of the tracks when it’s done correctly.
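Those sizes check out with simple arithmetic (decimal GB/MB here; in binary gibibytes, which is what file managers usually show, the WAV figure lands near the 1.7–1.8 mark):

```python
seconds = 3 * 3600                   # a three-hour show

# 16-bit / 44.1 kHz / stereo PCM: 2 bytes per sample x 2 channels
wav_bytes = seconds * 44100 * 2 * 2  # 1,905,120,000 bytes (~1.9 GB)

# 320 kbps MP3: 320,000 bits every second
mp3_bytes = seconds * 320_000 // 8   # 432,000,000 bytes (~432 MB)

print(f"WAV: {wav_bytes / 2**30:.2f} GiB, MP3: {mp3_bytes / 1e6:.0f} MB")
```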
Again, it’s about bits and kbps, not WAV vs. MP3. Your opinion always counts, but beware: this is not a new topic, and you will find internet links to many, many discussions about it, with test data that will support whatever your opinion is. I’m not trying to get in anyone’s face, but beware of drawing large conclusions about quality, because there is a lot of data to the contrary of whatever your opinion happens to be, one way or the other. I’m really just trying to shed some light for those who might be trying to figure out which way to go.
May 19, 2020 at 7:32 pm #46416
tombrooks (Participant)
Educational as usual, Andy… thanks for that. I followed most of it!! Cheers!! LOL
May 19, 2020 at 9:02 pm #46417
radiogeek (Participant)
I don’t do radio anymore, but I remember several things.
The problem I usually find is in editing. People trim files, edit files, and then splice things together to make larger files. That’s usually where the crap happens.
If it’s all in a compressed codec and you bring it into software, delete one second, and re-save in the exact same codec, errors happen. Open the file again, make some adjustments to gain or EQ, save in the same codec, and it’s even more degraded. Every time you save and re-encode the file, it gets worse and worse. I demonstrated this once for a class at a community radio station by playing them the same clip after all I’d done was re-save the same file three or four times.
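That demo is easy to reproduce in miniature. The sketch below fakes a lossy save as coarse quantization (a crude stand-in for a real codec, purely illustrative) and tracks how far the audio drifts from an uncompressed chain after each edit-and-save generation:

```python
import math

STEP = 1 / 128   # toy "codec": snap every sample to a coarse grid on save

def lossy_save(samples):
    """Stand-in for saving in a lossy format: quantize to the grid."""
    return [round(s / STEP) * STEP for s in samples]

def edit(samples):
    """Some edit made between saves: here, a small gain change."""
    return [s * 0.98 for s in samples]

tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]

lossy, lossless = tone, tone
mean_errors = []
for generation in range(5):
    lossy = lossy_save(edit(lossy))  # compressed workflow: every save re-encodes
    lossless = edit(lossless)        # uncompressed workflow: the edit only
    diff = sum(abs(a - b) for a, b in zip(lossy, lossless)) / len(tone)
    mean_errors.append(diff)

# mean_errors climbs generation after generation: the damage accumulates
```

A real codec’s damage is frequency-dependent rather than a flat grid, but the mechanism is the same: every re-save throws away information the previous save didn’t.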
That’s why ALL editing should be done in uncompressed formats, IMHO. It actually loads faster from the hard drive anyway, since the processor doesn’t have to decode a codec. A cleanly edited file that hasn’t been screwed over by re-encoding over and over can, at the final moment, be encoded as MP3 and be… OK.
It’s not any different than the photos you see on websites, they are low quality but if edited well before the final save to 72 dpi for the web, they will look sharp enough online.
I prefer that folks focus on signal to noise issues and make sure they have good audio in the first place, and gain structure all the way through the editing and the playback. And sure, the damned STL codecs and compression at the transmitter make this all mostly irrelevant anyway.
May 19, 2020 at 9:24 pm #46419
Greg_Charles (Participant)
Andy, it’s not a new topic, and unfortunately one with no end. Anyone want to discuss cable “burn-in time” or cable “polarity”?
My perspective is from mastering, not broadcast: you will detect very little difference between 320 kbps MP3 (LAME/Fraunhofer) and 16-bit 44.1 kHz WAV. As said before, sit in a good room, in the sweet spot, with good gear, and the right classical/jazz piece, or anything with transients, lots of dynamics, and especially reverb tails, and yeah, there is a little difference. But for the majority of consumers, and with most other genres, forget trying to detect a difference. Loudness bias, Fletcher–Munson, all kinds of issues get in the way. And detecting these very subtle differences over the air, considering the chains the signal goes through? I’m highly skeptical, considering how slight the differences are even in a mastering room.
In the studio, with someone attending a session who insists they hear a difference (not necessarily WAV vs. MP3, but often anything digital), just do a quick null test. It either nulls down to an insignificant dB level or it doesn’t. But even if it nulls, you won’t necessarily convince your client, and he has the money and the final approval.
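For anyone who hasn’t run one: a null test is just “flip the polarity of one copy and sum”; whatever doesn’t cancel is the difference between the two signals. A minimal sketch (my own illustration; in a session you’d do this in the DAW or on the console):

```python
import math

def null_test_peak_db(a, b):
    """Sum signal `a` with a polarity-inverted copy of `b` and report the
    residual's peak in dBFS. -inf means the two signals null perfectly."""
    residual = max(abs(x - y) for x, y in zip(a, b))
    return float("-inf") if residual == 0 else 20 * math.log10(residual)

tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(441)]
nudged = [0.999 * x for x in tone]       # same audio, 0.1% level difference

perfect = null_test_peak_db(tone, tone)  # -inf: identical signals
tiny = null_test_peak_db(tone, nudged)   # about -60 dBFS
```

One practical caveat: the two copies have to be sample-aligned and level-matched first, or timing and gain differences will swamp the residual you actually care about.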
What happens with MP3 going into a broadcast chain… I had never heard of “cascading codecs.” Interesting. But I have heard of gear such as an Orban actually flattening a highly compressed/limited loud track and making it come out quieter than a track that wasn’t as loud going in but still had a few dB of dynamics left before hitting the same broadcast limiter.
Was the -6 dB normalization ever given an explanation? That just seems really weird. But then, going into a broadcast chain… that’s out of my league.
May 20, 2020 at 9:03 pm #46428
No explanation for the -6 dB normalization. It’s just how they do it, probably because that’s how someone told them to do it (a former station engineer?). I didn’t ask, because the current staff are not technical, let alone engineers.
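For what it’s worth, Dan’s “90% setting” and the station’s -6 dB rule come down to the same operation, peak normalization; only the target differs (-6 dB is about 50% of full scale). A minimal sketch, assuming float samples in the -1.0..1.0 range (illustrative, not Goldwave’s or Audacity’s actual code):

```python
def normalize_peak(samples, target=0.90):
    """Scale the whole track so its loudest sample lands at `target` of
    digital full scale (1.0). The gain is uniform, so dynamics are untouched."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)        # silence: nothing to scale
    gain = target / peak
    return [s * gain for s in samples]

clip = [0.05, -0.20, 0.12, 0.18]
at_90 = normalize_peak(clip)                        # peak now at 0.90
at_minus6 = normalize_peak(clip, 10 ** (-6 / 20))   # peak at ~0.501 (-6 dBFS)
```

So whatever the house rule’s origin, it only changes the overall level, not the balance between loud and soft passages.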
The daytime AAA mix goes into Simian in its native format, mostly 256kbps MP3s. It sounds pretty ok.
It’s these prerecorded-due-to-COVID-19 specialty shows that are suspect. If a DJ submits a bunch of MP3s and those are re-encoded during assembly into a show, you now have a minimum of two generations of data loss. Maybe more, if the submitted files were themselves re-encoded by whoever diddled with them previously. HD is another lossy link in the chain.
You can have the world’s best Xerox copier. You copy an original document. Now copy that copy. Now copy that 2nd copy. Repeat a couple more times. The 5th generation copy will be legible but with lots of fuzz and hash and other degradation.
That’s what cascading lossy codecs do to audio.
And needlessly repeating A-D and D-A conversions in the audio chain introduces errors at every pass, even though conversion isn’t lossy in the codec sense.
So anyway, I submitted the playlist and files for Vinyl Revival to air Memorial Day, 5/25. All are first-generation 320 kbps MP3s derived from uncompressed WAV files at the highest MP3 quality possible. I also gave them one uncompressed WAV file (with an MP3 counterpart) and asked that they use the WAV version if possible. We’ll see what happens.
Tune in Monday 5-25 at 5PM to hear the show. 🙂