Carlos Freitas has been working as an audio engineer for 32 years. He’s mastered 26 Latin GRAMMY-winning records and is one of the most recognized mastering engineers in the Latin American audio industry. This year alone, four records he mastered have been nominated at the Latin GRAMMYs, including a nomination in the category for best engineering for Roberta Sá’s “Delírio.” He is the owner of Classic Master, his mastering facility located in São Paulo, Brazil.
I worked alongside Carlos Freitas for many hours during our process at the Olympics. I organized the paths for the stems generated in the mix session so that Carlos would receive each subtype of instrument with its own effects processing (such as reverbs and delays). To isolate each effect, I used the bus interrogation feature in Pro Tools to find all the instruments routed to the same reverb. I then duplicated the reverb aux to match the number of subgroups feeding it, created a unique input path for each copy, and assigned each reverb aux output to its respective stem group. I used aux channels to receive each subtype and its respective effects, and used Commit Edit Selection to print the audio files to the exact length specified for each track. This saved me countless hours, and it also allowed me to check each stem after I exported the committed auxes to a new session for delivery to Carlos.
Carlos would deliver all the mastered files to me, and I would check his work to make sure that Trevor and Stefan got exactly what they needed from him. In the following interview, he tells us more about his process and the tools he used.
You used Pro Tools to master all the music and sfx for the Olympic Closing Ceremony. I know you use other software at Classic Master as well, but what made you choose Pro Tools for this job?
During our first meeting with Ale at Gargolândia Studios, we discussed together who would be on the team and how we were to construct these tracks. Ale mentioned that he would record in many studios in different locations, but all the mixing was to be done in Rio de Janeiro at Companhia dos Técnicos, and the mastering would be done in São Paulo at Classic Master. So we decided we would all be on the same workstation. I had also heard about Cloud Collaboration, a new technology, and thought we could make use of this tool as well. After speaking with Ale and Flavio, we all came to the conclusion that it would be best to master all of the audio using in-the-box processing. This was because of the large number of recalls that would occur due to the intervention of all the other creative departments; we would need to be efficient and agile to deliver the changes, which is exactly what ended up happening. There was one segment that had five or six recalls after the first master! So Pro Tools would be my workstation, and I used plug-ins from the UAD platform. I used the Sonnox Limiter, my favorite limiter — a fantastic plug-in. It has true peak limiting, so I would use it on my stems and master bus. The Manley Massive Passive and Variable Mu would also be on all my stems. The thing with working in the box is, EQing is EQing and compressing is compressing, whether it’s analogue or digital. You have to know what the function of an EQ is and know what frequencies to pull from it. I used iZotope Insight for loudness monitoring, along with iZotope’s Loudness Control. With Pro Tools as the centerpiece, we were able to use Cloud Collaboration and set up a project so that the team could have access to my masters, and this ended up being essential to the process.
Tell me a little about the project you set up in the cloud for delivering the stem masters for the Olympic Closing Ceremony.
What I did was set up a complete master session with a 24-hour timeline, and all the files were aligned at the timecode position where they were to be executed during the ceremony. Our project had 64 tracks, including stems of instrument subgroups, timecode as audio files, metronomes, count-offs, cue channels with directions from the choreography directors, sfx in stereo and 5.1, and music in stereo and 5.1 as well. As I received Pro Tools sessions with the mixed stems, I would master those stems and then post the mastered files on this super project in the cloud. Ale would have access to this project, and he himself could make adjustments to the mastered stems if he so pleased. If we had a recall on only a few specific stems, then I would just substitute the audio files on those tracks and push the changes up to the cloud. That was one of the advantages we gained by using Cloud Collaboration. Though the upload and download times are rather fast with this technology, I would usually finish up a day’s work and leave the files uploading overnight. Ale would open the project in the morning and review the new material that was posted. At times we used playlists if we wanted quick access to certain versions so we could easily take one or two steps back. This technology was very useful to us, and I believe we will use it on future projects from now on. In fact, we are already working with Cloud Collaboration again on a new project after the Olympics. Ale Siqueira is mixing this time, and he is creating projects in the cloud at 96 kHz, and I will create a 48 kHz/24-bit project with all the masters of the record on it.
You have two mastering suites with Pro Tools HD Native systems and OMNI interfaces. In what other scenarios do you use Pro Tools for mastering?
We also have SoundBlade in our facility, but it is used solely to create the final PMCD to send to the factory. So we basically use it as an editor and DDP generator, but not for anything else. Today I use Pro Tools for all the mastering done for television purposes, DVDs, and Mastered for iTunes (MFiT). So my main tool for mastering today is Pro Tools. SoundBlade is my tool for inserting the ISRC codes and building the CD file — SoundBlade is good for that. But the advantages that I have in Pro Tools, such as the use of the UAD AAX plug-ins, allow me to do much of my work in the box, especially when I’m mastering in 5.1. And now I intend to use Cloud Collaboration with clients I work with a lot, to receive mix files and deliver mastered files, especially for DVDs. I also use Pro Tools for vinyl mastering, though in that specific case I use outboard gear and record the final result back into Pro Tools. I clock my entire system with an Antelope Audio clock, so my DA and AD conversions have excellent sound quality when processing with analogue or digital outboard gear and printing back to Pro Tools. The monitor section on the OMNI interface has also been very useful to me, as I control my 5.1 monitoring there. The system I have put together allows me to execute all the services offered by Classic Master with extreme ease.
How does working within a loudness standard influence the choices you make during the mastering session, compared with a regular CD master, for example, where loudness standards do not necessarily have to be observed?
Maybe one of the most interesting things about working with television is trying to get to a point where the audio sounds as good to the end consumer after transmission, on television sets, as it does in the mastering room. When we talk about mastering, people usually mention volume and punch. People have this need for their records to be loud. When mastering for CD, the peak must be at 0 dB, and there is no predefined RMS standard one must follow; your dynamic range is about 7 to 8 dB. With the loudness standard, it’s completely different. You have to work with 23 dB of dynamic range, considering that the peak is at -1 dB and the RMS at -24 dB. So the challenge is that you have to make a song sound good, with punch, without using too much compression. With the Olympics, for example, with the samba schools, I used the stems to set different compression presets for each subgroup of instruments. This way I was able to control the transients so I wouldn’t have problems with having to lower the RMS of the track. I would keep an eye on Insight to make sure we maintained our levels at -24, and when the levels passed -23, I would check which instrument group might be driving that and rebalance accordingly. We had 32 channels, 16 stereo subgroups, each with their own compressors and processing. So I would work these groups to keep my RMS under control, and yet I would respect the musical dynamics of the track with its soft and loud moments, for that is permitted under loudness standards. The greater challenge was making these tracks, which had relatively little compression, sound powerful. These individual subgroup compressors allowed me to do that. At the end of the chain, I used the analysis section of Loudness Control by iZotope to double-check that the whole audio file was within the loudness standard.
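The level targets Carlos describes (a -1 dB peak ceiling and an average around -24 dB) can be illustrated with a small numeric sketch. To be clear, this is not his workflow or any iZotope code: broadcast loudness is actually measured as K-weighted LUFS per ITU-R BS.1770, while the snippet below uses plain RMS in dBFS as a simplified stand-in.

```python
import numpy as np

def rms_dbfs(x):
    """Root-mean-square level in dBFS (full scale = 1.0).
    A simplified stand-in for loudness; real broadcast meters
    use K-weighted LUFS per ITU-R BS.1770."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def peak_dbfs(x):
    """Sample-peak level in dBFS (a true-peak meter would oversample first)."""
    return 20 * np.log10(np.max(np.abs(x)))

# A full-scale 1 kHz sine: peak 0 dBFS, RMS about -3 dBFS.
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)

# Static gain that brings the average level down to the -24 dB target;
# the peak then lands well under the -1 dB ceiling.
gain_db = -24 - rms_dbfs(tone)
mastered = tone * 10 ** (gain_db / 20)
```

With the gain applied, the average sits at -24 dBFS while the sine's peak falls near -21 dBFS, comfortably under the -1 dB ceiling. On a real track, where the gap between peaks and average can approach the full 23 dB, the subgroup compressors and the limiter do the work of holding peaks under the ceiling without crushing the average.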
Tell us a bit about how you relied on Insight by iZotope and what features in this tool helped you complete the mastering of the audio.
Insight is a great plug-in! I would measure the overall loudness, momentary loudness, and true peak at -1 dB. Not only could I see these measurements in real time, but I also had a history graph of the entire track. The loudness is measured as an overall average of the entire song. For example, if the track is three minutes long, the RMS, which should be -23, can go up to -18 or -15 at points, as long as it stays there for only a short period of time. Then you work out the rest of the track to make the average. On some tracks, I worked the average out so that I could have higher loudness values at the end, to have that musical explosion or climax. With the history graph in Insight, I could do that knowing where I could make the track louder or softer.
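The "work out the average" idea can be sketched numerically as well. Again, this is only an illustration, not Insight's actual metering (which uses K-weighted loudness per ITU-R BS.1770): a hypothetical track whose short windows swing well above and below the target can still average out near the standard.

```python
import numpy as np

def windowed_rms_dbfs(x, sr, win_s=3.0):
    """Short-term level history over consecutive windows: a plain-RMS
    stand-in for the loudness history graph a meter like Insight draws."""
    n = int(sr * win_s)
    return np.array([
        20 * np.log10(np.sqrt(np.mean(x[i * n:(i + 1) * n] ** 2)))
        for i in range(len(x) // n)
    ])

sr = 48000
t = np.arange(12 * sr) / sr                  # 12 seconds of test signal
tone = np.sin(2 * np.pi * 440 * t)
gain = np.full_like(tone, 10 ** (-26 / 20))  # quiet body of the track...
gain[-4 * sr:] = 10 ** (-15 / 20)            # ...with a loud 4-second climax
track = tone * gain

history = windowed_rms_dbfs(track, sr)       # per-window level history
overall = 20 * np.log10(np.sqrt(np.mean(track ** 2)))  # single average
```

In this toy example the climax windows read around -18 dB and the quiet ones around -29 dB, yet the single overall figure lands near -22 dB: a short loud passage is allowed, as long as the rest of the track balances the average.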
Did you enjoy mastering in stems versus just mastering a stereo or 5.1 mix?
I enjoy working with stems as long as I have the producer on my side. There is a very fine line between what a mastering engineer can or cannot do, must or must not do. For television mastering, with its rigid loudness norms, and with all the various instrument groups and tracks that differed greatly in musical style, I don’t think I would have been able to arrive at the results I did if it had been just a stereo mix. Having the stems gave me the freedom I needed. It also made some of the recalls easier, for in some cases we didn’t have to go back into the mix sessions and could execute the recalls in the mastering session. So working in stems was fundamental to the success of our work.
To finalize, if I could translate our team dynamics into one word, it would be “trust.” The artists trusted Ale, Ale trusted the technical staff, and the production team trusted us to deliver the material. After the work is done, one realizes that it was worth all the effort to attain a great result.
In our next blog, we will hear from Flavio Senna, the multi-award-winning mix engineer, and his son, Flavio Senna Neto, one of the recording engineers for the sessions held at Companhia dos Técnicos Studios.