
Accelerated Playlist Comping in Pro Tools

Playlists are a powerful Pro Tools feature used in recording sessions for organizing takes, and in editing to comp the best moments of a performance. Playlists are virtual lanes nested within a track that allow you to record, store and edit many takes on the same track while maintaining their positional reference on the timeline. Pro Tools 12.6 introduced several playlist improvements, including shortcuts for navigating playlists while in Waveform view (Shift + Up or Down Arrow), preferences for automatically creating new playlists when clips overlap while recording or editing, and new visual feedback for identifying tracks with available playlists at a glance.

Pro Tools 2018 has even more playlist comping enhancements that enable faster, more efficient workflows! Before these enhancements, in order to quickly compile the best moments of different takes, you either had to be in Playlists Track view in the Edit window (which takes up a lot of screen real estate and doesn’t show what you need to see), or sort through takes using the Playlist Selector menu and manually copy and paste clips between playlists.

In Pro Tools 2018 you can now build a comp by sending clip selections to a Target Playlist that you choose. The Target Playlist can be any playlist on a track, whether it’s the main playlist or the tenth. There are several ways to select a Target Playlist; let’s look at one option. Go to the Playlist Selector menu and, in the new Target Playlist menu, select which playlist will be the target. It will then be displayed in blue text next to a target icon.

The new Target Playlist Menu embedded in the Playlist Selector Menu
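To make the track, playlist and Target Playlist relationship easier to picture, here is a toy data-model sketch in Python (purely illustrative, not a Pro Tools script; all the names are my own): a track holds several playlists of clips, one is shown as the Main Playlist, and another can be designated as the Target that clip selections are sent to.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Clip:
    start: float   # timeline position in seconds
    end: float
    name: str

@dataclass
class Playlist:
    name: str
    clips: List[Clip] = field(default_factory=list)

@dataclass
class Track:
    name: str
    playlists: List[Playlist]
    main_index: int = 0                  # playlist shown in the main lane
    target_index: Optional[int] = None   # designated Target Playlist

    def designate_target(self, index: int) -> None:
        self.target_index = index

    def send_selection_to_target(self, sel_start: float, sel_end: float,
                                 move: bool = False) -> None:
        """Copy (or move) clips overlapping the edit selection from the main
        playlist to the Target Playlist, keeping their timeline positions."""
        if self.target_index is None:
            raise ValueError("no Target Playlist designated")
        source = self.playlists[self.main_index]
        target = self.playlists[self.target_index]
        selected = [c for c in source.clips
                    if c.start < sel_end and c.end > sel_start]
        target.clips.extend(Clip(c.start, c.end, c.name) for c in selected)
        if move:
            source.clips = [c for c in source.clips if c not in selected]
```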

In Playlists Track view, the new Target Playlist button turns blue; clicking it is another way to assign a playlist as the Target. You can also use a key command (see the key command grid below*) to assign the Target Playlist on the tracks that contain the edit selection.

Select the Target by clicking the Target Playlist Button in Playlist Track View

The playlist selector will be blue if the Target Playlist is the Main Playlist, or if no Target Playlist has been assigned on a track with multiple playlists. It will turn orange if the Target Playlist is not the playlist currently displayed as the Main Playlist.

Playlist Selector is color coded to show important information at a glance

With a batch of new shortcuts, you can quickly copy or move clip selections to the Target Playlist, all from Waveform view or any other track view. You can also quickly summon the designated Target Playlist to the Main Playlist, and toggle between the last two viewed playlists.

Shift + Command + Up Arrow (Mac) / Shift + Control + Up Arrow (Win) – Cycle in audio from the previous playlist
Shift + Command + Down Arrow (Mac) / Shift + Control + Down Arrow (Win) – Cycle in audio from the next playlist
Shift + Opt + Up Arrow (Mac) / Shift + Alt + Up Arrow (Win) – Copy the selection to the Target Playlist
Shift + Opt + T (Mac) / Shift + Alt + T (Win) – Move the selection to the Target Playlist
Shift + Command + Right Arrow (Mac) / Shift + Control + Right Arrow (Win) – Designate the current Main Playlist as the Target Playlist of tracks with the edit selection on them
Shift + Right Arrow – Bring the designated Target Playlist to the Main Playlist
Shift + Left Arrow – Toggle between the last two viewed playlists

*Set of key commands designed to accelerate your workflow with playlists

All these key commands are mapped in EUCON and can be made available for quick and easy access as softkeys on Avid control surfaces, such as the S6, S3 and Dock, and on the free Pro Tools Control iPad app.

One can assign these new playlist commands as softkeys on the Pro Tools Control iPad App

By compiling takes this quickly while in Waveform view, you can take advantage of screen real estate to monitor all channels in multi-mic scenarios (e.g., drums, horns or background vocals) when editing. You can also use the saved screen real estate for other tools or plug-ins.

Additionally, you can cycle audio within a clip selection, which allows you to quickly audition and paste in selections of audio from alternate playlists while keeping your comp in the Main Playlist. A “home” icon briefly appears when you have cycled back to the original audio, indicating that no change has been made.

Cycle audio from alternate takes within a selection.

Also, new Playlist Preferences have been added in Preferences > Editing > Tracks for user customization.

New preferences added in Pro Tools 2018 (in the red box) and 12.6 (in the blue box)

These enhancements also come with rich visual indicators to help you while you’re working with these new features.

 

  •  Blue Target icon – Playlist currently displayed as Main Playlist has been designated as the Target Playlist
  •  Gray Target icon – Playlist currently displayed as Main Playlist has been disabled as the Target Playlist (after performing undo of “Designate as Target Playlist”)
  •  Green Checkmark – Clip selection has been successfully sent to Target Playlist
  •  Red Circle with Line – Clip selection is already on the Target Playlist and can’t be sent to it

 

Playlists have allowed Pro Tools users to work efficiently and extract the best takes from recorded material for years, and Pro Tools 2018.1 now allows for even faster and smoother editing workflows. All users with a current upgrade plan or subscription are entitled to Pro Tools 2018.1.

Make your mark with Pro Tools

Create music or sound for film/TV and connect with a premier network of artists, producers, and mixers around the world.




The Creative Mind Behind the Conception of the Music for the Olympics Closing Ceremony

As I mentioned in the first chapter of this Olympic Ceremony blog series, Ale Siqueira is a music producer who has my admiration, largely because of his musical integrity, competence and talent. A studious, multifaceted professional, his knowledge of musical history and heritage is deep. Originally from São Paulo, he lived for a few years in Bahia, where he was able to closely study the African rhythms that made their way into Brazilian music, and he has worked with artists from diverse countries around the world, becoming exposed to various musical cultures. Throughout the years, he has won three Latin Grammy awards and has produced various multiplatinum records.

Ale was invited to be the musical director by Rosa Magalhães, the Creative Director for the Olympics Closing Ceremony. He has been part of the team helping put into music the ideas, messages and concepts that the executive and creative teams wanted to convey during the event. He also put together the arrangers, additional music producers and audio engineers who worked as a team under his supervision, including myself as technical coordinator.

Ale was kind enough to share a little about his experience and the process behind his work.

Mikael Mutti, Eduardo Andrade, Ale Siqueira, Flavio Senna and William Jr.

How did you begin with Pro Tools?

SIQUEIRA: Well, I’ve been using Pro Tools for a long time. From what I remember, I must have been 18 years old [when I started with Pro Tools] — now I’m 44. Pro Tools only had four tracks back then, and then it jumped up to 16. It was a little after Sound Tools — imagine that! There was only one interface, a single rack unit with some trim pots on the front of it. The interface had only four inputs. Pro Tools was used more as an editing tool back then than as a recorder, since it only had four tracks. Back then I worked with vanguard electroacoustic music. I got to know Pierre Boulez. I worked with a great group of people from UNESP and UNICAMP, such as Flo Menezes, Fernando Iazzetta, Edson Zampronha, Rodolfo Coelho, Rodolfo Caesar and Silvio Ferraz. It was in the likeness of classical music, like Stockhausen, but instead of being acoustic, it was electroacoustic. And in this lab called PANaroma, we used Pro Tools. The lab still exists today; very serious work is done there, and it is located at UNESP, the college where I studied composition and conducting. Later on, when I was 19, I purchased my first Pro Tools system, which had expanded to 16 channels. It wasn’t PCI yet; the system used NuBus. And since then, I have always been the guy that used Pro Tools.

 

Could you tell us a bit about the concepts you kept in mind when you were choosing the artists and the compositions that would be in the Olympics Closing Ceremony?

SIQUEIRA: I kid around that Brazilian music is to the national arts what soccer is to our sports. The “soccer” of our arts is music. In our music, we are full of stars, just like our soccer has Pelé, Garrincha, Rivelino … Since we have this strongly in our music, I wanted to expose this. Our music is one of our greatest national cultural patrimonies. It has more value in Brazil and abroad than, for example, our visual arts, our literature and our performing arts. Our famous stars are in our music, so I wanted to show that. So how do I do that? Instead of creating new soundtracks with composers, I wanted to revisit our pantheon of the great names from our wonderful songbook. We cannot possibly mention everyone, nor dive too deep — since it is an entertainment ceremony where music is inserted, it is not a musical production. The music services the sporting event. So at that level, we paid homage to some of the great names of our music: Villa-Lobos, Tom Jobim, Luiz Gonzaga, Jackson do Pandeiro, Carmen Miranda … Jacinto Silva, [who] is not so well known, but I wanted [him] to be present, for he is from the “coco de embolado.” We opened the ceremony with Ernesto Nazareth. That guy was a hit maker, a Lenine from the ’20s. He just isn’t known nowadays. His track “Odeon,” used at the ceremony, was not just a hit in Brazil, but it was known all around the world.

One thing we did which is not very common in this type of ceremony was to use historic phonograms, because usually, if we were to pay tribute, for example, to Luiz Gonzaga, we would call in the singer of the moment and rerecord Luiz Gonzaga’s composition. For some songs, we were able to get the clearance necessary to use the original phonograms, so we had the voice and accordion of Luiz Gonzaga singing “Asa Branca,” heightening the homage being paid even more. On the American version of “Chovendo na Roseira” (a.k.a. “Children’s Play”), we had Tom Jobim on the piano. On “Tico Tico no Fubá,” we had Carmen Miranda singing; on “A Ordem é Samba,” we had Jackson do Pandeiro’s voice.

 

How did you feel that the music directed the other aspects of the spectacle, like projection, pyrotechnics, lighting and choreography and vice-versa? What was your experience of this creative process between so many teams?

SIQUEIRA: It was always a two-way street. We would check segment by segment to see what would lead the way. For example, on the first segment, we imagined that Nazareth’s “Odeon” would lead the way, and from there the other teams would create the projection and other elements. In many cases, the music led the way, but we always kept an open brainstorming dynamic — a creative process with many meetings, checking which department would dictate the tone. But usually, we would send the musical scratches to Bryn Walters, who would create the choreographies, and to Batman Zavareze to create the projections.

But then again, the music often came second, after the initial concept was defined, and that concept wasn’t musically related at first. I helped a lot with this process as well. The idea of Santos Dumont wasn’t mine; I just underlined the segment with a composition that was contemporary to Santos Dumont, which was “Odeon” from 1909.

The segment with the Barbatuques started with another of Rosa’s ideas, with the birds. So in truth, the foundational, primordial concepts weren’t musical. The main concept usually came from Rosa, the creative director. The idea to pay tribute to Grupo Corpo was mine, and Rosa thought it was great and bought the idea. The moment Rosa says “we need to have Indians here,” from there I create my ideas. Then Bryn does the choreography, Batman does the projection, Christophe Berthonneau the pyro, and so on.

And there were some things that were obligatory. For example, the homage to the volunteers: we didn’t create that; that was protocol. Then I gave the idea, “Why don’t we call Lenine and adapt his song ‘Jack Soul Brasileiro,’ which is a tribute to Jackson do Pandeiro, so that it would now pay homage to the volunteers?” As for the national anthem, the crazy idea with the children was mine.

Photo taken by Eduardo Andrade

That was my next question. You put together a choir of children and percussionists playing a rhythm from Candomblé (a Brazilian religion with African roots). Tell us about the creative process and symbolism that you had in mind for this moment.

SIQUEIRA: In 2015, in my first meeting with Rosa, the question arose, “Who will sing the national anthem?” So I said, “Better than having someone sing the anthem, why don’t we bring a multitude of children?” My original idea was to bring 500 children, running around Maracanã singing the national anthem. She loved the idea right off the bat, but then thought that logistically we would not be able to get so many children. We matured the idea, and then we came to the conclusion to bring 27 children, representing the 26 states plus the Federal District, like the 27 stars on the flag. I didn’t have the idea to have the anthem executed in 12/8 (musical meter) then. One day, I realized that we sing the anthem in 12/8 (even though it is written in 4/4). It’s important to say that the approach was not religious; it is not Candomblé-related. I have a lot of respect, and I have recorded various Candomblé records and visited many “terreiros” in Bahia (where the rituals of Candomblé take place). The idea came from this insight, that we sing the anthem in 12/8 even though it is written in 4/4, just like the American shuffle: it is written in 4/4 but played in 12/8. Each pair of eighth notes becomes a triplet, playing the first eighth of the triplet and the third eighth of the triplet.

All of the Americas have strong influences from compound-meter African music. In Cuba you see that a lot. I’ve recorded there six times already. We have, as a historic and cultural legacy, the incorporation of compound meter into music throughout the Americas. So even though it is written in 4/4, many things we play in triplets. I theorize that our swing comes from that, in the micro rhythms between the eighth notes and the sixteenth notes of the simple meters, using the triplets of the compound meters. When I had this insight regarding our national anthem, I decided to do the anthem in 12/8. And then I thought about revisiting the Vassi, which is one of the main rhythms present in Candomblé and in Umbanda, but again, removing any religious context. And then I did an experiment that worked really well. I recorded a Rum, Pi and Lé, the three drums of Candomblé, trying to prove this theory that we sing the anthem in 12/8. We do not sing the anthem as it is written. Even the conductor of the children, when she came to conduct, asked whose crazy idea this was, and I explained my theory to her. She then told me that it was more difficult to have the children sing in 4/4, as the anthem is written, than to have them sing in 12/8, because they just naturally sang that way, which further proves my theory. So my approach was conceptual and not religious.
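As a rough illustration of the arithmetic (my own sketch, not something from the production), here is where the eighth-note onsets land when a straight 4/4 reading is given the 12/8 feel Siqueira describes: each beat is divided into triplets, and the off-beat eighth falls on the third triplet, at 2/3 of the beat instead of halfway through it.

```python
def eighth_note_onsets(beats: int = 4):
    """Onset times (in beats) for straight 4/4 eighth notes versus the
    swung 12/8 placement: beat offsets 0 and 1/2 become 0 and 2/3."""
    straight = [beat + offset for beat in range(beats) for offset in (0.0, 0.5)]
    swung = [beat + offset for beat in range(beats) for offset in (0.0, 2 / 3)]
    return straight, swung

straight, swung = eighth_note_onsets()
for s, w in zip(straight, swung):
    print(f"straight eighth at {s:.2f} beats -> swung eighth at {w:.2f} beats")
```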

The Brazilian National Anthem accompanied by the Rum, Pi and Lé drums

Is there a method to your production process? Could you describe how you go about creating your first scratches through to the final result?

SIQUEIRA: There is a course that I am developing for the Federal University of Recôncavo in Bahia that will probably take place next year, called “Methodologies of Music Production.” We will listen to a record and talk about what was done. Each class will focus on a particular production method. Why is that? Because I do not have only one method. For every record, I imagine a different method, particular, maybe even unprecedented. I try to perceive, to become sensitive to what would be best for that record, and from there I create a method or I choose one I have used before. So every record has a completely different story from the next. There are records I made in studios, and there are records that I recorded in the woods with barely any electricity available. I have surreal stories …

Having said that, the ceremony was a similar idea, because there were tracks that were born from Andre Mehmari (arranger), there were tracks that were born from a phonogram and then became a remix at the hands of a DJ, and there was music that had all of that but also had a singer who entered at a strategic production moment. For some songs, I thought it would be important to hire the band’s own producer, as was the case with Bruno Giorgi, Lenine’s son, who produced Lenine’s track at his home with his father’s musicians, the way they wanted. And it was my decision as musical director to delegate, in this case, 100% of the production to Lenine’s team. There were tracks with two producers at the same time, because I thought that would be important. In the case of the Barbatuques, I thought it would be important to have the hands of Mikael together with André Magalhães, the latter being the producer for the Barbatuques. In this case, we put together a sketch, then we recorded many versions, and then they spent weeks editing and choosing what would compose the groove of that piece of music, a very different process from what happened with the Ganhadeiras de Itapuã, where everything was ready in three days. For the Ganhadeiras, I went to a studio in Bahia where one can record in five live rooms at the same time, to record the maximum number of musicians live, because it is an organic, live musical track. No rehearsals — the Ganhadeiras showed up with their 10 musicians already rehearsed, and we just recorded. Very different from the pop approach of the Barbatuques, full of constructions, loops, edits …

So for each situation, you have to think what would be the best method. If there isn’t one, then make one up.

 

I hope you’ve enjoyed this series about the amazing professionals who were involved in the Rio Olympics Closing Ceremony. I hope it has brought insight and lessons about the tools, methods, planning and technical skill that one has to have when playing a role in a large creative team like this. Thanks for reading, and until next time!

Get Your Free 30-Day Trial

Compose, record, edit, and mix with Pro Tools—the award-winning professional’s choice for music and audio post production. Now you can try the entire creative toolset free.

DOWNLOAD TRIAL




Recording and Mixing the Music for the Olympics Closing Ceremony

The team working on this ceremony was composed of prized, talented and experienced professionals. Our mix engineer, Flavio Senna, is no exception. Flavio has worked on countless classic Brazilian records, has done live PA for internationally renowned artists and has collected more than a dozen awards throughout his career, including Grammys and Latin Grammys. He is also co-owner of the most traditional professional recording studio in Rio de Janeiro, Companhia dos Técnicos (aka CIATEC), which many decades ago was called RCA Studios and belonged to the record label before it was acquired. Most of the music for the Olympics Closing Ceremony was recorded and mixed at CIATEC. The recording engineers were his son Flavio Senna Neto, William Jr. and Arthur Luna.

During this interview, we will have both Flavio Senna and Flavio Senna Neto describe their parts in the recording and mixing of the music for the Olympics Closing Ceremony.

Flavio Senna, Eduardo Andrade and Flavio Senna Neto

Flavio, you were the mix engineer for the soundtrack of the Olympics, and you have worked with Ale Siqueira, the musical director, on other projects in the past. The choice of doing the final mix of the Olympics soundtrack here is largely due to the fact that this is your room, where you are most comfortable, and yet you had to consider that these sessions would need to be accessed on many different systems. With this in mind, the decision was made to keep the mix in-the-box. Could you please tell us about the tools that were chosen and what you did to synchronize the teams and systems to achieve this multi-system compatibility?

SENNA: I have worked many times with Ale, and we understand each other very well. He sends me his sessions, and I’ll execute what he wants me to do to the track, but I’ll do it my way. So there is trust and affinity. At first, we were going to mix on the Euphonix, but the results were sounding so good working in-the-box with our HDX system. Because we needed to be fast, and because of the large number of recalls we would need to execute, we decided to keep the mix in-the-box.

 

You have been a part of the project since the scratch versions of the tracks, correct?

SENNA: Yes, that was one of the things I requested from Ale. Since we were not going to have a lot of time to do the final mixes at the end of the process, I requested that, as the sessions took place, I would mix the new elements in as they got recorded. For example, if a violin was added to the track, everything else would already be balanced; I would just have to add in the new violin. So I would just update my mix as the new elements were added. In the end, I had everything ready to meet the deadline. We spent three days going over all the mixes to make sure everything was taken care of for the final delivery.

 

Many of the artists who participated in this project are spread out across Brazil, precisely because Ale wanted to pay homage to a wide variety of composers within their respective cultures and musical styles. What were the logistics of these recordings and of receiving sessions from many different studios around Brazil?

NETO: From the beginning, it was decided everyone would be on Pro Tools 12. So all we had to do was make some I/O adjustments at times, and we were all set to go. We would try to use plug-ins that everyone had available to them, but when there was something we didn’t have, we would use Track Freeze or Commit. All we had to do was open the session and record. No real complications there.

SENNA: We decided with Ale from the beginning that everything would go through the quality control of Companhia dos Técnicos. Everyone does their work well, in Maceió, in Bahia, in São Paulo, but they all do it their own particular way. So when it came time to mix, we decided to unify it all using my standards of mixing, which I use for the work I do.

NETO: Since these tracks were recorded in various studios, there comes a time when someone has to match them in terms of timbre. Tracks within the same song sometimes were recorded in different locations, so when it came time to mix, we had to make all the tracks sound unified. All the songs were part of a story, and they had to be presented in context.

The Live Room and Booths at Studio 2, CIATEC

Senna, one of the things that caught my attention during the mix was that you not only made technical decisions like applying EQs and compressors, but you would also make arrangement decisions, such as muting elements and creating delays and textures. You would take instruments that sounded acoustic and apply radio and lo-fi effects to them. Some of these choices you would make without producers or Ale present. This shows the great trust he has in you, for most of these decisions were incorporated into the final arrangement. Tell us more about this aspect of the production.

SENNA: I believe that when one chooses an engineer for a project, one wants everything he has to offer: his sound and what he can contribute creatively to an arrangement. Because I’m close to Ale, I know what he likes; I know a lot about what he wants and the result he desires to attain in the end. I also know about the time he didn’t have. In this project, he didn’t have time to think about those delays the way he usually does. Ale has fantastic ideas, and I learned a lot from him. So when I imagined that something would fit in well, I would add those elements because I knew he was going to accept them. When I mute something, for example, it’s so that we can grow more dynamically down the line. Or, it’s because there are two or three sources in the same frequency region. That is something that I will do at times. When the producer is not around, I do everything that I believe that needs to be done and then I’ll show the producer. That’s a characteristic of how I mix.

 

You mixed these tracks knowing that they would play back at the Maracanã Stadium, and that they would also be broadcast to home systems. Those are two very contrasting acoustic environments. Because of this, what did you do differently during these mixes that you probably wouldn’t do on a record?

SENNA: The advantage of having mixed at CIATEC was our monitoring systems. I have a PA (JBL 4350 H) and a TV reference (Yamaha NS 10) in here. I really think about that. I have some experience with PA, and I know Maracanã well. I know what it sounds like in there and what frequencies build up. I did think of the 70,000 people that were going to be there, but I left some things in the hands of the PA engineer as well. I thought more about the broadcast, the billions of people that would tune in. I liked how the mix sounded on TV; even though each channel had a different sound, everything was there. The timbre changed but not the balance. So I mixed with this in mind, thinking of the subs and speakers and not letting any frequency overpower the broadcast. The range of frequencies is quite different when thinking of home TV systems, so I would consider this in my mix decisions. I had more low mids, and I would clean up a lot between the lows and low mids. On records, I leave this audio dirt in — I like it. It makes the sound grittier, pulsating, less electronic, less processed. But for this Olympics project, I had to clean this region up a lot because of the playback in Maracanã and for the broadcast as well.

The studio where the tracks were mixed

Could you name five plug-ins that you really enjoy working with?

SENNA: I enjoy some of the Slate plug-ins, FG-Grey compressor. I also like Revival, I use it a lot, and his Neve EQ as well. And from Waves, I couldn’t go without my Q10.

NETO: In general, we use the Waves bundle a lot, a basic bundle that has most of the plug-ins we need and that we usually use, which are Q10, H-EQ and the Renaissance bundle with RBass. With just the plug-ins from this bundle, we can do any project. But since we would be working with teams in other locations, we all defined plug-in bundles to subscribe to as well and we ended up getting to know new plug-ins.

 

Here in this studio, you have HDX cards and many HD I/O interfaces. Tell us about what you felt changed sonically and also regarding processing power when you made the upgrade to this system from the HD3 system. How did this change impact your workflow?

SENNA: When we purchased the HDX system, we set up that good old blind test where we would compare A to B; I didn’t want to know which was which. I didn’t want to be influenced. There were 12 engineers in the room listening to a drum recording to make the comparison. We all picked the HDX system: the stereo imaging, the definition; it was wider, with a greater sensation of spatiality.

NETO: Not that the HD3 system was bad — we used them for 10 years. But the gain with HDX was really significant.

SENNA: Yes, that was one of the greatest changes I’ve seen. There was a clear difference between the two systems. HDX with the new HD I/O was a great evolution, an unmistakable sound.

NETO: Regarding processing, here at the studio, we need to be ready to take on any type of project. Today we have a setup that has 64 channels of I/O, and we can insert plug-ins on all the channels at 192 kHz.

SENNA: We have two HDX cards and four HD I/O interfaces.

SYNC HD and HD I/Os at CIATEC Studio 2

When the mixes were finalized here at CIATEC, I would open up the session files and split all the subtypes of instruments into stems, as was requested for delivery. And these stems were what went to Carlos Freitas in mastering. Since he mastered the separate stems individually, he was in a way in dialogue with your mix. How did you two communicate during this process? How was this dynamic?

SENNA: I have gone to Carlos’ mastering facility many times, and he has been to CIATEC many times as well. He knows where I am mixing. He has already mastered many records I have done; he knows what I like, and I also know what gets in his way. I know what frequencies are not welcome and what type of compression I should avoid. So I do keep in mind the work that he has ahead of him. He doesn’t have to solve sonic issues in the master, for I mix thinking of the limits that he has when mastering. During the mix, we have much more to do than in the master. So I aim to facilitate the work for Carlos so that he can get to that amazing sound.

Last but definitely not least, in the next blog, we will hear from Ale Siqueira, the musical director and mastermind who put together and led the creative team for the music and sound design of the Olympics Closing Ceremony.

Create. Collaborate. Be Heard.

Make your mark with Pro Tools — create music or sound for film/TV and connect with a premier network of artists, producers and mixers around the world.

LEARN MORE




Mastering the Audio for the Olympics Closing Ceremony With Avid Cloud Collaboration

Carlos Freitas has been working as an audio engineer for 32 years. He’s mastered 26 Latin GRAMMY-winning records and is one of the most recognized mastering engineers in the Latin American audio industry. This year alone, four records he mastered have been nominated at the Latin Grammys, including a nomination in the category of best engineering for Roberta Sá’s “Delírio.” He is the owner of Classic Master, his mastering facility located in São Paulo, Brazil.

I worked alongside Carlos Freitas for many hours during our process at the Olympics. I organized the paths for the stems that were generated in the mix session so that Carlos would receive all the subtypes of instruments with their own effects processors (such as reverbs and delays). To isolate each effect, I used the bus interrogation feature in Pro Tools to find all the instruments sent to the same reverb. I then duplicated the reverb aux to suit the number of subgroups sent to it, created a unique input path for each one, and assigned each reverb aux output to its respective stem group. I used Aux channels to receive each subtype and its respective effects, and used Commit Edit Selection to print the audio files to the exact length that was specified for each track. This saved me countless hours, and I was also able to check each stem after I exported the committed auxes to a new session to deliver to Carlos.
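To visualize that routing step, here is a small sketch (the track and bus names are invented, and this is not a Pro Tools script, just a way to picture the mapping): for every shared effect bus, it lists the stem groups feeding it, which is exactly the information you need in order to duplicate the effect return once per stem group before committing.

```python
from collections import defaultdict

# toy description of a mix session: each source track belongs to a stem
# group and may send to shared effect buses (all names are hypothetical)
tracks = [
    {"name": "Lead Vox", "stem": "Vocals",     "sends": ["Reverb A"]},
    {"name": "BG Vox",   "stem": "Vocals",     "sends": ["Reverb A"]},
    {"name": "Surdo",    "stem": "Percussion", "sends": ["Reverb A"]},
    {"name": "Caixa",    "stem": "Percussion", "sends": []},
    {"name": "Trumpets", "stem": "Brass",      "sends": ["Reverb A", "Delay 1"]},
]

def plan_effect_returns(tracks):
    """For each shared effect bus, report which stem groups feed it, so one
    return (aux) can be duplicated per stem group and printed into that stem."""
    usage = defaultdict(set)
    for track in tracks:
        for bus in track["sends"]:
            usage[bus].add(track["stem"])
    for bus, stems in sorted(usage.items()):
        for stem in sorted(stems):
            print(f"{bus}: needs a dedicated return for the '{stem}' stem")

plan_effect_returns(tracks)
```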

Carlos would deliver all the mastered files to me, so I would check his work to make sure that Trevor and Stefan got exactly what they needed from him. In the following interview, he tells us more about his process and the tools he used.

 

You used Pro Tools to master all the music and SFX for the Olympic Closing Ceremony. I know you use other software at Classic Master as well, but what made you choose Pro Tools for this job?

During our first meeting with Ale at Gargolândia Studios, we contemplated together who would be on the team and how we were to construct these tracks. Ale mentioned that he would record in many studios in different locations, but all the mixing was to be done in Rio de Janeiro at Companhia dos Técnicos, and the mastering would be done in São Paulo at Classic Master. So we decided we would all be on the same workstation. I had also heard about Cloud Collaboration, a new technology, and thought we could make use of this tool as well. After speaking with Ale and Flavio, we all came to the conclusion that it would be best to master all of the audio using in-the-box processing. This was because of the large number of recalls that would occur due to the intervention of all the other creative departments; we would need to be efficient and agile to deliver the changes, which is exactly what ended up happening. There was one segment that had five or six recalls after the first master! So Pro Tools would be my workstation, and I used the plug-ins from the UAD platform. I used the Sonnox Limiter, my favorite limiter — a fantastic plug-in. It has true peak limiting, so I would use that on my stems and master bus. The Manley Massive Passive and Variable MU would also be on all my stems. The thing with working in the box is, EQing is EQing and compressing is compressing, whether it’s analogue or digital. You have to know what the function of an EQ is and know what frequencies to pull from it. I used Insight by iZotope for loudness monitoring and Loudness Control from iZotope as well. Having Pro Tools as the centerpiece, we were able to use Cloud Collaboration and set up a project so that the team could have access to my masters, and this ended up being essential to the process.

Ale Siqueira, Carlos Freitas, Eduardo Andrade, Fernando Henna and Lucas Arruda

Tell me a little about the project you set up in the cloud for delivering the stem masters for the Olympic Closing Ceremony.

What I did was set up a complete master session with a 24-hour timeline, and all the files were aligned at the timecode positions where they were to be executed during the ceremony. Our project had 64 tracks, including stems of instrument subgroups, timecode as audio files, metronomes, count-offs, cue channels with directions from the choreography directors, SFX in stereo and 5.1, and music in stereo and 5.1 as well. As I received Pro Tools sessions with the mixed stems, I would master these stems and then post the mastered files to this super project in the cloud. Ale would have access to this project, and he himself could make adjustments to the mastered stems if he so pleased. If we had a recall on only a few specific stems, I would just replace the audio files on those tracks and push the changes to those tracks up to the cloud. That was one of the advantages we gained by using cloud collaboration. Though the upload and download times are rather fast with this technology, I would usually finish up a day’s work and leave the files uploading overnight. Ale would open the project up in the morning and review the new material that was posted. At times we used playlists if we wanted quick access to certain versions, so we could easily take one or two steps back. This technology was very useful to us, and I believe we will use it on future projects from now on. In fact, we are already working with cloud collaboration again on a new project after the Olympics. Ale Siqueira is mixing this time, and he is creating projects in the cloud at 96 kHz. And I will create a 48 kHz, 24-bit project with all the masters of the record on it.

 

You have two mastering suites with Pro Tools HD Native systems and OMNI interfaces. In what other scenarios do you use Pro Tools for mastering?

We also have SoundBlade in our facility, but it is used solely to create the final PMCD to send to the factory. So we basically use it as an editor and DDP generator, but not for anything else. I use Pro Tools today for all the mastering that is done for television purposes, DVDs and mastering for iTunes (MFiT). So my main tool for mastering today is Pro Tools. SoundBlade is my tool for inserting the ISRC codes and building the CD file — SoundBlade is good for that. But the advantages that I have in Pro Tools, such as the use of the UAD AAX plug-ins, allow me to do much of my work in-the-box, especially when I’m mastering in 5.1. And now I intend to use cloud collaboration with clients I work with a lot, to receive mix files and deliver mastered files, especially for DVDs. I also use Pro Tools for vinyl mastering, though in this specific case I use outboard gear and record the final result back into Pro Tools. I clock my entire system with an Antelope Audio clock, so my DA and AD conversions have excellent sound quality when processing with my analogue or digital outboard gear and printing back to Pro Tools. The monitor section on the OMNI interface was also very useful for me, for I control my 5.1 monitoring there. The system I have put together allows me to execute all the services offered by Classic Master with extreme ease.

 

How does working within a loudness standard influence the choices you would make during the mastering session differently than what you would do during a regular CD master, for example, where loudness standards do not necessarily have to be observed?

Maybe one of the most interesting things about working with television is trying to get to a point where the audio sounds as good to the end consumer, after the transmission on television sets, as it does in the mastering room. When we talk about mastering, people usually mention volume and punch. People have this need for their records to be loud. When mastering for CD, the peak must be at 0, and there is no predefined RMS standard that one must follow. Your dynamic range is about 7 to 8 dB. With the loudness standard, it’s completely different. You have to work with 23 dB of dynamic range, considering that the peak is at -1 dB and the RMS at -24 dB. So the challenge is that you have to make a song sound good, with punch, without using too much compression. With the Olympics, for example, with the samba schools, I used the stems to set different compression presets for each subgroup of instruments. This way I was able to control the transients so I wouldn’t have problems with having to lower the RMS of the track. I would keep an eye on Insight to make sure we maintained our levels at -24, and when the levels passed -23, I would check to see what instrument group might be driving that, and I would rebalance accordingly. We had 32 channels, 16 stereo subgroups, each with their own compressors and processing. So I would work these groups to keep my RMS under control, and yet I would respect the musical dynamics of the track, with its soft and loud moments, for that is permitted when using loudness standards. The greater challenge was to make these tracks, which had relatively little compression, sound powerful. These individual subgroup compressors allowed me to do that. At the end of the chain, I used the analysis section of Loudness Control by iZotope to double-check that the whole audio file was within the loudness standard.
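As a rough sketch of the arithmetic he describes (a -24 programme loudness target, a -1 peak ceiling, and the roughly 23 dB of dynamic range between them), here is how one might meter a delivered file outside of Insight. This is my own illustration using the third-party pyloudnorm library, not part of the workflow described above, and it reads sample peak rather than true peak.

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln  # third-party ITU-R BS.1770 loudness meter

TARGET_LOUDNESS = -24.0  # programme target mentioned in the interview
PEAK_CEILING = -1.0      # peak ceiling mentioned in the interview (dBFS)

def check_file(path):
    """Measure integrated loudness and sample peak, then report how far the
    file sits from the -24 / -1 targets (a sketch, not a QC-grade tool)."""
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                         # K-weighted meter
    loudness = meter.integrated_loudness(data)       # in LUFS
    peak = 20 * np.log10(np.max(np.abs(data)) + 1e-12)  # sample peak, dBFS
    print(f"{path}: {loudness:6.1f} LUFS ({loudness - TARGET_LOUDNESS:+.1f} LU "
          f"from target), peak {peak:5.1f} dBFS ({peak - PEAK_CEILING:+.1f} dB "
          f"from ceiling)")

# e.g. check_file("closing_ceremony_segment_master.wav")  # hypothetical file name
```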

 

Tell us a bit about how you relied on Insight by iZotope and what features in this tool helped you complete the mastering of the audio.

Insight is a great plug-in! I would measure the overall loudness, the momentary loudness and the true peak at -1 dB. Not only would I see these measurements in real time, but I would also have a history graph of the entire track. The loudness is measured as an overall average of the entire song. For example, if the track is three minutes long, the RMS, which should be -23, can go up to -18 or -15 at points, as long as it is for a short period of time. Then you work out the rest of the track to make the average. On some tracks, I worked the average out so that I could have higher loudness values at the end, to have that musical explosion or climax. So with the history graph in Insight, I could do that, knowing where I could make the track louder or softer.

Did you enjoy mastering in stems versus just mastering a stereo or 5.1 mix?

I enjoy working with stems as long as I have the producer on my side. It’s a very fine line, what a mastering engineer can or cannot do, must or must not do. For television mastering, I don’t think I would have been able to arrive at the results that I did because of the rigid loudness norms. With all the various types of instrument groups and tracks that differed greatly in musical style, I don’t think we would have been able to do what we did if it was just a stereo mix. Having the stems ended up giving me the freedom I needed. It also made some of the recalls easier, for we didn’t have to go back into the mix sessions in some cases and could execute the recalls in the mastering session. So working in stems was fundamental for the success of our work.

To finalize, if I could translate our team dynamics into one word, it would be “trust.” The artists trusted Ale, Ale trusted the technical staff, and the production team trusted us to deliver the material. After the work is done, one realizes that it was worth all the effort to attain a great result.

 

In our next blog, we will hear from Flavio Senna, the multi-award-winning mix engineer, and his son, Flavio Senna Neto, one of the recording engineers for the sessions held at Companhia dos Técnicos Studios.

Create. Collaborate. Be Heard.

Make your mark with Pro Tools — create music or sound for film/TV, and connect with a premier network of artists, producers and mixers around the world.

LEARN MORE




Joining the Team – Working at the Olympics Closing Ceremony

I had the privilege of working on one of the biggest events on the planet, the Olympics Closing Ceremony, and of working alongside great professionals of the audio industry. Now I want to share a bit with the community about some of the challenges we faced and how we met those challenges with the tools we have at hand today.

I had spent the week in São Paulo giving Pro Tools Cloud Collaboration demos, including a visit to Ale Siqueira at the studio where he records most of his work today, Gargolândia Studios, about two hours away from São Paulo. Ale Siqueira, to me, is one of the most talented, hardworking and competent music producers in Brazil today, having collected three Grammy Awards and produced diverse multiplatinum records by Brazilian and international artists. His musicality, knowledge and artistic integrity are some of the attributes that I admire most in his work.

Picture taken by Eduardo Andrade

Ale Siqueira was the musical director of the Olympic Closing Ceremony. He was brought in by Cerimônias Cariocas, the production company in charge of the artistic and technical aspects of this event. During the Cloud Collaboration demo, he was immediately thrilled by the possibilities that this new technology could offer him, since he was working with talents from all over Brazil.

As I was driving back to Rio de Janeiro from São Paulo at the end of the week, I got a call from Ale asking that I join his team as a technical audio coordinator. I would be in charge of all media management and file distribution to the creative teams for choreography, mass movement, lighting, video projection, pyrotechnics and special effects, and would take part in production meetings to communicate the needs of these teams as they pertained to the music and SFX creation. I would also be producing and delivering files for mastering, broadcast, monitors and PA.

After accepting his invitation, a few days later I was at the studio Companhia dos Técnicos (aka CIATEC), where most of the music was recorded and mixed. I would spend my days either there, at Maracanã (the stadium where the Olympic Ceremonies took place) or at the site where daily rehearsals were being held. Though there were many players, the four main pillars of the team I was on were Ale Siqueira, the musical director; Flavio Senna, the mix engineer; Carlos Freitas, our mastering engineer; and Trevor Beck, in charge of the replay system. I have interviewed each one to get their different perspectives on what it is like to work on a ceremony like this.

I’m going to start at the end of the chain, with Trevor Beck, the audio engineer who was in charge of replay, and Stefan Fuller, the second audio engineer on the replay team. John Watterson, the monitor engineer for all the ceremonies, was also able to be a part of the interview to tell us a bit about his part in this process.

Tell us a little bit about the process of receiving audio files and inserting them into the replay system, and about the choice of Pro Tools for the delivery of these files. What are the features in Pro Tools that make it the ideal tool for an event like the Olympics Closing Ceremony?

BECK: First of all, Pro Tools is a universal platform, so whether we are building mp3 test files back in Sydney or working in Brazil, Pro Tools is universally used and understood by everybody. The two biggest things with the Olympics are that, one, the changes never stop, and two, it’s never slow. We need to be able to make those changes consistently and make sure that everything is 100 percent phase coherent with all the other stems. We break everything down into up to 64 stems for diverse purposes: live stadium feed, 5.1 and stereo effects for broadcast, 5.1 and stereo music for broadcast, cues for people using in-ears, count-off tracks, click tracks, FSK for pyrotechnics and timecode for show callers, lighting and projection. We need to make sure nothing drifts and nothing gets out of phase. The ability of Pro Tools to freeze tracks, commit tracks and do offline bounces means that we can make those changes quickly and efficiently and absolutely know that the files will be phase coherent. One of the unique aspects of ceremonies is that the broadcast in 5.1 may be using the music and SFX mix, and then also decide to add in one of the individual music group stems just to enhance what they are putting out to people. So they need to be able to add to their 5.1 mix the individual stems that the stadium mix is using. If those stems are even slightly out of phase, that can be very destructive to the audio they are broadcasting.
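One common way to verify that kind of stem coherence (my own illustration, not necessarily the team’s exact procedure) is a null test: sum the delivered stems, subtract the reference mix, and check that the residual is near silence. The file names below are made up.

```python
import numpy as np
import soundfile as sf

def null_test(reference_path, stem_paths):
    """Sum the stems and subtract the reference mix; if everything is
    sample-aligned and phase coherent, the residual nulls to near silence."""
    reference, rate = sf.read(reference_path)
    total = np.zeros_like(reference)
    for path in stem_paths:
        stem, stem_rate = sf.read(path)
        if stem_rate != rate or stem.shape != reference.shape:
            raise ValueError(f"{path} does not match the reference format")
        total += stem
    residual = total - reference
    rms = np.sqrt(np.mean(residual ** 2))
    print(f"residual level: {20 * np.log10(rms + 1e-12):.1f} dBFS "
          "(the more negative, the better the null)")

# e.g. null_test("music_and_sfx_mix.wav",
#                ["music_stem.wav", "sfx_stem.wav", "vox_stem.wav"])
```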

FULLER: Also, when things go wrong, you know you’re going to have local support for Pro Tools anywhere in the world, which happened on this occasion, as a matter of fact. When I landed, I had an issue on the first day with my iLok, which started playing up. Fernando Fortes, from the team, got in touch with one of the local Avid resellers, Quanta Store, and within a day we had a new iLok delivered. It was shipped overnight, and I was able to move my Pro Tools license to the new iLok within 24 hours.

Trevor Beck, John Watterson and Stefan Fuller

Since you’re managing files that are going not only to broadcast, but also to the PA system and to in-ear monitoring for talent and cast, what are some things you need to be cautious about because you are feeding so many different mediums from one system?

BECK: We’re delivering to three or four different mediums, so the broadcast in one sense is pretty straightforward. They have their surround mix, they have their stereo mix, and they can add individual stems into that if they want to enhance something in particular. For the stadium, we try to break up the stems by tonal subtype so that we can control the low end in the stadium versus the top end in the stadium. So, for example, in an orchestral arrangement, the stems might be broken up into Low Strings and High Strings, Low Brass and High Brass, so that we can EQ and compress for the stadium differently than we could with just a straight stereo file. For in-ears and monitors, I’ll let John Watterson, who looks after our monitors, tell you how he gets that done.

WATTERSON: For some of the cast groups, we use the guide track (the full stereo mix) routed straight out to them. The high-level talent, who might want to hear more of themselves or more of a particular instrument group, such as a string player or a brass player, get a mix built from the stems and mics to suit those requests. There’s also the FM system, which is a highly compressed feed to maintain maximum loudness and has a whole number of elements stacked on top and ducked. There’s some multiband compression, “shout” microphones for cast movement, and com panel microphones for show call and the choreographers; all of these are stacked in a priority order, so that the person who needs to speak at any given time gets the highest priority.
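Here is a minimal sketch of the priority-ducking idea Watterson describes for the FM feed: sources are processed in priority order, and whenever a higher-priority source is active in a block, everything below it is pulled down. It is a block-based illustration of the concept only (no gain smoothing, made-up thresholds), not their actual console setup.

```python
import numpy as np

def priority_mix(sources, duck_db=-12.0, threshold_db=-45.0, block=1024):
    """Mix mono sources given in priority order (highest priority first).
    When a higher-priority source is active in a block, every source below
    it is attenuated by duck_db."""
    length = min(len(source) for source in sources)
    out = np.zeros(length)
    duck_gain = 10 ** (duck_db / 20)
    threshold = 10 ** (threshold_db / 20)
    for start in range(0, length, block):
        end = min(start + block, length)
        duck_lower = False
        for source in sources:                        # highest priority first
            chunk = source[start:end]
            out[start:end] += (duck_gain if duck_lower else 1.0) * chunk
            if np.sqrt(np.mean(chunk ** 2)) > threshold:  # source is active
                duck_lower = True                     # duck everything below it
    # a real ducker would also smooth the gain changes between blocks
    return out

# e.g. (hypothetical arrays): fm_feed = priority_mix([show_call, choreo_com, shout_mics, guide_mix])
```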

 

Trevor, you were present at the London Olympic Ceremonies as well. What were the differences in the technologies available then, and how was the workflow impacted?

BECK: (Laughs) Dear Lord, what I would have given to have Pro Tools 12 in London! That was before offline bounces, track freezing and committing, so the amount of time that I spent bouncing and creating rehearsal files, while trying to build the replay system and at the same time communicating with all the different departments, was just crazy. It would have saved me many, many hours. I would have less gray hair, and still have more hair, if I had had Pro Tools 12 back at the London Olympics! The other thing that has really been handy for us with Pro Tools is the metadata side of things with WAV files. We used to generate QuickTime files so that people could see a timecode burn-in window, and they would program to that QuickTime. Now the other departments use Wave Agent, by Sound Devices (a free application). They take our bounces from Pro Tools, open them in Wave Agent, which extracts the timecode metadata embedded in the file, and play the timecode in a window.
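For what is happening under the hood there: Broadcast WAV bounces carry a “bext” chunk whose time_reference field counts samples since midnight, which is how the start timecode travels with the file. Here is a minimal sketch of reading it directly (my own illustration, assuming an integer frame rate; the file name is made up).

```python
import struct

def bwf_start_timecode(path, fps=30):
    """Read the BWF 'bext' time_reference from a WAV file and convert it to
    an HH:MM:SS:FF start timecode (sketch; assumes an integer frame rate)."""
    with open(path, "rb") as f:
        riff, _, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        sample_rate = time_reference = None
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            data_start = f.tell()
            if chunk_id == b"fmt ":
                fields = struct.unpack("<HHIIHH", f.read(16))
                sample_rate = fields[2]
            elif chunk_id == b"bext":
                bext = f.read(chunk_size)
                # per EBU Tech 3285: 256+32+32+10+8 bytes of text fields,
                # then TimeReferenceLow and TimeReferenceHigh (32-bit each)
                low, high = struct.unpack_from("<II", bext, 338)
                time_reference = (high << 32) | low  # samples since midnight
            f.seek(data_start + chunk_size + (chunk_size & 1))  # next chunk
    if sample_rate is None or time_reference is None:
        raise ValueError("missing fmt or bext chunk")
    seconds, remainder = divmod(time_reference, sample_rate)
    frames = int(remainder * fps / sample_rate)
    hours, rest = divmod(int(seconds), 3600)
    minutes, secs = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}:{frames:02d}"

# e.g. print(bwf_start_timecode("segment_bounce.wav", fps=30))  # hypothetical file
```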

One of the nice things about Pro Tools 12, given its CPU efficiency, is that you and other mix engineers could work with us in the room on a laptop! So we didn’t have to go to a studio or edit suite all the time. If the files had been recorded and mixed and we were working off the final mix sessions, we could actually be in the same room. So I could show them, for example: we’re having a problem when the torch goes up the stairs, and we need some extra time, but I can’t make this loop using the stems in a way that’s seamless; what I need is some additional percussion here or there to make it work. And because they’re in the room with you, and you’re running Pro Tools 12 on a laptop, we literally sit there, edit and make adjustments on the spot. In all these ways, on a big event like this, all these new features give you an efficiency that allows for more flexibility, more creativity and a whole lot more sleep!

FULLER: At one stage of the ceremonies, we had four different Pro Tools sessions open in the same room at once. If we were all running different software, we would not be able to be as efficient.

At that moment, when the show is going on and billions of people are watching, what do you keep in mind to avoid mistakes?

BECK: In a way, you don’t get the opportunity to make a mistake — you just don’t. So you really have to think all your actions through. It’s important to build in a lot of safety features. We get together with the FOH engineer, the monitor engineer and show call, and we do a lot of technical cue-to-cues, where, basically, we check that everything is receiving timecode as it should and we make sure everything fires. We have two replay machines that are linked, so we practice crash-and-burn scenarios where I’ll have Stef running dress rehearsals, and then I’ll say, “Machine A is dead. Go!” And he’ll have to quickly move to Machine B without anyone on the team noticing. We’ll keep doing that until we can do it seamlessly and we’re at the point where we are comfortable and relaxed doing it, so that when you are under the pressure of being live to the world, you’re familiar with everything that needs to be done and it doesn’t feel scary to make those moves. You never approach a show wondering what you would do. You always approach a show knowing exactly what you would do. If something has wheels, powers up or comes out of a tunnel, it could get stuck; if something has to fly or drop and we don’t know how long it could take, we make notes and prepare musical loops for those points. We might have about 50 or 60 musical loops available to us in a show. You might only use three or four, but you have them everywhere, and you make sure none of those loops cross points like pyro points or mess up the projection or anything else. There’s a lot of checking, double-checking, triple-checking, quadruple-checking to make sure you have a plan B for anything that could go wrong, because you can’t stop the music.

 

On our next blog, we will have Carlos Freitas, who’s mastered 26 Latin GRAMMY-winning records, describe the mastering process for the Olympics Closing Ceremony.

Create. Collaborate. Be Heard.

Make your mark with Pro Tools. Create music or sound for film/TV, and connect with a premier network of artists, producers and mixers around the world.

LEARN MORE