Desks support over 100 musicians on stage simultaneously in epic production
I Love Musicals was created by Swedish musical star Peter Jöback in 2012, and its success was repeated in 2013 and 2015. Over 65,000 people attended the latest Scandinavian tour where Peter had invited longtime colleague and friend Helen Sjöholm to co-host. The show’s spectacular success and the audience’s incredible response inspired its creators to take the production global. Peter Jöback took his Swedish production team and musicians to Japan, including Le Comp members: Mikael Jöback, Hux Nettermalm, Martin Höper, Magnus Bengtsson and Mikael Ottosson. In June 2017, Peter brought the show to the Ullevi Arena in Gothenburg, Sweden.
We had the opportunity to speak with FOH engineer Mats ‘Skuggan’ Wennersten about the production, its challenges and how the Avid VENUE | S6L helped him handle an incredible number of channels at both the FOH and monitor positions.
The entire production consists of a 40-piece symphony orchestra, a full band with drums, two keyboardists, a bass player and a guitarist, and a choir of 20 singers. The Ullevi Arena usually hosts up to 50,000 standing attendees. For this event, the audience was seated and the stage was positioned along the long side of the arena, resulting in a seated audience of 27,000 people.
Peter was very keen on doing the show in Sweden’s biggest arena, and for this he brought several guest artists onto the roster, including star singers Helen Sjöholm and Tommy Körberg. He also added three internationally renowned guests from the West End and Broadway: Scarlett Strallen from the UK, Ma-Anne Dionisio from Canada and Tam Mutu from the UK.
Emmy Christensson, who performed with Peter in last year’s Phantom of the Opera, also joined the cast. In total there were seven leading singers and eight additional singing dancers. The full production put around 100 musicians on stage at the same time.
A huge production like this requires a lot of preparation and extensive rehearsing—what was the schedule?
Well, the first rehearsals took place in Stockholm over four days before we went to Gothenburg, where we had planned another three days of rehearsals at the venue. The first of those days was used entirely for setting up the equipment and wiring, and due to heavy rain on the third day, we ended up rehearsing only on the second day and on the show days themselves.
How do you approach mixing such a big ensemble?
I started by counting the musicians and the required mixing channels. Then I decided to mic every instrument in the orchestra individually to make sure we got the best sound we possibly could. Instead of using section mics for the orchestra, we had every single instrument miked up separately.
When the project started in 2012, we had to deal with 96 channels. I was using a VENUE | Profile desk back then, a desk I had actually been using since 2005. Jonas Reinsjö, the monitor engineer, and I had two Profile desks until 2013. In 2015 the show added a 32-piece choir to the performance, so we started using two Profiles at FOH, as the channels no longer fit into one desk. The two desks were MIDI-connected, enabling me to program both desks so that the correct snapshots came up at the right time.
When we took the show to Japan in 2016, we were able to trade in one of the desks for an S6L, which I gave to Jonas for monitors because of its high number of mix busses, while I stuck with my two Profiles at FOH. It was only when we started planning this year’s show at Ullevi that we were finally able to get the S6L-32D for both monitors and FOH.
Our first estimate was that we would need 128 input channels, but at that point we didn’t know about the additional singing dancers. Then came additional musicians, so in the end we used three stage boxes with 140 input channels and around 94 output channels for monitors. I use about 140 plug-ins on the desk. One week before we started the rehearsals in Stockholm, I actually bought a second DSP card for the FOH desk because I use a large number of plug-ins and had run out of DSP power. I had eight or nine reverbs and around 60 multiband compressors; I usually use the standard EQ on the channel and multiband compressors on both channels and groups. Together with my snapshots, I can create a different sonic character for certain instruments and songs.
You can imagine how long this all takes. The entire orchestra gets one setup, the solo instruments like saxophone, trumpet and trombone each get their own plug-in chain, and I programmed the multiband compressors for each of the songs.
What does your workflow look like during the show, and how do you handle this many input channels and plug-ins?
The key is getting all the input channels of each section into layers. For example, the four leading singers are available on the faders at all times via Bank Safe. I have one layer for the band and one for extra instruments like accordion that appear in only a couple of songs. I then have a layer for the strings, one for the brass section, one for the percussion section, one for the woodwinds, and one for the choir including the dancers. Another layer is used for some FX returns. The FX returns for the orchestra are included in the orchestral layers because I have separate reverbs on the strings, brass and so on. This way, I do not have to think too much on the spot and have quick access to everything. In the last layer, I have all the FX returns from the vocals plus additional signals coming in from video with sound: the show intro, for example, or the helicopter coming in via video in Miss Saigon. In total, I think I used eight layers for this show.
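To picture the layer organization he describes, here is a purely hypothetical sketch in Python; the layer names and channel labels are illustrative placeholders, not anything from the actual S6L show file:

```python
# Hypothetical model of the eight-layer setup described above.
# All names are illustrative, not taken from the real show file.
layers = {
    "band":        ["drums", "keys 1", "keys 2", "bass", "guitar"],
    "extras":      ["accordion"],            # used in only a couple of songs
    "strings":     ["violins", "violas", "celli", "strings reverb"],
    "brass":       ["trumpets", "trombones", "brass reverb"],
    "percussion":  ["timpani", "mallets"],
    "woodwinds":   ["flutes", "clarinets", "oboes"],
    "choir":       ["choir L", "choir R", "dancers"],
    "fx_returns":  ["vocal reverb A", "vocal reverb B", "video playback"],
}

# The four lead vocals stay on the surface no matter which layer is
# active, analogous to keeping them available via Bank Safe.
bank_safe = ["lead vocal 1", "lead vocal 2", "lead vocal 3", "lead vocal 4"]

def visible_faders(active_layer):
    """Faders reachable when a layer is selected: the bank-safed
    lead vocals plus that layer's own channels."""
    return bank_safe + layers[active_layer]
```

The point of the structure is the one he makes himself: whichever layer is brought up for a solo, the lead vocals are always on the surface alongside it.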
When it comes to maneuvering the show, because I have done it a couple of times now, I always know when a solo is coming, so I can bring up the corresponding layer, grab the fader of the instrument and adjust accordingly. For some of the new material in this show, I had to learn the parts during rehearsal to be prepared. Our conductor, Julian Bigg, was very helpful. He often asked me if I would like another run-through of certain sections of the show to make sure I had everything I needed. We actually kept a talkback open between the two of us throughout the rehearsals. He is aware of all the technical challenges that come with a production like this, so he pays attention not only to the music but also to the sound engineering side of things.
A great advantage of micing the orchestra instruments individually is that you can recall things much more easily across venues: you place the mics in the same position relative to the instruments and program all the inputs and the basic EQ, compression and FX. This then works in halls, open air and so on.
The song material of the show is also very diverse: at times you deal with orchestrally oriented songs, while the “Chess” material, for example, is pretty much a full-blown pop/rock/symphonic section.
How do you fight feedback? It must be a challenge when you have more than 100 mics open in an arena.
Due to the individual, ultra-close micing, I have a strong signal and the signal-to-feedback ratio is very high. Also, for the PA and monitoring, we chose very good speakers: d&b Audiotechnik for the PA system and the bigger on-stage monitors, and Genelecs for the orchestra seats. Some people actually tried to talk me into another PA system, but I decided to stick with d&b Audiotechnik because they have a very high signal-to-feedback ratio and very good phase linearity.
The woodwinds are the biggest challenge here, as the instruments themselves are not very loud and you need about 20-30 cm between the instrument and the mic. For the woodwinds, brass and strings, I have separate graphic and parametric EQs on each group to control the frequencies that typically cause feedback. I push the group up into feedback, check the frequencies, and then bring it back down to working level. That works pretty well. And at Ullevi, the PA was pretty far away from the orchestra anyway, which also helped a lot in staying feedback-safe.
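The ring-out step he describes, pushing a group into feedback and identifying the offending frequency before cutting it, can be sketched very loosely in Python. The naive DFT peak finder below is a generic illustration of spotting a feedback tone, not anything from the Avid or d&b toolchain:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency in a mono signal via a naive DFT.
    Enough to illustrate identifying a ringing tone; a real analyzer
    would use an FFT with much longer windows."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

# Simulate a 1 kHz feedback ring and locate it.
rate = 8000
ring = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(512)]
print(round(dominant_frequency(ring, rate)))  # prints 1000
```

Once the frequency is known, the corresponding narrow cut goes on the group's parametric or graphic EQ and the group comes back down to working level, exactly the sequence described above.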
During the tour, however, it was a completely different challenge, because in some venues the PA was just 5 meters away from the orchestra. You always have to find the right balance between the frequencies you cut and the full instrument sound you need to maintain. You let a double bass roll off with a low cut at maybe 43 Hz, then have to take something out of the low mids to prevent feedback while still keeping that bass sound; it’s a compromise sometimes.
Let’s have a look at your plug-in armada—what are you using?
Since I come from the VENUE | Profile world, I used to use a lot of McDSP plug-ins like the multiband compressors and dynamic EQs. However, with the S6L system, I’ve been using fewer plug-ins, because right away the S6L sounded so much better thanks to the better preamps and better onboard EQs. I used the multiband compressors to vary the character of the songs after doing the basic EQing with the built-in EQs. I also used a lot of Waves plug-ins on the Profile systems, and I have a feeling I will be using them on the S6L quite soon.
When it comes to reverb, on the Profile I mostly used the TC VSS3 and ReVibe reverbs, while on the S6L I thoroughly checked out the built-in reverbs and created presets with them, and this worked really well. Additionally, I use some Sonnox/Sony Oxford reverbs for the vocals, and for the rest pretty much ReVibe2 from the desk. Jonas and I have compared some reverbs and found that the built-in reverbs are very good, probably because they run at 96 kHz.
How would you sum up your experience with the S6L overall, the sound quality, system functionality and workflow?
Well, knowing my way around the Profile, I immediately felt at home with the S6L. I think they have really taken the best from the former system, added a very intelligent new approach to things and implemented it perfectly. One great thing is that you can choose how to work: you can use the mouse, the touchscreen, the knobs or the faders, depending on what you are used to. Actually, the very first show I did using the S6L was at a rock festival I had previously done on the Profile desk. I imported the show file from the Profile onto the S6L, exchanged some plug-in slots where things were not compatible, and then went straight into sound check.
It was my first time on the desk and I had to do six or seven rock bands back to back. I could intuitively find my way around the S6L and simply gave each band its own fader layer, storing the user faders and the main EQ for each band in a snapshot list. There were, of course, guest engineers from the bands present, and they had never seen the desk before. I ran them through the desk, helped them out, and within five minutes they each had everything up and running.
I genuinely don’t think I could have done this show as comfortably on another desk, because with the S6L it makes no difference whether you use 30 channels or 130 channels. You just have to organize things properly. DSP-wise, now with the second card, I also have enough power to add things for future shows and never have to worry about whether to add a plug-in because of limited DSP power.