
How Avid Media Composer Uses a Computer

In the past, acquiring and running software for desktop and laptop computers was a slow, thoughtful process. We would stand in stores (back when they existed) and stare at software boxes (back when they existed). Turning a box around, we’d comb through the specs, making informed decisions based on our intimate knowledge of the computers we owned.

That was for simple “home stuff” like Quicken or Doom, but what about “work stuff”? As professional craft editors responsible for large projects, it was even more critical to understand the specs and the tech behind the whole process.

At the time, Avid Media Composer wasn’t just software in a box. It was acquired as part of an expensive “turnkey” system – a machine designed from the ground up with the sole purpose of running it as well as possible. That’s the origin of referring to an edit system as “an Avid”.

Today we download and install everything we see to our phones, tablets and even desktops. We’re app-happy. It’s how we test and consume new workflows, and it’s all thanks to this massive D.I.Y. culture. The result is editors and assistants building and supporting their own machines. Unless we’re responsible for outfitting a massive facility, turnkey systems are largely gone. Today all we need to do is click, download and pay a monthly subscription fee to get a working Media Composer system. But what kind of computer are we downloading it to? Will it handle Media Composer? Or perhaps the more appropriate question: How will Media Composer handle the computer?

We’ve all asked these questions over the years, especially when we were students learning the trade. Today, places like the Assistant Editors Bootcamp are great examples of how we bring new women and men into the industry. But are they learning these basics? When Noah at the AE Bootcamp reached out to me with these questions, it was at the request of his Lead AE and Independent Editor class. I was eager to help. But to truly get answers, we needed to get them from Avid’s engineers directly.

A group of students at the Assistant Editors Bootcamp, led by Noah Chamow in 2017. (assistbootcamp.com)

Conveniently, Avid is in a wonderful mindset of transparency right now.

As a Vice Chair of Avid’s ACA, as well as a volunteer Moderator of Avid’s Pro Video Community, I was able to have conversations with a number of senior-level peeps at Avid. Responses came in from many of them – from Avid offices in Massachusetts, Québec and California.

Avid Technology Inc., 75 Network Drive, Burlington, MA 01803

It was the response from Shailendra Mathur, VP and Chief Architect that kicked everything into gear. Here’s what I asked:

For the purpose of assisting editors and AE’s, I’m hoping to create a 1-page guide that explains what parts of a computer are used by Media Composer – broken down by processes like rendering, AVX Plug-ins, real time playback, processor-intensive codecs and so on. Would you or one of your staff be willing to assist me in its creation? 

The response from Shailendra Mathur: 

Hi Chris, sure we can help. We have quite a lot of info on this since it has been a popular question through the ages, and it will be a wonder to fit it into a single page :-).

Shortly thereafter I got an email from a number of Avid’s engineers and away we went. The first question: How simple would this one-pager be?

I opted for a simple concept we could start with – listing only the “intentions” of the app, meaning answers to questions like, “For video effects, does MC use the CPU or the GPU?” and, “How many cores are used and for what tasks?” From there we started diving into the details behind those simple explanations.

It took a while, which wasn’t their fault but rather mine. In addition to my craft-editing schedule, there were a lot of emails and phone conversations with Avid’s engineers to help me understand things. I’m not an engineer, so I’ve been very thankful for their patience. I still can’t say I comprehend it all, yet the goal from the beginning was to have this written by an editor for editors.

Here is the result:

So… Are you begging for more details? Of course! We’re craft editors and that’s what we do.

First… What is this whole thing and how long has it been here?

The Avid Intelligent Compute Architecture, as it’s called, was initially developed for Avid Media Composer v3.0. It evaluates the whole system – the OS, hardware, GPU capabilities, availability of processors and number of cores – and dynamically distributes the processing to the device best suited to the specific task for different segments of the timeline. Rather than targeting just the GPU, just the CPU, or just the FPGA-based cards, the philosophy changed to use them all in a holistic fashion. Thus the whole system is turned into an accelerator. The intelligent media player in the application acts as an orchestra conductor, keeping as many of the resources playing as needed to provide the required performance. Keeping that holistic view of the whole system in mind, particular attention is paid to the cost of transferring heavy video data across the system bus when deciding which compute hardware should be used for a particular process.

OK, let’s get into it. Below are cutouts from the above 1-pager, followed by the notes I jotted down during my conversations with Avid’s Architecture team.

 

RAM and Cache

1. Everything works better with more RAM. Filling a computer with the maximum it can handle is now a standard recommendation. Having less RAM constricts Media Composer’s abilities.

Media Composer works best when it encounters the fewest restrictions possible, especially when it comes to RAM. For example, take a 2014 Mac Pro with 16GB of RAM. Hit play and watch the RAM usage in the macOS Activity Monitor. (This is how we monitor apps and their efficiency.) Media Composer may hover generally around the 8GB area. But take that same system and change the RAM to 32GB, and Media Composer may hover generally around the 14-16GB area. This isn’t Media Composer “hogging” more resources; rather, the smaller amount of RAM was constricting the system’s ability to use any available resources. This is a prime argument for increasing a system’s RAM to max capacity.
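If you’d rather watch this from a script than from Activity Monitor, here’s a minimal sketch using Python and the third-party psutil library. The process name is an assumption – check Activity Monitor for the exact name on your system:

import time
import psutil  # third-party library: pip install psutil

def find_media_composer(name="Media Composer"):
    # The process name is an assumption; verify it in Activity Monitor.
    for proc in psutil.process_iter(["name"]):
        if name.lower() in (proc.info["name"] or "").lower():
            return proc
    return None

proc = find_media_composer()
while proc is not None and proc.is_running():
    rss_gb = proc.memory_info().rss / 1024**3  # resident memory, in GB
    print(f"Media Composer is using {rss_gb:.1f} GB of RAM")
    time.sleep(5)  # sample every five seconds while you play the timeline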

Maxing out the RAM “lifts the computer’s ceiling” as high as possible. All functions then have the possibility to operate at their own maximums on that computer, without constraints placed on them by lesser amounts of RAM. If a new iMac can physically hold a maximum of 64GB of RAM, then that’s what Avid recommends.

Since RAM can be the easiest and least expensive way for a user to upgrade a computer, Avid has been architecting the 64-bit playback engine to take advantage of RAM first.

Minimum requirements will, of course, still exist. For example, a laptop with only 8GB of RAM and a 5400RPM external drive holding a project’s DNxHD media will work as a minimally qualified machine.

 

2. Larger raster sizes (UHD, 4K etc.) use more RAM than smaller ones. 

Larger raster sizes mean more pixels, which require more processing power to play in real time.

https://commons.wikimedia.org/wiki/File:Aspect_Ratios_and_Resolutions.svg
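The arithmetic behind this is quick to verify (my numbers, computed below in Python):

w_hd, h_hd = 1920, 1080        # HD 1080
w_uhd, h_uhd = 3840, 2160      # UHD
print(w_hd * h_hd)             # 2,073,600 pixels per frame
print(w_uhd * h_uhd)           # 8,294,400 pixels per frame
print((w_uhd * h_uhd) / (w_hd * h_hd))  # 4.0 -- UHD is four times the pixels of HD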

3. The Interactive Frame Cache within Composer often assists playback. Since Composer’s cache is stored in RAM, increasing the Media Cache -> Video Memory setting can improve stream counts. This allows more streams to be played in real time without rendering. Users should learn to use this setting as needed.

Avid Media Composer: Settings -> Media Cache

Leaving the Desired Video Memory cranked up all the time may negatively affect other processes and apps.

In Media Composer’s processing algorithms for playback, video is actually looked at as individual frames. During processing, Composer determines how those frames get played in streams. For example, a single video layer of DNxHD media qualifies as one stream. More layers and effects add more streams.

The term processing refers to the heavy lifting of playback. There may be pre-processing involved at some stage (where things are transformed – partial renders would be one form), but everything involved in getting media from the drives out to the output is referred to as processing.

The term cache in a timeline refers to a way that processing distributes the handling of playback. When processing gets really dense and complicated, this is where the cache can assist.

The cache might be thought of as a sort of invisible Video Mixdown that the system uses to reduce strain and help playback. That’s a pretty narrow and somewhat incorrect definition, but it hopefully gets the basic point across. A better way of explaining it: instead of doing a complex evaluation over and over again (processing), Composer keeps the result in RAM, saving the pain of fully reprocessing each time. Cache is downstream of processing, and remembers the results of processing.

Cache does eventually fill up. At a certain point, when there is no longer room in the cache, the oldest-used frame is thrown away in favor of the new frame. Want more frames saved in cache? Get more RAM.
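That eviction behavior is essentially a least-recently-used (LRU) cache. As a mental model only – this is my sketch, not Avid’s actual implementation – it might look like this in Python:

from collections import OrderedDict

class FrameCache:
    """Toy LRU cache: the oldest-used frame is evicted when RAM runs out."""
    def __init__(self, capacity):
        self.capacity = capacity      # how many frames the RAM budget allows
        self.frames = OrderedDict()   # frame_id -> processed frame data

    def get(self, frame_id):
        if frame_id in self.frames:
            self.frames.move_to_end(frame_id)  # mark as recently used
            return self.frames[frame_id]       # hit: no reprocessing needed
        return None                            # miss: frame must be processed

    def put(self, frame_id, pixels):
        self.frames[frame_id] = pixels
        self.frames.move_to_end(frame_id)
        if len(self.frames) > self.capacity:
            self.frames.popitem(last=False)    # evict the oldest-used frame

A bigger capacity (more RAM) simply means fewer evictions, and fewer reprocessing passes.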

Avid Media Composer: Settings -> Media Cache (showing the default setting)

Avid Media Composer: Settings -> Media Cache (showing the Set High setting)

Note: On this particular system, which is loaded with 32GB of RAM, Media Composer is operating at around 8GB. Setting the Media Cache’s Video Memory to 22GB adds to that memory pressure, reaching a grand total of around 30GB. This means only 2GB of RAM is left, which can cause memory issues if any other apps are launched or if background processes like Dropbox or iTunes begin to sync.

When Media Composer is closed, the Media Cache setting is not saved, thus allowing for fast, easy re-launching later.

Note: Increasing this Video Memory setting by large amounts may negatively affect other processes that require RAM, so do not leave it cranked up all of the time. Adjust it up/down as needed.

 

4. The Playback Video Frame Cache improves single frame play responsiveness.

Avid Media Composer v8.9.2: Settings -> Media Cache -> Video Memory tab

While the Media Cache -> Video Memory increases the number of frames saved, this setting increases the responsiveness of those frames during playback. It gets better results with Media Cache -> Video Memory set higher.

 

Codecs

5. Currently the CPU handles encoding/decoding of codecs.

RED camera files (R3D) are the exception; they encode/decode with help from the GPU, and even more so when a Red Rocket card is assigned the workload.

Most codecs are structured in a way that one might call “GPU averse”. But this could (and likely will) change in the future. If codecs become more GPU-friendly as bitstream formats, then Avid will no doubt chase that philosophy.

 

6. Playing codecs smoothly in a timeline requires processing, which benefits from more CPU cores. Codecs with large raster sizes also benefit from more cores.

Playing a timeline that contains Linked (AMA) clips plus many effects results in a higher stream count, especially with more complex codecs. With more cores, there are more opportunities to distribute the processing of those streams effectively.
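As a rough mental model – again, my sketch rather than Avid’s actual scheduler – distributing stream processing across cores behaves like a worker pool: the more workers available, the more streams can be serviced in parallel.

from concurrent.futures import ProcessPoolExecutor
import os

def process_stream(stream_id):
    # Stand-in for decoding/compositing one stream; real work decodes pixels.
    return f"stream {stream_id} processed"

if __name__ == "__main__":
    streams = range(8)  # e.g., one video layer plus several effect streams
    # More cores -> a larger pool -> more streams handled per frame interval.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        for result in pool.map(process_stream, streams):
            print(result)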

 

7. All codecs benefit from more RAM, but some codecs (LongGOP, AVC/H.264) need much more to work effectively.

Some codecs are easy for computers to play and edit (like DNxHD). Other codecs are more complex and require a great deal more processing power to get minimally acceptable results. There are certainly situations where computers that are considered minimally qualified to run MC are not powerful enough to run MC plus those codecs. More RAM will have to be added.

Files accessed through the Source Browser can also be transcoded into Avid codecs (Clip > Consolidate/Transcode), which use fewer streams and fewer system resources to play. Workflows using Avid HD-sized codecs do not require a high number of cores to work effectively.

Note: The H.264 codecs (.mov and .mp4) are no longer being handled by the legacy 32-bit QuickTime engine. As of Media Composer 8.9.1, the 64-bit playback engine handles them. This applies to XAVC-S (.mp4) as well.

 

Video Quality Menu

8. The Video Quality Menu changes the raster size of the viewed output to allow weaker computers to play complex codecs more smoothly. Green/Green mode plays the codec at full raster. Yellow/Green mode reduces that raster to 25%. Full Yellow mode reduces that raster to 6.25% (1/16th size). Currently the CPU, not the GPU, handles this raster resizing.

The CPU does this resizing from the original raster size. Operating in these lower modes on HD-sized projects usually allows a weaker computer to play back a timeline more smoothly.

For larger-than-HD raster sizes, however, this can add a CPU-based bottleneck: the computer is using more resources to do a real-time conversion of the raster.
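To put numbers on those fractions, here’s the arithmetic for a UHD raster (my example):

w, h = 3840, 2160                 # UHD full raster (Green/Green)
full = w * h                      # 8,294,400 pixels
quarter = (w // 2) * (h // 2)     # Yellow/Green: half width x half height
sixteenth = (w // 4) * (h // 4)   # Full Yellow: quarter width x quarter height
print(quarter / full)             # 0.25   (25% of the pixels)
print(sixteenth / full)           # 0.0625 (6.25%, or 1/16th)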

 

9. If a codec is greater than 8-bit, switching the Video Quality Menu to Green/Green/10-bit mode playback can sometimes be a more effective use of overall processing. This depends on the amount of effects on a clip.

 

Video Effects, Timeline & Playback

10. The GPU is front-loaded/preferred by Composer when playing from a timeline. Playback looks at the topmost layer in a timeline first, which is seen as one stream.

The term “front loading” means that the topmost layer in a Media Composer timeline will target the GPU first. When playing a sequence, Media Composer looks at a timeline’s play head (blue vertical bar) from above, and not from the side like we users do. (Imagine the play head as the light bar in a photocopier or a scanner, hovering over a sequence.)

Before playback of a sequence with effects begins, Composer’s algorithms allocate as much of the timeline as possible to the GPU. It also allocates some CPU power, but only after it identifies specifics within the timeline that need CPU-only processing and/or multithreading over a number of cores.

The primary reason for all of this is that we want to read back only one stream from the GPU, and that starts with the topmost layer. So loading a computer with the hottest qualified GPU and the highest amount of GPU RAM possible helps. The better and more RAM-heavy the graphics card, the less data needs to be sent to the CPU and its cores.

Render and Expert Render can relieve system stress by collapsing multiple streams.

 

11. Some effects are processed using only the CPU; they were engineered that way. More recent effects have been engineered toward GPU usage.

Avid Media Composer v8.9.2: Effect Palette

12. Some Color Adapters (source effects for example) are processing-intensive, so a more powerful GPU will handle them more effectively.

 

Random Notes

That’s it on my notes in-context with the basic document, but here are a few other tidbits of information I picked up in conversation.

– As of Media Composer v8.8.3, the QuickTime AMA plugin relies a lot less on the QuickTime engine (operating in the background outside of Media Composer), and more on Composer’s own playback engine.

– Higher frame rate clips benefit from more cores as well as bandwidth. This is because higher frame rates can be handled by something called stream-based parallelism (the processing of subsequent frames in parallel). This eats up a lot of buffer memory – yet another argument for more RAM. Note: this only applies to I-frame codecs, which is why GOP codecs are so much more difficult. Their frames cannot be processed with any sort of divide-and-conquer method. By design, GOP codecs have a lot of interdependencies between the video’s frames. If processing of a later frame can only proceed once the earlier frame it depends on has been decoded, then the work has to be handled much more linearly.
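A toy illustration of the difference (hypothetical frames, not Avid’s decoder): I-frames have no dependencies, so workers can divide and conquer; GOP frames must wait on the frames they reference.

from concurrent.futures import ThreadPoolExecutor

def decode_iframe(n):
    return f"frame {n}"  # self-contained: any worker, any order

if __name__ == "__main__":
    # I-frame codec: all frames can decode in parallel.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(decode_iframe, range(24)))

    # GOP codec: each P-frame needs its reference decoded first,
    # so the chain has to be walked one frame at a time.
    decoded = {}
    gop = [(0, None), (1, 0), (2, 1), (3, 2)]  # (frame, depends_on)
    for frame, ref in gop:
        assert ref is None or ref in decoded   # dependency must already exist
        decoded[frame] = f"frame {frame} (after {ref})"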

 

Further Reading

If you’re really into learning the architecture behind all of this, Avid has granted public access to many documents created over the last few years by Avid’s architecture team.

Here are the links:

1) A blog discussing the architecture itself – the Avid Intelligent Compute Architecture: http://community.avid.com/blogs/mediacomposer/archive/2013/03/12/how-intelligent-computing-powers-our-editorial-architecture.aspx

2) And here is my personal favorite – the public file on Google of the patent filed which includes the performance architecture: https://www.google.com/patents/US8358313

 

Questions?

Any questions? I’m sure there are. If this info doesn’t prompt more questions, I’d be surprised. So everyone can be heard, let’s please post all questions and comments in one place, on the Avid Community.

Here is the link: http://community.avid.com/forums/p/182676/849425.aspx#849425

Ask as many questions as you’d like. If it’s something I don’t know (which is plenty), then I’ll pass it along to others. Or perhaps other users here can step up? The goal is a universal understanding, in order to make us better at our craft. Hopefully this is one good step in the right direction.

Thank you for your time!

Chris Bové

(AKA “Pixel Monkey” on the Avid Community)





A Journey to Pro Tools 12.6

Some Good Folks

There is one thing I really want to share before we start.

This process – the journey towards getting some of the features in 12.6 out into the world – as well as my time at Avid, has reinforced my feeling that I’m lucky to be working with some of the most passionate audio professionals and skilled engineers I have ever had the good fortune to meet.

Equally we have had a group of enthusiastic and professional users following us through every step, making sure we delivered.

It has been a long but satisfying process, with an end result that I’m hugely proud to have played even a very small part in. Thus I want to share some insight into how we made our way towards some of the new features in 12.6.

 

The Beginning

Let’s rewind a few years.

While meeting with one of our largest broadcast users in the Asia Pacific region – as we do from time to time, to update them on where Avid is going development-wise across storage, video, audio and media management – we began discussing what they were looking for in their next DAW solution for post production audio.

We discussed how Avid could help the engineers and assistants in their audio department work more easily and efficiently, without significantly disrupting the established workflows used with their current tools. They also wanted the support and services that only Avid can provide across all our solutions, almost all of which they were already using.

Their reply was clear. “We are very interested in seeing what Avid can do and want to work together”. Equally clear were the features they wanted to see.

To make sense of what was being requested, we had to really look at how they were using Pro Tools, other DAWs and consoles, and how these fit into a workflow.

New features are great, but it’s how they contribute to getting the job done that’s really important.

As the team started looking at these individual ideas and requests, we had to ask whether they lined up with other plans and requests, and whether they would be useful to other users as well. Overwhelmingly the answer was yes. “Let’s get to work!”

I don’t think there was any doubt from anyone involved that what we were looking to undertake was going to be a challenge, and might require some potentially significant changes to the previous Pro Tools edit models.

It was the beginning of many meetings, scribbled notes, whiteboard drawings, rough design documents. These turned into engineering documents, internal task assignments, alpha demos, presentations, feedback, redesign, internal testing, on site testing, and a great strengthening of working relationships between regional offices and departments within Avid.

The truth is, dealing across multiple languages and time zones, as well as various departments both internally and externally, there were a few delays and the occasional misunderstanding. However, everyone’s commitment was absolute. Each time we persevered, made the effort to include and listen to the customer’s feedback, and with some patience on their part, the small issues became just that – small in comparison to the goal.

 

The Essence

Condensed down to a theme, the aim from the customer was to allow assistants and dialog editors to prepare a session more quickly, with as few mouse clicks and button pushes as possible, in a way that didn’t interfere with the mixer’s (often another person’s) ability to control the overall tone and levels of the same session.

If you are involved in audio post, you know how daunting and time consuming it can be to receive a sizable AAF with potentially thousands upon thousands of clips, all of which might need some attention – be it adjustments to timeline position, fades, EQ or dynamics – all of which must be completed before the narration and mix process can begin.

Now multiply this by, for this customer, more than 40 mix rooms and literally hundreds of audio operators. You can easily understand that even small improvements are as valuable to the users as they are to the staff managing schedules and budgets.

 

Previous 12.x Features Highlight

You will have already seen, in recent versions of Pro Tools, a few of the features that partly came out of this adventure.

12.3 introduced Clip Transparency. This seems simple, but once it’s turned on, you will wonder how you ever got by in the past moving a little yellow box around.
Now, aligning dialog and music is much easier. As you move a clip, it becomes transparent so you can see what’s “underneath” and what you are moving. This means less trimming and potentially fewer tracks to achieve the desired result.

-> Enable Clip Transparency in the View menu under Clip -> Transparency

Also in 12.3 was a new Batch Fade window. This feature included presets, shortcuts and advanced options that allow you, for example, to leave fades that already exist, or to change just the shape but not the duration.

Another example would be selecting all the clips after an AAF import and adding new short fade ins/outs to smooth out the transition into and out of clips, while preserving any fades the editors may have used during the edit process. This can be a significant time saver.

-> Try batch fades! Simply select multiple clips on the timeline, then hit Command+F to open the new Batch Fade window. You can recall a few presets using Control+1-5 while the Batch Fade window is open. If you have a EUCON-enabled surface, you can control these functions directly without having to use a keyboard.

12.6 Features!

A page and a half and we aren’t even onto 12.6 yet! So let’s now have a look at some of the new features and the reasoning behind them. Although, as there are so few “rules” in audio, I think how people come to use these features will be as unique and numerous as the range of Pro Tools users around the world.

Clip Effects – real-time, clip-by-clip input gain, polarity, EQ, filters and dynamics, complete with shortcuts and presets. All this with a design goal of making it quick to use.

Depending on how and when you found your way into Pro Tools, this will be either 1) a welcome return, 2) similar to something else you have used, or 3) a new tool. In any case, this feature is bound to have a massive and positive impact on the way you work – much as it would be difficult for me to go back to a version of Pro Tools that didn’t have Clip Gain.

The idea is to have a real-time effect allowing an assistant or dialog editor to help prepare the audio on a track for easier and more creative mixing. Previously, AudioSuite processing or automation could be used. However, this meant extra steps if you ever wanted to revert an AudioSuite render, and automation should ideally be left free for the mixer to use.

Throughout the design process, the team made sure to stick to the ideals of easy and quick access. The feature includes shortcuts for showing/hiding the Clip Effects window, preset selection, and copy/paste. You can work on an individual clip or on multiple clips. When multiple clips are selected, adjustments to individual parameters are applied to all selected clips, but settings for the other parameters are preserved. The clip effects are based on the ChannelStrip plug-in, and all HD users have control over the clip effect settings, while all Pro Tools users can play back, render or bypass them without any compatibility concerns.

-> The Clip Effects control is accessible by clicking the icon in the universe bar or by the shortcut Option+6 (num). Also try turning on the numeric shortcuts for the presets in Preferences to quickly apply your settings. Again, the functions are accessible with EUCON soft keys, enabling fast operation.

Layered Editing

If you’re like me and have spent the large majority of your career in Pro Tools, then you might wonder what the fuss is about the current Pro Tools editing model.

Pro Tools has always dealt with the track on the timeline as “flat”. So “deleting” a clip off the timeline results in a “hole” or blank space. The clip will still be available in the Clips list. This makes a fair amount of sense when there are no other clips nearby.

However, issues arise when you take a small clip, be it a narration drop in or small sound effect, and place it within the boundaries of another clip. When the small clip is deleted, there will now be a hole in the timeline.

If I wanted to repair this hole in the underlying clip, I could either trim the ends or use “heal separation”. But that’s a lot of extra work, and I still might not be able to restore it to how it was before. And don’t we all know, for whatever reason, once you lose that first arrangement, things never quite sound the same.

In 12.6, you can enable “Layered Editing” from the tool bar. So long as the underlapped clip is not fully covered by another clip, you can either move the overlapped clip away by dragging or nudging, or delete the clip to restore the underlapped clip to its previous untouched state.

While I recommend having a try, we have kept the legacy editing model because there are some workflows that require it, and some people will be happy to work as they always have. The development team always tries to retain the ability to use legacy workflows if at all possible.

This all sounds great! But what happens if I do fully cover a clip, either by recording, copying and pasting, or dragging a minute-long atmosphere from the workspace and accidentally covering a group of off-screen clips further down the timeline?

That brings us to Playlist Improvements.

Playlists have always been a powerful tool. Unfortunately, they often weren’t used in post production, mainly because there was no easy way to see whether a track had any playlists or not. Using someone else’s session, or even your own after some time, could mean a frustrating hunt through the tracks in your session looking for alternate takes. This has now been addressed by the addition of a simple indicator, in blue, to show that other playlists are available.

While playlists are great, switching to Playlist track view potentially means losing valuable screen real estate. This could also cause confusion about which tracks were routed where, especially when using an external console. There is now an extremely simple Shift + ↑ or Shift + ↓ shortcut that cycles among the playlists on the tracks with the edit insertion. Using this shortcut, you can easily copy and paste between playlists, or quickly toggle takes for directors or producers without having to mute/unmute various tracks.

Furthermore, there are options to send a clip, or selection of clips, to a particular playlist. If you do a lot of narration recording using custom session templates, you could make a narration track with pre-prepared playlists that are named with take numbers or a rating scale. Pretty useful stuff!

However, we still haven’t addressed the clip overlap issue.

In 12.6 there are new options in the Preferences that will automatically send a clip that is fully overlapped, either by recording or editing, to the next available playlist. This, again, is implemented as an option, so the legacy workflows that users have been taking advantage of are still available.

What do these options do? If a clip, or clips are completely covered either by a new recording, or by another clip via copy/paste or drag and drop (from OS Finder, Workspace, or Clips list), they will be safely stored, intelligently, on another playlist.

Playlists created by this function will be named according to the clip name. If you record over a contiguous arrangement of clips they will be moved together to another playlist.

12.6 also adds great visual indicators about what’s going on in your session, improvements to latency domain control for “in the box dubbing” on HDX, as well as the long awaited direct fade manipulation in the timeline.

I think if you’ve been feeling hesitant about upgrading, now is the time. Pro Tools, and especially Pro Tools HD, has never been more accessible.

I want to finish up this post with a few shout-outs: to the executive team at Avid, who supported us through this; the Pro Tools product team, who listened, took up the challenge and made this happen; the Avid and beta testers, who make us all go back and check what we were thinking; my co-workers in the Japan office, who tirelessly translated and championed for our users; and most of all, the customers who sought to make a positive difference, not just for themselves, but for all Pro Tools users.

Please check out the fantastic Pro Tools 12.6 overview videos from the Audio Application Specialist team.





IBC 2016: Avid Demonstrates Emerging IP and UHD Workflows

Our industry continues evolving at a rapid pace, with emerging technologies and formats offering exciting possibilities for the future. However, implementing these technologies remains a key challenge for many media organizations.

At IBC 2016, we are pleased to preview solutions for several converging technologies that are driving significant change for the media industry. By supporting real-time IP signals natively in key components of the Avid MediaCentral Platform, we’re accelerating the industry’s transition to IP and delivering a unified environment for file-based and live signal-based media workflows that will ease the migration to emerging image formats, including UHD.

Video IP integrations

Many media companies rely on legacy technologies like coaxial cabling and baseband SDI signals to transport video and audio signals throughout facilities and across geographies. But recent technological advancements have made it feasible to pass professional audio and video signals over standard IP networks.

This year at IBC, we will demonstrate support for a variety of emerging IP standards, including SMPTE 2022-6 and VSF TR-03, illustrating how media companies can easily manage the transition to converged IP infrastructure over time. Technology presentations will showcase IP ingest, editing, playout, graphics insertion, and monitoring workflows spanning several Avid products, including Avid Media Composer, Maestro, 3DPlay, and Playmaker.

Visit the booth to see how converging on IP networks for file-based and signal-based traffic will provide media companies with increased flexibility, agility, and lower costs.

UHD integrations

We’re also showcasing innovative UHD broadcast solutions that integrate seamlessly with both standard SDI production infrastructure, as well as IP production workflows. The Avid UHD workflow enables broadcasters to deliver richer, sharper content without over-investing in new solutions, and is centered on Media Composer | Software, Interplay | Production, Media | Director, Pro Tools, DNxHR, and Avid NEXIS, along with graphics and replay servers.

 

Avid is also participating fully in the AIMS alliance and showcasing its interoperability at IBC 2016 with products from other vendors at the IP Interoperability Zone in Hall 8.

Avid at the 2016 IBC Show

Join Avid at IBC 2016 in Amsterdam from September 9-13 or follow the action online! We’ll be sharing several exciting platform innovations and new products to help you address your most pressing business challenges and stay ahead of the competition.





IBC 2016: Avid Extends Openness with New Alliance Partner Innovations

In 2014, we launched the Avid MediaCentral Platform to unify workflows and help customers achieve the efficiency and productivity they need. Since then, the world’s leading media professionals and organizations have adopted the platform in staggering numbers. One reason is the incredible level of openness and interoperability the platform offers.

Key to this success is the Avid Alliance Partner Program—a special partnership that allows developers to become Avid certified, access various levels of tools, achieve partner product certification, and access the Avid sales network. The program is designed to form the basis of an invaluable, multi-year partnership for the most reputed and mission-critical products required by customers.

At IBC 2016, we’ve expanded our Avid Alliance Partner Program to arm developers with the services and resources they need to create platform-compatible solutions even more easily. This way, we can ensure that Avid continues to offer the most open, most flexible, and most interoperable platform on the planet.

New Avid Alliance Partner Program features include new developer testing and certification, new levels of product certification, and additional tools for creating differentiated connections to the Avid MediaCentral Platform.

So what does this mean for end customers? Purchasing solutions from a certified Avid Alliance Partner means that customers can have the peace of mind that they will benefit from an enhanced purchasing experience, a lower total cost of ownership, less complex integrations, superior post-sales support, and a level of software maintenance that includes both Avid and Alliance Partner products.





Pro Tools | Control — Creating Custom Post Macros

After the release of EuControl 3.4 and the free Pro Tools | Control app for iPad, I fell in love with its Channel view and Soft Keys view. When paired with the new Pro Tools | Dock, the combination becomes an amazing piece of hardware that is compact yet extremely flexible. So I decided to go ahead and make some macros for the common tasks I do for sound editing and mixing in Dolby Atmos, along with some cool stuff for Cargo Cult’s incredible plug-in, Spanner.

You can download my user set here. Once downloaded, unzip it and place the file into:

/Library/Application Support/Euphonix/UserSets/MC2User/MC_USER_SET_Root

This is where your custom XML goes. Please note that currently this is Mac only, as that is what I work on along with SoundMiner. The reason I made these is pretty simple: as a mixer, I am very much against doing repetitive work with no creative output. I will explain in the following videos what each macro does. Once you have the XML in place, you can access this page from the User Pages button on the main page of the Soft Keys tab. This takes you to another page with the Tracklay, Dolby Atmos and Spanner buttons. But before that, let’s look at the process of creating a macro.

Pro Tools | Control Soft Keys Tab

Creating your first Eucon Macro

A macro is just a sequence of keystrokes or shortcuts that lets you execute multiple keystrokes in one button press. For example, let’s say you frequently take a copy of an audio clip to the track below it, mute it, and come back to the original clip selection in order to create a backup when you are processing a clip with AudioSuite. This sequence of events can be automated into one button. Simple tasks like these, once automated, save a huge amount of time and free up more creative space for you. The EuControl Soft Keys are stored in XML format, and EuControl can create and execute these macros based on the focused application.

But there are a few things to keep in mind before creating a macro. This is the method I adopt while doing this:

  1. Get the sequence of shortcuts right. Remember, there are many ways of doing the same task, but you want to be able to execute it in the least number of keystrokes. This makes it as fast as possible and simpler to undo your changes. Also bear in mind that EuControl can only accept a maximum of 20 individual steps, but each step can have multiple keystrokes as long as they share a common modifier key.
  2. Once you have determined the sequence, make sure that you have the EuControl Soft Key Editor open beside you. This helps because when building complex macros, you can easily forget the steps if you don’t build in parallel. I do this by performing each step in Pro Tools and adding it to the macro as I go. So, if I wanted to copy, move the selection down, paste and mute, I would make a copy in Pro Tools, put the keystroke in the Soft Key Editor, go back to Pro Tools, perform the move, come back to the Soft Key Editor and put in that shortcut, and so on. It may take a bit of extra time, but it is also very helpful for troubleshooting and being sure that your sequence is correct.
  3. Make sure your button layout is something that lends itself to muscle memory. If you need to access many pages to execute a set of buttons, then it may defeat the purpose of using a macro.

As an example, let’s create a macro that copies a selection, pastes it into the track below, mutes it and returns the selection to the original track. For our purposes, let’s build it in the Pro Tools | Control app. To begin, let’s look at the sequence:

  1. Make sure clips are selected and Command key focus is enabled. Then copy with C.
  2. Move the selection down. So the keystroke is ;.
  3. Paste the copied clips. So the keystroke is V.
  4. Mute the pasted clips. Usually we would just do a Command+M. However, what if the selection contains clips that were already muted? We want to maintain the states of those clips too; otherwise performing a Command+M will unmute them. The easy way to do this is to create a clip group and then mute that, thus maintaining the original mute states. So the keystrokes are Command+Option+G and Command+M.
  5. Move the selection up. So the keystroke is P.

Now that we have the idea set, let’s build it. If you look at the Soft Keys tab in Pro Tools | Control, there is a button on Page 1 called User Pages. We will use the pages in there to build our macros. If you haven’t pasted my macro XML yet, you will see a blank page that looks like this:

Blank User Page

What is important is to note the page number, as we will need it to know where to put our macro. In this case, the page is Page 148. By default, it jumps to page 147 on the current EUCON Soft Key set. For now, let’s create our button on Page 148. To do this, click on the EuControl app in your task bar, choose EuControl Settings and then select the Soft Keys tab. Since we are creating a Soft Key in the Touchscreen section of the app, select Touchscreen from the drop-down menu as seen in the picture:

Eucontrol Setting Soft Key Section

Once there, we need to go to Page 148, because that’s where we will create our macro. Select Page 148 and select any one of the buttons you see on screen. If you want to add more pages, you can simply click the + sign beside the pages drop-down menu. Once you have selected the button, click on Command… This brings you to the Soft Key Editor.

Soft Key Editor

Here we see options like Key, EUCON, Page, etc. Each has a specific function. If you choose EUCON, you can access euconized commands like menus or preferences that are otherwise not accessible via a keystroke. But remember, we can have only one EUCON command per macro. In our example we are building the macro with keyboard shortcuts, so we choose Key.

Once there, the keystrokes we need, in order, are C, ;, V, Command+Option+G, Command+M, P. That’s a total of six keystrokes. But we don’t need to create six different entries for them. The first three don’t have any OS modifier keys, such as Command or Control, that would change their key value (we are assuming Command key focus is on), so we can build those as one Key function. The next one is Command+Option+G. This, Command+M and P need three different entries because their OS modifiers are different. So when we finish building the macro, it will look like this:

Copy and Mute

Macro Button
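That modifier-grouping rule is easy to express in code. Here’s a sketch of the logic only – not EuControl’s file format – walking the keystroke list and starting a new entry whenever the modifier set changes:

from itertools import groupby

# (modifiers, key) pairs for the Copy Down Mute macro
strokes = [((), "C"), ((), ";"), ((), "V"),
           (("Command", "Option"), "G"),
           (("Command",), "M"),
           ((), "P")]

# One Soft Key entry per run of identical modifiers.
for mods, group in groupby(strokes, key=lambda s: s[0]):
    keys = [k for _, k in group]
    print("+".join(mods) or "(no modifier)", keys)
# (no modifier) ['C', ';', 'V']
# Command+Option ['G']
# Command ['M']
# (no modifier) ['P']

Four entries total: the one shared Key function plus the three modifier-specific ones described above.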

Let’s now name our button Copy Down Mute. Now, if you have clips selected and an empty track below them in Pro Tools, go ahead and push the button and see the magic work!

Although this was a very simple one, there are a few things to keep in mind. For example, if you copy from a track with a surround output or an automated plug-in to another one that doesn’t have these, then a dialog box appears saying “Some Automation parameters in the clipboard do not match the paste destination”. This is why we need to test macros in various situations and figure out the best way to make them run. Now let’s look at the macros in the custom Soft Key page shared above.

Custom User Page

“The more you automate repetitive tasks, the more creative time you get. A click or keystroke saved is a creative second earned.”

—Sreejesh Nair

Track Lay Soft Keys

1. Extend & Create 1 Frame Fade

This is very useful to me when I spot sound effects or ambiences from SoundMiner. There is a quick way of cutting a mixdown video or an audio track into scenes. If you have markers on all your scene changes, then you just need to go to the grid display and change it to Clips and Markers. Now hide all the tracks except the video track and select the length of the video. Once you do that, head over to Edit Menu -> Separate Clip -> On Grid. The logic here is that since the grid is set to markers, it separates on the marker locations. Pretty easy! Of course, you can still get scene-based edits if you work using AAF transfers from Media Composer. Once done, it is then easy to spot an ambience from SoundMiner for that scene selection. When I export from SoundMiner, I make sure I have head and tail handles; this is key to making the macro work. My usual method is to spot the ambience, then extend the clip by one frame on either side and insert a one-frame fade. This is now made very easy with this macro. The 5-frame version is the same, with five frames of fade length.

2. Copy Fade from Clip Above

If I have an ambience that is laid in at the exact length of the clip above it, and that clip has a certain fade length I want to use, I can use this macro to replicate it. Note that this won’t replicate the fade type. Copy Fade from Clip Below does the same, but from the clip below the selected one.

3. Extend Whole Clip to Match Clip Above with Fade Length

Sometimes I want certain effects to match the length of the clip above. This does exactly that, but it also includes the clip fades. Once I run this, I can also run Copy Fade from Clip Above to get the same fades too.

4. Extend Clip Head to Match Clip Above with Fade Length

This does the same as above except only for the head of the clip.

5. Extend Clip Tail to Match Clip Above with Fade Length

This does the same for above but only for the tail of the clip.

6. Extend Whole Clip to Match Clip Above without Fade Length

Sometimes the clip above won’t have a fade, or I need only the length of the clip between the fades. That’s when this is the one to use. The rest are the same, with head and tail variants.

7. Duplicate Clip Backwards or Forwards

Duplicating clips forwards is easy. But for backwards, it was usually Control+Option+Command+Click with the Grabber. This was a bit too much, so I created the Duplicate Clip Backwards macro for this very purpose.

8. Export Tracks

This is quick access to exporting selected tracks to a new session.

Dolby Atmos and Spanner Page

The Atmos Pan Transfer macro copies your regular surround pan into an object plug-in. There are a few requirements for the Dolby Atmos pan transfer to execute correctly.

  1. The Object track must be below the track from which the Pro Tools automation is to be transferred, and of the same track width (e.g., both stereo or both mono).
  2. The Object track view must be set to the Master Bypass lane of the Atmos plug-in, and the main track must be set to waveform view.

1. Move Clip Down

This is the first step in copying the pan automation once the above criteria are met. The video below shows what happens.

2. Transferring Pan

Once the above is complete, without clicking anywhere else (in order to not lose the selection), click the appropriate Soft Key. Which one depends on the output you have for the source, be it 7.1 mono or stereo, 5.0 mono, etc. This will copy the automation. Remember to make sure you are on the Bypass lane of the Atmos plug-in. (For this to be visible, the plug-in’s parameters must be automation enabled.)

Spanner Plugin to Atmos

The way this is set up is to have individual Atmos panner auxes for each channel, because there is no multi-mono version of the Atmos plug-in. So for a 7.1 pan you must create L, C, R, LSS, RSS, LSR, RSR in that order, one below the other, all kept below the Spanner track. I usually use Spanner on the master of the ambience bed or FX bed. The objects I want to pan are sent separately to these seven auxes. Once set, the channels need to display the bypass automation lane of Spanner as well as of each panner plug-in.

If you look at the Soft Key page, I have made individual soft keys for each channel of Spanner, because I found this to be a more efficient way of converting the pans than a complete set. It can therefore be used whether you are going from LCR to LCR or all the way up to 7.0. The only thing to keep in mind for a 5.1 pan is that you need to use the Spanner Left Side Surround and Spanner Right Side Surround soft keys for Ls and Rs. These will be sent to the 4th and 5th auxes, but that’s OK, as we are only concerned with the panning metadata. This video will hopefully explain this.

The rest are pretty self-explanatory, I would think. To see how they work, go to the EuControl preferences Soft Key Editor and simply click the macro button. Then, if you click on Command…, you get to see its execution logic. I hope these are of use to you, and I look forward to hearing your comments and ideas! To watch and learn more about EUCON Soft Keys and the technicalities, check out this excellent Tech Talk by Mark Corbin from Avid:

You can download the custom soft key user set here.





Is there a DR in the House? Disaster Recovery Explained

It seems that everyone I know has a story to tell about the time they lost work when their computer died. The more dramatic versions of these stories typically involve tight deadlines for projects with big bucks on the line. And the usual epilogue to these stories is how they will never ever let this happen again. Backing up data for these folks is now a fanatical obsession.

Fortunately, we live in an age where storage is relatively cheap and plentiful, and systems are available to automate backups. So it’s easy to set things up so that you never have to worry about losing data.

 

Disaster Recovery

In business-speak, disaster recovery is defined as the set of policies, procedures, and systems to enable the continuation of critical business functions. Any outage of the production systems may result in disruption of a business and cause financial loss. To mitigate the risk of such outages, companies can design and deploy disaster recovery systems to minimize data loss and downtime.

In general, operational risk is defined as the risk of losing value when bad things happen – earthquakes, hurricanes, attacks by killer bees, etc.

Operational risk is managed by keeping losses within some level of risk tolerance which is determined by balancing the costs of improvement against the expected benefits. Having redundant components or full system replication is a means to mitigate risks of failures.

This table shows common operational risks, with examples and possible solutions to mitigate risks.

Operational risk | Example | Solution
Hardware/software failure | Server crash | Redundant system components
Localized outage | Loss of power to a server room | Local system replication
Site outage | Hurricane | Regional system replication

DR Modes

A system is always in one of two modes: primary or backup. Only one of the systems can be in primary mode at any time. Under normal operation, the initial active system runs in primary mode and sends updates to another system (or systems) operating in backup mode. This allows a backup system to be ready to assume the primary role in case of an emergency.

When a disaster is declared, a system that was previously operating in backup mode starts operating in primary mode. This event is called failover. When the failed system is restored, it then acts as the backup to the newly designated primary system. This event is called failback. Note that the primary and backup roles are interchangeable between the systems.
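The failover/failback bookkeeping is simple enough to sketch in a few lines of Python (the names are mine, not any Avid API):

class DRPair:
    """Tracks which of two interchangeable systems holds the primary role."""
    def __init__(self, a="System A", b="System A'"):
        self.primary, self.backup = a, b

    def failover(self):
        # Disaster declared: the backup assumes the primary role.
        self.primary, self.backup = self.backup, self.primary

    def failback(self):
        # The restored system, now acting as backup, takes primary back.
        self.failover()

pair = DRPair()
pair.failover()      # clients now connect to System A'
pair.failback()      # System A restored and promoted again
print(pair.primary)  # System A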

 

DR System Configurations

There are multiple server configurations that can be used in Disaster Recovery systems. The diagrams below show typical configurations. Common to all configurations, system A is continually backed up to system A’. If system A fails, clients can immediately connect to system A’ while system A is restored.

 

Active / Passive

The Active / Passive configuration is the most basic of DR configurations. System A is continually backed up to System A’. System A’ is effectively a “warm standby” system that is only brought online when System A fails, in which case the clients connect to System A’.

 

Active / Active

In the Active / Active configuration both Systems A and A’ are used by clients and are continuously synchronized with each other. If either System A or A’ fails, then the clients of the failed system switch over to connect to the working server.

Active / Active configuration has an added advantage in that both systems can be used simultaneously. However, it may have complications such as conflicts if the same data is changed by different people in both the A and A’ systems. Systems that allow for Active / Active configurations typically allow setting policies for conflict resolution.
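One common policy is last-write-wins. Purely as an illustration of the idea (not how any particular Avid product resolves conflicts):

def resolve(version_a, version_b):
    # Each version is (modification_timestamp, payload).
    # Last-write-wins: keep whichever side changed most recently.
    return version_a if version_a[0] >= version_b[0] else version_b

a = (1694000100, "edit made on System A")
b = (1694000220, "edit made on System A'")
print(resolve(a, b)[1])  # keeps the System A' edit, the newer of the two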

 

Shared Active / Active

In the Shared Active / Active configuration, two separate systems are over-provisioned to act as mutual backups. If system A/B’ fails, then the A clients connect to system A’/B, and vice versa.

The Shared Active / Active configuration has advantages over the two previous configurations in that both systems can be continuously used, without the complications of data conflicts.

You can see more DR configurations in this technical brief.

 

Local Replication

An Interplay Production system is commonly deployed with Avid shared storage systems, either ISIS or Avid NEXIS. These systems can be configured to have local replication using the configurations listed above. For example, here is an Active / Passive configuration.

A second complete system can be configured at another location within the site. This system is considered the Backup Workgroup. An instance of the Interplay Copy service is configured to continuously back up the data in the Primary Workgroup to the Backup Workgroup. This process copies the clips and sequences (asset metadata) as well as the video and audio files (asset essence) to the Backup Workgroup.

 

Regional Replication

For regional replication, where the connection between the two systems has high latency (i.e. over a WAN), the mirroring to the backup system can be performed using the backup and restore functionality in Interplay. A synchronization application can be configured to copy the Interplay backup data to the backup system in an Active / Passive configuration.

The following steps are scheduled to run at a given interval (daily, every 8 hours, every hour, etc.):

  1. A backup of the Primary Workgroup runs.
  2. The synchronization application copies the backup data from the Primary Workgroup to the Backup Workgroup.
  3. The database is restored to the Backup Workgroup.

The media files can be copied using the same synchronization application. A File Gateway system is configured to allow access to the remote backup system using the CIFS client.

 

Applications for Synchronizing Files

There are several applications available for synchronizing file systems. Mirroring is used for Active / Passive systems, where the primary file system is copied to the backup system. Synchronization is used for Active / Active systems, where changes made on either file system are made on the other system.

This table shows free utilities that can be used to actively mirror or synchronize file systems:

Application | OS Availability | Mirror | Sync
robocopy | Win | Yes | No
rsync | Win, Linux, Mac | Yes | No
Unison | Win, Linux, Mac | Yes | Yes

These applications all have the option to use file dates to optimize scanning and file transfers. When files are copied, the new file’s modification date is updated to match the original file. This allows the application to cut down on file “scanning” to determine if a file has been changed, making the process faster.

The applications also have the ability to skip over specified folders. This can be used to prevent the copying of files that are actively being created.
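Both ideas – the modification-date check and the skipped folder – fit in a few lines. A minimal Python sketch (the paths and the excluded folder name are examples):

import os
import shutil

EXCLUDE = {"creating"}  # skip folders holding files still being written

def mirror(src, dst):
    for root, dirs, files in os.walk(src):
        dirs[:] = [d for d in dirs if d not in EXCLUDE]  # prune excluded dirs
        target = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            # Copy only if missing on the backup or the source is newer.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves the modification date

mirror(r"\\system-a\folder", r"\\system-a-prime\folder")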

Using Scripts for Continuous Backups

Note that all three sync apps mentioned above work as a “one shot”: the commands will not run in a continuous mode without some further scripting. However, this can be easily achieved in most scripting languages, using the power of “goto technology”:

 

echo mirroring system-a to system-a-prime %date% %time%
:start
robocopy \\system-a\folder \\system-a-prime\folder /e /purge /xd creating
timeout /T 60 > NUL
goto start

 

The script above will repeatedly copy new or changed files from System A to A’. It starts in the directory named “folder”. The “/e” flag specifies a recursive copy, which scans all subdirectories looking for files to copy. The “/purge” flag causes the deletion of files and directories on A’ that no longer exist on System A. The “/xd” flag causes the application to skip over files in any folder named “creating”.

Also note the number 60 in the timeout command. This specifies a one-minute delay between file system scans to reduce the CPU and I/O load. This number can be tuned to balance the I/O load and the frequency of backups to the DR system.

DR Tag Team at the ACA

At the ACA in Vegas last week, I gave a presentation on DR with Dan Keene from World Wrestling Entertainment.  WWE is a global sports entertainment company headquartered in Stamford, Connecticut. The company is one of the largest producers of original content distributed to 180 countries across the globe. They produce more than 40 hours of original programming every week. Dan discussed WWE’s plan to build a regional Active / Passive DR system to duplicate their production media on Avid shared storage and Interplay to a virtualized environment at a remote location using the techniques mentioned above.

We got some good questions after the talk. Here are a couple of them with answers:

 

Q: What software are you using for switching clients from the primary to the backup system?

A: WWE uses a utility called Production Selector from Jelly Bean Media to automate the connection to their systems.

 

Q: How do you keep the users and workspaces between the primary and backup shared storage systems in sync?

A: Currently it’s a manual process. Changes to users and workspaces on the primary system must be made manually on the backup system as well. Avid is looking into using the Data Migration Utility to help automate these changes.

 

By the way, if you happen to be in the Washington DC area on May 25, you can catch my talk on DR at the SMPTE Bits by the Bay Conference.

 

Conclusion

Utilizing system redundancy, locally and/or regionally, is the best way to ensure that media production systems keep running smoothly. Various DR configurations can be deployed to meet the needs of your business. There are free tools available to automate the backing up and restoring of data to local or remote systems.

Using these techniques will ensure that your big budget project won’t go bust if bad things happen. Knock on wood.




What’s New in VENUE 5.1.1 Software for S6L

The new VENUE 5.1.1 software update is now available to download at no charge for all VENUE | S6L customers with a valid Avid Advantage ExpertPlus support contract. VENUE 5.1.1 enhances S6L control surface functionality with GEQ on faders and improves overall system performance.

To download the VENUE 5.1.1 software update free of charge, S6L customers just need to log in to their Avid Account and find the installer link.

 

GEQ on Faders:

VENUE 5.1.1 adds the new GEQ on faders functionality, allowing users to control each of the 31 bands of the built-in graphic equalizers via the control surface faders.

VENUE | S6L provides a pool of 32 31-band graphic EQs (GEQs) which can be inserted across any Aux bus, Group bus, Mains bus or Matrix output channel. It’s always been possible to control the Graphic EQ from the External GUI or via the control surface encoders, but with VENUE 5.1.1, it’s now possible to control GEQs via the faders.

S6L's 31-band graphic EQ assigned to the faders

Here’s how you do it…

In the VENUE software, go to the OUTPUTS page. Attention an Aux, Group, Mains or Matrix output channel. Target the “GEQ” tab and insert a GEQ across the output channel using the drop-down list.

Now go to the MEDIA > Events page and create a new Event. Choose a Function switch (for example, F1) as the Event Trigger. Choose the new GEQ on Faders option as the Event Action. With this simple Event programmed, every time the function switch (F1) is pressed, the S6L control surface will target the graphic EQ to the faders for the Attentioned output channel.

Now go back to the S6L control surface. Attention an output channel (one with a GEQ inserted across it), and engage your new GEQ on faders event by pressing the function switch in the soft keys section.

Result: GEQ on faders!

VENUE | S6L Now Available

The next stage in live sound is here—with the award-winning VENUE | S6L system, you can take on the world’s most demanding productions with ease.

LEARN MORE




What’s New in VENUE 5.1 Software for S6L

The new VENUE 5.1 software is now available to download at no charge for all VENUE | S6L customers with a valid Avid Advantage ExpertPlus support contract. VENUE 5.1 adds significant capabilities to this already powerful system, including expanded networked I/O capabilities and control surface enhancements. To download the VENUE 5.1 software update free of charge, S6L customers just need to log in to their Avid Account and find the installer link.

 

What’s new in VENUE 5.1

  • Support for two AVB-192 Ethernet AVB Network Cards, which enables users to expand their VENUE | S6L systems to support up to 192 remote mic-pres and 96 outputs using three fully loaded Stage 64 I/O racks to take on the biggest live sound productions
  • Spill Mode, which allows engineers to quickly spill any Aux, Group, or VCA members onto the surface faders for immediate access to these channels
  • Enhanced visual feedback of parameters and states on the high resolution OLED displays for faster navigation and mixing
  • Improved Show file compatibility with other VENUE systems

Rich Steeb mixing FOH for Blue Rodeo

Support for two AVB-192 Ethernet AVB Network Cards

The VENUE 5.1 software update enables you to increase S6L’s I/O capabilities to support up to three fully maxed out Stage 64 I/O racks on a single AVB network ring—up to 192 inputs and 96 outputs—by adding a second AVB-192 Card to your E6L Engine. This is in addition to your local control surface I/O and 64 channels of Pro Tools recording and playback. If you are using the single AVB-192 Ethernet AVB Network Card that ships with the system, you can still connect up to 64 inputs/32 outputs with one Stage 64 rack, and up to 96 inputs/64 outputs with two Stage 64 racks (48 ins/32 outs each). Each AVB-192 Ethernet AVB Network Card provides two independent Gigabit Ethernet ports, copper and fiber (via SFP), and also includes a built-in 7-port switch.

VENUE | E6L Engine

This is a significant expansion of the S6L’s I/O capabilities, enabling you to take on even the biggest shows. The VENUE | E6L-192 Engine has always had the processing power to handle these high channel counts, offering a processing channel for every mic pre in your remote stage boxes, but this represents a major enhancement even for S6L systems running the E6L-144 Engine, which supports 144 input processing channels. Not only does the second AVB-192 Card offer you more inputs than before, but you can now connect up to 192 Stage 64 inputs and switch between them—perfect for festival scenarios where you might be switching assignments between acts and stages. You can even switch between I/O assignments via VENUE Snapshots.

To connect a redundant ring network using three Stage 64 I/O racks:

  1. Connect an audio network cable from Network port A on the back of the S6L control surface to Network port B on the second (middle slot) AVB Network card of the E6L engine.
  2. Connect an audio network cable from Network port A on the first (lowest slot) AVB Network Card of the E6L engine to Network port B on the first Stage 64.
  3. Daisy-chain the first Stage 64 to the second Stage 64 by connecting an Ethernet cable from Network port A on the first Stage 64 to Network port B on the second Stage 64. Connect another Ethernet cable from Network port A on the second Stage 64 to Network port B on the third Stage 64.
  4. (Redundant) To make a redundant audio network connection:
    1. Connect an Ethernet cable from Network port A on the last Stage 64 in the chain to Network port B on the first (lowest slot) AVB Network Card.
    2. Connect an Ethernet cable from Network port A on the second (middle slot) AVB Network card to Network port B on the S6L control surface.
  5. (Optional) To connect to a qualified Pro Tools computer, connect a supported Ethernet cable from Network port C on the S6L control surface to an available Ethernet port on the computer (or to a Thunderbolt port using a Thunderbolt to Ethernet adapter).*
  6. (Optional) To connect to a router or computer for ECx Ethernet Control, connect a standard Ethernet cable from the port labeled ECx on the S6L control surface to the router or client computer.

* Do not connect network equipment such as routers, hubs and switches to any S6L system Network ports.

Duke Foster mixing monitors for Blue Rodeo

Spill Mode

VCA Spill is a feature that was first introduced on the SC48 before making its way into other VENUE consoles along with Group Spill. With VENUE 5.1, this functionality has been significantly enhanced on S6L to include VCAs, groups, and auxes. If you’re familiar with VENUE’s VCA Spill, it works the same on S6L: double-press your Attention key and the console will spill the members assigned to that VCA onto the faders, offering access to the channels assigned to that VCA. In S6L’s default banking configuration, what we call “Profile mode”, when you bring up your VCAs on the faders and then spill your VCAs from there, you can basically stay in this mode the whole show. It’s a very easy way to access everything, and especially useful when you’re dealing with the high channel counts that S6L supports.

What we’ve done with S6L is to expand this even further by adding Aux Spill, which allows you to spill any channel that is assigned to the aux onto the faders. If the aux send for a channel is turned on, that automatically makes it a member of the aux bus. So when you double press the attention key of an aux bus, you’ll spill its members. Why would you want to do that? Because it allows us to then engage sends on faders. S6L has two distinct workflows that work really well in conjunction.

Imagine the workflow: you’re a monitor engineer and you want to access the mix for the singer on stage whose monitor mix is fed by Aux 10, and you’ve got all your input channels up. In a conventional sends on faders workflow, you’d access the bus you want to see the sends for, and then engage sends on faders. The limitation of this conventional workflow is that you’re presented with whatever bank you’re on (for example, input channels 1-24), irrespective of whether those channels are actually assigned to Aux 10. For some channels the fader might be down at minus infinity with no send going to that bus. What you really want is a more filtered view, where you see only those channels that are assigned to the aux.

That’s the beauty of S6L’s new Aux Spill—it takes advantage of two workflows simultaneously. First you spill the aux master to get the channels assigned to the bus, and then engage sends on faders. The result is that the control surface shows the send contribution on the faders for only the channels assigned to the bus.

Spilling is done by double pressing the Attention key, and engaging sends on faders is done by pressing the solo button—the AFL for the output bus (make sure the “Sends on Fader Follows AFL” option is active in the OPTIONS > Busses page). If you go into the Options > Interaction page of the software and link solo and attention, you can keep the console in this mode for the whole show if you wish. At that point you engage the spill, AFL up the channel, and anything that you attention or solo from that point on will spill and give you the sends on faders, offering you quick access to all the relevant elements in the bus you’re working on.

Although operating Aux Spill is very simple from the control surface, we’ve made it even faster by integrating it into S6L’s Universe Screen. The whole point of the Universe Screen is to provide you with quick access to any element that exists in the massive collection of channels, and with VENUE 5.1, you now have a Spill button in the function bar of the Universe Screen for auxes, groups, and VCAs. By using the Universe Screen and putting the console into Inputs Mode, where all faders are input channels, you can just spill stuff from the screen—you don’t even need to see the VCA master. If you’ve got one of your flex channels (the two faders at the top of the master section) assigned to follow the “attentioned” channel, it will always give you the master of the spilled members. This is a seriously powerful new workflow that gives you unprecedented speed in accessing and tweaking your channels.

S6L at FOH for Carols by Candlelight in Adelaide, Australia

Enhanced visual feedback

S6L’s Channel Knob Modules (the sections above the faders with 32 encoders) feature high resolution OLED displays that show the information for their associated encoders. VENUE 5.1 makes it easier than ever to mix and navigate using this section by improving the visual feedback provided by the OLED displays. Each encoder is not only a knob; pressing it toggles whatever function is assigned to it, for example, aux send on/off. The OLED shows which parameter is currently assigned, and when you press the encoder, the center of the ‘halo’ graphic illuminates to show that the function is engaged. Additionally, the parameter name is displayed on top, the parameter value below it, and the functions of the associated “select” and “in” switches are shown down at the bottom left. This enhanced visual feedback makes it much easier to see the status of each knob at a glance when you’re scanning across the console.

Improved Show file compatibility

Finally, Show file compatibility with other VENUE systems has been improved with the addition of automatic configuration of FX returns. In legacy VENUE systems, FX returns are a completely different kind of channel from input channels, whereas S6L handles them as regular input channels. So if you’ve got a show file from Profile and bring it into S6L, the software is now clever enough to identify your Profile FX returns and will automatically preconfigure some of your S6L input processing channels to behave like FX returns. Say you have a 48-input show file built on Profile. When you import the show file into S6L, it will automatically map your 48 inputs 1 for 1 and then place your stereo effects returns starting from channel 49 onwards. S6L will also preconfigure those inputs to behave like effects returns: it will make the channels stereo and keep the channel names from your original Show file. And unlike the old FX returns, which had a reduced EQ and no dynamics processing, on S6L these channels have complete channel processing, including a 4-band parametric EQ, compressor, and gate.

Gerard Albo mixing the a-ha world tour

As you can see, the VENUE 5.1 software update represents a major increase in S6L’s power and flexibility, and we encourage all customers to log in to their Avid Account and upgrade their systems with this no-cost download.


VENUE | S6L Now Available

The next stage in live sound is here—with the award-winning VENUE | S6L system, you can take on the world’s most demanding productions with ease.

LEARN MORE




The Terrible Word “Premultiplied” Explained

Most of my fellow students hated algebra. I liked it, in part because of our professor – an interesting character, different from any other mathematician I knew. No walking around with his head in the clouds; he was very practical and down to earth – and an accomplished bridge player with numerous international titles. We called him “Julian And-The-Rest-Is-Obvious”, because whenever he presented a proof of an algebraic theorem, he would go only as far through the intellectual hurdles as he thought necessary. Then he would take a step back, dust the chalk off his hands, look at the blackboard and say: “and the rest is obvious, isn’t it?” – leaving us scratching our heads, trying to find the obviousness in all the rest…

One of the things Julian taught us was that some things look simple at first glance, then start to look complex and confusing when you dive in, but when you really understand them they become simple again. So…

Everyone recognizes this formula?

mix = graphics * α + video * (1 – α)

Yup. The good old blending function we have all known and loved since the late ’70s. In the real-time broadcast graphics world, it allows us to overlay text and graphic elements on top of video. Simple, isn’t it? Well…
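
(As an aside, in code it’s a one-liner – a minimal Python sketch, with all values as floats between 0 and 1:)

def blend(graphics, video, alpha):
    # graphics weighted by its coverage, video by what remains
    return graphics * alpha + video * (1 - alpha)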

Have a look at this picture:

These dark edges around the text don’t look right. It should look like the image below, shouldn’t it?

So… where did these dark edges come from?

Let’s now forget about our texts and orchids and do some math on a simple example. Imagine the edge of a uniformly light grey object (say, color = 0.7 on a scale from 0 to 1), and assume that the edge is sloped. To look nice and smooth, the edge needs to be antialiased. Since raster graphics are composed of discrete pixels, antialiasing is realized by calculating the alpha for each edge pixel, based on the percentage of the pixel covered by the object:

A pixel that is 25% covered will have α = 0.25. Now assume that we overlay our pixel on video with color = 0.4:

The formula is:

mix = graphics * α + video * (1 – α)

So:

mix = 0.7 * 0.25 + 0.4 * (1-0.25) = 0.475

We should receive 0.475, but the color we really get is significantly darker. Why is that? To find out, we need to understand the process:

Most of you have encountered the term “fill and key”, which in the video world means the same as “graphics with alpha” or “color and alpha.” Fill and key signals are generated by the graphics server and are overlaid on the video by the linear keyer (usually in the video mixer), producing the composite signal, which I earlier called the “mix”.

The graphics and its alpha look like this:

Color

Alpha

See how nicely the edges are antialiased? That’s what we expected, right? Well, wrong!

The text is antialiased because it was overlaid on the graphics’ black background using the same blending function as for blending with the video. Let’s come back to our earlier example:

graphics = object_color * α + background * (1 – α)

Our graphics pixel will have the value:

graphics = 0.7 * 0.25 + 0.0 * (1-0.25) = 0.175

Now, when we overlay that pixel on our video, instead of the expected 0.475 we get:

mix = 0.175 * 0.25 + 0.4 * (1-0.25) = 0.34375

A significantly darker color… You should now see where the dark edges came from: the color of the text was multiplied by alpha twice, because it was in fact composited twice: first when it was drawn over the graphics’ background, and a second time when it was mixed with the video. We say that the graphics was premultiplied because – since the background was black – it was simply multiplied by alpha prior to mixing it with the video:

graphics = object_color * α + 0.0 * (1 – α) = object_color * α

One might ask: what if the graphics background was not 0? Well, that would really be asking for trouble. Just don’t do it!

Anyway: the result of this pre-multiplication is that edges with alpha smaller than 1.0 become darker than they should be. How do we solve it? There are four solutions to this problem:

1. Compose just once: get the video into the graphics system and overlay the graphics directly over it. There are some disadvantages, such as a double color-space conversion between YUV (used for video) and RGB (used in graphics), but that is the least of our problems. The most important one is that in most cases our customers will simply not agree to have the compositing take place inside the graphics system. A graphics server is usually not supposed to be downstream…

2. So here is another idea: why don’t we antialias just the alpha, but not the graphics? In other words, render the graphics in such a way that it is not pre-multiplied:

Color

Alpha

Unfortunately, this won’t work in the general case. I’ll let you figure it out for yourselves… Hint: remember that the graphics can have more than one object.

3. What we can do is un-pre-multiply the graphics. A terrible word, but it describes what needs to be done: before we apply the good old blend function, we need to “repair” the graphics by dividing it by alpha in order to recover the original object colors. This can easily be done by applying a graphics fragment program (a.k.a. “shader”) to the rendered graphics. Those who are afraid that dividing by very low alpha values will result in calculation errors should not worry. In the end, what matters is really the error of:

(graphics_color / α) * α
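
In a real system this runs as a fragment shader over the rendered image; here is the same operation as a minimal Python sketch, checked against the numbers from our example:

def unpremultiply(graphics_color, alpha):
    # Recover the original object color; when alpha is 0 the pixel
    # carries no color information anyway, so just return 0
    return graphics_color / alpha if alpha > 0 else 0.0

print(unpremultiply(0.175, 0.25))  # 0.7 -- the original object color

And as the formula above says, any rounding error introduced by dividing by a small α is scaled right back down when the keyer multiplies by α again.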

4. But there is a more elegant solution. Back to the math… As a result of the pre-multiplication, the formula actually being computed is:

mix = graphics * α + video * (1 – α)

i.e.:

mix = object_color * α * α + video * (1 – α)

while what we are really asking for is:

mix = object_color * α + video * (1 – α)

The solution is simple. Since:

graphics = object_color * α

our formula should be:

mix = graphics + video * (1 – α)

In other words, change the blending function of the linear key and simply don’t multiply the incoming graphics by alpha. After all, it was already pre-multiplied by it! And indeed, some linear keyers support such a modified blending function. Simple again, isn’t it?
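
Putting the whole chain into a few lines of Python, with the numbers from our running example, shows both the bug and the fix:

# object color 0.7, 25% pixel coverage, video 0.4
object_color, alpha, video = 0.7, 0.25, 0.4

graphics = object_color * alpha                       # pre-multiplied (rendered over black)

naive  = graphics * alpha + video * (1 - alpha)       # keyer multiplies by alpha again: too dark
keyed  = graphics + video * (1 - alpha)               # solution 4: keyer skips the multiply
target = object_color * alpha + video * (1 - alpha)   # what we actually want

print(naive)   # 0.34375 -- the dark edge
print(keyed)   # 0.475
print(target)  # 0.475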

Avid’s graphics servers (called “HDVG”) support three out of four solutions presented above:

1 – through so-called video insertions, which can be mapped on the video background

3 – through the “shader” applied as a post-processing for the entire rendered image

4 – when the internal linear keyer of HDVG is used. I know, I know. You are going to ask how it differs from solution 1? In both cases we put the graphics machine downstream… Indeed, but in this case HDVG does the mixing directly in the video I/O board, not in the GPU. And that I/O board has a bypass: should anything happen to the graphics subsystem, the video will pass through unharmed.

By the way, there is a small caveat. Of all four solutions, only the first one gives proper results in all cases. Even though we normally use solutions 3 or 4, in some cases they might produce wrong colors. I’ll let you figure it out for yourselves… Hint: the same as in solution 2. And note that the trouble described above applies not just to antialiased edges. Exactly the same mechanism is responsible for the darkening and discoloring of semi-transparent objects.

For dessert, a problem which looks unrelated, but in fact has the same root cause.

The word “calligraphy” is derived from Greek and means “beautiful writing”. In our broadcast graphics world, by “calligraphic font” we mean a font in which letters touch each other and change shape depending on their neighbors. Arabic scripts are the best-known examples of such fonts. In theory, the glyphs (or rather their presentation forms, because shapes change depending on neighbors) should touch each other for the text to look continuous. In practice, letters overlap. Look what can happen then:

Do you see a subtly thicker joint? There is actually more than one in the text above… If we draw the text with transparency, it looks even worse:

Any idea why? After what we have just discussed, all you need to know is that characters are separate objects. And… the rest is obvious, isn’t it?




What is a Production Workflow?

We’ve all seen the old movies about the broadcast news business. It starts with a plucky news reporter given an assignment or following a hunch for a story and then heading into the field with a camera crew to capture a news event or interview a subject. From there, the footage is returned to the station, edited and makes it on-air with seconds to spare! And there you have your typical production workflow.

Joan Cusack in Broadcast News (1987)

When we refer to a ‘production’ workflow, we mean starting with the visualization of an idea for a show, film or song and carrying it through the production process until there is a final product ready to share.

Our goal in QA at Avid is to emulate, as closely as possible, our customers’ real-world end-to-end production workflows in our test labs to identify and fix bugs before the software is released. With a customer base that spans the globe and covers many kinds of project types, we unfortunately cannot test every specific workflow in use day in and day out. However, if you want to share what your workflow looks like with us, we should be able to incorporate aspects of it into our production workflow tests! See how to reach us at the end of this article.

In general, a production workflow can be categorized into 5 stages. While they can often overlap throughout the course of a project, we can categorize these general stages as: Ingest or Media Acquisition; Media Staging, Search and Logging; Editing and Collaboration; Asset Management; Delivery, Broadcast and Distribution.

 

Ingest or Media Acquisition

This is where it all begins with the creation of the raw media. The media can come from professional grade video or film cameras, a graphics system, cell phone, a musician recording a song in their home studio or a production crew recording a live sporting event.

 

Media Staging, Search and Logging

In this stage, the acquired media is prepared for the editing process. Assistant editors, producers, loggers, interns or other staff begin to review and log the raw media for the editing process. This could include searching for a particular quote from a speech, a sound bite from an interview, the best take of an actor’s performance, a guitar riff in a song or a spectacular play on the field. Notes on the wanted sections are logged and given to the editor.

Editing and Collaboration

As they say, this is where the magic happens! All of the logged and annotated raw media starts to come together into a final product. This is also an iterative process, in which a rough cut is made and sent for review and approval. If changes are required, they are made and the cut is sent out again for approval. In the case of a live sporting event, highlight or ‘melt’ reels can be created while the game is in progress.

 

Asset Management

This stage covers the management of assets. It can range from archiving media, to placing watermarks on media so that it is not stolen, to assigning digital rights. It can also include sending media to affiliate stations or production houses for their use.

 

Delivery, Broadcast and Distribution

When the project has been completed, the last step is to make it available for consumption by the end user. For a news broadcaster, it can be airing a segment during a newscast. For others, it could be posting to an “Online Video Platform” (OVP) such as YouTube or Vimeo. Another option is making the project available to other outlets such as HBO, Hulu or Netflix.

While most production workflows can be categorized into these stages, there is no general way in which any stage is done or the sequence in which a production is completed. With the various programming formats such as broadcast news, live sports, and scripted and unscripted reality shows, each production workflow has its own unique quirks and style.

 

Avid’s Role in Your Production Workflow

With tight release schedules at Avid, what we have to do within our workflow QA organization is find the top 3-5 common denominators for each of the stages that can be used in our manual and automated testing. (Note: We’ll take a closer look at the automated testing we use at Avid in a future blog.) For this information, we rely heavily on our product designers, program managers and sales teams for their input.

As a current or new customer, we welcome any and all details about your production workflow. Sample media, screenshots of your timeline, or bin information would all be helpful. Please send your production workflow feedback and examples to social@avid.com. All of this information helps our QA organization get an even better understanding of how our customers use our software in their workflows. In the end, it helps us empower you to create more enriching content!


Share Your Production Workflow

Send us your production workflow details and supporting examples. Your feedback will give us the ability to help you continue to create more enriching content!

CONTACT US