How to (and not to) edit a 38-camera multicam project

Jun 15, 2015


When I was 23, I shot and edited KMFDM’s 20th Anniversary 2xDVD set. It was a monumental undertaking at the time: working with something like a $10,000 editing budget plus a week on tour, and being given full creative and directorial control. I shot it with a Canon GL2, one-man-band style, and then slaved over the edit and DVD authoring process for three months to get it perfect.

The result was my masterpiece: 8 live videos I shot and edited plus 8 sister videos shot by the fans (via the Fankam project) and edited by me. Plus tons of extras and behind-the-scenes footage compiled onto a 2nd disc. We had a sanctioned release and the project was sponsored by the record label. I even buried an easter egg on the main disc. Our first replication batch was 10,000 discs, surely just a starting batch – and then it happened.

We discovered there was a copyright snag with one of the samples in one of the songs. Legal turmoil ensued. The result: there would be no more discs printed. Our “initial” batch of 10,000 copies would be the only copies ever pressed. Project closed. My aspirations to be a platinum-selling concert DVD director were quickly extinguished. 


Fast forward to 2013 – it is one year before KMFDM’s 30th Anniversary, and I realize it’s time for another live concert video: a 30th Anniversary live concert to follow my early work. The wrinkle? Zero budget. No label support. No money to hire equipment and crew, much less even compensate myself for the time.

So instead of approaching the project traditionally, I suggest to KMFDM founder, front-man, and long-time friend Sascha Konietzko that we try another option. I would gather all the cameras and gear I own, meet the band on tour, and spend their last week of tour shooting the shows and conducting interviews. I would create a video-on-demand product at the end, and the band and I would sell it together, totally circumventing traditional distribution channels. And we would crowdsource the interviews from their 109,000+ Facebook fans. Sascha agrees, and the project is set into motion.


Shooting five concerts in a week doesn’t seem like much of a challenge, but I wanted to squeeze the absolute most I could out of these five shows and provide myself with a huge amount of shot variety for the edit. So I assembled all the random cameras, clamps, and magic arms I owned at the time and rented a couple lenses. All told the arsenal comprised:

  • (1) Blackmagic Cinema Camera 2.5k
  • (1) Canon EF-S 17-55mm f/2.8 IS
  • (1) Canon EF-S 10-22mm f/3.5-4.5
  • (1) Canon 100-400mm f/4.5-5.6L IS
  • (1) Canon G20 camcorder
  • (2) Canon Vixia M41 camcorders
  • (1) GoPro Hero 2 Silver
  • (2) GoPro Hero 3+ Black

A motley collection of random camcorders, cameras, and recording formats if I’ve ever seen one.

I chose a primary camera for each concert – either handheld or on a monopod. Then I mounted the other cameras wherever I could, on the drummer’s head, on the singers’ chests, on the necks of the guitars. All over the place. I figured that even if half the shots were crap, I’d have enough material remaining to cut something pretty decent together.

Those seven cameras shot each of the five concerts, yielding 35 takes of about 90 minutes each. In Seattle we had three additional volunteer shooters making the total editing volume 38 takes, each take spanning the entire 90-minute set.


After everything was shot, I had an enormous pile of media on my hands. Close to two terabytes of raw camera data. And in multiple formats: ProRes from the Blackmagic, AVCHD from the Canons, and mp4s from the GoPros. I enlisted master engineer and Punch Drunk IT guru Brian Koepke in the media management process at this point. We decided on the following workflow to concatenate and standardize the footage:

  1. Extract and concatenate all the Canon AVCHD footage using ClipWrap. Although Premiere (and probably other NLEs) can natively handle AVCHD footage, it’s inherently split into multiple files, so we wanted to maintain continuity of data as much as possible, and ClipWrap can rewrap AVCHD into a MOV wrapper without transcoding.
  2. Manually combine all the GoPro mp4s into cohesive clips. There are some tools available now to do this automatically, but I’ve had mixed results with these, where you can lose a frame or two at the junction between clips.
  3. Transcode everything into ProRes 422 masters. We chose ProRes 422 because it’s fast to read/write, very high quality, and exceeds the native bitrate of most of our footage, so we knew there would be little to no loss going into this format as the master.
  4. Create proxy transcodes in ProRes LT (and later ProRes Proxy) to use for the actual edit. Originally we kept them at 1080, then dropped to 720. When we really needed more speed playing back and editing multiple clips in multicam, we dropped again to 640×360.
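For readers who want to reproduce steps 2–4 today, the same pipeline can be sketched with ffmpeg as a stand-in for the GUI tools we actually used. This is a hedged sketch, not our exact workflow: the file names are hypothetical, and the helpers just build the ffmpeg command lines (concat demuxer for lossless joining, `prores_ks` profile 2 for ProRes 422 masters, profile 0 for ProRes Proxy).

```python
# Sketch of the concatenate/master/proxy steps using ffmpeg.
# File names are hypothetical; ffmpeg stands in for ClipWrap and the
# transcoding tools actually used in 2014.
from pathlib import Path

def concat_cmd(chapters, joined):
    """Step 2: losslessly join GoPro chapter files via ffmpeg's concat
    demuxer. Writes a manifest next to the output, one line per chapter."""
    manifest = Path(joined).with_suffix(".txt")
    manifest.write_text("".join(f"file '{c}'\n" for c in chapters))
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(manifest), "-c", "copy", joined]

def master_cmd(src, dst):
    """Step 3: transcode to a ProRes 422 master (prores_ks profile 2)."""
    return ["ffmpeg", "-i", src, "-c:v", "prores_ks", "-profile:v", "2",
            "-c:a", "pcm_s16le", dst]

def proxy_cmd(src, dst, w=640, h=360):
    """Step 4: make a small ProRes Proxy file (profile 0) for editing."""
    return ["ffmpeg", "-i", src, "-vf", f"scale={w}:{h}",
            "-c:v", "prores_ks", "-profile:v", "0",
            "-c:a", "pcm_s16le", dst]
```

Each returned list can be handed straight to `subprocess.run`; keeping the commands as data makes it easy to queue a whole shoot's worth of transcodes.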

For storage we used a Drobo 5D to hold the initial batch of media, but not for any editing. For the actual editing, Brian created a 3-SSD RAID over Thunderbolt that achieved just over 1,000 MB/s read/write, which was damn impressive. Throughout the process several other spinning-platter RAIDs were built for various tasks; we used the HighPoint Thunderbolt dual RAID, which worked very well.

All the edits were performed on either a late-2013 or late-2014 high-performance 15″ MacBook Pro Retina in Adobe Premiere CC. Brian assembled a huge master edit project to sync all the live video takes to a master live audio track, and then handed the ready-to-edit files back to me.

The “edit suite” was the conference room in our office, windows blacked out by furniture pads. Inside was a Pyle loudspeaker and a 39″ Vizio TV from Costco.



A lot of what I do is video for live music: directing concerts live at music festivals and reacting quickly to what’s happening on stage. So I figured this would be a great approach to cutting the concert: essentially recreate a live scenario in post-production. We actually did some tests to make all 38 cameras available at once and edit that way in Premiere (including some crazy keyboard remapping). No matter what we did, there were some inescapable lags in the system, so instead we decided to approach the edit by city.

The thinking was: I will cut a multicam edit from each city, and then at any given moment in time, the best shot from each city will be available for a “meta” multicam project where each edited city is one “camera.” This seemed like a pretty solid plan, since theoretically you would only ever view the best possible shot, so cutting those together would be a breeze. It would look something like this:

[Screenshot: five per-city multicam edits feeding a single “meta” multicam]

So I gave that a whirl: 7.5 hours of multicam editing across five 90-minute concerts, spread over several days. Then I began assembling the “meta” final video. After three minutes I realized this would not work.

What I failed to consider was the content: music. Like many live directors, I cut to the beat of the music. It’s natural and it makes sense. The problem in this case is that I had cut to the beat of the music five times and then combined those edits together, resulting in “stepped-on” edits: during the “meta” edit I instinctively made a cut at almost the same moment I had previously, so a shot would appear for 1-2 frames and then switch.
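The stepped-on problem is easy to see numerically. Here's a toy sketch (every frame number below is invented): two passes cut to the same beat produce cuts clustered within a frame or two of each other, and each near-collision is a 1-2 frame orphan shot.

```python
# Toy illustration of "stepped-on" edits. Two editing passes both cut to
# the beat, so their cut points cluster on nearly the same frames.
# All frame numbers are hypothetical.

def stepped_on(city_cuts, meta_cuts, tolerance=2):
    """Return (city_cut, meta_cut) pairs closer than `tolerance` frames
    but not identical -- each pair leaves a shot on screen for 1-2 frames."""
    return [(c, m) for c in city_cuts for m in meta_cuts
            if 0 < abs(c - m) <= tolerance]

city = [0, 48, 96, 144, 192]   # city-pass cuts: every 2 s at 24 fps
meta = [1, 47, 97, 150, 192]   # meta-pass cuts, instinctively near the same beats
print(stepped_on(city, meta))  # → [(0, 1), (48, 47), (96, 97)]
```

Three of the five meta cuts collide with an underlying city cut; identical frames (192) merge harmlessly, and only clearly separated cuts (144 vs 150) survive as intentional edits.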

It looked terrible. Back to the drawing board:



Since everything was already set up for editing by city, I figured this process could still work, but I would have to rethink my approach to editing individual cities. Instead of cutting to the beat, I would have to cut OFF the beat, choosing the best-LOOKING footage instead of timing my cuts to the music.

So I sat down for another 7.5 hours. This time, instead of turning the volume up and rocking out (as I would at a live show), I put in earplugs so I could better focus on what I was seeing, without eliminating the music completely.

Once I had my new and improved city-by-city edits complete, I attempted once again to make the final product. And again, after three minutes it was clear this would not work.

This time the issue wasn’t so much timing as content. Suddenly I was paying more attention to what the viewer SHOULD see at a given moment, such as a guitar solo or drum fill, but because I had gone through each city and cherry-picked the best shots by city, I effectively hid a ton of the relevant shots from myself.

For example, I may have liked a shot from Pomona of a killer guitar solo, and cut to that for 30 seconds or so in that city edit. But then in the final edit I didn’t have access to any of the other city-specific shots, like an aerial view of the drummer during that guitar solo.

At this point I realize that cutting everything by city simply will not work. The camera angles at each city were determined by available lighting and mounting options (not systematically by performer), so cutting each city as its own sub-edit hides shots I need. It’s back to the drawing board again:


After 14 hours of multicam edits, I’m ready to try anything that will speed up the process. So instead of cutting by city, I decide to build the sub-edits based on how the music relates to what the viewer expects to see. The vocals are the thread holding everything together, so I choose them as the “foundational” layer. The guitars drive the melody, so pass number two will be guitars. The third pass is drums, since you often need to inject drums quickly in-between other shots. This means the edit will be structured with all the necessary singer and guitar shots, and then drums and crowd shots can get peppered on top to tie it all together.

[Screenshot: the layered multicam passes: vocals, then guitars, then drums and crowd]

After this much testing and burning-in of the editing system, I’ve determined that it handles nine simultaneous HD layers in real-time as a multicam project pretty well. Good enough for a full-length live cut for this project. And since nine makes a nice multicam matrix AND means you can assign each multicam angle to a button on a numeric keypad, this seems like a solid approach.

So I do these edits. They work and look good. I complete the first pass on the singers, then use that as “camera 1” for the guitars multicam. This means “camera 1” is actually a sub-edit of all the vocals. I can layer guitars on top of that from the eight remaining shots. And that works.

Then I insert the guitar multicam (which contains the singers multicam) as “camera 1” into a third multicam, which contains eight shots of drums and crowd shots. This seems to work. Until it comes crashing down.

Towards the end of completing the third pass, I start to encounter some strange behavior in Premiere. It starts running slowly. It bogs. It crashes. And suddenly it won’t open my underlying first and second pass from my third pass project. W. T. F.

I spent weeks gently massaging the project file back to life, extracting sub-edits (first and second pass) into new projects, relinking files, and carefully rebuilding my work. Finally, after pulling out a lot of hair (and I think literally having some hair turn gray), I finish the third pass and everything seems about right. Unfortunately I had to “hard burn” the first two passes into ProRes 422 files of their own, thus losing the ability to go back and tweak those edits. I was able to salvage the work and insert them into the third pass, though. I make all the necessary tweaks to have a “first final cut” of the project.


Proud of my hard work and weeks of frustration, I happily export a full version and upload to Vimeo for review. Sascha calls a day later.

“Jacob, the video looks great, but I have to say – it’s out of sync.”

We go back and watch together. I notice a couple of blips and bumps, but overall it looks great to me.

“I’m sorry man, but there are entire sections that are clearly out of sync.”

I protest. I beg. I plead. I decide to get a second opinion from Andy, the drummer. I figure as a drummer and member of the band, he’ll be able to speak with authority on the timing and sync, and will clearly see it’s perfectly in sync.

“Sorry mate, Sascha’s right – a good 80% of this is out of sync. The audio is off, the timing is off.”

I’m ready to give up. I have spent months and months of personal time on this project, all for the love of the project and tiny glimmering prospect of future profits from an on-demand sale. I protest with both Sascha and Andy further, but they are adamant. The video is out of sync and must be corrected before going out.

After a week absorbing this idea, I come around. Looking closely I realize they are correct. At least 20 of the camera angles need to be nudged forward or backward by a few frames to achieve sync. The problem is, having hard-burned the first two editing passes into ProRes 422 files, there’s no way to go back and make this adjustment. I will have to re-edit the entire 90 minute video.


Having gone through hell and back with the embedded multicam sequences in Premiere, I know that is NOT the way to proceed. But at this point, I’m exceedingly familiar with the music. Then it hits me: edit blindfolded.

Instead of worrying about structuring the multicam sequences in layers, just set up one MASSIVE multicam sequence with every single angle, tapping the beat blindfolded. Literally blindfolded. After all the cuts are made in the sequence using the “add edit” command (I remapped this to the spacebar to make it easier to find while blindfolded), simply open the project and select a camera angle for each cut using the keypad (or the drop-down selector).

The advantages of this method are:

  • You cut exactly on the beat and aren’t distracted by the visuals
  • You instinctively insert more cuts, thus creating a more engaging, dynamic video
  • You are not constrained to cut it live in real-time (we found that even with nine cameras, if you stopped the playhead after six minutes and then restarted the edit process, a noticeable one-second lag was introduced every time you made a cut, thus making it necessary to edit the entire live show start-to-finish in one pass)
  • Since there are no embedded multicam sequences, you avoid all the trouble there
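The two phases above can be sketched as data transformations. This is only an illustration of the idea, not anything Premiere exposes; the tap times and angle numbers are invented:

```python
# Sketch of the two-phase blindfold workflow: phase 1 records an "add
# edit" at every tapped beat; phase 2, eyes open, assigns one of the 38
# numbered angles to each resulting segment. All values are hypothetical.

def segments_from_taps(taps, set_length):
    """Phase 1: tap times (seconds) become cut points; return the
    resulting segments as (start, end) pairs covering the whole set."""
    points = [0.0] + sorted(taps) + [set_length]
    return list(zip(points, points[1:]))

def assign_angles(segments, choices):
    """Phase 2: pick an angle (1-38, from the numbered printout) for
    each segment, e.g. via the numeric keypad."""
    return [(start, end, angle)
            for (start, end), angle in zip(segments, choices)]

segs = segments_from_taps([1.9, 4.1, 6.0], set_length=8.0)
edit = assign_angles(segs, [12, 3, 31, 7])
print(edit)  # → [(0.0, 1.9, 12), (1.9, 4.1, 3), (4.1, 6.0, 31), (6.0, 8.0, 7)]
```

The key property is that the cut points and the angle choices are independent: you can re-run phase 2 with different angles, or nudge a source track for sync, without ever disturbing the beat-aligned cuts from phase 1.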

The major disadvantage here is that playing back 38 cameras of multicam is nearly impossible for most computers. Even reducing the frame rate and using tiny 640×360 proxy files didn’t help.

But since I had already cut the set ten times and listened to it countless more, I was so familiar with the shots that I simply took a screenshot of the multicam output, added numbers in Photoshop, and placed the printout next to my monitor.

This fourth and final system worked. And it worked GREAT. I could go back and fine-tune any of the edits very easily, I had a much faster, more engaging piece, and I was able to nudge entire tracks forward or backward at any time to adjust sync.

It was this final method that resulted in the actual product, which is available online. Or check out the portfolio entry on the Punch Drunk website.

Thanks for reading and good luck with your massive multicam project!


A few takeaways:

  • Run all your editing files on an SSD or SSD RAID. We like the HighPoint.
  • Put all your scratch disks and preview files on another, different SSD or SSD RAID.
  • Make sure your OS is on an SSD.
  • Use ClipWrap to make AVCHD files more manageable.
  • Avoid embedding multicam sequences into other multicam sequences.
  • Standardize all your multicam shots into one format, such as ProRes LT or ProRes Proxy, to do the actual edits. It’s very easy to replace proxy files with the full-resolution versions later.
  • If you do edit with proxy files, make sure they are the same frame rate. Many elements can be different – even size or aspect ratio – but frame rate should be the same.
