Devolving into chaos
February 2, 2011 1:23 PM
How do I fix two audio recordings that should be in sync, but aren't?
I have two recordings from a rock concert: one from the soundboard, another a room recording made with a Zoom H4. I'd like to mix them together to create a richer sound. Both tracks are WAV, 16 bit, and 44,100 Hz, but when I line them up in Logic (based on the drummer's count-in of the first song) the two tracks start drifting out of sync after a couple of minutes. What would cause this? More importantly, is there a way I can correct this?
Are you sure you're lining them up precisely? You need to find a big peak that will be easily identifiable and zoom in far enough to see individual samples, then align the tracks.
posted by Anatoly Pisarenko at 1:52 PM on February 2, 2011
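[Editor's note: if zooming in by eye gets tedious, cross-correlating a short excerpt around the count-in finds the sample-accurate offset automatically. A minimal Python sketch, assuming the soundfile, NumPy, and SciPy libraries are available; the file names and the 10-second window are placeholders:]

```python
import numpy as np
import soundfile as sf            # assumed available for WAV I/O
from scipy.signal import correlate

SECONDS = 10                      # window around the count-in

board, sr = sf.read("board.wav")  # placeholder file names
room, _ = sf.read("room.wav")

def to_mono(x):
    return x.mean(axis=1) if x.ndim > 1 else x

# Keep only the opening excerpt of each track, mixed down to mono.
board = to_mono(board)[: SECONDS * sr]
room = to_mono(room)[: SECONDS * sr]

# The lag that maximizes the cross-correlation is the offset.
corr = correlate(board, room, mode="full")
lag = int(np.argmax(corr)) - (len(room) - 1)
# Positive lag: the board track starts later than the room track.
print(f"offset: {lag} samples ({1000 * lag / sr:.1f} ms)")
```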
Dunno about other DAWs, but the one I use, Reaper, has a playback-rate slider, which can be automated by drawing in an envelope. So in theory, if you know that things are falling out of sync at a constant rate, you ought to be able to draw an angled line from point A to point B for the track you think is the problem, and have it automatically adjust the playback rate (by .x, trending gradually up to .x + .005, or whatever) over time.
This sounds like a real pain, but it ought to be doable.
posted by Erroneous at 2:00 PM on February 2, 2011 [1 favorite]
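[Editor's note: the same ramping playback rate can be applied offline, outside Reaper. A rough NumPy sketch, where the start and end rates are placeholders you'd derive from the measured drift:]

```python
import numpy as np

def apply_rate_envelope(x, start_rate, end_rate):
    """Resample mono signal x with a playback rate that ramps
    linearly from start_rate to end_rate (1.0 = original speed)."""
    n_out = int(len(x) / ((start_rate + end_rate) / 2.0))
    rates = np.linspace(start_rate, end_rate, n_out)
    # Input position each output sample reads from: the running
    # sum of the instantaneous playback rate.
    pos = np.concatenate(([0.0], np.cumsum(rates[:-1])))
    pos = np.clip(pos, 0, len(x) - 1)
    return np.interp(pos, np.arange(len(x)), x)

# e.g. a drift that reaches 0.05% by the end of the recording:
# fixed = apply_rate_envelope(room, 1.0, 1.0005)
```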
Use Flex Time within Logic (do you have Logic 9?).
Once in Flex mode, drag the shorter WAV file so that it's the same length as the longer one.
Then periodically line up the transients (every minute or so at first) and see how it sounds.
You can't do this with Logic 8 or lower.
I believe this problem is called jitter.
posted by fantasticninety at 2:43 PM on February 2, 2011
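[Editor's note: outside Logic, that uniform stretch-to-match-length step can be approximated with librosa's pitch-preserving time stretch, which, unlike plain resampling, won't detune the track. A sketch assuming mono WAV files with the placeholder names board.wav and room.wav:]

```python
import librosa
import soundfile as sf

board, sr = librosa.load("board.wav", sr=None, mono=True)
room, _ = librosa.load("room.wav", sr=None, mono=True)

# rate > 1 shortens the track, rate < 1 lengthens it; the result
# comes out approximately the same length as the board recording.
rate = len(room) / len(board)
stretched = librosa.effects.time_stretch(room, rate=rate)
sf.write("room_stretched.wav", stretched, sr)
```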
Best answer: Not all audio devices are clocked exactly the same. One might be sampling at 44,100.05 Hz and another at 44,099.95 Hz. Oscillators are never exactly perfect and they always have some drift. It rarely matters, except that the errors accumulate over the length of the recording, so the problem might only show up after, say, 45 minutes. This is why professional audio recording uses one master clock source that all the other devices are slaved to, instead of each device running on its own clock.
What you need to do is calculate the drift, by measuring the amount that they are off at the end of the recording where the effect is most pronounced, and then either elongating one slightly or shortening the other slightly to compensate.
posted by Rhomboid at 5:51 AM on February 3, 2011
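[Editor's note: a worked sketch of that drift calculation and correction with SciPy; the 45-minute span and 120 ms offset are invented figures, and the file names are placeholders:]

```python
import soundfile as sf
from fractions import Fraction
from scipy.signal import resample_poly

# Hypothetical measurements: the tracks agree at the count-in, but
# at a cymbal crash 45 minutes later the room track lags by 120 ms.
span = 45 * 60.0                   # seconds between alignment points
drift = 0.120                      # seconds the room track has fallen behind
ratio = span / (span + drift)      # ~0.9999556: shorten the room track

room, sr = sf.read("room.wav")     # (frames, channels) or (frames,)

# resample_poly wants integer up/down factors, so rationalize the ratio.
frac = Fraction(ratio).limit_denominator(1_000_000)
fixed = resample_poly(room, frac.numerator, frac.denominator, axis=0)
sf.write("room_fixed.wav", fixed, sr)
# Note: plain resampling also shifts pitch by the same ~0.004%,
# which is far too small to hear.
```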
Best answer: Rhomboid is correct, IMO. To add a pessimistic POV: I doubt any summing of the two signals will result in a better recording than either of the two source recordings.
Even if you get the two recordings lined up "perfectly", keep in mind that sound from different sources will arrive at different microphones at different times. Assuming a reasonably standard rock band setup, let's say there's a guitar or bass amp on the left side of the stage, and one on the right. The sounds from these amps (assuming they're miked) will reach the mics that fed the soundboard at a different time than the mic(s) that did the room recording. This alone will cause phase cancellation and a general deterioration of quality.
(This is why audio engineers cling to the old adage, "use the fewest microphones practical". It also helps explain why they use noise gates, directional mics, etc.)
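[Editor's note: to put rough numbers on that arrival-time problem; the 4 m path difference here is an invented figure:]

```python
SPEED_OF_SOUND = 343.0                   # m/s at room temperature
SR = 44100

path_difference = 4.0                    # metres, hypothetical
delay = path_difference / SPEED_OF_SOUND # ~11.7 ms
print(delay * SR)                        # ~514 samples of misalignment
# Summing two copies of a signal offset by time t nulls every
# frequency at odd multiples of 1/(2t): here the first null lands
# near 43 Hz, with comb filtering all the way up the spectrum.
```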
I'll assume that you have a soundboard recording that's clear but dry, and a room recording that's more engaging but with less clarity. This would be a typical scenario.
What you can do in such a case is:
1) In a multitrack audio sequencing application, line up the tracks using an audio warp/flex time function as mentioned upthread, or do it manually by trial and error (this needs to be quite precise);
2) Assuming both are stereo, leave the board channels as-is, but send the room channels to a nice-sounding reverb plugin (try a short-ish room or plate reverb). You would either set the reverb to 100% wet, or mute the room channels and send them to the reverb pre-fader. This means you don't get any direct sound from the room channels, only whatever comes out of the reverb. Experiment with the levels of board vs. room, but err on the side of less room. Also, experiment with a longer "pre-delay" if your plugin has such a parameter. (Anything from 30 milliseconds up to 100 depending on whatever sounds right.)
What this would accomplish is a direct reproduction of the clear but dry board sound, with a "ghost" of the room sound added in, reverberated so as to not get in the way of the more direct board sound as much.
I've spent many long hours trying to line up tracks recorded on different clocks, with mixed results. So I feel your pain. But this is what I would do, although I suspect you may just as well end up deciding that one source recording sounds better than the other and just use that one.
posted by goodnewsfortheinsane at 7:42 PM on February 3, 2011 [1 favorite]
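[Editor's note: an offline approximation of the 100%-wet send described in step 2 above, sketched with Spotify's pedalboard library (an assumption; any DAW's pre-fader send does the same job). File names, pre-delay, and mix level are placeholders to experiment with:]

```python
import numpy as np
from pedalboard import Pedalboard, Reverb
from pedalboard.io import AudioFile

SR = 44100
PRE_DELAY_MS = 60                        # experiment: roughly 30-100 ms

with AudioFile("room_aligned.wav") as f:
    room = f.read(f.frames)              # shape: (channels, samples)
with AudioFile("board.wav") as f:
    board = f.read(f.frames)

# Pre-delay: pad the room signal with silence before it hits the reverb.
pad = np.zeros((room.shape[0], SR * PRE_DELAY_MS // 1000), dtype=room.dtype)
delayed = np.concatenate([pad, room], axis=1)

# 100% wet, 0% dry: only the reverberated "ghost", no direct room sound.
verb = Pedalboard([Reverb(room_size=0.3, wet_level=1.0, dry_level=0.0)])
ghost = verb(delayed, SR)

n = min(board.shape[1], ghost.shape[1])
mix = board[:, :n] + 0.25 * ghost[:, :n]     # err on the side of less room
mix /= max(1.0, float(np.abs(mix).max()))    # guard against clipping

with AudioFile("mix.wav", "w", samplerate=SR, num_channels=mix.shape[0]) as f:
    f.write(mix)
```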
Response by poster: Excellent advice, Goodnews! I was able to "align" the tracks using the flex tools, and using just the reverb from the room as you suggested made for a great-sounding mix!
posted by monospace at 8:21 AM on February 7, 2011
Great to hear that!
posted by goodnewsfortheinsane at 7:24 AM on May 21, 2011
Look for a time later in the file when you can distinctly measure how out of sync the two tracks are: a cymbal crash or another count-off or something. (The closer that is to the end of the file, the better.) You can then alter one of the two tracks (probably the house mic) by speeding it up or slowing it down by a certain percentage. If the tracks are off by one second after 1 minute 40 seconds (100 seconds), they're one percent out of sync. In that case, your transform would be either 99% or 101% of the original speed.
Logic probably has both pitch-altering and non-pitch-altering speed adjustments. Try them both to see which sounds better.
posted by supercres at 1:40 PM on February 2, 2011
This thread is closed to new comments.