Real-time delay for video: how, and what programming language should I use?
May 6, 2011 3:57 PM

Real-time delay for video: how, and what programming language should I use? Processing? openFrameworks? Something else?

I'm trying to create an installation with a real-time video delay. Specifically, I'm using the Kinect to overlay a series of images on top of each other, in time, so that you could effectively walk around the space as it was an hour ago.

This requires me to create a program with video delay; I'm planning for the delays to be 1 second, 1 minute, and 1 hour. The 1-second video delay is pretty easy: at 24fps, that's 24 640x480 32-bit (4-byte) color images stored in RAM, which is about 28 megabytes of RAM. 1 minute, however, is about 1.6 gigs of RAM, and 1 hour is 98 gigs of RAM! Clearly, this won't work with just large arrays stored in memory.
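For reference, the arithmetic behind those figures (4 bytes per pixel, worked out in Python):

    frame_bytes = 640 * 480 * 4       # 1,228,800 bytes (~1.2 MB) per frame
    one_second  = frame_bytes * 24    # ~29.5 MB for the 1-second delay
    one_minute  = one_second * 60     # ~1.8 GB for the 1-minute delay
    one_hour    = one_minute * 60     # ~106 GB (about 98 GiB) for the 1-hour delay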

How would I do this otherwise? I'm fine with making concessions such as using 8-bit grayscale, but that only quarters the RAM needed, which still doesn't solve the issue of an hour-long delay. Should I be continuously writing chunks of images to disk and reading them back? Is there a good way to do this in software, or with a specific kind of library? Should I be doing this in Processing, Cinder, openFrameworks, etc.?
posted by suedehead to Technology (8 answers total) 4 users marked this as a favorite
 
For the hour-long delay, at the very least, you should probably consider compressing the video. I would compress the video at a fixed bitrate and store it in a buffer sized to store 1 hour's worth of video at your bitrate.

basically you want to do something like the following for each video frame (see the sketch below):
  1. increment i
  2. wrap i modulo the number of frames the hour-long file holds
  3. read the frame at position i in the file on disk
  4. decompress that frame and display it
  5. compress the current input frame from the camera
  6. store the compressed frame at position i
The only library you should need for that is one for video compression/decompression. Since you aren't reading anyone else's video files and nobody else will be playing yours, any lib will do, but ffmpeg has a good reputation.
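A minimal sketch of that loop in Python with OpenCV. Note that it swaps the fixed-bitrate codec for per-frame JPEG compression padded into fixed-size slots, which keeps every frame seekable at a fixed offset; the file name, slot size, and JPEG quality are illustrative assumptions, not requirements:

    import cv2
    import numpy as np

    FPS = 24
    NUM_SLOTS = FPS * 3600                # one hour of frames in the ring
    SLOT_SIZE = 64 * 1024                 # fixed bytes reserved per frame

    ring = open("delay_ring.bin", "w+b")
    ring.truncate(NUM_SLOTS * SLOT_SIZE)  # pre-size so every slot is seekable

    cam = cv2.VideoCapture(0)
    i = 0
    while True:
        ok, frame = cam.read()
        if not ok:
            break

        # steps 2-3: wrap the index and read the frame stored an hour ago
        slot = i % NUM_SLOTS
        ring.seek(slot * SLOT_SIZE)
        stored = np.frombuffer(ring.read(SLOT_SIZE), dtype=np.uint8)

        # step 4: decompress and display (slots are empty for the first hour)
        old = cv2.imdecode(stored, cv2.IMREAD_COLOR)
        if old is not None:
            cv2.imshow("one hour ago", old)

        # steps 5-6: compress the live frame and overwrite the slot,
        # padded (or truncated, for frames that compress badly) to SLOT_SIZE
        _, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        ring.seek(slot * SLOT_SIZE)
        ring.write(jpg.tobytes()[:SLOT_SIZE].ljust(SLOT_SIZE, b"\0"))

        i += 1                            # step 1 (i starts at 0 here)
        if cv2.waitKey(1000 // FPS) & 0xFF == ord("q"):
            break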
posted by idiopath at 4:08 PM on May 6, 2011


s/store it in a buffer sized to store 1 hour/store it in a file sized to store 1 hour/
posted by idiopath at 4:10 PM on May 6, 2011


Also, I should mention that IIRC that technique (minus the compression and disk storage) is called a "ring-buffer-based delay line". It originates in computer audio but seems to be the natural choice here.
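For the 1-second delay, the in-memory version is tiny; a sketch in Python, where collections.deque with a maxlen plays the role of the ring buffer:

    from collections import deque

    FPS = 24
    ring = deque(maxlen=FPS)              # holds exactly 1 second of frames

    def delayed(frame):
        # returns the frame from 1 second ago (None until the buffer fills)
        out = ring[0] if len(ring) == ring.maxlen else None
        ring.append(frame)                # a full deque drops its oldest entry
        return out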
posted by idiopath at 4:14 PM on May 6, 2011


Alternately, write each compressed 1-minute segment to disk as an independent file. For the 1-minute delay, just read the segment from a minute ago; for the 1-hour delay, go 60 segments back. Delete old segments as you go. I think that if you use a modern (post-1980s) filesystem and have a reasonable amount of free space on the disk, you won't have fragmentation problems.
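A sketch of that scheme in Python with OpenCV, assuming MJPEG-in-AVI segments written with cv2.VideoWriter; the filenames and the hour-plus-margin retention are illustrative choices:

    import os
    import cv2

    FPS = 24
    SEG_FRAMES = FPS * 60                    # one minute per segment

    def segment_path(n):
        return "segment_%06d.avi" % n

    cam = cv2.VideoCapture(0)
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    seg, count, writer = 0, 0, None

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(segment_path(seg), fourcc, FPS, (w, h))
        writer.write(frame)
        count += 1
        if count == SEG_FRAMES:              # roll over to a new file
            writer.release()
            writer, count = None, 0
            seg += 1
            # the 1-minute delay plays segment seg-1; the 1-hour delay, seg-60
            stale = seg - 61                 # keep an hour plus a margin
            if stale >= 0 and os.path.exists(segment_path(stale)):
                os.remove(segment_path(stale))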
posted by hattifattener at 4:32 PM on May 6, 2011


It is admittedly a minor concern for so straightforward a task, but the ring-buffer approach has the advantage that all disk access (reads and writes) is sequential, except once an hour when you jump back to the beginning of the buffer. Given the way disk drives work, that is much faster than the random access you would get with one file per minute.
posted by idiopath at 4:41 PM on May 6, 2011


Jitter
posted by Blazecock Pileon at 4:42 PM on May 6, 2011


Pure Data is built for video mixing (via GEM), and you can use delays with video the same way you normally do with audio. It should pretty much just be a matter of setting up your input, then your three no-feedback delays feeding into the same output as overlays.
posted by rhizome at 7:44 PM on May 6, 2011


Follow idiopath's excellent advice with OpenProcessing.
posted by elektrotechnicus at 8:07 PM on May 6, 2011

