3D texture mapping, or 2D image distortion?
May 14, 2012 3:48 PM
Would learning a bit more about 3D wireframing/texture mapping improve the speed over a complicated ImageMagick distort?
I'm building a proof-of-concept experiment for my work where a user's image is injected into a movie. The movie contains no CG, but it does have a "tracked object" in it. Here's a picture of what I'm trying to do. Each frame is slightly different and requires its own geometry. This isn't real-time, but the goal is to programmatically build an MP4 as fast as possible on Windows Azure.
My original plan was to use ImageMagick to distort the image onto the tracked coordinates. That works and looks pretty good, but it's slow -- the polynomial distortion I'm using (which gets the best results I've seen to date) takes about 1 second of CPU time per frame*, so a one-minute 30 fps video (1,800 frames) could take 30 minutes of CPU time. Then I composite the distorted image with the frame and put the movie together, which isn't so bad but still needs to be taken into account.
I'm wondering whether thinking about this as a 3D problem instead is worth the effort. For example, I can treat each point as a vertex and turn the whole thing into a wireframe, then apply the user image as a texture to it.
I know nothing about 3D though and would have to teach myself everything from the ground up. But if there's an order of magnitude improvement in render time I'd dive in. And it feels like there could be, given how much more happens in a typical video game.
Is this worth pursuing? If so, what tools could make this happen? Open source command line utilities would be great.
* Multi-processor does help, but I'm looking for even greater improvements.
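For reference, the core of the 3D approach being asked about is small. Here is a minimal sketch in the Java-based Processing environment that maps a user image onto a quad whose four corners are one frame's tracked coordinates; the corner values and the file name are hypothetical placeholders, not from the question.

// Hypothetical minimal sketch: map a user image onto a quad whose four
// corners come from one frame's tracking data. Coordinates and the file
// name "user.png" are placeholders.
PImage userImg;

void setup() {
  size(640, 480, P3D);             // P3D renderer supports textured shapes
  userImg = loadImage("user.png");
  textureMode(NORMAL);             // texture coordinates run 0..1
  noStroke();
}

void draw() {
  background(0);
  beginShape(QUADS);
  texture(userImg);
  // vertex(x, y, u, v): screen position plus texture coordinate
  vertex(100, 120, 0, 0);   // top-left
  vertex(520,  90, 1, 0);   // top-right
  vertex(540, 400, 1, 1);   // bottom-right
  vertex( 80, 380, 0, 1);   // bottom-left
  endShape();
}

One caveat: a single quad is rasterized as two triangles, so a strong warp can show a seam along the diagonal; the subdivision suggested later in the thread addresses exactly that.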
I actually decided to switch from ImageMagick to POV-Ray for a project just a month ago, and I saw a HUGE increase in productivity for that goal. ImageMagick was supposed to be a quick "cheat" method for me, but in the end the idea was really meant for 3D.
For what you're doing, I'm not 100% sure I would suggest POV-Ray, since there is a lot of OpenGL help out there and you wouldn't have to include POV-Ray with your resulting application. I don't know OpenGL at all, but in a 3D app the steps are like this:
1. Create / import your mesh (with or without animation already applied)
2. Create texture that includes image map
3. Map texture to surface of mesh
4. Bind texture coordinates to the mesh (this is extremely important: it's what makes the image deform with the mesh, rather than the mesh sliding around while the image stays put)
5. Set up lighting conditions / background
6. Apply animation / render image frames
I doubt this'd be a huge project, but I really don't have the experience to say so :-) You might also try software such as Processing or Processing.js, both of which (I believe) should be capable of doing this.
posted by circular at 5:06 PM on May 14, 2012
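To make those six steps concrete, here is a rough offline-render sketch in Processing (Java-based). The file names, the coordinate file format, and the per-frame background images are assumptions for illustration: draw each original movie frame, map the user image onto that frame's tracked quad, and write numbered PNGs for ffmpeg to assemble.

// Hypothetical offline pipeline: composite the user's image onto the
// tracked quad of each movie frame, then save a numbered PNG.
PImage userImg;
float[][] quads;   // quads[frame] = {x0,y0, x1,y1, x2,y2, x3,y3}
int totalFrames;
int cur = 0;

void setup() {
  size(640, 480, P3D);
  userImg = loadImage("user.png");
  textureMode(NORMAL);
  noStroke();
  // One line per frame: "x0,y0,x1,y1,x2,y2,x3,y3" from the tracking data
  String[] rows = loadStrings("track.csv");
  totalFrames = rows.length;
  quads = new float[totalFrames][];
  for (int i = 0; i < totalFrames; i++) {
    quads[i] = float(split(rows[i], ','));
  }
}

void draw() {
  // Original movie frame, pre-extracted to frames/bg0000.png etc.
  PImage bg = loadImage("frames/bg" + nf(cur, 4) + ".png");
  image(bg, 0, 0);

  float[] q = quads[cur];
  beginShape(QUADS);
  texture(userImg);
  vertex(q[0], q[1], 0, 0);
  vertex(q[2], q[3], 1, 0);
  vertex(q[4], q[5], 1, 1);
  vertex(q[6], q[7], 0, 1);
  endShape();

  saveFrame("out/f" + nf(cur, 4) + ".png");
  cur++;
  if (cur >= totalFrames) exit();
}

// Then assemble, e.g.:
// ffmpeg -framerate 30 -i out/f%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4

Since the GPU does the warp, per-frame cost should be dominated by image I/O rather than by the distortion itself.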
This is just a proof of concept?
Do it in Processing; it will be just a few lines of code. You might still need to use ffmpeg to get the output from Processing into an MP4, but maybe not. I'm not that familiar with Processing's video output.
However, the warping is totally doable. See the third example. All you need to do is further subdivide the image and the vertices.
Here is a project I did that does exactly that. Just add a JPG for the background and a PNG for the car, and you can warp the car to your heart's desire. It runs in real time, too.
If you need to do it on a server or something, Processing might not be the best fit; I don't know what the situation with running it headless is.
posted by jonbro at 11:18 AM on May 15, 2012
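In case the project link above doesn't survive, here is roughly what "further subdivide the image and the vertices" means in Processing terms: a hypothetical helper that splits the quad into an n-by-n grid by bilinearly interpolating the four tracked corners, so the texture bends smoothly instead of showing the two-triangle seam a single quad can produce.

// Hypothetical helper: draw img warped onto the quad c0..c3 (clockwise
// from top-left), subdivided into an n-by-n grid of smaller quads.
void warpedQuad(PImage img, PVector c0, PVector c1, PVector c2, PVector c3, int n) {
  textureMode(NORMAL);
  noStroke();
  for (int j = 0; j < n; j++) {
    beginShape(QUAD_STRIP);
    texture(img);
    for (int i = 0; i <= n; i++) {
      float u = i / (float) n;
      for (int k = 0; k < 2; k++) {
        float v = (j + k) / (float) n;
        // Bilinear interpolation of the four corners at (u, v)
        float x = lerp(lerp(c0.x, c1.x, u), lerp(c3.x, c2.x, u), v);
        float y = lerp(lerp(c0.y, c1.y, u), lerp(c3.y, c2.y, u), v);
        vertex(x, y, u, v);
      }
    }
    endShape();
  }
}

From draw() you would call something like warpedQuad(carImg, new PVector(100, 120), new PVector(520, 90), new PVector(540, 400), new PVector(80, 380), 10); all values hypothetical.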
It's perfectly possible to do 2D graphics in OpenGL. Mapping a texture onto a flat poly with moving vertices should be a walk in the park. However, unless you need it in real time I'm not sure it's worth your time. OpenGL is pretty hard to learn and even harder when the majority of free tutorials are 1) ancient and 2) entirely focused on 3D.
posted by chairface at 9:10 PM on May 15, 2012
I have no idea how good this would look but I think you could probably render it in something approaching real time.
posted by RustyBrooks at 3:54 PM on May 14, 2012
This thread is closed to new comments.