What is the best way to realtime-render 3D images on a server?
January 4, 2008 12:22 PM
Does anyone have experience using DirectX in a server-side application? Is it practical? If not, what alternatives are there?
My company needs to build a web application that renders dynamic 3D images and sends them to a web browser as static JPEGs. One of my coworkers and I think the best way to do this is to use DirectX to generate the images on the server (assuming a high-end video card in the machine), then save the output as JPEG and send it out. Another developer is convinced that using the GPU for a server-side application is unwise, and advocates using an entirely software-based rendering engine. To me, it seems foolish to ignore the GPU and try to software-render everything, but I'm having a hard time proving my point. One of the criticisms that's been leveled against using DirectX on the server is that "nobody else is doing it". So, I need to know:
- Have you used DirectX to render 3D images on a server?
- Do you recommend it?
- Why or why not?
- Have you used a technology other than DirectX to render 3D images on a server, especially software-based rendering?
- Do you recommend that?
- Why or why not?
- Can you think of any company that is using DirectX on a server ...
- ... or any company rendering (non-raytraced) 3D images server-side?
Thank you, great hive mind!
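For concreteness, here is a minimal sketch of the off-screen DirectX 9 flow the question describes: create a device, render into an off-screen target, copy the pixels back to system memory, and let the D3DX utility library write the JPEG. The scene drawing is a placeholder and error handling is omitted, so treat it as an illustration of the moving parts rather than a production implementation.

```cpp
// Minimal sketch: render one frame off-screen with Direct3D 9 and save it as a JPEG.
// Assumes the DirectX SDK is installed (link with d3d9.lib and d3dx9.lib); the actual
// scene drawing is a placeholder and error handling is omitted for brevity.
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>

const UINT WIDTH = 800, HEIGHT = 600;

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    // A device still needs a focus window; the desktop window is a common
    // stand-in when nothing will ever be presented on screen.
    D3DPRESENT_PARAMETERS pp = {0};
    pp.Windowed               = TRUE;
    pp.SwapEffect             = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferWidth        = WIDTH;
    pp.BackBufferHeight       = HEIGHT;
    pp.BackBufferFormat       = D3DFMT_X8R8G8B8;
    pp.EnableAutoDepthStencil = TRUE;
    pp.AutoDepthStencilFormat = D3DFMT_D24S8;

    IDirect3DDevice9* dev = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, GetDesktopWindow(),
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);

    // Off-screen render target in video memory, plus a lockable system-memory
    // surface to copy the finished pixels back into.
    IDirect3DSurface9 *rt = NULL, *sysmem = NULL;
    dev->CreateRenderTarget(WIDTH, HEIGHT, D3DFMT_X8R8G8B8,
                            D3DMULTISAMPLE_NONE, 0, FALSE, &rt, NULL);
    dev->CreateOffscreenPlainSurface(WIDTH, HEIGHT, D3DFMT_X8R8G8B8,
                                     D3DPOOL_SYSTEMMEM, &sysmem, NULL);

    dev->SetRenderTarget(0, rt);
    dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
               D3DCOLOR_XRGB(32, 32, 32), 1.0f, 0);
    dev->BeginScene();
    // ... draw the dynamic scene here ...
    dev->EndScene();

    // Copy the frame out of VRAM, then let D3DX encode it as a JPEG.
    dev->GetRenderTargetData(rt, sysmem);
    D3DXSaveSurfaceToFileA("frame.jpg", D3DXIFF_JPG, sysmem, NULL, NULL);

    sysmem->Release(); rt->Release(); dev->Release(); d3d->Release();
    return 0;
}
```

The GetRenderTargetData call near the end is the VRAM readback step that several of the answers below single out as the likely bottleneck.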
They ran into PCI-e bandwidth issues -- the video card is very good at sucking in polygons, rendering them into its VRAM, and then sending the data out to the monitor. But you don't need to *display* the rendered images; you want to read them back from the VRAM, and that's going to be your bottleneck.
They eventually found that using a software renderer was faster; I'm not privy to all the details.
posted by the cake is a pie at 12:31 PM on January 4, 2008
A software renderer will be better for a server-side application if you have more than one user at a time, as it will parallelize more easily. Most server-grade machines have crappy video cards, and unless you buy fancy hardware there will be only one GPU, versus anywhere from 2 to 16 CPU cores in an entry-level box. Big servers will have up to 32 quad-core CPUs (128 cores) - and one GPU, if that.
I'd stick with a software-backed OpenGL implementation, maybe something like Mesa.
In general, I would avoid requiring any fancy hardware on server machines (like a GPU) if you expect to scale this thing up. The lots-of-cheap-boxes approach is really the best way to go.
posted by GuyZero at 12:44 PM on January 4, 2008
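As a point of comparison, the same off-screen idea with Mesa's software-only OSMesa interface looks roughly like the sketch below. The scene drawing is again a placeholder, and the build line is an assumption about a typical Mesa installation.

```cpp
// Minimal sketch: pure-software OpenGL rendering with Mesa's off-screen (OSMesa)
// interface. Everything renders into an ordinary buffer in system memory, so each
// request, thread, or process can own its own context -- no GPU, no display server.
// Assumed build line: g++ osmesa_demo.cpp -lOSMesa
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <cstdlib>
#include <cstdio>

int main()
{
    const int width = 800, height = 600;

    // RGBA context with a 16-bit depth buffer, rendering into client memory.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    unsigned char* buffer =
        static_cast<unsigned char*>(std::malloc(width * height * 4));

    if (!ctx || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesa setup failed\n");
        return 1;
    }

    glViewport(0, 0, width, height);
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... issue ordinary OpenGL calls to draw the scene here ...
    glFinish();

    // `buffer` now holds the finished RGBA frame; hand it to any JPEG encoder
    // (libjpeg, ImageMagick, ...) and stream the result to the browser.
    std::printf("rendered %dx%d frame in system memory\n", width, height);

    std::free(buffer);
    OSMesaDestroyContext(ctx);
    return 0;
}
```

Because each OSMesa context renders into plain system memory, running one context per worker process or thread scales with CPU cores in exactly the way described above.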
Also, in the server world, your heat budget is better spent on a faster CPU than on a better video card. In fact, your rack machines should probably not have a video card at all.
posted by the cake is a pie at 1:05 PM on January 4, 2008
The answer depends on how, and in what format, you are sending the 3D data to the renderer.
posted by rhizome at 1:28 PM on January 4, 2008
Response by poster: GuyZero - Isn't GPU rendering many times faster than using a software engine on the CPU? Even if you process in parallel, it seems to me that you'd have to have quite a few CPU cores to catch up to the speed of the GPU, or is this not the case?
posted by Vorteks at 1:38 PM on January 4, 2008
Setting up the scene for each image may also introduce enough of a performance hit to cause problems. The GPU can render the scene many times a second once all the geometry and textures are loaded, but how long is it going to take you to get that data onto the GPU in the first place? Combine that with the limited bandwidth coming back off the VRAM that others have mentioned, and you may not have a viable solution.
GuyZero touched on scalability. Say you have a GPU solution that renders faster than the software renderer can. What headaches are you going to run into when your application is a success and you have to ramp up? You're going to be able to add more CPU cores in a smaller space than you are going to be able to add GPUs.
Software rendering may not be as slow for your application as you think. Check out stuff like Pixomatic and Swiftshader; they may be useful in evaluating your options.
Finally, you might talk to the GPU vendors. The right person at NVidia or ATI may be able to help you evaluate whether this might work for your application.
posted by mutagen at 2:42 PM on January 4, 2008
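To put a rough number on the readback concern, here is a back-of-envelope calculation. The bandwidth figure is an assumption for illustration only, not a measurement of any particular card or bus.

```cpp
// Back-of-envelope for the VRAM readback cost discussed above. The 500 MB/s
// effective readback rate is purely an assumption; measure your own hardware.
#include <cstdio>

int main()
{
    const double width = 1024, height = 768, bytesPerPixel = 4;
    const double frameBytes = width * height * bytesPerPixel;   // ~3.1 MB per frame
    const double readbackBytesPerSec = 500e6;                   // assumed

    const double secondsPerFrame = frameBytes / readbackBytesPerSec;
    std::printf("readback alone: %.1f ms/frame, at most ~%.0f frames/s per GPU\n",
                secondsPerFrame * 1000.0, 1.0 / secondsPerFrame);  // ~6.3 ms, ~159 fps
    return 0;
}
```

And that budget is spent before rendering, JPEG encoding, or any per-request texture and geometry uploads, all of which serialize on the single GPU.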
I would recommend DirectX over OpenGL, just because I think the API is better. I'm curious, though, why you don't want to render on the client. Seems like doing it on the server in the first place may not be the best solution. If you're going to use a software renderer anyway, might as well do it on the client.
posted by jeffamaphone at 3:44 PM on January 4, 2008
I realize not everyone uses rack-mounted servers, but my first thought was "you can't fit a stock video card into a 1U box." Which of course is what all my web servers are. YMMV.
posted by rdhatt at 3:48 PM on January 4, 2008
nthing the scalability/flexibility argument. Using the GPU to render your results, while potentially faster than a standard CPU, trades that speed for limitations in how easily you can maintain and scale out your application. It just feels like the kind of thing where you're going to paint yourself into a corner with the technology, and eventually abandon it for a more practical approach to supporting multiple concurrent users.
That's probably more of a sixth-sense analysis than you're looking for, so take it for what it's worth.
posted by Brak at 5:22 PM on January 4, 2008
Try it out. See what sort of framerate you get rendering a typical scene and reading it back off the GPU, ignoring the image compression and networking side of things entirely. This will tell you roughly how many boxes you'll need to satisfy a given request rate.
Keep in mind that that's an upper bound, and that since you probably won't be able to parallelise rendering at all, making the rendering take any longer will necessarily reduce your total frame rate. If you have to swap in new textures, geometry, or shaders for each image, things will get much worse.
If you go ahead with this, you'll end up with some unusual server-side code that will be difficult to debug and maintain, and you might end up tied to one hardware and OS platform. The hardware and drivers you'll be relying on are (in general) built to get the highest possible framerate in Bioshock or whatever, not to run your highly available web app.
posted by plant at 8:16 PM on January 4, 2008
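A minimal sketch of the experiment plant suggests, timing render plus readback only. It assumes the device and surfaces from the earlier Direct3D sketch, and DrawScene is a placeholder for whatever a typical scene looks like in your application.

```cpp
// Timing harness: render a typical scene N times and measure render + readback,
// ignoring JPEG encoding and networking. `dev`, `rt`, and `sysmem` are assumed to
// have been created as in the earlier Direct3D 9 sketch; DrawScene() is your code.
#include <windows.h>
#include <d3d9.h>
#include <cstdio>

void BenchmarkFrames(IDirect3DDevice9* dev, IDirect3DSurface9* rt,
                     IDirect3DSurface9* sysmem, int frames)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    for (int i = 0; i < frames; ++i) {
        dev->SetRenderTarget(0, rt);
        dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
        dev->BeginScene();
        // DrawScene(dev);                        // placeholder: your typical scene
        dev->EndScene();
        dev->GetRenderTargetData(rt, sysmem);     // stalls until the GPU finishes
    }

    QueryPerformanceCounter(&t1);
    double seconds = double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    std::printf("%d frames in %.2f s -> %.1f frames/s upper bound\n",
                frames, seconds, frames / seconds);
}
```

Dividing your target request rate by the measured frames per second gives a first estimate of how many GPU-equipped boxes the approach would need.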
This thread is closed to new comments.