What ever happened to N-Patches?
April 23, 2006 11:38 PM

Posting for a friend: A question for all you 3D rendering people... I want to know if there is any technology that is supported in recent GPUs that interpolates a 3D surface based on the vertex normals of a triangle in order to gain a higher level of detail.

It appears that support for N-Patches has gone the way of the dodo, but I'm wondering if there is anything like it that has taken its place. I'm looking for something that is computationally feasible for a 3D engine. I'm not trying to make a super-fast FPS or anything like that, but I do want objects to render in realtime.

I'm researching the topic of higher-order surfaces for a presentation I have to give and I'm having trouble figuring out whatever happened to things like N-patches and bezier surfaces. Specifically, I wonder what sort of HOS technology is actually used nowadays, if any.
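[For reference, the N-patch construction being asked about (ATI's "curved PN triangles") replaces each flat triangle with a cubic Bezier triangle whose control points are derived only from the three vertex positions and normals. The following is my own illustrative NumPy sketch of that construction, not any shipping implementation; the function name is mine:]

```python
import numpy as np

def pn_triangle_point(p1, p2, p3, n1, n2, n3, u, v):
    """Evaluate a PN triangle (N-patch) at barycentric coords (u, v).

    Edge control points sit one third of the way along each edge,
    projected onto the tangent plane of the nearest corner, so the
    patch bulges to match the vertex normals.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n1, n2, n3 = (np.asarray(n) / np.linalg.norm(n) for n in (n1, n2, n3))
    w = 1.0 - u - v  # third barycentric coordinate (belongs to p1)

    def edge_cp(pi, pj, ni):
        # Point 1/3 along edge i->j, pulled onto vertex i's tangent plane.
        return (2.0 * pi + pj - np.dot(pj - pi, ni) * ni) / 3.0

    b300, b030, b003 = p1, p2, p3
    b210 = edge_cp(p1, p2, n1)
    b120 = edge_cp(p2, p1, n2)
    b021 = edge_cp(p2, p3, n2)
    b012 = edge_cp(p3, p2, n3)
    b102 = edge_cp(p3, p1, n3)
    b201 = edge_cp(p1, p3, n1)
    # Center point: push the average of the edge points away from
    # the flat triangle's centroid by half their difference.
    e = (b210 + b120 + b021 + b012 + b102 + b201) / 6.0
    vbar = (p1 + p2 + p3) / 3.0
    b111 = e + (e - vbar) / 2.0

    # Cubic Bezier triangle in barycentric coordinates (w, u, v).
    return (b300 * w**3 + b030 * u**3 + b003 * v**3
            + 3 * b210 * w**2 * u + 3 * b120 * w * u**2
            + 3 * b201 * w**2 * v + 3 * b102 * w * v**2
            + 3 * b021 * u**2 * v + 3 * b012 * u * v**2
            + 6 * b111 * w * u * v)
```

[If all three normals equal the face normal, the edge projections do nothing and the patch degenerates to the flat triangle, which is the sanity check to run first.]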
posted by autojack to Computers & Internet (6 answers total)
 
ATI had some automatic interpolation code in one of their recent drivers -- would interpolate vertices across the board -- but they never shipped it, or at least never enabled it by default. It looked pretty bad.

Vertex shaders do exist, though I'm not absolutely sure they're allowed to insert new vertices as opposed to simply modifying existing ones.
posted by effugas at 11:51 PM on April 23, 2006


ATI has something called TRUFORM, which sounds like an N-patch implementation to me (although I don't know too much about these things). I don't know how widely-used it is, but my ATI Radeon 9600 has support for it.
posted by neckro23 at 12:04 AM on April 24, 2006


autojack:

Last I checked, and I could always be wrong, true bezier surfaces pretty much went the way of the dodo after the Q3 engine. ATI's Truform is supported by few games and used by default in virtually none.

I'm not sure if the following is pertinent to your question or useful to you, but generally speaking what's done now is normal mapping. Essentially you make a detailed mesh of a few million tris, and another at maybe 3k for ingame use, and maybe a few lower LOD meshes. Then you run a utility (there are tons, both nVidia and ATI provide one) that uses various surface analysis methods to create a series of normal maps for the ingame model with each texel of the normal map corresponding more or less to the same area on the high-res model. The majority of the simple detail from the high-res mesh is thus preserved. Many of the latest games (Doom 3 and derived products, for starters) use normal mapping.
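[The runtime half of what's described above is simple: each texel stores a tangent-space normal, which gets rotated into world space through the triangle's TBN basis and fed into the lighting equation. A minimal Lambert-only sketch, with NumPy and my own function name:]

```python
import numpy as np

def shade_with_normal_map(texel_rgb, tangent, bitangent, normal, light_dir):
    """Per-pixel Lambert diffuse using one normal-map sample.

    texel_rgb: the (r, g, b) normal-map texel, each channel in [0, 1].
    tangent/bitangent/normal: interpolated TBN basis at this pixel.
    """
    # Decode the [0,1] texel back to a [-1,1] tangent-space vector.
    t = np.asarray(texel_rgb) * 2.0 - 1.0
    # Rotate from tangent space into world space via the TBN basis.
    n = (t[0] * np.asarray(tangent)
         + t[1] * np.asarray(bitangent)
         + t[2] * np.asarray(normal))
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return max(float(np.dot(n, l)), 0.0)  # clamped Lambert term
```

[The "flat" texel (0.5, 0.5, 1.0) decodes to (0, 0, 1) and reproduces plain per-vertex lighting, which is why unperturbed normal maps look uniformly blue.]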

There are two major drawbacks to this method: the lack of detail occlusion and self-shadowing. Detail occlusion is solved with parallax mapping (using both a normal map that indicates surface normal vector and a heightmap) which essentially squishes the texture on the fly in order to provide the illusion of occlusion from the angle of the camera. Oblivion uses this, amongst others. The self-shadowing problem has not been solved without using extremely complex shader programs that absolutely fucking crucify real-world performance and are limited to simple tech demos.
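[The "squishing on the fly" in basic (non-steep) parallax mapping is just an offset of the texture lookup along the tangent-space view direction, scaled by the heightmap sample. A sketch in plain Python; the scale/bias constants are the conventional small values, not anything from a particular engine:]

```python
def parallax_offset(uv, height, view_ts, scale=0.04, bias=-0.02):
    """Offset texture coordinates for simple (non-steep) parallax mapping.

    uv: (u, v) texture coordinates of this fragment.
    height: heightmap sample in [0, 1] at uv.
    view_ts: normalized tangent-space view vector (z out of the surface).
    """
    h = height * scale + bias  # remap height into a small +/- range
    u, v = uv
    vx, vy, vz = view_ts
    # Shift the lookup toward the viewer in proportion to the height,
    # so raised texels appear to occlude lower ones as the camera moves.
    return (u + h * vx / vz, v + h * vy / vz)
```

[Head-on (view along the surface normal) the offset vanishes; at grazing angles the division by vz exaggerates it, which is also where the single-sample approximation visibly breaks down.]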

The Quake 3 source is available now if you want to dink around with a software implementation of bezier surfaces in a real 3D engine, though.
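[Those Q3 patches are biquadratic Bezier patches: 3x3 grids of control points evaluated with the quadratic Bernstein basis in each direction. A minimal sketch of the evaluation (my own code, not id's):]

```python
def eval_biquad_patch(ctrl, u, v):
    """Evaluate a Quake 3-style biquadratic Bezier patch.

    ctrl: 3x3 grid of control points, ctrl[i][j] = (x, y, z).
    u, v: parameters in [0, 1].
    """
    def bez2(p0, p1, p2, t):
        # Quadratic Bezier via the Bernstein basis.
        a, b, c = (1 - t) ** 2, 2 * (1 - t) * t, t ** 2
        return tuple(a * x0 + b * x1 + c * x2 for x0, x1, x2 in zip(p0, p1, p2))

    # Collapse each row in v, then blend the three results in u.
    rows = [bez2(ctrl[i][0], ctrl[i][1], ctrl[i][2], v) for i in range(3)]
    return bez2(rows[0], rows[1], rows[2], u)
```

[The engine tessellates each patch by sampling this at a fixed grid of (u, v) values and emitting the resulting triangles, which is why Q3 curve detail was a simple subdivision-level setting.]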
posted by Ryvar at 12:41 AM on April 24, 2006


Addendum: what isn't preserved through normal mapping, specifically, regardless of normal map resolution, are back-facing details - from the perspective of the surface normal of the ingame mesh's triangle. Finally, to be absolutely clear I should specify that in the link I provided, it's the fourth sample image only that uses a shader program too complex (thus far) for current mainstream games. Rereading my above comment I'm not sure I was entirely clear on the point.
posted by Ryvar at 12:49 AM on April 24, 2006


Are you aware of subdivision surfaces? They define a way to refine a coarse base mesh into a smooth or piecewise-smooth surface. Pixar used them in Geri's Game, which introduced subdivision surfaces to digital animation. I believe they're still used.

Subdivision surfaces only use the positions of the base mesh vertices, not their normals, though.
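[The refinement idea is easiest to see in one dimension. This is my sketch of one round of cubic B-spline subdivision on a closed polyline, the curve analogue of Catmull-Clark: smooth the old vertices, insert a midpoint on each edge, repeat until the result looks smooth.]

```python
def subdivide_closed_curve(pts):
    """One round of cubic B-spline subdivision on a closed polyline.

    pts: list of (x, y) control points. Repeated application
    converges to a smooth curve defined entirely by the coarse
    points -- no normals involved, matching the comment above.
    """
    n = len(pts)
    out = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        # Repositioned old vertex: neighbour weights 1/8, 6/8, 1/8.
        out.append(tuple((p + 6 * c + q) / 8 for p, c, q in zip(prev, cur, nxt)))
        # New point inserted on the edge to the next vertex: midpoint.
        out.append(tuple((c + q) / 2 for c, q in zip(cur, nxt)))
    return out
```

[Each round doubles the point count; surface schemes like Catmull-Clark and Loop apply the same smooth-and-insert masks over mesh neighbourhoods instead of a polyline.]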

This paper (pdf) describes how to implement them on programmable graphics hardware.

What do you need to know about higher order surfaces? I do research on this sort of thing and might be able to help out.
posted by driveler at 8:31 AM on April 24, 2006


IIRC higher-order surfaces screw up not only self-shadowing but also shadow-casting in general. I remember this being the reason that bezier curves were not included in Doom 3, even though self-shadowing was not used.

N.B. I may recall incorrectly.
posted by Ptrin at 8:59 AM on April 24, 2006

