Wrong computation of error bounds in nanite.cpp #984
---
Sure, but this isn't meant to be computing area or scaling with it - all quantities involved here are linear (e.g. the error approximates the distance between the simplified mesh and the original mesh, so the computation here approximates that distance in pixel-related units). In traditional QEM, the error produced by quadrics scales as a square, but that is not the case for any of the meshopt functions.
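For concreteness, the linear projection being described can be sketched like this; the function name and the exact factoring are illustrative assumptions, not the actual `nanite.cpp` code:

```cpp
#include <cmath>

// Sketch of projecting a *linear* world-space error into pixels under a
// perspective projection. Illustrative names, not the meshoptimizer demo:
// the demo folds these factors together differently.
float errorInPixels(float error, float distance, float fovY, int screenHeight)
{
    // A linear size at `distance` shrinks as 1/distance under perspective;
    // `proj` converts view-space units at distance 1 into pixels.
    float proj = screenHeight / (2.0f * std::tan(fovY * 0.5f));
    return error / distance * proj;
}
```

For example, with a 60-degree vertical FOV at 1080p, a world-space error of 0.01 seen from distance 10 projects to just under one pixel - which is exactly the regime where a 1px threshold would stop refining.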
---
I mean, visually it just looks wrong this way. The triangle density in screen space should be approximately the same at any distance, no? It gets way higher than that with just …
---
I would not necessarily expect the triangle density to be the same or similar at every distance: if the camera is close to the mesh and the source mesh isn't tessellated too aggressively, you'd expect to see source triangles, but as the camera gets further away, the selected triangles will eventually get smaller (presumably approaching the size of individual pixels), as the details need to be preserved. In well tessellated but more planar areas of the source mesh you will get larger triangles pretty quickly, whereas in areas with complex geometry the triangles will tend towards pixel size to preserve silhouettes, as well as lighting if normals are factored in.

In other words, I'd guess it depends on the mesh, but triangle density isn't really explicitly controlled here: the goal is to select a DAG cut where the error is imperceptible (which doesn't always imply uniform triangle density). I haven't tried to use …
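The "select a DAG cut where the error is imperceptible" idea can be sketched roughly as follows. This is a simplified tree-based sketch with illustrative types; real Nanite-style schemes operate on a DAG and switch whole groups of clusters at once:

```cpp
#include <vector>

// Core cut-selection rule: render a node once its projected error is
// imperceptible, otherwise descend to finer clusters. Types and names are
// illustrative, not from the meshoptimizer demo.
struct Node {
    float error;               // projected error in pixels at the current distance
    std::vector<int> children; // empty => most detailed (source) clusters
};

void selectCut(const std::vector<Node>& nodes, int index, float thresholdPx,
               std::vector<int>& out)
{
    const Node& n = nodes[index];
    if (n.error <= thresholdPx || n.children.empty()) {
        out.push_back(index); // error already imperceptible (or no finer LOD)
        return;
    }
    for (int child : n.children)
        selectCut(nodes, child, thresholdPx, out); // refine further
}
```

Note how the resulting triangle density is a byproduct of the per-node error, not an explicit target: planar regions satisfy the threshold with coarse nodes early, detailed regions keep refining.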
---
They are way, way smaller than a pixel though; it gets really alias-y and flickery. Because of the 1/x curve, at some distance essentially nothing changes anymore - you can't even reach the terminal clusters unless the object is absolutely tiny on screen. I don't have it running right now; I can provide some screenshots later, which will illustrate it better.
---
There might be cases where the underlying error is an over-estimate of the real error - unsure, hard to tell without the actual content. It might be an interesting idea to have an optional error limit on each cluster, computed from the triangle size, to prevent cases where sub-pixel triangles aren't removed at all (of course, you would still expect aliasing & flickering; Nanite pretty much requires a TAA solution to denoise). For more lenient transitions you'd have to compare to a higher threshold (e.g. comparing to …).

Also note that https://github.com/nvpro-samples/vk_lod_clusters/blob/main/shaders/traversal.glsl#L53-L63 is using a linear measure. I'm very confident that part is correct, but the error itself is much more difficult to reason about and much more of an approximation.

I've converted this to a discussion and will follow up in December when I can test this :)
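One possible shape for such an optional per-cluster limit, purely as a guess at the mechanism (none of these names exist in meshoptimizer):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Hypothetical sketch: if a cluster's triangles are already tiny, collapsing
// them cannot change the image by more than roughly their own size, so cap
// the reported error by an estimate of the average edge length. Illustrative
// only - not a meshoptimizer API.
float limitClusterError(float quadricError, float clusterArea, size_t triangleCount)
{
    float avgTriangleArea = clusterArea / float(triangleCount);
    float avgEdge = std::sqrt(avgTriangleArea * 2.0f); // rough: area ~ edge^2 / 2
    return std::min(quadricError, avgEdge);
}
```

With a cap like this, a cluster whose error is over-estimated relative to its (already sub-pixel) triangles would still become eligible for removal once its triangles project small enough.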
---
I looked into this a little further and tested on some assets. I plan to take another look on a different scene once it is released, which might not happen this year, so I will update this again in the future; here are my notes so far.

The linear scale is definitely correct, and this is more or less how the cutoff should be established. There are some variations in how exactly the scale is computed - e.g. whether the distance is radial or along the view direction, and whether the thresholding takes perspective distortion into account. These are all mostly relevant close up.
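The two distance variants mentioned can be written out directly (illustrative code, not the demo's):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Radial distance: Euclidean distance from the camera origin to the point.
float radialDistance(Vec3 p)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
}

// View-aligned distance: depth along the view direction only
// (assuming view space with the camera looking down +z).
float viewDistance(Vec3 p)
{
    return p.z;
}
```

At the screen center the two agree; toward the screen edges the radial distance exceeds the depth, so the choice mostly matters for close-up objects and wide fields of view.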
There might be other not-super-obvious constant factors involved here. With the default meshopt sample thresholding, assuming everything else works perfectly, you should select clusters with error <= 1px; if that leads to selection of triangles <= 1px, then by definition a lot of triangles will be sub-pixel (because a triangle <= 1px can easily cover no pixels!). Maybe a better default …

Additionally, because hierarchical clusterization switches clusters in groups, a single cluster with a somewhat higher error could contribute to the delayed transitions (which is likely to also be a constant factor that's difficult to estimate). This could in theory be amplified when using the default configuration; I mostly tested all of this with …

Note that since a terminal group with a single cluster of 128 triangles is approximately an 8x8 quad patch, if the mesh on screen is larger than ~10x10 pixels it would legitimately be too early to switch in certain cases. Obviously setting a higher …

Finally, in theory there could be cases where, depending on the attribute weights and values used, the quadric error overestimates the actual visual impact of edge collapses, which could result in delayed switching. I was able to trigger this by artificially setting normal weights quite high; I don't know to what extent this can happen naturally, as I test either on isolated meshes (where it's more difficult to understand the impact) or on the NV Zorah scene (which currently has no vertex attributes). To mitigate this, I added an optional experimental edge limit factor - setting this to e.g. …
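The group-switching point can be illustrated with a small sketch; the max-combine here is an assumption about how a conservative group error could be formed, not a claim about the sample's exact code:

```cpp
#include <algorithm>
#include <vector>

// Why group-based switching can delay transitions: if a group's effective
// error is combined conservatively (a max is assumed here), one outlier
// cluster holds the whole group at the finer LOD until the camera moves
// far enough for even that cluster's error to project below threshold.
float groupError(const std::vector<float>& clusterErrors)
{
    return *std::max_element(clusterErrors.begin(), clusterErrors.end());
}
```

So even if most clusters in a group have tiny errors, the group transitions on the schedule of its worst member.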
---
Hey Arseny,
I was playing around with clustered LODs, and by visual inspection the result never looked like it was reducing triangles enough with distance while using a formula similar to what you have in nanite.cpp (meshoptimizer/demo/nanite.cpp, line 31 at d40efb0).
I believe this is incorrect: area in screen space falls off with distance^2, not distance, so it really should be dividing by d * d. That also looks visually much closer to what I would expect.