Proof of concept: Use Shader Graph for point cloud rendering #265
@kring this is the one method I forgot to test with Shader Graph 😭 I'm glad you were able to get something working! I'm not sure how many of those items you'll be able to knock out, so while it would be nice to get it into this next release, it depends on how much time you'll be able to spend on the remaining items. But it would be nice to get some form of attenuation in the release for the upcoming blog post. As for the task list you put together:
I messed around with the shader graph today and the vertex ID node does work in built-in. As proof, I removed the vertex deformations from the vertex shader and used this as the mesh color (the divisor is 100,000):

When I do

Then I thought it had to do with retrieving the data from

And it actually turned out that the data could be retrieved from the buffer if you hardcoded an integer. So when I add a

But enabling the rest of the code still makes the entire thing disappear, so there's still something else that's wrong. Have to find out where.

EDIT: I believe that
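A debug custom function along the lines described here might look like the following sketch (the function name and wiring are hypothetical; the idea is just to map the vertex ID to a visible grayscale value so you can confirm the ID is reaching the shader):

```hlsl
// Hypothetical Shader Graph custom function body for debugging the
// Vertex ID node in the built-in pipeline. Shader Graph passes the
// vertex ID in as a float; dividing by 100,000 keeps the value in a
// visible 0..1-ish range for typical point counts.
void DebugVertexID_float(float VertexID, out float3 Color)
{
    float shade = VertexID / 100000.0;
    Color = float3(shade, shade, shade);
}
```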
bump. This would really improve point clouds.




The point cloud rendering in #218 is awesome, but unfortunately it will only work with the Universal Render Pipeline. That's especially unfortunate because we've added support for the built-in pipeline in this release. I was curious what stopped us from implementing point cloud rendering using Shader Graph, which should make it much easier (perhaps even automatic) to support all the render pipelines. So this draft PR is a proof of concept of doing that, and I don't see any major barriers to it. Short version: it works.
URP:

HDRP:

It uses a custom function node to read from the structured buffer, so there's no increase in memory usage. In fact, it should be pretty easy to use this same approach in the non-attenuated case too, so we only need one copy of the points.
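For what it's worth, the custom function node body could look roughly like this (a sketch, not the actual code in this PR; `_pointPositions` and the function name are assumed, and the `SHADERGRAPH_PREVIEW` guard is the usual workaround for the node preview compiler not supporting structured buffers):

```hlsl
// Sketch: read a point position out of a StructuredBuffer by vertex ID,
// for use in a Shader Graph "Custom Function" node.
#if defined(SHADERGRAPH_PREVIEW)
// The node preview can't bind a StructuredBuffer, so return a dummy value.
void GetPointPosition_float(float VertexID, out float3 Position)
{
    Position = float3(0.0, 0.0, 0.0);
}
#else
// Assumed buffer name; bound from C# via Material.SetBuffer.
StructuredBuffer<float3> _pointPositions;

void GetPointPosition_float(float VertexID, out float3 Position)
{
    // Shader Graph hands us the vertex ID as a float, so cast it back.
    Position = _pointPositions[(uint)VertexID];
}
#endif
```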
Because the output vertex position in shader graph is always in "object space", we need to transform the clip space position back to object space, only so Unity can go the other way again. I think this is the biggest downside to this approach. But GPUs basically do `matrix * vector` multiplications in their sleep, right? I haven't measured performance to be sure, but I think this is a worthwhile tradeoff for the compatibility that shader graph gives us.

It should also be possible to do nifty stuff like drape raster overlays over point clouds.
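The clip-to-object round trip amounts to one extra inverse view-projection multiply per vertex. A sketch, assuming the SRP-provided `UNITY_MATRIX_I_VP` macro and `unity_WorldToObject` matrix (exact availability varies by pipeline):

```hlsl
// Sketch: undo the clip -> object round trip so Shader Graph's
// object-space vertex output can reproduce a position we already
// computed in clip space.
void ClipToObject_float(float4 ClipPos, out float3 ObjectPos)
{
    // Back through the inverse view-projection, then perspective divide
    // to recover the world-space position.
    float4 worldPos = mul(UNITY_MATRIX_I_VP, ClipPos);
    worldPos /= worldPos.w;

    // World space -> object space, which is what Shader Graph expects.
    ObjectPos = mul(unity_WorldToObject, float4(worldPos.xyz, 1.0)).xyz;
}
```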
I'm currently using an "empty" `Mesh` to drive the rendering. It doesn't have any vertex data, but it does have index data because meshes require it. So this is pretty inefficient, but using a `Mesh` rather than `DrawProcedural` makes Unity set the model matrix and maybe other uniforms in the normal way that shader graph expects. We can definitely make this more efficient, I was just being lazy.

Stuff I haven't figured out (hopefully no deal killers here, but who knows?):
@j9liu I'm curious what you think. I guess the first question is whether I've missed any major deal killers on this such that it's not as practical as it appears. And the second question (assuming the answer to the first is positive) is whether we want to a) ship the current implementation in this release, and perhaps switch to shader graph next release, b) hold shipping attenuation until we sort this out, or c) try to wrap up this approach this week and get it into this release.