Ryan Franz
There are always tradeoffs in creating data geared toward visualization alone. Creating a single batched 3D model likely performs better, but at a loss of flexibility.
So I think we would need to address the assumptions up front on how this is done. Here are the assumptions that I think would have to be made (for the CDB performance case):
Assuming that the device supports a WGS84 ellipsoid (this was a bigger issue 12 years ago than today)
Assuming a particular elevation tile used (CDB 1.x allows a client to be flexible on mixing and matching LODs from different datasets based on client needs)
It might require assuming that baking duplicate models from a generic model library (GeoTypical) into the tile is more efficient than instancing them in the client. (I don't believe that is true for our system.)
It assumes that glTF allows enough attribution to distinguish one model from another when they are all merged together. I'm not an expert on glTF, so I'm not sure how attribution is applied to a set of polygons that represents a logical "model".
So, for the CDB repository case, this doesn't seem interesting, as they will want to edit/refine these models. For a CDB edge case, absolutely this would be helpful. Especially when the assumptions about how the client best handles the data are known in advance. For the CDB performance case, I have a hard time with losing the flexibility.
Note: I am not really a fan of the current tiled zip file structure, so don't read this as a defense of how CDB 1.x stores GeoSpecific models. But we do make use of this flexibility in our use of CDB.
Kevin Bentley
I have the same concerns about glTF having enough attribution to replace OpenFlight. OpenFlight can store so much more than glTF currently supports. I do support the idea of creating extensions to glTF to make it more M&S ready. What I'm not sure about is how common it is to use some of the flt record types like sounds, heat maps, etc. In other words, how complex would extensions need to be to support most (90%?) of the CDB users? I don't personally know the answer.
Jerome
I fully agree with not losing the flexibility, and what I was considering is a solution that brings the benefits without having to make most of these tradeoffs.
Assuming a particular elevation tile is used -- although this could be a valid assumption that is sometimes desirable for visualization use cases, there is a middle-way approach where either per-vertex feature ID attributes or node transforms can still support clamping to arbitrary elevation data.
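As a rough illustration of that middle way, the sketch below assumes each vertex in the batched mesh carries a feature ID, and that each feature's anchor point and the elevation it was originally baked against are known. All of the names (`sample_elevation`, `feature_anchor_lonlat`, `baked_anchor_height`) are hypothetical placeholders, not CDB or glTF APIs:

```python
# Minimal sketch: re-clamping a batched mesh to a different elevation source
# using per-vertex feature IDs.  This is illustrative only, under the
# assumptions stated above.
import numpy as np

def reclamp(positions, feature_ids, feature_anchor_lonlat,
            baked_anchor_height, sample_elevation):
    """positions: (N, 3) vertex positions with height in the last column.
    feature_ids: (N,) integer ID per vertex, identifying the source model.
    feature_anchor_lonlat: dict feature_id -> (lon, lat) of the model origin.
    baked_anchor_height: dict feature_id -> height the model was baked against.
    sample_elevation: callable (lon, lat) -> height from the client's own
    elevation dataset, at whatever LOD the client chooses."""
    out = positions.copy()
    for fid in np.unique(feature_ids):
        lon, lat = feature_anchor_lonlat[fid]
        # Shift the whole feature vertically by the difference between the
        # client's elevation and the elevation it was originally baked to.
        delta = sample_elevation(lon, lat) - baked_anchor_height[fid]
        out[feature_ids == fid, 2] += delta
    return out
```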
The suggestion to use one batched model for a whole tile is specifically for GeoSpecific models. GeoTypical models would be rendered more efficiently using geometry instancing, which works best with point vector tiles referencing those models (in CDB 1.x, GeoTypical models are also stored in a shared folder outside of the Tiles structure, rather than in the per-tile zip files of models).
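For instance, a tile of point placements could reference the shared GeoTypical geometry once and let something like glTF's EXT_mesh_gpu_instancing extension carry the per-placement transforms. The accessor indices below are placeholders and the buffer/accessor construction is omitted; this is a sketch of the idea, not a complete asset:

```python
# Sketch: GeoTypical placements kept instanced rather than baked into the
# batched tile.  One node references the shared model mesh, and
# EXT_mesh_gpu_instancing supplies per-instance translations/rotations.
import json

instanced_node = {
    "mesh": 0,  # the shared GeoTypical model geometry
    "extensions": {
        "EXT_mesh_gpu_instancing": {
            "attributes": {
                "TRANSLATION": 0,  # accessor with one vec3 per placement (placeholder index)
                "ROTATION": 1,     # accessor with one quaternion per placement (placeholder index)
            }
        }
    },
}

gltf_stub = {
    "asset": {"version": "2.0"},
    "nodes": [instanced_node],
    "scenes": [{"nodes": [0]}],
    "scene": 0,
    "extensionsUsed": ["EXT_mesh_gpu_instancing"],
}

print(json.dumps(gltf_stub, indent=2))
```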
glTF (or extensions to it) should be able to assign feature IDs to the triangular faces that make up each logical model within a merged mesh, providing both efficient batching and proper per-model attribution.
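A rough sketch of what that could look like, along the lines of the EXT_mesh_features extension: every vertex carries a `_FEATURE_ID_0` value, so all triangles sharing an ID form one logical "model" and can be looked up in a metadata table. The accessor indices and feature count below are made up for illustration, and a full file would also define buffers and (optionally) an EXT_structural_metadata property table keyed by the same IDs:

```python
# Sketch: sub-model attribution inside one batched mesh via per-vertex
# feature IDs, in the style of glTF's EXT_mesh_features extension.
import json

batched_primitive = {
    "attributes": {
        "POSITION": 0,
        "NORMAL": 1,
        "TEXCOORD_0": 2,
        "_FEATURE_ID_0": 3,  # one integer per vertex: which model it belongs to
    },
    "indices": 4,
    "extensions": {
        "EXT_mesh_features": {
            "featureIds": [
                # "attribute": 0 refers to the _FEATURE_ID_0 set index;
                # featureCount is the number of distinct models in this tile.
                {"featureCount": 128, "attribute": 0}
            ]
        }
    },
}

print(json.dumps(batched_primitive, indent=2))
```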