• With the release of FS2020 we have seen an explosion of activity on the forum, and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy, so we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use or in the general forum if they are general.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can of course post tool-specific questions in the subforums of those tools as well.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

MSFS Everything about CGL generation [Custom DEM works]

I think we agree on the generic formula (and that my writing leaves much room for reader interpretation), but disagree on how the upscaling should be done.

I am using pregenerated target LODs, each LOD made directly from the source data. Just to waste some time -__-

Will try to explain the subdivision process better tomorrow.
Regarding testing the interpolation the game does: if you set your game terrain level of detail to the lowest, zoom in, and move closer to and further from the terrain, you can see the accumulation of interpolation error very well. Doing that also allows using tile debug to measure the altitudes of the actual vertices.

Did so in the above image some time ago, apologies for the readability :D
Circled values are from a base layer, not interpolated, other values are interpolated (once).
I hope you can read the 14.06-something in the middle of the circled 15 values. When interpolated 5+ times without adding corrective data in the delta layers, it will "dissolve" the mesh like in your first test image, so it will look like the filter is larger, I think.

This 4x4 is annoying because you need the neighboring tiles to be able to interpolate at the edges.
With the approach I came up with, there is no need to use the neighboring tiles. But I will have to explain more.

It's probably what you mentioned above as being Lanczos. It might also just be bicubic?
Yes, that's what I meant. I think I saw something in a "corner step response" that did not fit bicubic, but I will have to double-check. And as you said, it's really not the most important thing; "nice to have" level of detail.

So: In order to go beyond Level 12 you're using 32 bit QuadKeys and something in the cgl header?
Yes: CGL header byte 0x1C should be 1 instead of 2, the QuadKeys should be 32 bits long, and what I designate in my CGL layout generation code as "levelchangevalue" should be 268435456 instead of 4096.
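For what it's worth, 4096 = 2^12 and 268435456 = 2^28, which is consistent with simply widening the binary quadkey from 12 to 28 bits. A wider key could then be packed with the standard Bing-style bit interleaving, sketched below; the left-alignment padding and the `max_level` default are my assumptions, not something reversed from the CGL format:

```python
def quadkey_bits(level, x, y, max_level=14):
    """Pack (level, x, y) into a fixed-width binary quadkey.

    Standard Bing-style digits: two bits per level, y bit then x bit,
    most significant pair first.  Left-aligning within max_level digit
    pairs gives 28-bit keys for max_level=14 (268435456 == 2**28); the
    padding scheme is an assumption."""
    key = 0
    for i in range(level, 0, -1):
        digit = (((y >> (i - 1)) & 1) << 1) | ((x >> (i - 1)) & 1)
        key = (key << 2) | digit
    return key << (2 * (max_level - level))
```

As a sanity check against the well-known string form: the Bing quadkey "213" corresponds to level 3, x = 3, y = 5, and packs to the bit pattern 0b100111.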
Hi again
I doubt that we both found different correct (or nearly correct) solutions. I think it's rather the same solution, found via different paths.
So let's see:
OpenCV documentation:
https://docs.opencv.org/3.4/d4/d1f/tutorial_pyramids.html shows the gaussian kernel used
https://docs.opencv.org/3.4/d4/d86/group__imgproc__filter.html#gada75b59bdaaca411ed6fee10085eb784 describes the pyrUp function

Now, when upsampling (2x) and filtering at the same time, 3/4 of the information is missing (the "injected rows and columns of zeros" in the pyrUp function documentation). So to apply the Gauss kernel, I differentiate the cases and accumulate only the values that are actually available:

The first case is really just a simple average of the four neighbours.
The next two cases (really the same case rotated by 90°) need to make use of the weights from the kernel (could be reduced to 1/6/1).
And well, I just realised that I don't handle the last case according to the kernel (hence the brackets). For vertices that exist in both LODs I don't filter at all.
As filtering there would likely change the appearance, and the result of my algorithm looks pretty much spot on, I'm guessing that's what Asobo does as well.
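If I'm reading the case split correctly, it can be sketched like this. This is my reconstruction of the description above, not confirmed code, and the edge handling is a guessed two-parent fallback rather than anything taken from the sim:

```python
import numpy as np

def upsample_2x(t):
    """2x upsample with the case split from the discussion above.

    even/even vertices pass through unfiltered; odd/odd vertices
    average their four diagonal parents; mixed-parity vertices use the
    1/6/1 weights of the [1 4 6 4 1] pyramid kernel over the six
    parents that actually exist.  Edge vertices fall back to a plain
    two-parent average instead of reaching into neighboring tiles
    (an assumption, see the edge discussion in the thread)."""
    t = np.asarray(t, dtype=np.float64)
    h, w = t.shape
    out = np.empty((2 * h - 1, 2 * w - 1))

    out[::2, ::2] = t                                   # pass-through
    out[1::2, 1::2] = (t[:-1, :-1] + t[:-1, 1:]
                       + t[1:, :-1] + t[1:, 1:]) / 4.0  # 4-parent avg

    # odd row / even col: parents in two rows, 1/6/1 column weights
    out[1::2, 2:-2:2] = ((t[:-1, :-2] + t[1:, :-2])
                         + 6 * (t[:-1, 1:-1] + t[1:, 1:-1])
                         + (t[:-1, 2:] + t[1:, 2:])) / 16.0
    out[1::2, 0] = (t[:-1, 0] + t[1:, 0]) / 2.0         # edge fallback
    out[1::2, -1] = (t[:-1, -1] + t[1:, -1]) / 2.0

    # even row / odd col: parents in two columns, 1/6/1 row weights
    out[2:-2:2, 1::2] = ((t[:-2, :-1] + t[:-2, 1:])
                         + 6 * (t[1:-1, :-1] + t[1:-1, 1:])
                         + (t[2:, :-1] + t[2:, 1:])) / 16.0
    out[0, 1::2] = (t[0, :-1] + t[0, 1:]) / 2.0
    out[-1, 1::2] = (t[-1, :-1] + t[-1, 1:]) / 2.0
    return out
```

A useful property of these weights: a linear ramp upsamples to an exactly linear ramp, and a constant tile stays constant, so the filter only shows up on curvature.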

Thanks for the information about the header info. I'll give it a try soon.
It indeed seems that our filters produce exactly the same result :)

The linear interpolation I apply at the edges doesn't produce perfect results though. Good enough when used on real terrain, I think, as there are no gaps.


  • 1621861289167.png
  • edge_error.jpg
Unfortunately I have been slammed at work and have some catching up to do here :)
Thanks for the info about chunked LOD - I'll have to take a closer look at it. I'm not quite sure yet how minimal-error downsampling ties in with the elimination of unnecessary LODs. So far I've seen them as two separate steps.
Green arrows are transitions/algorithms that I have in some form.
- Elimination of unnecessary LODs is what I have planned in the "prune/optimize" step: discard differential tiles with negligible amplitudes, starting at the leaves. The MSFS default offline data seems to do that as well, as many CGLs have fewer than 341 tiles (5 LODs: 1+4+16+64+256). I guess this would become more interesting once we can get to higher LODs. At the moment it's probably mostly used for large water surfaces.
Yes, these are orthogonal processes. However, I don't think the MS LOD scheme is as much about reducing unnecessary LODs due to minimal error as it is about reducing file size and bus bandwidth/VRAM for this intermediate data. It seems like a consistent, conscious design decision to skip every other LOD in both raster and vector data. In terms of Chunked LOD, or anything else based on an error metric, you would simply double your error metric per level.
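The prune step described above (discard negligible differential tiles, starting at the leaves) could be sketched as a bottom-up pass over the quadtree. `prune_residuals`, the threshold, and the quadkey-string keying are all illustrative assumptions, not the poster's actual code:

```python
import numpy as np

def prune_residuals(tiles, threshold=0.05):
    """Drop differential (residual) tiles whose amplitude is
    negligible, deepest levels first, so a tile is only removed once
    all of its children are gone and the pyramid stays contiguous.

    `tiles` maps quadkey strings (one digit per level) to residual
    arrays; the names and the threshold are illustrative only."""
    kept = dict(tiles)
    for key in sorted(tiles, key=len, reverse=True):  # leaves first
        is_leaf = not any(key + d in kept for d in "0123")
        if is_leaf and np.abs(kept[key]).max() < threshold:
            del kept[key]
    return kept
```

Processing leaves first matters: a mid-level tile with a flat residual must survive if any of its children still carry data, otherwise the reconstruction chain below it breaks.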

It is such a pleasure to see someone citing their sources (and in the correct format too)! 👍 👍 :)
I spend a lot of time in research papers / journals and have APA ingrained into me :)

Yet for large parts of western Switzerland (close to the French border) the CGL source is used for the lower LODs, where close up it switches to the LOD(s) streamed from the cloud.
Yes, going beyond level 12 is possible; it only needs minor changes to the code.
The quadkey format "2" (28-bit quadkeys) is indeed necessary. Additionally, an Asobo dev has mentioned in the dev forum, regarding SAI (imagery) tiles, that you must provide the entire pyramid up to the highest LOD provided by Bing or Azure in an area in order for the sim to properly override in all circumstances. The only publicly accessible way to determine this without processing cm2 (coverage map) tiles (WIP) is to inspect representative tiles in-sim using the tile debug tool.
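As a side note, that pyramid-completeness requirement is mechanical to check once you know which tiles a package contains. A sketch with quadkeys as digit strings and a hypothetical helper name:

```python
def missing_ancestors(keys):
    """Return the ancestor quadkeys that are absent, i.e. the gaps
    that would leave the pyramid incomplete between the root level
    and the provided tiles.

    Quadkeys are digit strings, one digit per level; the helper name
    and representation are assumptions for illustration."""
    have = set(keys)
    gaps = set()
    for k in have:
        for i in range(1, len(k)):
            if k[:i] not in have:
                gaps.add(k[:i])
    return sorted(gaps)
```

An empty result means every tile can be reached from the root through tiles that actually exist in the package.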

I think I figured out how the game actually creates its mesh based on the CGL data:
This looks correct to me as well. This is called elevation "residuals". It is very interesting to me that they are not using Chunked LOD as in FSX/P3D, but that does require an offline computation step to optimize (performed by resample). With uniform residuals, disk compression is efficient. When rendered, more data is stored in VRAM and more triangles are used, but hey, triangles are relatively cheap these days, and getting even cheaper now that LOD selection can be done in a mesh shader.

Currently, only replacing is possible. But as the format is known, a tool could be made to convert the contents of a VEC+VECN CGL to shapefile(s) for example, which could then be edited and converted back.
This is awesome! I have almost fully reversed the vecXXX.cgl files (except one 8-bit flags field on roads); see my other thread: https://www.fsdeveloper.com/forum/t...e-bgl-and-cgl-decompressor.433789/post-883045. I have put docs on the GeoJSON output here: https://github.com/seanisom/flightsimlib/wiki/CGL-vecXXX-File-Format, but like you, I have run short on time for documentation and have not updated it with the binary format yet. I will try to finish documenting that and post the source code this week. It seems we are very close to being able to round-trip this!

TileAbs[x] = Upsample(TileAbs[x-1]) + TileDiff[x]
This makes sense, since it is using residuals. Taking the highest LOD and recursively applying the same Gaussian kernel up to the root would introduce significant blur, whereas the method of starting from the root and applying a weighted incremental upsampling to the higher LODs seems correct. It seems you've both made great progress in reverse engineering the actual filter.

I would not be surprised if they referenced the Proland code when building this, as it is the canonical way to do residuals. That similarly applies a uniform kernel while collapsing edges: http://proland.imag.fr/doc/proland-4.0/terrain/html/index.html#sec-residual However, in terms of filters, the MSFS binary has functions named "UpsampleDemBilinear" and "UpsampleDemBicubic" in the DEMDataMarker CGL residual parser. Do with that information what you will.

As this would likely change the appearance and the result of my algorithm looks pretty much spot on I'm guessing, that's what Asobo does as well.
Are you guys saying this matches how the levels are sampled in the default CGL files? Or just each other's implementations? I would be very curious if we can fully figure out the MSFS algorithm, as that opens the widest possibilities in the absence of an SDK!
Are you guys saying this matches how the levels are sampled in the default CGL files?
I'm not quite sure what you mean. In order for a test pattern to look correct when loaded in the sim it must have been saved using the correct inverse process. So as the test patterns show up correctly I guess we can deduce that the algorithm is correct. What happens in the edge rows might need to be looked at more closely. I'd be very relieved if breadeater's assumption there is correct and we don't have to involve the neighboring tiles. It's quite likely considering that the FS probably just picks those tiles out of the files that are needed.

However in terms of filters the MSFS binary has functions named "UpsampleDemBilinear" and "UpsampleDemBicubic"
These might be used for the "in between" points. I have noticed lately that when quickly zooming around in my mesh, so that you can observe new LOD tiles loading, the mesh is initially displayed faceted and after a few seconds changes to rounded.
BTW: The overshoots that bicubic filtering creates in rugged terrain actually turn out to be a major pain; they just look stupid on sharp edges ("sawtooth ridges", for instance). I noticed that the Asobo mesh in the French Alps suffers less from this effect, but they are also going 2 LODs higher and there doesn't seem to be much added detail to show for it. So it might be that they just oversample and blur to prevent the overshoots. Alternatively, the mesh could be optimized to compensate for the overshoots, but that would be silly, right? Well, I guess blurring IS a form of compensating for subsequent sharpening.
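The overshoot is easy to reproduce in one dimension: the midpoint of a Catmull-Rom segment (the a = -0.5 cubic-convolution kernel commonly behind "bicubic") already exceeds the data range next to a step. A minimal illustration of the effect, not the sim's actual filter:

```python
def cubic_midpoint(p0, p1, p2, p3):
    """Catmull-Rom midpoint: weights (-1, 9, 9, -1) / 16 applied to
    four consecutive samples, evaluated halfway between p1 and p2."""
    return (-p0 + 9 * p1 + 9 * p2 - p3) / 16.0
```

On a centered step (0, 0, 1, 1) the midpoint is a tame 0.5, but on flat-next-to-step samples (0, 1, 1, 1) it comes out at 17/16, above the data maximum; repeated over a ridge line, that ringing is exactly the "sawtooth" artifact, and pre-blurring damps the high frequencies that cause it.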

Anyway, I completed a level 13 tile that is close to my heart (well, about 1.5m below it when I'm standing) as a showcase model without blurred LODs. It does still have quite a few issues but it nevertheless turned out well enough that I thought it would be a shame to keep it to myself. So I uploaded it to flightsim.to - hope that's ok with you guys.