• With the release of FS2020 we see an explosion of activity on the forum and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy. So we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use or in the general forum if they are general.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can post tool specific questions in the subforums of those tools as well of course.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

P3D v5 Unhandled Exceptions with TFE machine learning

even before scenProc finishes processing the 4096 grid script i can already draw some conclusions,
a few minor issues to iron out, step points to be improved or added, but beyond that this is incredible!
rewind 10 years and this was unfathomable, let alone with a single machine and a one man band,

speaking of performance from a bird's eye view, this is actually beyond reasonable processing time imho,
let's say there are 30-40 tiles of this size and detail to cover a small to medium size state,
give or take 3 hours processing per tile, that's around 3-4 days to complete,
i call that extremely reasonable, well done Maestro! 👏
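For the curious, the back-of-envelope math can be sketched in a couple of lines (tile count from above, the ~2h20min per-tile time measured on the desktop run below; the function name is mine):

```python
# Rough estimate of total processing time if tiles are run one after another.
def total_days(tiles: int, hours_per_tile: float) -> float:
    """Total wall-clock days to process all tiles sequentially."""
    return tiles * hours_per_tile / 24.0

# 30 tiles at ~2.33 h/tile -> ~2.9 days, 40 tiles -> ~3.9 days,
# which matches the "around 3-4 days" estimate above.
low = total_days(30, 2.33)
high = total_days(40, 2.33)
```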

desktop 4096 grid, script completed processing in 2h and 20min,

scPDesktop03.jpg

here are both datasets overlapping (red layer is 256 grid, green 4096 grid)

data overlap.jpg

one idea comes to mind: in the texture filter editor, a vector and polygon layer as a filter to improve training and results,
besides knowing colors we can ignore upfront, we also know a lot of footprints from roads, rails, sports fields, parking lots, buildings, water bodies, runways, and much more,
if we can load a pre-edited vector layer as a filter it should improve accuracy and maybe speed up learning in large files,
 
Hi Chris,
ran the USAF_B2 sample, 23min to complete on my laptop,
for whatever reason, during the entire script run my cpu usage did not exceed 66% at best,
at the low end it was hovering at 46%, do you have the core count hardcoded somewhere?
(in the texture filter editor during texture learning the cpu was pegged at 100%, but not in the scenProc script run)
I think it is due to your script. You are using SplitGrid with LOD15, while both Clutch Cargo and I were testing with LOD13 there. In post #38 of this thread I explained how to balance the CPU load based on the split grid size, the multi resolution segmentation maximum tile size and such.

With my test imagery I got 2048x2048 images by using SplitGrid LOD13. If I went to LOD15 I would end up with only 512x512 pixel images. Depending on your maximum tile size value in the multi resolution segmentation step, that only leaves limited room for parallel processing and thus maximum CPU load. With a maximum tile size value of 256, I would only get 4 tiles in parallel there. If your machine has more than 4 cores, that means the others are idle.

The USAF_B2 image has a slightly different resolution than my test image, so the numbers will work out differently, but I think you will get the idea.

I think you are setting the split grid size too small. As I explained before DetectFeatures itself does not run in parallel, as that gives too much fluctuation in memory usage. So you need to balance your input properly to keep the CPU busy.
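To make the balancing concrete, here is a small sketch of the arithmetic described above (each LOD step halves the linear size of a SplitGrid cell for the same imagery; the function names are mine, not scenProc's):

```python
def image_size_at_lod(base_size: int, base_lod: int, lod: int) -> int:
    """Linear pixel size of a SplitGrid cell: each LOD step halves it."""
    return base_size >> (lod - base_lod)

def parallel_tiles(image_size: int, max_tile_size: int) -> int:
    """Number of segmentation tiles that can be processed in parallel."""
    per_side = max(1, image_size // max_tile_size)
    return per_side * per_side

# Arno's numbers: 2048 px at LOD13 shrinks to 512 px at LOD15.
size_lod15 = image_size_at_lod(2048, 13, 15)   # 512
# With a maximum tile size of 256 that leaves only 4 parallel tiles,
# so on a machine with more than 4 cores the extra cores sit idle.
tiles = parallel_tiles(size_lod15, 256)        # 4
```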
while training that huge 1GB tiff in the texture filter editor i did get a few UE's just as learning completed,
got the same on both systems i tested with: "Array dimensions exceeded supported range." (that UE happened after i rebooted and changed to 256 segments,
total system ram usage was relatively low, around maybe 7-12gb through the whole process, no ram shortage or reason to complain about memory),
That's a big NOGO. You should NEVER use a 1 GB image in the texture filter editor. The idea is that you use sample images of around 1000x1000 pixels in size to test and train the texture filter. From the big image you will process in the end you just select some samples that are representative for what you want or don't want to detect. And once it is trained you can run the filter in the DetectFeatures step on a big image.
 
I think we need the learning button as a separate function, it can't do the auto re-learn process automatically, it has to be manually triggered,
maybe a checkbox to "auto learn", similar to merge results, for smaller files, but for dealing with larger files it has to be manually triggered, otherwise the current logic makes it impossible to work with. i am trying for fun to run a 4096 grid; i had it on 256 when the project loaded after closing it (after the initial learn and being saved). navigating from the image input step to the multi res step makes me wait 20+ minutes while it tries to relearn the file, or something happens that takes way too long to work with comfortably. I just wanted to change the segments in that step from 256 to 4096, and i have to wait 20 min just to edit the parameter,
Why? Not every filter will have a machine learning step, so it makes no sense to add a learning button to the interface that is only applicable to one type of step. That would be confusing to users.

I think you are just working with too big samples in the editor. You need to keep them around 1000x1000 pixels. In that case it takes at most a few seconds to show the effect of a parameter. Actually, when you are waiting for the object segmentation step to calculate, you are not even relearning; it is the SVM step that learns based on the sample points. But I think you are just waiting for the object segmentation to finish.

A small trick could be to select no image in the preview and then alter the attribute of the step. In that case no instant update of the preview is triggered.
below is about 30min into running the 4096 grid script, in the first few minutes scenProc pulled well over 64GB ram and cpu was around 40% usage,
10 min into the run the ram was freed and cpu usage reduced to about 10%, i am guessing this will be the case for the remainder of the run,
(btw the texture filter editor window shown below is as wide as it will allow to expand on my screen)
I will stress again, you should NEVER load a 1 GB image as a sample into the texture filter editor. That makes absolutely no sense at all. If you are asking the tool to tweak a filter based on an image that is about 16000x16000 pixels in size, and you are running the object segmentation on that much data, it is normal that you have to wait that long. Even in your script you won't run the texture filter on that much data at once, as the SplitGrid step will first slice the image into smaller segments.
 
one idea comes to mind: in the texture filter editor, a vector and polygon layer as a filter to improve training and results,
besides knowing colors we can ignore upfront, we also know a lot of footprints from roads, sports fields, parking lots, buildings and much more,
if we can load another layer as a filter it should improve accuracy, not so sure about the time increase,
You can already load vector data into the texture filter editor and use the burn polygon step to make a mask image out of this. And then you could use that image in your texture filter to manipulate the data. Alternatively the DetectFeatures step also takes a mask filter as input.

But I don't think this will make a lot of difference for the performance. The texture filter would still have to check those pixels, it's just that their output is ignored. It would only help to reduce the vector feature output in the end.
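A minimal numpy sketch of the point being made here: a mask does not save per-pixel work, it only suppresses output afterwards (the "detection" below is a stand-in threshold test, not scenProc's actual texture filter):

```python
import numpy as np

def detect(image: np.ndarray, threshold: int) -> np.ndarray:
    """Toy detection: the filter still visits every pixel."""
    return image > threshold

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))

# A burned-in polygon mask, e.g. roads or water bodies to ignore.
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :] = True

raw = detect(image, 128)
masked = raw & ~mask   # same pixel work; only the output shrinks
```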
 
I think you are just working with too big samples in the editor
yea i think so too, the entire time i was under the impression that a larger source image provides better/more detail for the texture filter editor's initial learning process,
i would much rather work with a small sample, i'll adjust my SplitGrid to LOD13 so we all match,
 
reran the test after doing it from scratch properly, the run completes in 15min with no output,
everything looks normal, data seems to be processing, cpu was at 100%,
what am i doing wrong here? (i even retrained, saved and ran it again with the same results)

1.jpg


2.jpg


3.jpg
 
Do you see results when you check the output in the texture filter editor?

And did you check that the sample and the image processed in the script have the same number of channels?
 
Great feedback Chris. Yea, I got no output when I was using 3-band and 4-band together by mistake. What you d/l should all be 4-band... ooops, hope I didn't send you somethin' in 3-band. Arno, I too have been using SplitGrid LOD15. I have not tried SplitGrid LOD13. Again, I am using 15 so I do not have to use PROCESSHOLES... for me, a big time saver. I will try LOD13 tomorrow. Maybe it works differently with the new steps?

If I am understanding you guys correctly, if I click on anything, a different step, a different sample, it can take some 20 seconds to refresh. If that is due to the sampling points, a way to hold off on refreshing until one clicks a "refresh button" would be interesting, but then that would mean having to press a button every time... not sure if that is better?

Plan to test water detection tomorrow as well....
 
Thanks guys,

Do you see results when you check the output in the texture filter editor?
i do,

hope I didn't send you somethin' in 3-band
it appears that USAF_B2.tif only has the 3 RGB bands, the NIR band is missing,
how did i get output before without the NIR layer 🧐

edit: i just noticed all 15 samples with NIR have the same fake coordinates, while USAF_B2 has real coordinates,
that should explain why i get no output, the referenced sample area is not within the same geotiff coordinates,
1.jpg
2.jpg


@Clutch Cargo i doubt your current results are as good as they can be; if you used the same samples, the script did not factor in the data in them,
whatever it learned was referencing a different area, so the results you got from USAF_B2 should be just straightforward results without utilizing learning,
 
If I am understanding you guys correctly, if I click on anything, a different step, a different sample, it can take some 20 seconds to refresh. If that is due to the sampling points, a way to hold off on refreshing until one clicks a "refresh button" would be interesting, but then that would mean having to press a button every time... not sure if that is better?

It's usually the object segmentation that takes time. When using the same sample image the segmentation results are cached. But if the SVM needs retraining, it will have to process all images with sample points, and thus the object segmentation has to be calculated for each of them.
 
edit: i just noticed all 15 samples with NIR have the same fake coordinates, while USAF_B2 has real coordinates,
that should explain why i get no output, the referenced sample area is not within the same geotiff coordinates,

Sample images don't need coordinates to work. So it does not matter for the output whether they are georeferenced or not.

@Clutch Cargo i doubt your current results are as good as they can be; if you used the same samples, the script did not factor in the data in them,
whatever it learned was referencing a different area, so the results you got from USAF_B2 should be just straightforward results without utilizing learning,
That statement is complete nonsense. The location plays no role in the learning at all. The whole idea is that you train on some samples and then can apply that logic to other areas as well.
 
The location plays no role in the learning at all
if placing sample points has no relation to the location and the colors in the vicinity of a given spot,
then a green gradient scale is all we need; unless i am missing something, aren't all other colors basically redundant?
 
With the sample points you indicate which objects are of the class you want to detect. Depending on your filter, the objects will have different characteristics, like color, NDVI, NDWI, etc. But location is not something that is used in this.
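For reference, NDVI is computed purely from band values, which is why it needs the NIR band discussed earlier in the thread and why location never enters into it. A minimal numpy sketch:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, one of the per-object
    characteristics a filter can use; note location is not an input."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids div by zero

# Vegetation reflects much more NIR than red -> NDVI close to +1;
# water absorbs NIR -> NDVI negative.
veg = ndvi(np.array([30.0]), np.array([200.0]))
water = ndvi(np.array([50.0]), np.array([10.0]))
```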
 
new script with 512 grid laptop completed in 15min,
freshrun.jpg


same run on desktop completed in 24 min
desktop run2.jpg


freshrun2.jpg
 
do you think we can talk directly with PostGIS to filter vector and polygon features on the fly in the script process?
 
do you think we can talk directly with PostGIS to filter vector and polygon features on the fly in the script process?
Of course you can. Either enter the filter of those features as a mask in the DetectFeatures step, or use a BooleanFeatures step afterwards to get rid of them.
 
What kind of features do you want to exclude from the detection?

I think for performance the mask will be slightly more efficient, as vector processing is more expensive.
 
i'm thinking of buffering roads, trails, rails, flat surface features and waterbody polygons, etc., anywhere trees shouldn't be, or where we want to make sure they are not, like airport bounds,
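On the PostGIS side, one way to do that is to buffer the "no trees" layers and union them into a single exclusion geometry, which could then be rasterized into the mask. A small Python sketch that only builds the SQL; all table and column names (roads, geom, etc.) are hypothetical, and a metric SRID is assumed so the buffer distance is in meters:

```python
def tree_exclusion_mask_sql(buffer_m: float) -> str:
    """Build a PostGIS query that unions buffered exclusion features
    into one geometry. Table/column names are placeholders; adapt
    them to your own schema."""
    layers = ["roads", "trails", "railways", "waterbodies", "airport_bounds"]
    parts = [
        f"SELECT ST_Buffer(geom, {buffer_m}) AS geom FROM {t}" for t in layers
    ]
    union_all = " UNION ALL ".join(parts)
    return f"SELECT ST_Union(geom) AS exclusion FROM ({union_all}) AS sub;"

q = tree_exclusion_mask_sql(15.0)  # 15 m buffer around every feature
```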
 