• With the release of FS2020 we see an explosion of activity on the forum, and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy. Therefore we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use or in the general forum if they are general.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can post tool-specific questions in the subforums of those tools as well, of course.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

P3D v5 Water detection observations and UE's

Some extra test images with water are always welcome. At the moment it seems the deep learning algorithm works better for the US data than it does for the Dutch data on which I trained it. But that might also be because there is just a lot more water in the Dutch data, with all kinds of small canals and streams that are not always detected yet.
 
I've uploaded a couple of images and their matching image samples for you to use in your testing. The area is in the state of Washington, USA, just a little northwest of the town of Wenatchee, should you want to get your bearings in Google Earth. Included are two Area images (large, 1.3 GB each, at a resolution of 60 cm) and 375 4-band sample images. That's way more than I would ever load into scenProc to process, but they are created randomly, so hopefully they capture all the needed water areas in the image.

This area was particularly tricky for me due to the changing river water colors, the white rapids and the tree shadows overhanging the water, which I think will be a perfect challenge for scenProc. ;) I ended up having to go back and manually correct these areas despite TFE's best efforts. I've sent an email with the link... any questions, just grab me. Good luck!
 
Thanks, I'll have a look at how the current training of the algorithm works on this data. It will be interesting to find out.
 
Hi,

I ran a quick test on parts of the big images with the current algorithm. The results are not that bad, I think, but indeed the rapids are not always caught. Probably because my training data did not include such features. In the second image I have colored the areas that are likely water red to make them easier to see (drawn over the input image).
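As an aside, a minimal sketch of how such an overlay can be produced; this assumes the network outputs a per-pixel water probability, and the function name and threshold value are illustrative only, not how scenProc does it internally:

```python
# Illustrative sketch only (not scenProc internals): paint pixels whose
# predicted water probability exceeds a cutoff red on top of the input
# image, similar to the overlay described above.
import numpy as np

def overlay_water(rgb: np.ndarray, probability: np.ndarray,
                  threshold: float = 0.5) -> np.ndarray:
    """rgb: HxWx3 uint8 image; probability: HxW water likelihood in [0, 1]."""
    out = rgb.copy()
    out[probability >= threshold] = (255, 0, 0)  # mark likely water in red
    return out
```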

How does this compare to the results you have been able to achieve with your current texture filter?

WA_01.png

WA_01_water.png


WA_02.png

WA_02_water.png
 
Just to clarify, these results were not generated from within scenProc yet. I have the deep learning algorithm running externally now, as Python code. It can process the images, but I am still trying to figure out how I can use the trained model in C# (the language in which scenProc is written) as well. That seems to be a bit of a challenge :D
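One possible route (just a sketch under assumptions, since the thread doesn't say which framework the model uses): export the trained model from Python to ONNX, which can then be loaded from C# with the ONNX Runtime library. Assuming a PyTorch model and made-up file names:

```python
# Hypothetical sketch: export a trained PyTorch segmentation model to ONNX so
# that it could be consumed from C# (e.g. via ONNX Runtime). The model file,
# tile size and band count are assumptions for illustration.
import torch

model = torch.load("water_model.pt")  # previously trained network (assumed)
model.eval()

# One dummy 4-band (R, G, B, NIR) tile of 256x256 pixels, batch size 1.
dummy = torch.randn(1, 4, 256, 256)

torch.onnx.export(
    model, dummy, "water_model.onnx",
    input_names=["tile"], output_names=["water_probability"],
    dynamic_axes={"tile": {0: "batch"}, "water_probability": {0: "batch"}},
)
```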
 
I would say the top image is much better than what I was able to produce. I see no "water" on the land due to shadows (like where that structure is), yet it appears to be correctly producing water over tree shadows that overhang the river. The areas are more solid in their coverage. I believe the "white water" rapids could be acceptable, as it would just keep the white water image intact.

The lower image I find a bit more challenging and closer to what I was experiencing, yet overall still better results than what I was getting. That's again due to no water out on land shadows, as those take a long time to clean up and it is so easy to miss a pixel or two (or a hundred, ha!). It's interesting to see that it did not capture the water on that little section at the bottom next to the road. The water appears to be more of a solid color... just a thought. But you're even getting a lot of the little white ripples on the lower right where mine would probably have had lots of holes.

I would also try a sample next to the river that has lots of structures and check out their shadows... to see if they create water.

Overall, I definitely think you are on the right track.
 
I'll run it on some area with buildings tonight. Yes, it was interesting to see that some of the still water was also missed. I guess the color and NIR combination differs from that of most of the water that the algorithm was trained on.
 
I did a quick test in a more urban area. I don't see any water being detected in areas with building shadows. Only in some forest areas did I see a little gray color for likely water, but that can easily be filtered out by setting a threshold before considering the output really as water.

WA_03.png

WA_03_water.png


Now let me continue to see if I can run the algorithm from scenProc in some way as well....
 
That looks really good. I would always see red spots in the shadows of those buildings... like that main street. This detection seems spot on. Funny, that one little area on the river that it did not capture? The water color looks a little lighter than the tree shadows across from it, but it looks similar to some colors down the river. Could that be a clustering issue, ya think?
 
I'm slowly making progress in running the algorithm inside scenProc. Here are some first results where the algorithm has been run inside the texture filter editor. The results are still slightly different from the external Python script, so I need to do a little more work on it. But happy with the progress :D

1689418110527.png
 
Ah, it only took three steps! Impressive. Looks like it is coming along "swimmingly" (pun intended). Yeah, little differences between scenProc and Python. I'd be curious to hear what the detection/processing time will be. Don't know if it will be ready to test, but I will be resuming "water" in about 4-5 weeks. It's more out in the semi-arid area of Bakersfield, California, so I don't expect many issues at all, even using the current version.
 
I have fixed the last differences as well now, so I'm happy with how it runs in scenProc now.

I didn't do a lot of performance testing, but it seems to run relatively fast. The step is pre-trained, so it doesn't have to process sample images or anything like that. I'll try a test run on a bigger image later.

I can probably give you a test build later if you want to try it. I think it will only come into the development release after I have made a new stable release of version 3.1.
 
I ran a performance test. It takes about 4 minutes to process one of those 1.3 GB test files you provided.

I do see that sometimes water is missed (just like I saw with the Dutch data). And in some forests with a lot of shadow I do doubt the results as well.

Guess I'll need to retrain the algorithm to get better results. But at least I'm happy I can run it from scenProc now.
 
"The step is pre-trained, so it doesn't have to process sample images or so"
Sounds like you are saying we will no longer have to provide samples. Am I hearing this correctly? Yes, I will be interested in getting a beta (or alpha?) of it, but no rush. I'm so busy with vector data, library objects, etc., that I won't be able to re-test water until the beginning of August. But that may work out for you too. Curious, you use the term "machine learning" for vegetation and water detection. Would this also be considered a type of AI?
 
You are correct that no samples are needed. The algorithm has been trained on over 6000 samples already, each with reference data for where water is.

To be honest, I don't exactly know when something is artificial intelligence and when it is machine learning. To me they seem relatively similar, but I'm sure experts will say they are very different things.
 
Wow! That's a nice time saver, not needing to create samples. Would more samples help it even more? I'm sure I could provide you with many thousands more down the road... different geographic regions (deserts, mountains, coastlines), as the US is so geographically diverse.
 
But your samples don't have accurate water data to match do they? That would be needed to be able to train the algorithm.

I think I'll try to create more samples here for training and see if that improves things.
 
"But your samples don't have accurate water data to match do they?"

Not sure what you mean by accurate water data. Would I need to supply some form of data in addition to the samples? If that is the case, then I guess you are correct; I don't have the samples you would need.
 
To be able to train the algorithm there needs to be reference data for where the water is as well. Otherwise the algorithm can't learn, of course.
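For illustration only (the file names and format are made up, not a scenProc requirement): each training sample is an aerial tile paired with a mask of the same size that marks which pixels are water.

```python
# Illustrative only: what a supervised training pair looks like. Each aerial
# tile needs a matching mask marking the water pixels; the file names here
# are made up for the example.
import numpy as np
from PIL import Image

tile = np.array(Image.open("tile_0001.tif"))        # 4-band aerial sample
mask = np.array(Image.open("tile_0001_water.tif"))  # same size, 1 = water, 0 = land

assert tile.shape[:2] == mask.shape[:2], "mask must align pixel for pixel with the tile"
```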
 
I did a test to train the algorithm with reference data that also includes smaller streams, but the results became less accurate from that. Probably because the location of these smaller streams matches the photo less accurately.

Next step is to generate more samples of the training data I used initially and see if that improves the results. But next week we'll have a short vacation, so that will have to wait a little longer.
 