
WIP: Machine learning in scenProc texture filter

arno

Hi all,

The last couple of weeks I have been working on a feature to use machine learning techniques in the texture filter. Below is a first screenshot of some results, but this is still very much a work in progress. I'll have to do more testing before it becomes part of the development release.

Let me start with a little background on why I started to try this technology. With the current texture filter it is possible to detect vegetation or other features from imagery, but tuning the filter to work correctly can be quite a challenge. Inspired by some scientific papers I read about feature recognition, I hoped that machine learning techniques would help here. Another improvement I read about in those papers is to do the classification on objects, not on pixels. In the current texture filter basically every pixel is classified as vegetation or not based on the rules you code. By first segmenting the image into objects and then classifying these objects, the results should be better, because there is less noise.
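
To make the object-based idea concrete, here is a toy numpy sketch (illustrative only, not scenProc's actual implementation): a noisy per-pixel vegetation score is thresholded once per pixel and once per pre-made segment; averaging the score over each segment suppresses isolated misclassified pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8x8 image: left half vegetation (high score), right half not.
scores = np.where(np.arange(8) < 4, 0.8, 0.2) * np.ones((8, 8))
scores += rng.normal(0, 0.25, scores.shape)   # per-pixel noise

# Pixel-based: threshold each pixel independently (noisy result).
pixel_mask = scores > 0.5

# Object-based: assume a prior segmentation into two segments
# (here simply left half / right half), then classify per segment
# by averaging the score over all member pixels first.
segments = (np.arange(8) >= 4).astype(int) * np.ones((8, 8), dtype=int)
object_mask = np.zeros_like(pixel_mask)
for seg_id in np.unique(segments):
    members = segments == seg_id
    object_mask[members] = scores[members].mean() > 0.5
```

With the same noise, the per-pixel mask typically has stray errors, while the per-object mask classifies both halves cleanly.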

If the machine learning works well, it should take care of determining the exact criteria for what is vegetation or not. As a user you only determine which characteristics are considered. In the screenshot below you can see that I use 6 different characteristics in this filter; these were again inspired by the papers I read. Then, on the sample images, you have to specify a number of locations that are vegetation (the red dots), and afterwards the machine learning algorithm will train itself on that data.
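
As an illustration of what such per-object characteristics could look like (the names and formulas here are my own assumptions, not scenProc's actual feature set), a segment's pixels can be reduced to a few statistics:

```python
import numpy as np

def object_features(pixels):
    """pixels: (n, 3) array of RGB values in [0, 1] for one object."""
    intensity = pixels.mean(axis=1)   # per-pixel brightness
    return {
        "mean_color": pixels.mean(axis=0),   # average R, G, B
        "std_dev": pixels.std(),             # texture roughness
        "max_intensity_diff": intensity.max() - intensity.min(),
    }

# A small, fairly uniform greenish object:
obj = np.array([[0.2, 0.6, 0.2],
                [0.2, 0.6, 0.2],
                [0.3, 0.7, 0.3]])
feats = object_features(obj)
```

Each labeled sample point would then contribute one such feature vector to the training data.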

For this sample image the results are better than I had expected for a first real test. But I need to do more testing and also see how it behaves with sample images that vary more. At least the first results are encouraging, and that is why I wanted to share them with you.

1668110194080.png
 
Arno, this is awesome. I noticed ArcGIS now has tools for classification, which is great, but having this in scenProc is absolutely incredible!
 
Hi Dean,
scenProc has had the feature detection function for quite some years now. I have used it in projects to detect vegetation for areas where no vector data is available. It works well enough to generate autogen from, but since it is a PITA to tune the filter correctly, and with different imagery you need to retune the filter, I am now looking for improvements there.
 

I totally get it and concur, been on the same page for years. :)

I literally watched this on YouTube a week ago, especially regarding water classification (YT Video), and it reminded me of scenProc and all the awesome work you did, so seeing you post this thread was super timely. I don't have much time for large-scale renderings anymore, but what you're doing is totally awesome!
 
Stunning work. I am already baffled by the flow chart above o_O. Hopefully that will boil down to one icon step called "machine learning magic", ha! (You could make the icon a magician's hat and wand!) It will be interesting to see the results for an image that has a sports field, areas of grass, water, etc. Looking forward to playing with this.
 
Hehe, I'm not planning to combine it all in one step, as that reduces the flexibility to change the criteria that are used by the machine learning. But for sure the manual will come with a good sample to get people started.
 
I did a test with some other imagery today; this time I used different images that also include more things like grass, fields of crops, etc. As expected, the detection was less perfect out of the box. So I am now figuring out which parameters I need to tune to make it better, or whether I need to change the implementation here and there.
 
So I see you had a release on 12/5. Looking through the manual, I take it this is an incremental release? That is, the machine learning via the TFE is not part of it yet? Just patiently waiting before I dive back in on it. I looked at the latest release notes but did not fully understand how to read them pertaining to scenProc changes. I think you said unhandled exceptions have been reduced and one could use non-geo TIFF samples now?
 
The last release mainly was for some ModelConverterX changes.

I'm still working on the machine learning. I have implemented a new segmentation step that gives better objects to work with. But the results are still not as good as I want. So I'm tweaking and testing further.
 
Hi,

Time for an update again. As I wrote above, the machine learning worked well on the image I showed, but when I tried more complex imagery with forests and fields it gave worse results. I have now changed the algorithm that segments the image into objects. And I have also changed the machine learning approach: before it was trying to identify one class of objects, now I have made it a two-class approach where I also teach it what not to detect. These two things seem to improve the quality quite a bit.
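
The two-class idea can be sketched with a minimal nearest-centroid classifier (a stand-in for illustration, not the actual algorithm scenProc uses): green dots are positive samples of what to detect, red dots negative samples of what to ignore, and the feature values are made up.

```python
import numpy as np

def train(features, labels):
    """Learn one centroid per class from the labeled sample points."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(model, x):
    """Pick the class whose centroid is nearest in feature space."""
    return min(model, key=lambda c: np.linalg.norm(model[c] - x))

# Columns: [greenness, texture std-dev] for each labeled dot.
X = np.array([[0.7, 0.10], [0.6, 0.12],   # green dots (vegetation)
              [0.2, 0.30], [0.3, 0.35]])  # red dots (not vegetation)
y = np.array([1, 1, 0, 0])

model = train(X, y)
```

Because both classes contribute samples, the decision boundary sits between them instead of being inferred from positives alone.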

Here are some samples from the more complex imagery. The first two images have sample points defining what to detect (green) and what not to detect (red). The other 3 images have no sample points and are thus detected based on the training data of the first two images. I hope that if I train on a few more images the quality will improve even more.

The main obstacle now is that the segmentation algorithm is quite slow, which means it takes too much time in the editor to wait for property updates to show the results. That makes testing and tuning the filter a bit annoying. So I probably need to see if I can speed things up a bit first.

1670657708477.png

1670657798675.png

1670657834116.png

1670657869814.png

1670657901424.png
 
Fascinating o_O So would one first manually assign the red and green sample points on a sample image, and the TFE would cycle thru all the other samples "learning" in the process? Or might one run their normal steps, see where "The Machine" might help, and add sample points at that point? Or maybe both ways? Would this method apply for water detection as well? Ready when you are!
 
Hi,

In theory it should work with things other than vegetation as well; I just didn't try it yet.

You determine yourself which characteristics the machine learning uses to determine the classification rules. I based this filter on a scientific article I found which describes that color, standard deviation, NDVI, NDWI and maximum intensity difference are good criteria to detect vegetation. I guess water might need other characteristics.
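
For reference, NDVI and NDWI are standard remote-sensing indices. Assuming imagery with a near-infrared (NIR) band, the textbook forms are as below (plain RGB aerial imagery has no NIR band, so an implementation may have to approximate them):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high over vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): high over water."""
    return (green - nir) / (green + nir)

# Illustrative reflectance values (not measured data):
veg = ndvi(nir=0.5, red=0.1)       # vegetation reflects NIR strongly
water = ndwi(green=0.3, nir=0.05)  # water absorbs NIR
```

Both indices range from -1 to 1, which makes them convenient, scale-free characteristics to feed a classifier.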

The idea is that you train the machine learning with the samples in the texture filter editor; then afterwards you can run the trained algorithm on all your imagery of a certain area. I think I would normally also include a few samples in the texture filter editor that the algorithm has not been trained on, because that's a good indication of how well the training worked. But as I am still experimenting myself, I don't know yet what the best approach is.
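
That holdout idea can be sketched like this (illustrative only; the 80/20 split and the numbers are my own assumptions, not scenProc's behavior): dots the algorithm never trained on give an honest estimate of how well the training generalizes.

```python
import numpy as np

rng = np.random.default_rng(7)

n_dots = 50                      # total labeled sample points
ids = rng.permutation(n_dots)    # shuffle the dot indices
train_ids = ids[:40]             # 80% used to train the filter
holdout_ids = ids[40:]           # 20% kept aside for validation

# After running the trained filter, compare its predictions to the
# labels of the held-out dots only:
labels = rng.integers(0, 2, n_dots)   # made-up ground truth
preds = labels.copy()
preds[holdout_ids[:2]] ^= 1           # pretend two dots came out wrong
accuracy = (preds[holdout_ids] == labels[holdout_ids]).mean()
```

A high training accuracy with a much lower holdout accuracy would indicate the filter has overfitted to the training dots.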

You would have to manually assign the green and red dots on the images that you want to use for training indeed. But that goes relatively quickly. The machine learning algorithm only uses the red and green dots for its training. So sample images that you did not assign sample points to are not used in the training.
 