• With the release of FS2020 we see an explosion of activity on the forum, and of course we are very happy to see this. But having all questions about FS2020 in one forum becomes a bit messy, so we would like to ask you all to use the following guidelines when posting your questions:

    • Tag FS2020 specific questions with the MSFS2020 tag.
    • Questions about making 3D assets can be posted in the 3D asset design forum. Either post them in the subforum of the modelling tool you use or in the general forum if they are general.
    • Questions about aircraft design can be posted in the Aircraft design forum.
    • Questions about airport design can be posted in the FS2020 airport design forum. Once airport development tools have been updated for FS2020 you can of course post tool-specific questions in the subforums of those tools as well.
    • Questions about terrain design can be posted in the FS2020 terrain design forum.
    • Questions about SimConnect can be posted in the SimConnect forum.

    Any other question that is not specific to an aspect of development or tool can be posted in the General chat forum.

    By following these guidelines we make sure that the forums remain easy to read for everybody and also that the right people can find your post to answer it.

Object segmentation and machine learning

arno

I have an early Christmas present for scenProc users: today I pushed a new development release of scenProc that contains exciting new features for the texture filter, which I have been working on for the last months.

About half a year ago I started to explore how I could make the vegetation detection in the texture filter more accurate and easier to use. I found a number of interesting articles about this in the literature, and soon it became clear that all of them were using an object-based classification instead of a pixel-based classification. Most were also using the Support Vector Machine (SVM) machine learning algorithm for the actual classification. So I have been working on adding these two features to scenProc as well, and from the testing and experimentation I have done until now I think this will indeed make detection of features in imagery easier.

Below is a video tutorial I made about these new features. I would also mention the sample that is provided in section 6.3.5 of the manual; that should be a good starting point for experimenting with your own project. And if you have questions or suggestions on how to improve this functionality, feel free to post them on the scenProc forum at FSDeveloper.com.

 
Christmas comes early!!! :santahat:

Thank you Arno for the hard work you have done on scenProc. For weeks I have been tugging on your coattails pleading "give me this, give me that", and you have patiently taken my nagging in stride. I will be diving into this on a daily basis after the holidays, if not sooner. Looking forward to it.

Best of the Holidays and Merry Christmas,

Marcus
 
Hi Marcus,

You're welcome. Actually, learning more about machine learning had been on my wishlist for quite some time already, so this was a perfect excuse to do that. The feedback you and other users give is certainly not nagging if you ask me. Often these questions inspire me to try new things (or at least they divert my attention from one bug to another feature :D).

Merry Christmas to you as well!
 
Merry Christmas Arno and many thanks for everything, may the force stay with you!
 
I have gone thru the video for the first time. THANK GOODNESS you created one, ha! Very informative, and I liked how you got into the details: for certain parameters you explained them and showed before-and-after results. I noticed in the video you added an NDWI step but did not go further into it. I have been creating separate TF2 files, one for vegetation and one for water; I do this as I convert the water into shapefiles and use them to create waterpolys and watermasks. In your video example, was that to combine the two types of filtering?

Couple of opening questions:
1. I noticed in the video that, after explaining about object-based classification, you "cleaned up" the filter (as you said) by removing some steps. Actually quite a few, and then you added the Support Vector Machine (SVM) step. I assumed it would be added to the object-based steps to further refine the process, but it sounds like it is a completely different process? You try one or the other, not build upon?

2. Can you please create a similar video but dedicated for water detection?

3. Noticed no dual preview panel option. :( Still possible to add this option?

thx
 
Hi,
I have gone thru the video for the first time. THANK GOODNESS you created one, ha! Very informative, and I liked how you got into the details: for certain parameters you explained them and showed before-and-after results. I noticed in the video you added an NDWI step but did not go further into it. I have been creating separate TF2 files, one for vegetation and one for water; I do this as I convert the water into shapefiles and use them to create waterpolys and watermasks. In your video example, was that to combine the two types of filtering?
That sounds fine. If you want to detect two different types you need two TF2 files for that. You cannot have one filter output both water and vegetation polygons.
1. I noticed in the video that, after explaining about object-based classification, you "cleaned up" the filter (as you said) by removing some steps. Actually quite a few, and then you added the Support Vector Machine (SVM) step. I assumed it would be added to the object-based steps to further refine the process, but it sounds like it is a completely different process? You try one or the other, not build upon?
Not sure if I fully understand your question. In the first bit of the tutorial I explained the steps to segment objects. Then when I moved on to the SVM I cleaned the filter up a bit, but basically the same steps are still used. In principle you need the following stages in your filter:
  1. Segment image into objects.
  2. Calculate the features you want to use for classification for these objects (e.g. mean color, standard deviation, etc.).
  3. Merge all features into one image and feed it to the SVM to classify.
  4. Clean output by removing small detected areas (if needed).
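The four stages above can be sketched outside scenProc with scikit-image and scikit-learn. Everything below (the function, its parameters, the superpixel segmentation) is an illustrative assumption, not scenProc's actual implementation:

```python
import numpy as np
from skimage.morphology import remove_small_objects
from skimage.segmentation import slic
from sklearn.svm import SVC

def classify_objects(img, train_mask):
    """Object-based classification sketch.
    img: float RGB array in [0, 1]; train_mask: -1 = unlabelled,
    0 = negative sample pixels, 1 = positive sample pixels."""
    # 1. Segment the image into objects (superpixels).
    segments = slic(img, n_segments=200, compactness=10, start_label=0)
    n = segments.max() + 1

    # 2. Per-object features: mean colour and colour standard deviation.
    feats = np.array([np.concatenate([img[segments == i].mean(axis=0),
                                      img[segments == i].std(axis=0)])
                      for i in range(n)])

    # 3. Label objects that overlap training samples, fit an SVM, classify all.
    labels = np.full(n, -1)
    for i in range(n):
        marked = train_mask[segments == i]
        if (marked >= 0).any():
            labels[i] = np.bincount(marked[marked >= 0]).argmax()
    known = labels >= 0
    pred = SVC(kernel="rbf").fit(feats[known], labels[known]).predict(feats)

    # 4. Map object labels back to pixels and drop small detected areas.
    return remove_small_objects(pred[segments].astype(bool), min_size=50)

# Demo on a synthetic image: greenish "vegetation" left, grey "ground" right.
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[:, :32] = [0.2, 0.6, 0.2]
img[:, 32:] = [0.5, 0.5, 0.5]
img = (img + rng.normal(0, 0.03, img.shape)).clip(0, 1)

mask = np.full((64, 64), -1)       # -1 = unlabelled
mask[:, :6], mask[:, -6:] = 1, 0   # vegetation samples left, negatives right
out = classify_objects(img, mask)  # boolean per-pixel vegetation mask
```

The `train_mask` plays the role of the positive and negative sample areas drawn in the tutorial; in scenProc the equivalent stages are configured as steps in the TF2 filter instead.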
2. Can you please create a similar video but dedicated for water detection?
I haven't tried water detection that much myself, and I also have not read that many scientific articles about it. The principle shown in the video would not really be different though; the main difference is that you need to put the samples of the object you want to detect in the water and the negative samples on land. As for the features to include, I would start with mean colour, standard deviation, NDWI and NDVI. And you can always try to add the UWI feature as well and see if that helps.
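For reference, NDVI and NDWI are simple band ratios. The standard formulas below assume imagery with a near-infrared (NIR) band, which plain RGB aerial photos do not have; scenProc derives its features internally, so this is only the underlying math (UWI is omitted here):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: high over vegetation,
    which reflects near-infrared strongly and absorbs red."""
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    """Normalised Difference Water Index (McFeeters): high over open
    water, which reflects green and absorbs near-infrared."""
    return (green - nir) / (green + nir + 1e-9)

# Bands are arrays of reflectance values in [0, 1].
veg = ndvi(red=np.array([0.08]), nir=np.array([0.45]))      # clearly positive
water = ndwi(green=np.array([0.25]), nir=np.array([0.04]))  # clearly positive
```

As per-object features these indices are typically averaged over each segmented object, just like the mean colour and standard deviation.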
3. Noticed no dual preview panel option. :( Still possible to add this option?
No, I have been busy the last days fixing a lot of other open scenProc issues, but I haven't looked at the split view yet. I have added it to the to-do list, and as my Christmas vacation is not over yet I am sure it should make it in within a couple of days.

 
I wonder how efficient this will be with building detection ;)
I haven't tried it as we have quite good building polygons available now, but I think it should work as well. Maybe the features used for the detection need to be varied a bit.

The next idea I want to work on is classifying roof types from the imagery. That's about the only thing still missing from my "dream plan" for photoreal scenery that we made 5 years ago or so. I started reading some scientific articles on that yesterday :D
 
No, I have been busy the last days fixing a lot of other open scenProc issues, but I haven't looked at the split view yet. I have added it to the to-do list, and as my Christmas vacation is not over yet I am sure it should make it in within a couple of days.
I have started implementing it now; it might look almost done, but I still need to ensure all interactions work fine :)

[Attached screenshot: the split view in progress]
 
Wow, that was fast. Doing this over your Christmas vacation too (I thought Europeans called it "holiday" :) ). OK, so it was considered just a clean-up. I'll start digging into this today with vegetation to get my feet wet (maybe I should save that terminology for when I start water detection, ha!)

Happy New Year
 
3. Noticed no dual preview panel option. :( Still possible to add this option?
Btw, I just upgraded my monitor. Went from a 24 inch HD monitor to a 27 inch 4K monitor. With all those extra pixels I must say I understand why you want the split view :D
 
I have finished implementing the split view now, so a new development release will be online in about 1 hour with this feature.
 
I was so curious I had to take a look at it right away. It looks just like I envisioned it. I like the added touch that both panels scroll and zoom at the same time.

Maybe I am not understanding how it works. The right preview panel shows just the RGB image sample, exactly how the left panel looks if I click on Input Image? It doesn't change at all. I was expecting the right panel to show the final results if I click on Merge Results?

For example, I click on Input Image and both the left and right panels show exactly the same RGB image. As I click on each step, the left panel changes based on that step's settings, as I would expect, but nothing changes on the right panel. I see no options other than turning split view on or off. I was expecting the left to show the results of the particular step selected, while the right would show the final output (I guess where one could also select Merge Results). Am I just misunderstanding how you set this up?

Otherwise a nice end-of-year update...

Happy New Year!
 
If you right-click on a step it is shown in the second panel, so you can choose what is shown there.
 
Mmmm... when I right-click on a step, all I get is the drop-down list of all the steps? No changes to the right panel. I have the latest version installed.
 
Sorry, I wrote it wrong. I meant Ctrl+click. RTFM as well.
 