I have fixed the UE now; I will push a new development release later this morning once all the regression tests have run on the change. That will also include the performance optimizations I made yesterday.
Any other question that is not specific to an aspect of development or a tool can be posted in the General chat forum.
By following these guidelines we make sure that the forums remain easy to read for everybody, and also that the right people can find your post and answer it.
To my pleasant surprise, the process finished not in 23 minutes but in 7.5 minutes! Is it as accurate as the segmentation? No, but it looks very, very good. Of course I still need to review sample points and other settings, but this method seems very promising to explore deeper.
did a little playing around too, thinking out loud, specifically on detecting certain features. in this example i'll be referring to vegetation...
right off the top, a wide array of colors can be excluded. instead of marking these areas on the image itself, a "macro" filtering color array focusing on a specific color segment could speed things up further.
for example, when doing veg detection, blue, white, gray, black pixels etc. could be excluded/filtered in advance maybe?
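To make the idea concrete, here is a minimal sketch (my own illustration, not the tool's actual filter) of such a pre-mask: a cheap per-pixel pass that discards colors that can never be vegetation before any expensive segmentation runs. All thresholds are made-up placeholders, not tuned values.

```python
import numpy as np

def vegetation_premask(rgb):
    """Return a boolean mask of pixels that could plausibly be vegetation.

    rgb: uint8 array of shape (H, W, 3).
    The thresholds below are illustrative placeholders, not tuned values.
    """
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)

    keep = np.ones(rgb.shape[:2], dtype=bool)
    keep &= ~(b > g)                        # blueish pixels (water, sky)
    keep &= ~(rgb.min(axis=-1) > 200)       # near-white pixels
    keep &= ~(rgb.max(axis=-1) < 40)        # near-black pixels
    gray = (np.abs(r - g) < 12) & (np.abs(g - b) < 12)
    keep &= ~gray                           # low-saturation gray pixels
    return keep
```

Because each pixel is tested independently, a filter like this is trivially parallel and costs far less than segmenting the regions it throws away.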
I am not sure. The GPU is normally good at things that can be done well in parallel (like processing a lot of pixels separately). But the segmentation considers the whole image; it checks neighboring segments of pixels to see if they need to be merged into the same segment. So I am not sure how well that would run on a GPU (I don't have a lot of experience with GPU programming).

i think i get it, the tile has to be segmented first to then be masked out,
are you able to code just the segmentation step to maybe execute on the gpu/dgpu?
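A toy sketch (my own illustration, not the actual implementation) of why the merge step resists naive per-pixel parallelism: each merge decision depends on the segment labels produced by earlier merges, so the work forms a chain of dependencies rather than independent pixel operations.

```python
# Toy union-find region merging: neighbouring pixels whose values differ
# by less than a threshold end up in the same segment.  Each union() call
# depends on the labels produced by earlier calls, which is why this step
# is harder to map onto a GPU than independent per-pixel filtering.
def segment(values, width, threshold):
    parent = list(range(len(values)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i, v in enumerate(values):
        if i % width != width - 1 and abs(v - values[i + 1]) < threshold:
            union(i, i + 1)                # merge with right neighbour
        if i + width < len(values) and abs(v - values[i + width]) < threshold:
            union(i, i + width)            # merge with bottom neighbour
    return [find(i) for i in range(len(values))]
```

By contrast, a per-pixel colour filter touches each pixel independently, which is exactly the shape of work GPUs excel at.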
while making silly suggestions: a zoom in/out like in the image pane area (possibly dynamic?) for the steps map area would make it a bit more manageable with lots of steps,

I have been struggling with bit filters as well. But I am afraid that a zoom will make the text unreadable. So maybe only panning is enough, or do you really think zooming makes sense?
gotta start somewhere
i think managing and balancing code execution between the cpu and the powerful gpus we have on the market today is key for maximum optimization,
I don't see an issue if you can zoom in and out of real-estate sections (that's what I mean by dynamic zoom, not linear centered).
i think it will make the real estate more manageable: the text will be readable up close, and not so much zoomed out (step icons/symbols should still be identifiable), which helps when you want a more logical flow arrangement that's not limited by the view.
for example, i would place my steps at normal zoom but then zoom out to align and connect their logical strings; i would zoom in to an area when needed and zoom out when i want a wider view of the flow.
when adding points i would shrink the steps map area so i could have more room in the image area to mark; if i could zoom out in the steps area to see the entire flow (text is irrelevant at that point),
i could jump from one step to the other for adjustment. the properties on the top right remain the same size and the text there is not affected, just the workflow area itself.
i haven't run the script yet, what's the processing time for your script above?
Is that not just how normal zoom is supposed to work? If you zoom with the mouse, it uses the location under the mouse as the anchor.

non-linear zoom, as in zoom that is not anchored in the center; zoom that zooms in where you point your mouse and scroll in,
as for the "real-estate" area, i am referring to the steps work area as a whole (see area marked in red below),
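The usual formula behind mouse-anchored zoom (a generic sketch, not the tool's actual code) is to keep the world point under the cursor fixed while the scale changes, by adjusting the pan offset:

```python
def zoom_at(mouse_x, mouse_y, offset_x, offset_y, scale, factor):
    """Zoom by `factor`, keeping the content under the mouse stationary.

    Assumed screen-to-world mapping: world = (screen - offset) / scale.
    """
    new_scale = scale * factor
    # World coordinate currently under the cursor:
    wx = (mouse_x - offset_x) / scale
    wy = (mouse_y - offset_y) / scale
    # Choose the new offset so the same world point maps back to the cursor:
    new_offset_x = mouse_x - wx * new_scale
    new_offset_y = mouse_y - wy * new_scale
    return new_offset_x, new_offset_y, new_scale
```

With the anchor at the cursor, zooming out naturally gives the "wider view of the flow" described above, and zooming in lands on whatever step the cursor is over.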
If both of you are using different images to process and have different sample points to train the machine learning with, you are comparing apples with pears.

gave it a try and had to stop the script from consuming all my ram (64GB); overall it ran about 21 min before i killed it,
i'll give that a re-run, i think LOD15 might be too much,
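One generic way to keep peak RAM bounded on a huge raster (a sketch of the standard tiling idea, not something the script above necessarily does) is to walk the image in fixed-size tiles and only hold one tile in memory at a time:

```python
def iter_tiles(shape, tile=1024):
    """Yield (row_slice, col_slice) windows covering a 2-D raster.

    Processing tile-by-tile bounds peak memory by the tile size instead
    of the full image size (edge effects at tile borders would still
    need handling for a real segmentation pass).
    """
    rows, cols = shape
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            yield slice(r, min(r + tile, rows)), slice(c, min(c + tile, cols))
```

The trade-off is that segments spanning tile borders must be stitched afterwards, which is part of why whole-image segmentation tends to be memory-hungry in the first place.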
Curious, are you saying it could be optimized further? Like the possibility of reducing processing time by, say, 50%? If so, would that take days, weeks, or months to achieve?

I don't know yet how much I can optimize it further. The first implementations I made were a lot slower. So I am not sure if I can gain another 50% or not. But I am trying to think of ways to change the implementation of the algorithm that might speed it up further.
i think there's something wrong with the tif i am using (the one posted in the first post); ran the script again and got out of memory after 30 min,
The first post only contains some samples, not the image that the actual script is run on. Clutch Cargo provided me a LOD10 image to test with separately. But if you want to test the performance of the entire script, you would need the same image to be able to compare results. If you are just testing in the texture filter editor, having only the samples should be sufficient.
which image did you link Clutch with?