Aug 13, 2015, 11:13 PM

More machine vision tests


The quest for machine vision moved on to optimization, because it most likely will have to run on the 900MHz Raspberry Pi in a 1st test.

A quick test at 160x120 showed complete loss of synchronization at that resolution when the lighting differed. There was a definite advantage to 640x480 & above, with 320x240 as the bare minimum. Using color for keypoint matching was hopeless, whether the color was 160x120 or 320x240, but color was required for decent motion searching. The optimum arrangement was a 640x480 source image, downsampled to 320x240 for all processing, & further downsampled to 160x120 for the 1st pass of motion searching.
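A rough sketch of that resolution pipeline in Python with OpenCV; the file name, function names & resize step are illustrative guesses, not the actual code from the project:

[code]
import cv2

def build_pyramid(frame_640x480):
    # 320x240: the resolution used for all processing
    proc = cv2.pyrDown(frame_640x480)
    # 160x120: used only for the 1st pass of motion searching
    coarse = cv2.pyrDown(proc)
    return proc, coarse

# "drive.mp4" is a placeholder for one of the test videos
cap = cv2.VideoCapture("drive.mp4")
ok, frame = cap.read()
if ok:
    frame = cv2.resize(frame, (640, 480))   # 640x480 source image
    proc, coarse = build_pyramid(frame)
[/code]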

Synchronization of any kind between 2 videos taken during the shady time of day was impossible; the shadows changed position too much. Synchronization between a shady video & a full daylight video was still quite good, though optical flow was hopeless. Since the full daylight video had some compatibility with all the other videos, all the reference videos should be in full daylight. FLANN pair matching gave results identical to brute force matching, but was much slower.
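For reference, a minimal comparison of the 2 matchers in OpenCV; the ORB detector & its parameters are assumptions, since the post doesn't say which keypoints were used:

[code]
import cv2

orb = cv2.ORB_create(nfeatures=1000)

def match_frames(gray_a, gray_b, use_flann=False):
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if use_flann:
        # LSH index for binary descriptors: gave the same pairs here, but ran much slower
        index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
        matcher = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    else:
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    return matcher.match(des_a, des_b)
[/code]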

Another drive, slightly after full daylight but before sunset, was pretty awful at optical flow, though it nailed keypoint matching. Reduced the reference frame window to 10 & the reference frame rate to 1 frame every second, which didn't affect the keypoint matching.
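One way to read that, as a rough sketch; the rolling-window interpretation & the 30fps capture rate are assumptions:

[code]
from collections import deque

FPS = 30                 # assumed capture rate
REF_WINDOW = 10          # reference frames kept for matching
REF_INTERVAL = FPS       # 1 reference frame every second

reference_frames = deque(maxlen=REF_WINDOW)

def maybe_add_reference(frame, frame_number):
    # Keep only the most recent 10 reference frames, adding 1 per second of video
    if frame_number % REF_INTERVAL == 0:
        reference_frames.append(frame)
[/code]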

Made the logarithmic motion search use a 2x downsampled image, which gave better results than either the brute force search or a logarithmic search on the full resolution image. After 16 years of motion searching, it finally became clear that the image needs to be downsampled & logarithmically searched for the best results.
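A minimal sketch of the idea, assuming "logarithmic search" means the classic step-halving pattern; the block size, search radius, SAD metric & file names are illustrative, not taken from the project:

[code]
import numpy as np
import cv2

def sad(a, b):
    return np.sum(np.abs(a.astype(np.int32) - b.astype(np.int32)))

def log_search(ref, cur, bx, by, size=32, radius=16):
    """Find the motion of the block at (bx, by) from ref to cur."""
    block = ref[by:by + size, bx:bx + size]
    best_dx, best_dy = 0, 0
    step = radius
    while step >= 1:
        best = None
        # Test a 3x3 pattern around the current best, halving the step each pass
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = bx + best_dx + dx, by + best_dy + dy
                if 0 <= x <= cur.shape[1] - size and 0 <= y <= cur.shape[0] - size:
                    cost = sad(block, cur[y:y + size, x:x + size])
                    if best is None or cost < best[0]:
                        best = (cost, dx, dy)
        best_dx += best[1]
        best_dy += best[2]
        step //= 2
    return best_dx, best_dy

# Downsample 2x before searching, as described above ("ref.png" & "cur.png" are placeholders)
ref_small = cv2.pyrDown(cv2.imread("ref.png", cv2.IMREAD_GRAYSCALE))
cur_small = cv2.pyrDown(cv2.imread("cur.png", cv2.IMREAD_GRAYSCALE))
dx, dy = log_search(ref_small, cur_small, 64, 64)
[/code]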

The logarithmic search with downsampling went 7x faster than the brute force search, & the downsampling itself had a negligible impact on speed. By now, the full algorithm ran at 14fps on a single 4GHz CPU.

It became obvious that there was no way it could train itself from videos in different lighting. Manually setting waypoints was the only way. The video would be stabilized in an editor, then manually tweaked to get the path in the center. Then the tweaked video would be compared with the original to get the location of the path in the images.
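As a rough illustration of that last step, assuming the manual tweak was a pure horizontal shift & using phase correlation to recover it; neither assumption is confirmed by the post:

[code]
import cv2
import numpy as np

def path_x_in_original(original_gray, tweaked_gray):
    # Estimate the per-frame shift between the original & the re-centered frame
    a = np.float32(original_gray)
    b = np.float32(tweaked_gray)
    (shift_x, shift_y), _ = cv2.phaseCorrelate(a, b)
    # The path was moved to the frame center, so undoing the shift locates it in
    # the original image (the sign may need flipping, depending on convention)
    return original_gray.shape[1] / 2 - shift_x
[/code]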