Originally Posted by ~>Miha<~
yes, I understand that with the GCS it is possible to set and transmit new navpoints in real-time. For certain applications (e.g. aerial mapping) it is however more useful to generate the waypoints before flight in a systematic way e.g. in a matrix of X * Y meters with a certain stepsize. The "autopilot" should also trigger the camera shutter, so at each navpoint the camera is triggered. The "autopilot" should also log the attitude parameters at each exposure epoch so the user knows the orientation of the camera during the exposure, e.g. nadir angle, as starting parameters for the rectification process.
Flexipilot has been designed for aerial photography and does exactly that.
It now also drives the roll-stabilised nose in the Pteryx UAV.
There is a choice between leg spacing and altitude, and for a real job I think there is little that can be adjusted. For a typical straight camera lens you get a fixed angle of view. From that you can calculate what is needed, or else fly many missions in order to discover how often things turn out to be useless if you enlarge the spacing.
A stabilised camera mount tends to give more reliable results in the sense that no photos are more than a few degrees off; therefore, even in turbulence, there are rarely missed areas.
It turns out that when you want useful photo overlap (typically 60% or more is highly welcome, since in a panic you can even drop a single blurred photo),
then you find yourself with 50-70 m leg spacing from 200 m altitude, or 25-35 m leg spacing from 100 m altitude.
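The spacing figures above follow from simple geometry. A minimal sketch, assuming a hypothetical camera with roughly a 50 deg across-track field of view (the original post does not state the lens used), and taking leg spacing as the photo footprint width minus the desired overlap:

```python
import math

def leg_spacing(altitude_m, fov_deg, overlap):
    """Leg spacing = ground footprint of one photo, reduced by the overlap fraction.

    altitude_m : flight altitude above ground (m)
    fov_deg    : camera's across-track field of view (assumed value)
    overlap    : desired side overlap fraction, e.g. 0.6 for 60%
    """
    # Footprint width on the ground for a straight (non-distorting) lens
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - overlap)

# Assumed ~50 deg FOV; results land near the quoted ranges
print(round(leg_spacing(200, 50, 0.6)))  # ~75 m from 200 m altitude
print(round(leg_spacing(100, 50, 0.6)))  # ~37 m from 100 m altitude
```

With a slightly narrower lens or more overlap margin, the numbers drop into the quoted 50-70 m and 25-35 m ranges.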
Let's analyze what this means for the various prosthetic algorithms that try to produce a photo map during return-to-home.
A plane that returns home from a distance of 500 m should fly straight lines perpendicular to the home course. Each such line, say 200 m long (you expect at least a 200x400 m photo area, right?), must include additional length for turning the plane.
In the end you will have at least 450 m per leg, and 500/50 = 10 legs from 200 m altitude.
The end result is that a plane returning home while photomapping must travel around 4.5 km at 200 m altitude, or as much as 9 km at 100 m (around 350 ft) altitude. The latter is about the full 15-20 min endurance of a small RC model carrying a camera (you must include takeoff, landing, etc.).
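The distances above are just leg count times leg length; a quick sketch of the arithmetic, using the figures from the post (500 m approach, 450 m per leg including turns):

```python
def mapping_distance(approach_m, spacing_m, leg_m):
    """Total distance flown in a lawn-mower pattern covering the approach strip.

    approach_m : distance from home at which mapping starts
    spacing_m  : leg spacing for the chosen altitude
    leg_m      : length of each cross-track leg, including turn allowance
    """
    legs = approach_m / spacing_m
    return legs * leg_m

print(mapping_distance(500, 50, 450))  # 4500.0 m at 200 m altitude
print(mapping_distance(500, 25, 450))  # 9000.0 m at 100 m altitude
```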
Unfortunately, because of the limits of a human's ability to judge the real plane position at a distance (here 500 m), the heading error at the moment RTL is enabled
might be as much as 30 deg from the intended course. Assuming only a 20 deg error, the return-home 'spinal bone' would be sin(20 deg) * 500 m off target, i.e. 171 m. In short, the furthermost leg produced (which is only 200 m long in this calculation) will have somewhere between 200 m and 30 m
of its length over the desired area. You might want to fly longer legs in order to increase the hit rate, but you cannot easily do that with an RC model.
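The offset arithmetic can be sketched directly from the numbers in the post (20 deg heading error, 500 m distance, 200 m legs):

```python
import math

def spine_offset(error_deg, distance_m):
    """Lateral offset of the return-home 'spinal bone' from the intended line."""
    return math.sin(math.radians(error_deg)) * distance_m

off = spine_offset(20, 500)
print(round(off))        # 171 m off target
print(round(200 - off))  # ~29 m of the farthest 200 m leg left over the desired area
```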
In short this goes back to the key point:
You cannot command your plane manually at a distance in order to position it for useful photomapping. You might obtain some results, but the best coverage will be overhead; and that much can already be done by an RC plane (maybe with a copilot onboard) and a few lucky shots in random positions.
For photomapping, missions usually use the platform's range to the max, lifting a lot of extra batteries, etc.; in the end there is no room for manual positioning.
This is exactly the same argument that rules out classic stabilised RC planes from doing photomapping.
Of course one can extend the range to the max and just scan everything overhead within allowed visual range (500m), but that requires custom platform with huge endurance.
Concerning the GCS, I think it simply decreases the success rate for photomapping.
There is nothing clever you can do on the ground if the camera's protective lens cover has not been removed, and the extra equipment, by distracting you from such details, increases the chance of exactly those events.
Add to this the fact that when rain is coming, you start thinking about packing your laptop at exactly the moment when you could let the plane fly its last 2 legs. Plus other problems: a broken/dirty mouse on a wet surface, mosquitos, people trying to pick up your equipment when you get 300 m away from the takeoff table, etc. Then you find you need a second person just to support the ground station, which kills profits whichever method we use.
Plus the argument of time needed to pack/unpack.
A GCS for photomapping is really a bad idea. You need this toy for interactive work: surveillance, fire support/firefighting.