The homemade steadicam didn't work. The problem is there's no friction, so it oscillates like a pendulum. A brushless gimbal creates friction in software. The $600 in a real steadicam is for the secret lubricant with just the right amount of friction.
Then there was the swinging that would happen from moving it horizontally & the amount of weight required to balance it. The weight could be mitigated by moving the universal joint right up against the camera, but the friction problem & the swinging would remane.
All steadicams have a slight swing from horizontal acceleration. The operator has to be on top of it or get it as close to top heavy as possible.
The things normal goo tube videos never show. The brushless gimbal fixes a lot of problems with the steadicam concept, even if it can't eliminate positional translation. Another invention fails & the pile of scrap wood gets bigger.
A single axis gimbal seems to be the ideal for a body mounted sports cam, but once you have the electronics, the problem is how to fabricate it. The camera needs to be centered in front of the motor shaft, but the motor shaft needs a standoff. It almost needs the same frame as a 2 axis mount, which defeats the purpose of going with 1 axis. Another run timelapse is going to be handheld.
With only 1 axis, you're better off with an old school steadicam. It's lighter than a 3Ah battery.
It's pretty small. Laying it out & the cost of flying outside are the mane things preventing progress on an outdoor vehicle. It seems to be end of lifed already. There is 1 goo tube video of someone struggling with a phone cam & a computer to show how accurate it is.
It's not as accurate as sonar. The mane debate is whether an extremely powerful sonar should be used instead or if sonar should be combined with it. If the outdoor vehicle never goes above 25ft or if it's only flown manually, there's no need for a barometer.
An outdoor vehicle also requires GPS. After 6 years, uBlox is still the gold standard & still the same gold standard price. No-one has been able to surpass uBlox-6 for the same price.
There is debate on whether to use GPS, optical flow, or both. GPS, optical flow, sonar, & barometer would give the absolute best results. It's not likely to use autopilot except for a loss of signal, since brushless gimbals have replaced the autopilot as the source of stability.
There are a lot of people going after the autonomous following camera concept. Their design is standardizing on using IR to point a camera at a subject, using GPS to follow the subject, brushless gimbal to stabilize the camera. It could be done with just optical flow, sonar, & IR. As long as you had 1 eye monitoring the vehicle, it would be able to record 10 minutes of whatever you did at a time.
Flying outside costs $100/hour in crash damages. No way around it. It's not like a 70g indoor toy which can withstand crashes.
With the Kinect One release, time of flight distance measurement in IR has come out of nowhere to become the next big thing. Instead of measuring intensity of reflected light, it measures the actual time of flight. The key was developing cheap transistors fast enough to switch on & off a photodiode in the time it takes light to reflect from the subject.
So the IR LEDs switch at 15-30MHz. The sensor switches on & off in the amount of time it takes the light to reflect back from the farthest points. Light from the nearest points returns during more of the sensor's on-time than light from the farthest points, allowing a capacitor to charge in proportion to the time of flight. There is extra calculation from a 2nd camera to separate distance from intensity.
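The arithmetic can be sketched for a continuous-wave sensor, where distance falls out of the phase shift of the modulated light. The 30MHz figure comes from the text above; the function names & everything else are illustrative, not Kinect internals.

```python
import math

# Continuous-wave time of flight: distance is recovered from the phase
# shift of the returned, modulated IR light.
C = 299_792_458.0        # speed of light, m/s
F_MOD = 30e6             # 30MHz modulation, per the text

def distance_from_phase(phase_rad):
    """Phase shift of the returned light -> one-way distance in meters."""
    # The round trip takes phase/(2*pi) of one modulation period.
    round_trip_sec = (phase_rad / (2 * math.pi)) / F_MOD
    return round_trip_sec * C / 2

# Maximum unambiguous range: a full 2*pi of phase wraps around.
max_range = C / (2 * F_MOD)   # ~5m at 30MHz
```

The higher the modulation frequency, the finer the depth resolution but the shorter the unambiguous range, which is one reason the LEDs switch over a 15-30MHz span instead of a single frequency.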
It's extremely accurate, giving 1mm accuracy at a distance of 2 meters. It can see your blood vessels beating & recognize faces much more easily than imaging alone. Whether it or radar becomes the standard for UAV guidance depends on the cost, size, accuracy.
A single time of flight camera could probably resolve 3D position well enough to fly a vehicle. Automotive radar probably can't produce the same quality image, because the wavelength is much longer than light. Resolving 3 dimensions from a single sensor would always require an enormous amount of computing power, compared to using 3 sensors.
Kiwipedia pegs the beginning of time of flight cameras at 2000. It took improved materials & the popularity of gesture control for it to break through. If only it could be miniaturized enough to put on a monocopter. As far as building a miniaturized Kinect One from scratch, fuggedaboudit.
Got the tiny motors up to 0.9A with active cooling. The pitch motor still gets 0.3A without active cooling, to minimize vibration. The tiny motors have the least cogging artifacts, so active cooling might get them all the way.
The indoor blimp has been seen as the most practical indoor UAV platform. It takes very little power, is stable enough to navigate tight spaces with little infrastructure, has low enough energy to coexist with someone not paying attention to it. The mane problem is it needs a lot of space & the helium can't be recompressed when it's not flying.
The Google video highlighted how a big company solves problems. The guys they hire have an MS or PhD in a very specific part of engineering from a very prestigious school, high grades, lots of formal training in data structures, complexity analysis, memorizing interview questions, no experience designing UAV's.
So you're looking at a Google guy putting a 7.4V, 2Ah battery, Raspberry Pi, & webcam on a blimp, but in a stroke of genius, he took the case off the wifi dongle to make it lighter. The guy is being interviewed by another guy with a camera that can't focus, but it's the latest camera & he has to use the latest camera.
Maybe if you don't have a lot of time to learn about all the different embedded computers, it's a way of doing it, but it shows how the hiring process has gotten really narrowly focused on people who do just the 1 thing they're formally trained for & get really helpless outside that narrow focus.
Much better than 2 axes, but not perfect. It needs a more rigid structure, more powerful motors, softer wires. The PID gains are about as good as they can get.
There is a routine for equalizing the power as battery voltage drops, but it's not a simple linear relationship between PWM & voltage. If you double the PWM when voltage drops by half, it blows up the motors. The easiest solution was to fudge a linear PWM gradient from 9 - 12V that kept the motor current in a useful range, for the useful battery voltages.
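A minimal sketch of that fudged gradient, with hypothetical endpoint duty cycles standing in for the real tuned values (which aren't given): the duty cycle is interpolated linearly over the useful 9 - 12V range & clamped outside it.

```python
# Sketch of the linear PWM fudge described above. The endpoint duty
# cycles are made-up placeholders, not the actual tuned values.
V_MIN, V_MAX = 9.0, 12.0                 # useful battery voltage range
PWM_AT_VMIN, PWM_AT_VMAX = 0.85, 0.60    # hypothetical duty cycles

def pwm_for_voltage(v):
    """Interpolate the duty cycle linearly between the 2 tuned endpoints."""
    v = min(max(v, V_MIN), V_MAX)        # clamp to the useful range
    frac = (v - V_MIN) / (V_MAX - V_MIN)
    return PWM_AT_VMIN + frac * (PWM_AT_VMAX - PWM_AT_VMIN)
```

The point is that the endpoints are tuned for safe motor current rather than derived from an ideal doubling-of-PWM-at-half-voltage relationship, which blows up the motors.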
Building a more useful gimbal with bigger motors is a matter of money.
The algorithm from yesterday worked as described. The yaw motor could move anywhere without a loss of control. It was the 1st time the yaw coupling problem was solved outside China. At least for a short time, no-one else in America could do it. The mane problems are now a very wobbly frame, lots of cables getting tangled, too much cogging in the DT700.
After fighting I2C glitches forever & remembering a Logitech webcam had its cable shield soldered at both ends, decided to solder both ends of the shield. That ended all the interference. So for an I2C device where the only return path is the signal cable, you need to ground both ends of the shield.
Basically, you need a table of some kind for each camera axis:
motor 1 PID gains
motor 2 PID gains
motor 3 PID gains
The outputs of the 3 sets of PID equations for each motor are summed to get the motor steps. The hard part is computing the table. All 3 motors have a unique set of gains in the upright position. They have completely different gains in the 2 sideways positions.
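As a minimal sketch of the table & summation (the gain numbers are placeholders, not tuned values), each axis error runs through 3 PID sets & each motor sums its share from all 3 axes:

```python
# Sketch of the per-axis gain table. GAINS[axis][motor] = (P, I, D).
# All numbers are placeholders, not tuned values.
GAINS = {
    'roll':  {'m1': (2.0, 0.0, 0.1), 'm2': (0.0, 0.0, 0.0), 'm3': (0.5, 0.0, 0.0)},
    'pitch': {'m1': (0.0, 0.0, 0.0), 'm2': (2.0, 0.0, 0.1), 'm3': (0.0, 0.0, 0.0)},
    'yaw':   {'m1': (0.3, 0.0, 0.0), 'm2': (0.0, 0.0, 0.0), 'm3': (2.0, 0.0, 0.1)},
}

def motor_steps(error, integral, derivative):
    """error/integral/derivative: dicts keyed by axis. Returns motor steps."""
    steps = {'m1': 0.0, 'm2': 0.0, 'm3': 0.0}
    for axis, motors in GAINS.items():
        for motor, (p, i, d) in motors.items():
            # Sum each motor's contribution from every camera axis.
            steps[motor] += (p * error[axis]
                             + i * integral[axis]
                             + d * derivative[axis])
    return steps
```

The hard part isn't this summation, it's filling the table in for every orientation, since the off-diagonal entries are what change as the gimbal tilts.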
There are 2 mane gradients for the PID gains: The roll & yaw motors trade places as imu2 pitch goes from 0 to 90. The yaw motor is replaced by the pitch motor as imu2 roll goes from 0 to 90.
The 2 mane gradients aren't linear. They follow a sine curve. When IMU2 is pitched 20 deg over, it adds just 12% of the roll. When it's at 45 deg pitch, it adds 50% of the roll. When it's at 70 deg pitch, it adds 88% of the roll.
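Those percentages are consistent with a sin² curve (sin² 20° ≈ 0.12, sin² 45° = 0.5, sin² 70° ≈ 0.88), so assuming that's the intended shape, the blend factor is a one-liner:

```python
import math

# The quoted percentages (12% at 20 deg, 50% at 45 deg, 88% at 70 deg)
# match a sin^2 curve, so the blend is sketched that way here. This is
# an inference from the numbers, not a confirmed formula.
def roll_share(pitch_deg):
    """Fraction of the roll correction picked up at this IMU2 pitch."""
    return math.sin(math.radians(pitch_deg)) ** 2
```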
If each motor has 3 sets of gains (upright yaw, 90 deg pitched yaw, & 90 deg rolled yaw) & each set of gains has 3 parameters (P1, P2, D2), a total of 27 values need to be manually tuned. The best way to tune it is to tune the fully deflected states.
Only 1 guy outside a corporation ever got the yaw coupling to work:
But it was still unstable when the pitch & yaw motors were parallel. Having 2 motors share the same axis is a buster. You want yaw to be stabilized as much as possible, until the yaw motor is sideways. It requires gradually taking away more & more latitude from 1 of the motors until it's rigid. The easiest solution is to always have the pitch motor control 100% of pitch, with the yaw motor tapering its gains in the yaw direction as it goes horizontal. This doesn't maximize all the available degrees of freedom of the motors.
Without the brute force kinematic search, the table has the motors fighting each other. Doing the brute force search fast enough would be real hard. No-one has tested a Zenmuse to these extremes, but it probably does it right.
The 2 axis gimbal is better than handheld, but it's not a miracle, especially when running. The most useful stabilization is roll. The software is most effective stabilizing pitch & yaw, leaving roll as the only direction which requires mechanics. It still glitches from lens glare, but there's no way it's going to handle running motion without software stabilization.
In a world where the data rate on any phone is virtually infinite, the most bandwidth which can practically be made available to a cellphone user in a month is 3GB. That's 10 minutes of HD video/month. It's been that way forever & it's physically limited by the technology.
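The 10 minute figure checks out if you assume camera-grade HD around 40Mbit/s, which is an assumed bitrate (roughly action-cam quality), not anything the carriers publish:

```python
# Back of the envelope for the "3GB = 10 minutes of HD video" claim.
# The 40 Mbit/s bitrate is an assumption, roughly action-cam HD.
cap_bytes = 3 * 1024**3          # 3GB monthly cap
bytes_per_sec = 40e6 / 8         # 40 Mbit/s -> 5 MB/s
minutes = cap_bytes / bytes_per_sec / 60
print(round(minutes, 1))         # roughly 10 minutes
```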
The "unlimited" data on T-mobile is actually throttled to 128kilobit, after the 1st 500MB.
With a phone transmitting omnidirectionally & a tower transmitting omnidirectionally, only 1 user can transfer data at a time. If there weren't bandwidth caps, all the users in a cell would use all the bandwidth, all the time, probably limiting the data rate to 128kilobit, which was the last rate at which unlimited data was sustainable.
The limit can technically be overcome in the downstream direction. A phased array can direct all the energy to a very narrow angle. Hundreds of transmitters can be put on a single chip to handle many narrow angle transmissions. It might be a large assembly. In the upstream direction, the phone would still be omnidirectional.
It seems inevitable to have highly directional phased arrays in all towers & phones in the future. Getting there requires many small steps: first analog, then digital, then software defined radio, then downstream phased arrays, then bidirectional phased arrays, all introduced in capitalist Asia & slowly introduced in the socialist west as money becomes available.
Unfortunately, the days of free phones with a 50 year contract are over. There are just $600 smartphones or $200 feature phones with 24 months of payments. You know you're old if you remember the 1st iPhone being an astounding $300.
The current, most desired languages all became popular because of a dot com. If they win big, everyone else wants to use the programming language they used at the time, so you can track the demand for skills in a given era by the big dot com of the day. When ebay won big, everyone started looking for Java developers. When Facebook won big, everyone started looking for PHP developers.
Kiwipedia shows all the languages they currently use, but not what they were using when they became a hit. The Goog uses Go, but Go wasn't invented until over a decade after they started. Facebook was famously written in PHP, but now uses C++.
tumblr: ruby on rails
twitter: ruby on rails
Generally, dot coms winning big before 2000 used C++ & Java. Dot coms from 2000 - 2009 used Python & PHP. Today, it's Ruby on Rails.
Finding the best way to write web applications is still a work in progress. That's why it's a golden age in languages. No-one knows what the best language is. Clearly, interpreted languages are better for it than compiled languages.
Yaw is coupled to roll/pitch when tilted outside a very narrow angle. It used magnetic heading, which seemed to be distorted by the motors. Yaw is a lot of weight & power consumption for something that's always going to need software stabilization. The DT700 rewound with 130 turns of 0.2mm was never perfectly smooth, even at 0.6A.
Eventually, an open source solution to the yaw direction will appear. In the meantime, it's going back to 2 axes.
A rare photo of how someone attached a gimbal to a motor. It was indeed drilling & tapping aluminum.
Shot some video with the 3 axis gimbal. The Turnigy 2205 1350kV had enough torque to do roll, if it got hot enough & the derivative was longer. Taking enough care to keep the yaw motor vertical made the yaw coupling bearable.
There's a lot of wobble from all 3 motor shafts, aluminum flexing, high impact. It needs a lot of cushioning to eliminate the jarring motion, but it can't be eliminated from running. The yaw motor has to be less effective, to keep from shaking all the stuff hanging off it.
After thinking about the problem, the way to decouple yaw from roll/pitch is to have a 2nd IMU on the yaw beam. The yaw beam is always perpendicular to roll. It doesn't need a compass. The relative roll/pitch of the 2 IMU's determines how much yaw is contributing to roll/pitch. It would take fusing the 2 IMU's to determine just the derivatives & rate feedback of the motors.
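A sketch of how the relative tilt between the 2 IMU's might be turned into a mixing rule. The function name & the exact trig are guesses at the idea, not a tested algorithm: upright, the yaw motor's output is pure yaw; pitched 90 deg over, it acts as roll; rolled 90 deg over, it acts as pitch.

```python
import math

# Sketch of the 2 IMU decoupling idea: the relative tilt between the
# yaw beam IMU & the camera IMU says how much of the yaw motor's
# rotation lands on each camera axis. Names & trig are hypothetical.
def yaw_motor_mix(rel_pitch_deg, rel_roll_deg):
    """Return (yaw, roll, pitch) shares of the yaw motor's output."""
    p = math.radians(rel_pitch_deg)
    r = math.radians(rel_roll_deg)
    yaw_share = math.cos(p) * math.cos(r)   # upright -> pure yaw
    roll_share = math.sin(p)                # pitched over -> yaw acts as roll
    pitch_share = math.sin(r)               # rolled over -> yaw acts as pitch
    return yaw_share, roll_share, pitch_share
```

No compass needed, since only the relative roll/pitch between the 2 IMU's enters the mix.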
There's little point in having a 3 axis gimbal without going all the way, but there's little point in having a 3rd axis, either. It's too heavy to actually run with. Video is always going to need software stabilization because of the play in the structure. It's never going to be stable enough for long exposures.
Airborne video seems good enough with only 2 axes. It's probably better to wait for the 3 axis solution to make it to the open source version.
As soon as all 3 motors were fired up, it immediately became clear that the roll motor needed to be a lot more powerful or the roll assembly needed to be a lot more compact. There's too much inertia in that direction.
It also became clear why a 3 axis gimbal hasn't appeared from Alexmos or open source. The yaw motor fights the roll & pitch motors if it isn't level. The pitch motor handles crosstalk better, since it has less inertia. The roll motor can't handle any crosstalk before it oscillates.
In very little tilting, the yaw motor becomes a roll or pitch motor. The roll motor also becomes a yaw motor when tilted. It could probably be bearable if the motors were powerful enough, but never perfect.
A more complex feedback model is required, which predicts the effect of each motor on the IMU, after translation through the downstream motors. That would require knowing the orientation of each motor.
The DJI Zenmuse does it perfectly. It seems to have potentiometers on all the motors. No-one has ever torn down a DJI. It's only a matter of time before the extra math makes its way into open source. It'll probably use an IMU for each motor, so people can still make their own frames.