Jack Crossfire's blog - Page 54 - RC Groups
Posted by Jack Crossfire | Sep 26, 2012 @ 02:10 AM | 4,939 Views
A pan/tilt mount has become the least common denominator for indoor position sensing, whatever the vision algorithm is. The Kinect realized it. Webcams have had pan/tilt mounts forever. It's no longer prohibitively expensive.

Distance sensing worked well enough to go ahead with the pan/tilt mount. It turned out the horizontal drift came from how the camera clock was paused. If the camera clock was paused by turning off the GPIO, it drifted. If it was paused by turning off the timer, it stopped drifting. Turning off the GPIO added 1 more pulse than turning off the timer, but unless the camera has a 2nd clock, it shouldn't matter.
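
A minimal sketch of the 2 ways of pausing the camera clock, assuming the TCM8230's EXTCLK is generated by a timer PWM output on the STM32F4. The pin & timer choices are made up for illustration.

#include "stm32f4xx.h"

// Pausing by reconfiguring the GPIO: the pin leaves alternate function mode,
// which can emit 1 extra partial pulse & is what caused the horizontal drift.
void pause_clock_gpio(void)
{
    GPIOA->MODER &= ~GPIO_MODER_MODER8;     // PA8 hypothetically carries EXTCLK
    GPIOA->MODER |= GPIO_MODER_MODER8_0;    // plain output, held low
    GPIOA->ODR &= ~(1 << 8);
}

// Pausing by stopping the timer counter: the output simply freezes with no
// extra pulse, & the drift stopped.
void pause_clock_timer(void)
{
    TIM1->CR1 &= ~TIM_CR1_CEN;
}

void resume_clock_timer(void)
{
    TIM1->CR1 |= TIM_CR1_CEN;
}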

The frame rate made it up to 40fps, giving 20Hz position updates. The hope is wiggling the servos will increase the horizontal resolution beyond 640. Higher resolution cameras would be nice.

Every pan/tilt mount has to be sent out to make money, so a lot of time goes into building new ones.
Posted by Jack Crossfire | Sep 25, 2012 @ 02:41 AM | 5,089 Views
The cameras finally went on the wooden rod, to try to align them for distance sensing instead of video. The resolution had to be increased to 640x240 & the framerate reduced to 30fps to get any useful distance. Still another problem emerged: 1 camera slowly glitches its horizontal position.


The CF card also had some shots of the shuttle viewing mob. They always vote against the space program, but turn out en masse for a view of some hardware.
Posted by Jack Crossfire | Sep 24, 2012 @ 01:10 AM | 4,949 Views
It's an electronic abomination, but stereo vision works. 5MHz through a ribbon cable, heavy dependence on realtime network performance, a hackneyed SPI protocol, & pausing the camera clock for synchronization were cost saving measures better done without.

So the clock trimming register was not accurate enough to synchronize the cameras, but it turned out there was a blanking interval at 1/2 the frame height, where the camera clock could be paused without affecting exposure. The slight pause reduced the framerate to 67fps but nailed the synchronization.


More 3D Magic (3 min 33 sec)


Put together some 3D video to show what it can do.

It takes extremely precise alignment for 3D to work. Lens spacing has to be optimum for the distances. The lenses have to be angled towards the neutral depth plane. Those consumer cameras with fixed lenses are worthless & no one is going to spend the time tweaking all the required parameters.

During this process, noticed the TCM8230 still generates ITU 601 luminance & Goo Tube still expands ITU 601 luminance. Most every other modern camera generates 0-255 luminance, so all your still photos & phone videos are getting clipped on Goo Tube.
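
For reference, a minimal sketch of the 16-235 to 0-255 expansion described here, for 8-bit luminance. The 16-235 studio range is the standard ITU 601 figure; the conversion shows why full range sources end up with crushed blacks & clipped whites.

#include <stdint.h>

// Map ITU 601 studio range luminance (16-235) to full range (0-255).
uint8_t expand_601(uint8_t y)
{
    int out = ((int)y - 16) * 255 / 219;
    if(out < 0) out = 0;        // full range blacks below 16 get crushed
    if(out > 255) out = 255;    // full range whites above 235 get clipped
    return (uint8_t)out;
}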

The camera system remains wireless, hence the glitching.

In video mode, it maxes out at 18fps 320x240. In object tracking mode, it does 67fps 320x240. Using it for making videos is an utter waste of time, compared to buying 2 Chinese HD cams.

It wasn't the 1st 3D experiment on the blog. https://www.rcgroups.com/forums/showthread.php?t=743863 3D experiments usually last a few posts before the motion sickness ends them.

Surprising to find Jim Cameraman was the 1st to use a beam splitter mirror to make a 3D camera system. That allowed greater flexibility in the interocular distance & convergence than was possible before. Now, every professional 3D camera system uses a beam splitter.
Posted by Jack Crossfire | Sep 22, 2012 @ 08:22 PM | 4,959 Views
After another round of bridged traces, the stereo cameras finally returned video. The target output is only a single LED at 70fps instead of greyscale at 5fps, but greyscale is useful for testing.

Those last bridged traces were fiendish, with pressure on the board & flux remover causing some shorts above the resistance threshold of the continuity tester. The SPI cam has an excessively high error rate. It's independent of clockspeed or cable losses. What a rough design, compared to an FPGA.



In other news, got some shuttle footage.

Shuttle over Ames (3 min 57 sec)


Didn't know if the bridge would be fogged in, so shot video of it flying over M.M.'s former territory. The bridge ended up being the premier location & this ended up being almost the worst location. We had an F-15 escort, while LA had some F-18's.
Posted by Jack Crossfire | Sep 21, 2012 @ 01:02 AM | 5,046 Views
So a posterized image with white blacks is a sign of a bridged trace. Probe the data pins for strange voltages. That bridge also killed the clock above 40MHz & killed I2C. Now images from either camera come through at 70fps, the clocks go at 56MHz, & the SPI communication properly accesses the left camera at 1.6MHz, but using both cameras simultaneously makes it crash.

Surprising just how fast the board layout kills high frequencies. The SPI ribbon cable dies above 2MHz. The purely etched clock trace only carries 1/2 the voltage at 56MHz.
Posted by Jack Crossfire | Sep 20, 2012 @ 12:28 AM | 4,817 Views
Dual cameras with dual microcontrollers have been every bit as hard as expected. It would definitely be better done with an FPGA, but the microcontrollers were on hand.

They form a modular component used 3 times, & an FPGA would have taken some time to bring up. The FPGA would still give much better results if the system proves successful.

The SPI drops bytes from the start of the buffer & has a strange alignment requirement. The cameras are proving flaky to get started on the new board.

Debugging along the chip communication path, step by step, SPI is proving difficult to switch between transmit & receive. You need 1k resistors to keep dueling outputs from burning out, but that causes quite a degradation in signal power.

The mane surprise was that #define has proven superior to bridging pins.

Tweaking the HSI calibration value in response to frame period is all that's needed to synchronize the cameras. They automatically line up just by changing the period, but the synchronization is pretty bad. The HSI calibration value isn't as precise as hoped. It also destroys the UART output.
Posted by Jack Crossfire | Sep 17, 2012 @ 05:07 AM | 5,267 Views
The 1st big test of the Hendal was the fabrication of the 2 boards for Marcy 2's stereo vision. The only problems with the Hendal continued to be the short cable length, bulky box, & clumsy iron holder.

Together with an el cheapo $5 tweezer set, these boards came together faster than anything before. The tools of decent board fabrication & soldering have definitely gotten cheap enough for anyone to build anything except BGAs.

The mane problem is once again communication between the 2 CPUs. SPI using DMA has a few quirks. Cables have to be protected against competing outputs. $parkfun seems to be phasing out the TCM8230 in favor of slower but easier UART cameras.
Posted by Jack Crossfire | Sep 14, 2012 @ 03:26 AM | 4,877 Views
The Hendal is rebranded as many a 1 hung low brand surface mount rework station. After 1 day, it seems to be pretty good. It's quite a step up from the 20 year old Weller. It's definitely a business only case, unlike the blue/yellow/pink cases of most gear.

It heats very fast & quickly recovers from loss of heat, allowing it to do a lot more at a lower temperature. The Weller needed 340C to do anything, because it couldn't recover from a loss of heat. The Hendal only needs 250C on the iron & heat gun. The tip is much finer than the Weller's, but wicked long. It could destroy your eyes or any nearby LCD panel.

The heat gun actually works, using 250C, the smallest tip, & full airflow. It pulled off a 100-pin TQFP without any issue. Airflow is quite significant. You wouldn't think a long, narrow tube could move that much air. The temperature accuracy is nowhere. It always put out 200C, whether set to 100C or 150C.

The mane problem is the box is huge & the cables are too short to have it anywhere besides right next to the project. The heat gun has the flimsiest tube with no stress relief. It's another tool which can't be permanently in the same place.
Posted by Jack Crossfire | Sep 07, 2012 @ 05:34 AM | 4,513 Views
A note that the sonar transducers can be taken apart, but while disassembling them makes them look omnidirectional, they then produce no sound.
Posted by Jack Crossfire | Sep 02, 2012 @ 02:31 AM | 5,275 Views
An idea came to run the cameras at 640x240. Only horizontal resolution is required for depth. Vertical resolution can be lower by an unknown amount, still limited by altitude accuracy. Extremely fast, accurate altitude is required.

Besides that, attention turns to SPI & dual camera synchronization, not the high bandwidth world of video processing. SPI has to change directions & use DMA. That takes a lot of playing with finished hardware, since there's no multimaster SPI DMA example.

There were some ideas to have a timer on the right ARM drive the clock of the left ARM. There's resetting a timer on the left ARM at the start of every left frame; the timer value at the start of every right frame then shows whether it's ahead or behind.

There can be a delay state, where the camera clock is paused for a few loops. There aren't any easy ways to speed it up.


It might be possible to slightly slow down both to 65fps by adding a 1ms delay to the master. They could both pause their clocks after a frame. The master would trigger the slave to resume after 1ms. It could be an acceptable reduction in frame rate. That would reduce the position update rate from 35 to 32Hz. It's unknown what the minimum rate for stable flight really is. Overexposure from the delay is a real problem. It could overexpose the whole frame or just 1 random pixel.

Officially, the Rigol says the maximum is 68fps. The synchronized framerate would be 64fps.

There is an HSITRIM register which trims the ARM clock. The algorithm would be to 1st measure the time between right eye frames. If it's too long, increase the clockspeed by 1. If it's too short, decrease the clockspeed by 1.

If the right frame start lands in the bottom half of the left eye frame, increase the clockspeed by 1 if there was no change, or revert if the change was -1. If it lands in the top half of the left eye frame, decrease the clockspeed by 1 if there was no change, or revert if the change was +1.
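
A minimal sketch of that nudge loop, assuming the camera runs off the HSI oscillator, a 240 line frame, & a hypothetical left_frame_phase() helper that reports which left eye line was being clocked out when the right frame started. The HSITRIM handling follows the STM32F4 reference manual.

#include "stm32f4xx.h"

extern int left_frame_phase(void);  // hypothetical: 0..239

static int last_change = 0;         // -1, 0, or +1 from the previous pass

static void set_hsitrim(int trim)   // 0..31, default 16
{
    if(trim < 0) trim = 0;
    if(trim > 31) trim = 31;
    RCC->CR = (RCC->CR & ~RCC_CR_HSITRIM) | (trim << 3);
}

void nudge_right_camera(void)
{
    int trim = (RCC->CR & RCC_CR_HSITRIM) >> 3;
    int phase = left_frame_phase();

    if(phase >= 120)
    {
        // bottom half of the left frame: speed up, or revert a previous -1
        set_hsitrim(trim + 1);
        last_change = (last_change == -1) ? 0 : 1;
    }
    else
    {
        // top half of the left frame: slow down, or revert a previous +1
        set_hsitrim(trim - 1);
        last_change = (last_change == 1) ? 0 : -1;
    }
}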
Posted by Jack Crossfire | Aug 31, 2012 @ 08:24 PM | 5,063 Views
After many years & many attempts to keep it going, the Weller finally died. Its heating element seems to have burned out like a lightbulb after 15 years. It started years ago as intermittent failure to heat, which could be defeated by pressing the cable in. Eventually, pressing on the cable stopped working.

A new heating element would be $133, while a Hakko is $80. It still has a working 5:1 50W transformer, potentiometer, & switch. It could be an easy power supply for something else or a source of magnet wire.


In other news, the great task with vision is repurposing the same circuit board for both ground cameras & the aircraft. The same circuit board is used in all 3, with minor firmware changes. There are ways to do it at runtime, by grounding or bridging some pins. There are ways to do it at compile time.


Compile time is a lot more labor intensive, requiring 3 different images to be flashed & 3 different objects to be compiled from the same source, either simultaneously or based on a makefile argument. Historically, multiple objects from the same sources have been a lot more challenging than bridging pins.
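
A minimal sketch of the compile time option, with hypothetical role names. Each of the 3 images would be built from the same source with a different flag, e.g. -DBOARD_LEFT_CAM, -DBOARD_RIGHT_CAM, or -DBOARD_AIRCRAFT passed in by the makefile.

#if defined(BOARD_LEFT_CAM)
    #define USE_CAMERA 1
    #define SPI_MASTER 0
#elif defined(BOARD_RIGHT_CAM)
    #define USE_CAMERA 1
    #define SPI_MASTER 1
#elif defined(BOARD_AIRCRAFT)
    #define USE_CAMERA 0
    #define SPI_MASTER 0
#else
    #error "Define BOARD_LEFT_CAM, BOARD_RIGHT_CAM, or BOARD_AIRCRAFT"
#endif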

The PIC autopilots had different enough boards & source code for the ground, aircraft, & human interface that different source code wasn't a burden. Those autopilots would probably move to identical boards & firmware, too, with the same pin shorting to determine firmware behavior.
Posted by Jack Crossfire | Aug 31, 2012 @ 01:20 AM | 5,145 Views
The Rigol revealed the LED was toggling 8 lines after the start of the frame & there was a 1.2ms vertical blanking interval, enough time that there must have been a point in the scanning where the LED could toggle without being split between 2 frames.

So turning the LED on at the beginning of the vertical blanking interval lined it up with the exposure for the 1st line, but caused it to light up the bottom half of the frames when it should have been off. The shutter seemed to expose 2 frames in the time taken to clock out 1 frame. To get in only 1 frame, the LED had to be on for only 1/2 the frame height.
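
A minimal sketch of that timing, assuming a 240 line frame, a per-line (HSYNC) interrupt, & hypothetical led_on()/led_off() helpers. It only shows the ordering: switch the LED at the start of vertical blanking & keep it on for 1/2 the frame height.

#define FRAME_LINES 240

extern void led_on(void);      // hypothetical
extern void led_off(void);

static int line = 0;
static int want_led = 0;       // toggled once every 2 frames by higher level code

void hsync_isr(void)
{
    line++;
    if(line >= FRAME_LINES)
    {
        line = 0;
        if(want_led) led_on();          // start of vertical blanking
    }
    else if(line == FRAME_LINES / 2)
    {
        led_off();                      // on for only 1/2 the frame height
    }
}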

This could be highly dependent on shutter speed & clockspeed, but it has to be done to get 35Hz position updates. This kind of raster line timing recalls the Commodore 64 days, where most any graphics required the software to do something when the CRT hit an exact raster line.

Synchronizing 2 cameras & moving data from a 2nd camera is the next great task. The easiest method is for each camera to stop its clock after each frame, then have the master camera issue a pulse to restart the clocks simultaneously. It would slow down the framerate & overexpose.

The only realistic way is to issue a pulse at the start of each frame & for the 2nd camera to constantly nudge its clock so its frame starts line up. The LED has to be so precisely aligned with the exposure, it doesn't leave much margin.

There's also forgetting about vision based distance & using sonar for the distance or using a large LED ring & a single camera.
Posted by Jack Crossfire | Aug 30, 2012 @ 02:00 AM | 5,310 Views
As every flashing LED designer knows, you can't flash it in every other frame. It has to be toggled every 2 frames because the rolling shutter is always capturing somewhere between the frames. Trying to synchronize the flashing to wherever the rolling shutter is requires previous knowledge of where the LED is.

So the 70fps 320x240 is the only way, yielding only 17 position readouts per second. The webcam users are getting really degraded results.

There might be a way to recover position data from between the flashes. If every blob is accounted for in the last 4 frames, the frames are searched backwards in time for all the blobs in the vicinity of the blobs in the last frame. Blobs which disappear at some point going backwards are tagged. The largest is taken.

This would have a high error rate from false targets overwhelming the target when it just starts lighting up, before the target has grown to its maximum size.

The lowest error rate would come from only comparing the last of the 2 frames when the target is on to the last of the 2 frames when the target was off. Then, velocity data could be improved by comparing the known target in the last frame with any blob in its vicinity in the previous frame.
Posted by Jack Crossfire | Aug 29, 2012 @ 12:50 AM | 5,115 Views
As suspected, downsampling with maximum instead of lowpass filtering made the LED stand out more. The problem was resolution. 88x96 wasn't going to give useful distance data. A solidly lit LED with multiple frames blended might improve the accuracy.

Analyzing the data on the microcontroller without compressing it can get it up to 320x240 70fps. 640x480 still only goes at 20fps. That definitely made you wonder what the point was of not using a webcam. Only the simplest algorithm can go at 70fps & it still takes a lot of optimization.

Accessing heap variables has emerged as a real bottleneck for GCC. Temporaries on the stack greatly speed up processing but create spaghetti code.

The microcontroller thresholds the luminance & compresses the 0's. Works fine, as long as only the aircraft light crosses the threshold. A large bright area would make it explode. Compressing the 1's would take too many clockcycles.
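
A minimal sketch of the idea, with made-up buffer sizes & threshold: lit pixels are stored literally & runs of dark pixels are collapsed into a count, so only the 0's get compressed.

#include <stdint.h>

#define THRESHOLD 200

// Threshold one line of luminance & compress the dark runs.
// Returns the number of bytes written to out.
int compress_line(const uint8_t *luma, int width, uint8_t *out)
{
    int size = 0;
    int i = 0;
    while(i < width)
    {
        if(luma[i] >= THRESHOLD)
        {
            out[size++] = 1;            // lit pixel stored literally
            i++;
        }
        else
        {
            int run = 0;                // run length encode the dark pixels
            while(i < width && luma[i] < THRESHOLD && run < 255)
            {
                run++;
                i++;
            }
            out[size++] = 0;            // marker for a dark run
            out[size++] = run;
        }
    }
    return size;
}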

The leading flashing LED algorithm begins with blob detection. Blobs which come within a certain distance of another blob in a certain number of sequential frames are considered always on & discarded. The largest of the blobs which don't have anything appearing within a certain area in every frame is considered the target.
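
A minimal sketch of the always-on rejection, with a hypothetical blob_t, a fixed history of the last few frames, & placeholder thresholds.

#include <stdint.h>
#include <math.h>

typedef struct { float x, y, size; } blob_t;

#define HISTORY   4       // previous frames to test against
#define MAX_BLOBS 32
#define NEAR_DIST 8.0f    // pixels: how close counts as the same light

// Return 1 if the candidate has a nearby blob in every previous frame,
// meaning it's an always-on light & should be discarded.
static int always_on(const blob_t *cand,
                     blob_t history[HISTORY][MAX_BLOBS],
                     const int counts[HISTORY])
{
    for(int f = 0; f < HISTORY; f++)
    {
        int found = 0;
        for(int i = 0; i < counts[f]; i++)
        {
            float dx = history[f][i].x - cand->x;
            float dy = history[f][i].y - cand->y;
            if(sqrtf(dx * dx + dy * dy) < NEAR_DIST) { found = 1; break; }
        }
        if(!found) return 0;    // it vanished somewhere: probably the flasher
    }
    return 1;
}

// The target is the largest blob in the current frame that isn't always on.
blob_t* pick_target(blob_t *blobs, int count,
                    blob_t history[HISTORY][MAX_BLOBS],
                    const int counts[HISTORY])
{
    blob_t *best = 0;
    for(int i = 0; i < count; i++)
        if(!always_on(&blobs[i], history, counts) &&
           (!best || blobs[i].size > best->size))
            best = &blobs[i];
    return best;
}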
Posted by Jack Crossfire | Aug 28, 2012 @ 01:27 AM | 5,086 Views
So got the TCM8230 to give 70fps by clocking it at 56MHz & downsizing to 128x96. 56MHz is the fastest the STM32F407 can do, according to the Rigol. Luckily, the Rigol was hacked to support 100MHz.

Detecting the LED has emerged as the great task. If the camera moves, a flashing LED can't be differentiated from the background. Movement of the flashing LED is already a difficult issue.

Hardware pixel binning seems to be no good. As the LED goes farther away, the intensity fades too fast. Maximum of nearest pixels is the only way. That makes the Centeye guaranteed to not work.
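
A minimal sketch of downsampling by maximum instead of averaging, so a 1 pixel LED survives the size reduction. A 2x2 reduction & a plain row-major 8 bit luminance buffer are assumed.

#include <stdint.h>

void downsample_max(const uint8_t *in, int in_w, int in_h, uint8_t *out)
{
    int out_w = in_w / 2;
    int out_h = in_h / 2;
    for(int y = 0; y < out_h; y++)
    {
        const uint8_t *row0 = in + (y * 2) * in_w;
        const uint8_t *row1 = row0 + in_w;
        for(int x = 0; x < out_w; x++)
        {
            uint8_t a = row0[x * 2], b = row0[x * 2 + 1];
            uint8_t c = row1[x * 2], d = row1[x * 2 + 1];
            uint8_t m = a > b ? a : b;
            if(c > m) m = c;
            if(d > m) m = d;
            out[y * out_w + x] = m;     // the brightest of the 4 pixels wins
        }
    }
}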
Posted by Jack Crossfire | Aug 26, 2012 @ 12:16 AM | 5,536 Views
So vision has some options:

2 webcams, 2 isochronous competing USB streams, no control over exposure, no way to synchronize with a flashing LED, but full color 30fps 640x480. Doing this with the red/blue LED pair would work in a controlled environment, but not be a viable product. It would require a lot of wires & a USB hub.

2 board cams, daisychained wireless stream, full exposure control, at least 40fps 160x120 in greyscale.

The Centeye may end up being the only useful one, because 2 can be scanned by a single chip. It does 112x112 greyscale with no onboard ADC. The wavelengths aren't given, but it probably does IR through visible.

It has a strange interface, which requires a bunch of pulses to set the row & column, then it clocks out a single row or column. There's no way to read a register. So the STM32F407 ADC goes at 1 megasample, but it would be drastically slowed by all the bit banging.

Min & max registers for the x & y, with automatic row increment, would have gone a long way. Normal cameras have hardware windowing support & ADCs that run in the 40 megasample range. Microprocessors have hardware interfaces for normal cameras.

So the Centeye would need an FPGA to make it look like a normal camera, before any useful speed could be had.

Trying to use a smaller window has been a house of pain. A 32x32 window can go in the hundreds of fps, maybe fast enough to track a point, but it's been done before with limited success. 60deg has been shown as the minimum viewing angle to track an object.

It also has pixel binning, which would slightly improve the framerate but not provide the required aliasing. For the right aliasing, the maximum of the pixels needs to be taken instead of the average. It may be that a high enough framerate, with a fast enough turret, can track a point with a 32x32 window.
Posted by Jack Crossfire | Aug 24, 2012 @ 06:43 PM | 6,130 Views
Some more sonar ideas were scanning for a peak only within a few samples of the previous peak, finding the start of the peak by searching for the 1st positive derivative or the last point to exceed the maximum before the current peak. How would you find the last point to exceed the maximum? Some pings were pretty nasty, with maximums set & dropped below. There were more diabolical pattern matching algorithms.
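
A minimal sketch of 2 of those ideas, on a hypothetical buffer of received samples; the window size is a placeholder.

#include <stdint.h>

#define WINDOW 32   // only search this many samples around the previous peak

// Find the new peak only in the vicinity of the previous one.
int find_peak_near(const int16_t *samples, int total, int prev_peak)
{
    int start = prev_peak - WINDOW; if(start < 0) start = 0;
    int end = prev_peak + WINDOW;   if(end > total) end = total;
    int best = start;
    for(int i = start; i < end; i++)
        if(samples[i] > samples[best]) best = i;
    return best;
}

// Walk backwards from the peak until the signal stops rising, taking that as
// the 1st positive derivative & the start of the ping.
int find_ping_start(const int16_t *samples, int peak)
{
    int i = peak;
    while(i > 0 && samples[i] > samples[i - 1]) i--;
    return i;
}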

Nothing about sonar jumped out as bulletproof, while vision has already been proven in the dark. A simple test to separate a blinking LED from the background would seal it.

Ramped the sonar up to 60Hz on the sending side, giving 20Hz position solutions. Had the starts of pings detected from the derivative. When it was in a box directly over the ground array, it was actually very stable. The instant it got out of range, it died. It still didn't have the horizontal range, but the speed & accuracy required for stable flight were shown.

That was a lot of investment in electronics. It was the mane reason for the oscilloscope.
Posted by Jack Crossfire | Aug 23, 2012 @ 10:37 PM | 6,848 Views
After all that work on making a high frequency amplifier, test flights once again led nowhere. It was definitely better than air to ground pings, but much less accurate.

The latest thing is making video of the ground station instead of recording & replaying telemetry files.

In 2007, you couldn't make video of a screen because AGP was unidirectional. That motivated a lot of investment in flight recordings. PCI express once again brought back the bidirectional speed to allow the screen to be recorded.

The most successful flight showed clear circling around the target position. The sonar successfully tracked it flying below 1 meter, in a 2x2 meter area, with 0.2 meter accuracy, but the feedback couldn't dampen the oscillation.

The sonar range is much less than air to ground. The omnidirectional motor noise is always near the airborne receiver, while it has a chance to diffuse before it hits a ground based receiver.

For confined, indoor use, sonar once again is best suited to a horizontal receiver array on the ground with an airborne transmitter that always points sideways to the array. It might still work for a monocopter, in which the aircraft has a couple pings per revolution which are always guaranteed to hit the ground array.


Going back to vision for daylight operations, Centeye is too expensive for any product, but a cheaper camera might be able to simulate the same high speed, low resolution magic trick, at a lower speed.

The idea with a low...
Posted by Jack Crossfire | Aug 22, 2012 @ 10:35 PM | 5,415 Views
So the final transducer driver board came together. Got extremely short transition times & very loud pings. 30V transitions in 2us are 33% faster than the LF353, but getting short transitions is a very temperamental affair. The transition speed really does matter as much as the voltage, with these transducers.

There's an optimum ratio of current to drive the pullup & pulldown transistors in the h-bridge. Giving both transistors 0.3mA gives rise & fall times of 3us. Raising the pullup transistors to 3mA gives nanosecond rises, but lengthens the falls to 5us. Giving both transistors 3mA gets the rise to nanoseconds & the fall to 2us. So more current buys more speed, but only if the transistors are balanced.

The pulldown transistors have a hard job. They have to remove the charge built up on a capacitive speaker, but the speaker acts like a charge pump. When 1 pullup transistor starts, it tries pushing the 30V generated by the other one to 60V. In reality, it only gets to 40V before the transistor breakdown voltage stops it. That's causing a 40V spike & delay before the pulldown transistor starts removing charge.

There are ways to optimize it, controlling it with 4 GPIOs & staggering the transistors, using the charge pump effect to double the voltage. A career can be devoted to optimizing pings.
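
A minimal sketch of the 4 GPIO staggering idea, with made-up pin assignments on GPIOB, a crude busy-wait for dead time, & the assumption that a high GPIO turns its transistor on. It only illustrates the ordering: turn the conducting side off, wait, then turn the opposite side on, so a pullup & pulldown never fight.

#include "stm32f4xx.h"

#define PULLUP_A   (1 << 0)     // hypothetical pin assignments
#define PULLDOWN_A (1 << 1)
#define PULLUP_B   (1 << 2)
#define PULLDOWN_B (1 << 3)

static void dead_time(void)
{
    for(volatile int i = 0; i < 20; i++);   // a few hundred ns, roughly
}

// Drive the transducer 1 full cycle, staggering the transistor turn-on.
void ping_cycle(void)
{
    GPIOB->ODR &= ~(PULLUP_B | PULLDOWN_A); // release the opposing transistors 1st
    dead_time();
    GPIOB->ODR |= PULLUP_A | PULLDOWN_B;    // A side high, B side low

    GPIOB->ODR &= ~(PULLUP_A | PULLDOWN_B);
    dead_time();
    GPIOB->ODR |= PULLUP_B | PULLDOWN_A;    // B side high, A side low
}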

The Rigol resets to writing waveforms every time it's rebooted. You have to remember to constantly set it to writing bitmaps, or nothing useful gets saved.

Also, 1 of the probes has already stopped working in 10x mode, so nothing over 30V can be measured. Reality of the $350 scope has definitely begun after only 1 week. Fortunately, anything with a BNC connector should work as a probe.
Posted by Jack Crossfire | Aug 22, 2012 @ 03:59 AM | 5,418 Views
Yesterday's circuit failed catastrophically, because the voltage emitted by the NPN wasn't high enough to completely turn off the PNP. That was a lot of work.

There's a lot of rationale in building h-bridges instead of using an LF353 or high speed op-amp. The h-bridges can switch 3x faster than an LF353.

A few more ideas came along, to minimize transistor count. The trick is to use all NPN's, but they can't go all the way to the rails. While they worked at 12V, they died at 30V.

Finally gave up on component minimization & blasted the problem with a transistor mountain, still insisting the h-bridges should be controlled by 1 input & using an NPN to invert the control signal.

That failed miserably. While the rise time was a spectacular 1us, the fall time was longer than the world's worst op-amp at 100us. Virtually no power went to the speakers. It's unknown what killed it. It wasn't the inverting NPN. It wasn't the 100k resistors. The last suspect is shorting the bases of each push/pull pair of transistors to save on resistors.

Whatever the reason, the circuit is going to be replaced by LF353's. H bridges are impractical to mass produce. The LF353 can get 3us rises & falls for a lot less money. The wavelength is 41us, so the 2us difference shouldn't affect anything.

Today had many lessons.

#1 Working at 30V has a lot of traps for young players not present at 12V. 10k pullup resistors won't do. 30V across 10k is 0.1W or 44% more than...