Jack Crossfire's blog
Archive for October, 2015
Posted by Jack Crossfire | Oct 31, 2015 @ 03:22 AM | 7,762 Views
It became quite clear it was time to put away the autonomous driving idea again. The shadow problem was insurmountable without prior knowledge of which color keys reveal the path. It turns out that if you exhaustively try every color in the image, there is always a combination of color keys which yields the path, with high immunity to shadows. The trick is scoring every path revealed by a different combination of colors to pick the right one. Perhaps the possible paths could be limited to a narrow range in the center of the frame.

The next step is probably giving it a magnetic heading with a known direction of the path to follow. At least on a straight path, it would know where in the frame the vanishing point was, reducing the problem to just finding the edges. With a known vanishing point, it could get the color keys from a line between the vanishing point & the bottom center of the frame. Finding the edges becomes a lot easier. It also needs to know the horizon line.
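
A minimal sketch of that color key sampling, assuming a numpy RGB frame & a known vanishing point. The function names & threshold are hypothetical, not the real code:

import numpy as np

# Sample color keys along the line from a known vanishing point to the
# bottom center of the frame, then mask the image with them.
def sample_color_keys(frame, vanish_x, vanish_y, samples=16):
    h, w = frame.shape[:2]
    x0, y0 = w // 2, h - 1                        # bottom center
    keys = []
    for i in range(samples):
        t = i / (samples - 1)
        x = int(round(x0 + t * (vanish_x - x0)))
        y = int(round(y0 + t * (vanish_y - y0)))
        keys.append(frame[y, x].astype(np.int16))
    return keys

# A pixel belongs to the path if it's within threshold of any sampled key.
def chroma_mask(frame, keys, threshold=30):
    mask = np.zeros(frame.shape[:2], dtype=bool)
    pixels = frame.astype(np.int16)
    for key in keys:
        mask |= np.abs(pixels - key).sum(axis=2) < threshold
    return mask
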
Posted by Jack Crossfire | Oct 28, 2015 @ 09:50 PM | 7,260 Views
Autonomous driving day 2 (4 min 9 sec)


Activated the lane following code, extending the successful runs but revealing all-new problems. Lane following was a simple 2nd proportional feedback term added to the vanishing point feedback. There was no rate damping. This kept it going down the path, but it still drifted anywhere from the center to the right side. It also revealed just how bad the machine vision was at handling shadows.
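
A rough sketch of the 2 stacked proportional terms, with hypothetical gains & normalized inputs, not the real autopilot code:

# Vanishing point feedback plus a 2nd proportional term on the lateral
# lane offset. No rate damping, matching the description above.
# Gains & scaling are hypothetical.
VANISH_GAIN = 0.8
LANE_GAIN = 0.4

def steering_command(vanish_x, lane_offset, frame_width):
    # vanish_x: detected vanishing point column in pixels
    # lane_offset: estimated offset from the path center, -1..1
    vanish_error = (vanish_x - frame_width / 2.0) / (frame_width / 2.0)
    steering = VANISH_GAIN * vanish_error + LANE_GAIN * lane_offset
    return max(-1.0, min(1.0, steering))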

It suffered a positive feedback not evident in simulations. A shadow would send it off course, which worsened the wrongly detected path, while the simulation always drove past the wrongly detected path. This was a deal breaker.

Did some 10mph runs. These revealed an underdamped oscillation. It didn't take long for it to stray off the path, even in the best vision conditions. Only a rare coincidence kept it on the path. The oscillation seemed to come from the lane keeping feedback.

The autonomous driving ended after a 10mph run, when it flipped over, smashed the camera & crashed the computer. So even ground vehicles experience crash damage. The computer lost the last 2 minutes of video.

It's surprising just how accurate the machine vision has to be, compared to what intuitively should be enough. If only there were a way to determine how good the current data was & construct longer term feedback using just the good data. The computer adjusts heading 30 times a second, relying on a large number of data points to drown out the outliers & reveal a true heading. A human only needs to adjust heading every few seconds, using a single very good heading.
Posted by Jack Crossfire | Oct 27, 2015 @ 11:30 PM | 7,761 Views
Compared a simple implementation of temporally selected color keys with the tried & true spatially selected color keys. It took over 200 frames to get enough temporal color keys to see the path. The number could be reduced by throwing out overlapping colors & skipping frames, but for now, the rough implementation gave a good idea of the limitations of each method.

What probably happened was the exposure setting changed slightly between each frame, so the colors of past frames were a lousy match for the current frame. Another thing which can happen is the angle of reflected light changes as the surface moves closer, so it's not exactly the same color with decreasing distance. Of course, without a large buffer of past colors, the temporal algorithm starts dropping areas of the path as it forgets important colors.
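
A rough sketch of the temporal color key bookkeeping, assuming a fixed size buffer & a color distance for throwing out overlapping colors. Both numbers are hypothetical:

from collections import deque
import numpy as np

# Temporally selected color keys: sample a fixed coordinate each frame,
# skip colors that overlap ones already stored, & forget the oldest key
# when the buffer fills.
class TemporalKeys:
    def __init__(self, max_keys=64, min_distance=20):
        self.keys = deque(maxlen=max_keys)
        self.min_distance = min_distance

    def add(self, color):
        color = np.asarray(color, dtype=np.int16)
        for key in self.keys:
            if np.abs(key - color).sum() < self.min_distance:
                return                   # overlapping color, skip it
        self.keys.append(color)          # oldest key drops off when full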

The idea for spatially selecting color keys was a single magic Summer moment which seems impossible to outdo. Not sure exactly where or when the idea appeared.
Posted by Jack Crossfire | Oct 26, 2015 @ 11:22 PM | 7,506 Views
Autonomous driving begins (3 min 56 sec)


As it was when copters 1st hovered on their own, it wasn't a sudden ability to assertively drive itself but an incremental journey towards being able to drive down the path for short distances. Recorded the webcam at 30fps. The SD card has enough room for 2 hours. The Odroid seems to have 2 hours of power from the 4Ah battery.

Quite nihilistic to see such a high frame rate come from such a cheap webcam, but it was made possible by the enormous computing power applied to it, compared to what was around in its day.

By the end of the session, it was assertively steering towards the vanishing point without any damping or lead compensation. It had a hard time with shadows. It couldn't stay laterally on the path. Software for lateral control is ready to go, during the next time off.

It slowly emerged that the super chroma key was similar to the chroma keying Goog used in its 1st autonomous car back in 2005. Like super chroma keying, it applied many different color keys to the same image. Unlike super chroma keying, it took the color keys from the same coordinates at many points in time, rather than from many different coordinates in the current frame. It had to drive a certain distance to gather all the possible colors in the path, but it didn't gather colors from off the path.

Like copter autopilots 10 years ago, there's no free cookbook for making an autonomous car, so every step takes a long period of discovery. It took a lot of experimenting with new edge detecting methods to discover old fashioned chroma keying was the best method. It would be impossible for 1 person to make it handle every situation, handle turns at 10mph, or avoid obstacles.

Like autonomous copters, an accepted way of doing it will eventually become an easy download. The accepted method is going to take a lot more sensors. The key sensor is LIDAR while with copters it was GPS. There's still something novel in being the 1st & doing it with just a camera.
Posted by Jack Crossfire | Oct 24, 2015 @ 09:08 PM | 7,309 Views
It was another day of rearranging components & debugging steering before any attempt at machine vision could happen. The massive steering changes required for autopilot were a long way from working. Had to move the Odroid to make room for its cables. Captured some video. Added a variable frame rate for recording the input video. Wifi was much more reliable from the macbook than the phone. Couldn't connect long enough to start the program from the phone. Once started, it didn't crash. Managed to access the STM32 just enough to do the required maintenance. Rick Hunter never had to deal with component accessibility. Such is the difference between Japanese cartoons & reality.
Posted by Jack Crossfire | Oct 24, 2015 @ 01:49 AM | 7,816 Views
After some fabrication & debugging, the 'droid was sending geometry to the autopilot & the autopilot was successfully taking configuration parameters from the phone to tune the path following. Tuning the vision algorithm would involve very difficult text editing over the unreliable wifi.

Despite every effort, there was no way to allow access to the STM32 without a major teardown. Only a serial port was extended. The vision algorithm got a slight improvement by converting the geometry detection to a polar line equation & using logarithmic searching.
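
A sketch of the polar line representation & a coarse-to-fine search over the line angle. The real scoring isn't published here, so counting mask pixels under the line is an assumption:

import numpy as np

# A candidate path edge as a polar line: rho = x*cos(theta) + y*sin(theta).
# Score a line by how many mask pixels it passes through.
def line_score(mask, rho, theta):
    h, w = mask.shape
    ys = np.arange(h)
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    if abs(cos_t) < 1e-6:
        return 0
    xs = np.round((rho - ys * sin_t) / cos_t).astype(int)
    valid = (xs >= 0) & (xs < w)
    return int(mask[ys[valid], xs[valid]].sum())

# Logarithmic search: halve the theta range around the best score each pass.
def best_theta(mask, rho, lo=-np.pi / 3, hi=np.pi / 3, passes=8):
    for _ in range(passes):
        thetas = np.linspace(lo, hi, 9)
        scores = [line_score(mask, rho, t) for t in thetas]
        best = thetas[int(np.argmax(scores))]
        span = (hi - lo) / 4.0
        lo, hi = best - span, best + span
    return best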

Finally, there was a desire to record raw video from the camera, to reproduce the path detection offline. After struggling for many years to record 720x480 TV on dual Athlons, it was quite humbling to find the Odroid could record 640x480 in 90% quality JPEG at 30fps, in a single thread, while simultaneously performing path detection. The path detection still can't go above 160x120 without a massive slowdown.
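
A minimal sketch of that recording loop, using Python with OpenCV just to show the idea; the real recorder isn't Python & the detector hook is a hypothetical placeholder:

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

frame_number = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 90% quality JPEG of the full 640x480 frame for offline playback
    cv2.imwrite("frame%06d.jpg" % frame_number,
                frame, [cv2.IMWRITE_JPEG_QUALITY, 90])
    # path detection runs on a 160x120 copy
    small = cv2.resize(frame, (160, 120))
    # detect_path(small)               # hypothetical detector hook
    frame_number += 1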

Once fixed on the car, the Odroid has proven much more reliable than the PI. It wasn't reliable when sitting on the bench because its 0201 caps were quite sensitive. It can stay on idle forever & hasn't had any issues in extended machine vision runs using 300% CPU.

With the amount of money & time invested, there's every hope path following will work, but it probably won't.
Posted by Jack Crossfire | Oct 21, 2015 @ 08:06 PM | 7,829 Views
It came & went. Nothing happened. A lot of us were expecting someone from 1985 to show up in a flying car.
Posted by Jack Crossfire | Oct 18, 2015 @ 10:55 PM | 125,088 Views
After weeks of baby steps, the Odroid finally connected to the phone as an access point. It was just as unreliable as the Pi. This didn't involve a kernel recompile, since trying to compile the kernel failed.

The old RTL8192 wifi dongle once again reappeared for the access point role. This time, one must download the mysterious hostapd_realtek.tar from somewhere. This contains a mysterious version of hostapd which Realtek modified to actually work. Plugging in the dongle got the right kernel module to load, the 8192cu.

The hostapd command was:

/root/hostapd /root/rtl_minimal.conf&

The dhcp command was:
dnsmasq

The hostapd config file was:

# wlan interface. Check ifconfig to find what's your interface
interface=wlan0

# Network SSID (Your network name)
ssid=Truck

# Channel to be used! preferred 6 or 11
channel=2

# Your network Password
wpa_passphrase=xxxxxxxx

# Only change below if you are sure of what you are doing
ctrl_interface=/var/run/hostapd
#wpa=2
driver=rtl871xdrv
beacon_int=20
hw_mode=g
ieee80211n=1
wme_enabled=1
ht_capab=[SHORT-GI-20][SHORT-GI-40][HT40+]
wpa_key_mgmt=WPA-PSK
wpa_pairwise=CCMP
max_num_sta=8
wpa_group_rekey=86400

Key to getting hostapd to work was not using encryption. Commenting out wpa=2 disabled it. Enabling encryption caused dhcp to fail. Another requirement was deleting /usr/sbin/NetworkManager. There was no other way to disable NetworkManager than removing the program.


The dnsmasq config file was /etc/dnsmasq.conf

interface=wlan0
no-dhcp-interface=eth0

dhcp-range=interface:wlan0,10.0.1.5,10.0.1.131,12h

Since the access point's sole purpose was aligning the camera, there was no need for encryption.
Posted by Jack Crossfire | Oct 18, 2015 @ 03:20 PM | 122,914 Views
So the Odroid has a 2nd UART on its 30 pin header, & the STM32 board has another unused UART, but it isn't exposed on any pads & no one was in the mood to make a new board. Back to SPI it was. Based on goog searches, getting SPI to work on the Odroid is a big deal. It wasn't expected to be as easy as the PI because of the small userbase, & expectations didn't disappoint. Where the PI has a gootube video on every minute subject, in plain English, the Odroid documentation consists solely of gibberish like "download code" or "just change the dtb". The PI would officially come nowhere close to the required processing power.

Finally, ended up bit banging the SPI using simple writes to the /proc filesystem to change GPIOs. This achieved 3.33kbit while all 4 CPUs were processing video at 50%. Since each bit contains 3 sleep statements, it's a sign the kernel was switching tasks at 10kHz. Without sleep statements, it went at 38kbit with spurious gaps of 100us. 100us corresponds to a 10kHz frequency.
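
A sketch of the bit banged SPI master. The post says /proc, so the /sys/class/gpio style paths & pin numbers here are assumptions, & the pins are assumed to be already exported as outputs:

import time

CLK = "/sys/class/gpio/gpio21/value"      # hypothetical pin numbers
MOSI = "/sys/class/gpio/gpio22/value"

def write_gpio(path, value):
    with open(path, "w") as f:
        f.write("1" if value else "0")

# 3 sleeps per bit; at the kernel's ~100us task switching granularity
# that works out to roughly 300us per bit, or about 3.3kbit/sec.
def send_byte(byte, half_period=100e-6):
    for bit in range(7, -1, -1):          # MSB first
        write_gpio(MOSI, (byte >> bit) & 1)
        time.sleep(half_period)           # data setup
        write_gpio(CLK, 1)
        time.sleep(half_period)           # clock high
        write_gpio(CLK, 0)
        time.sleep(half_period)           # clock low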

38kbit is too fast for the STM32, which also uses software defined SPI. 3.33kbit can send only 13 bytes of information per frame at 30fps. All problems would be solved by making a new STM32 board with another UART. Just this small amount of bit banging made a 1% increase in CPU usage on all of the Cortex-A7's.

Finally, the GPIO voltage was only 1.8V, so level conversion transistors had to be soldered to bring it up to 3.3V. The mane problem is now fixing the Odroid to the vehicle.
Posted by Jack Crossfire | Oct 10, 2015 @ 12:22 AM | 8,467 Views
With all its dependencies resolved, the Odroid finally underwent its most challenging task ever documented. The vision program compiled & processed the 160x120 test footage at slightly over 30fps. It was way below the predicted 72fps from nbench results. Fortunately, at 160x120, it had no problem consuming the webcam's maximum framerate. The latency would be 4 frame periods, whatever the framerate was, since it used a pipeline to parallelize an algorithm that was otherwise not parallelizable.
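
A sketch of the frame pipeline, assuming 1 stage per core; the stage functions are placeholders & the real program isn't Python:

from multiprocessing import Process, Queue

# Each stage runs in its own process on a different frame, so throughput
# scales with the number of stages while a single frame still takes
# (number of stages) frame periods to come out the far end.
def stage_worker(work, inp, out):
    while True:
        frame = inp.get()
        if frame is None:
            out.put(None)
            break
        out.put(work(frame))

def run_pipeline(frames, stages):
    queues = [Queue() for _ in range(len(stages) + 1)]
    workers = [Process(target=stage_worker, args=(w, queues[i], queues[i + 1]))
               for i, w in enumerate(stages)]
    for p in workers:
        p.start()
    for frame in frames:
        queues[0].put(frame)
    queues[0].put(None)                   # end of input marker
    results = []
    while True:
        r = queues[-1].get()
        if r is None:
            break
        results.append(r)
    for p in workers:
        p.join()
    return results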

It sucked 2.2A at 7.4V with the fan whirring at full speed & the webcam on. The Turnigy converter got scorching, even converting this meager voltage down to 5V.

At 320x240, it was hopelessly slow again at 4fps from the webcam. The CPUs maxed out & used 2.2A at 7.4V. Idle current was 0.65A at 7.4V or 1A at 5V. There was no sign of any frequency scaling. The A7's were always at 1.4GHz & the A15's were always at 2GHz.

As predicted, it only used 4 CPUs at a time, no matter how many the vision program was configured to use. The kernel automatically allocated the 4 Cortex-A15's to the vision program, only occasionally using the Cortex-A7's. It was unknown whether 4 CPUs was the maximum of both A7's & A15's in use simultaneously, or whether it had to stop the A15's to use any A7.

CFLAGS were hard to come by, but the following seemed to work well:

-O3 -pipe -mcpu=cortex-a15 -mfloat-abi=hard -mfpu=neon-vfpv4

It was still an astounding step up, compared to what the PI or the Gumstix brought. It was of course the power of billions of premium contracts subsidizing phones that led to such huge gains in the Odroid's Samsung processor. The set top box market that led to the PI's Broadcom processor wasn't nearly lucrative enough to generate such powerful chips. Set top boxes were a race to the bottom while phones were a race to the top. Because cell phones are a vital business expense, they benefited from tax writeoffs & corporate budgets while set top boxes were squarely funded by what meager budgets consumers had after taxes.
Posted by Jack Crossfire | Oct 08, 2015 @ 11:55 PM | 7,966 Views
After a quick goog search revealing nothing, decided to be the 1st to document the interior for you the viewers. 15mm on a full frame proved to be the right camera system, but didn't have as wide a depth of field as hoped.

Once inside, the futuristic hull fades away & it looks like any other ship from 30 years ago, if maybe a bit cleaner. There is no evidence inside the maze of corridors of any futuristic hull except for the strangeness of all aluminum walls.

The same exposed wiring, steep ladders, cramped spaces, & spartan rooms of the old days still abound. Can't imagine getting around there during a storm or a battle. On the bridge, it seemed incredibly expensive to have so much equipment specifically made for the military instead of using commercial boating gear.

Like all its other scandals, false advertising by the contractor about the required crew led to another 15 people being crammed into a space designed for 40. Fortunately, the 3 hulls, low draft, & aluminum frame can get them home faster than any other ship. Everything was labeled & organized for transitory crews that come & go with their scholarship obligations.

They didn't show the engine room with its twin LM2500's, any debriefing room, or any closeup of the mane gun. Suspect all other areas besides what we saw were crammed with equipment. All the copters & UAV goodies were gone. There were just a couple dinghies.
Posted by Jack Crossfire | Oct 07, 2015 @ 12:16 AM | 8,219 Views
There it is. Much to the disappointment of unboxing fans, it'll never run Android or a desktop. It could play a mean game of Asphalt 8. Its only destiny is to ingest video from a webcam & output path geometry. Even the Raspberry Pi saw very little use besides a short stint as an access point.


Much has to be done before it starts driving a vehicle. An operating system has to be installed. A serial console needs to be exposed. A cross compiler needs to build the vision system. The vision system needs to be made parallel. SPI has to be ported from the PI. Wifi needs to be installed. The DC-DC converter needs to be connected. It needs to be configured to use the Cortex-A15.


It's built around a very fragile 1/32" board. Removing the headers takes a bit of care to avoid lifting pads or cracking the board. Probably cracked the board anyway. It has a 30 pin 2mm surface mount header & a 4 pin 0.100" through hole header which need to be removed. It's not worth removing any other headers since the vertical profile is dictated by the fan, unlike the PI.


The bottom is very fragile compared to the PI. The power pins can be accessed from the bottom. It's disappointing that it's a very old cell phone processor repurposed for hobbyists at great cost compared to a phone containing the same processor. There's no way to have a standard single board computer take whatever the latest phone processor is or repurpose a much cheaper phone as a single board computer.
Posted by Jack Crossfire | Oct 05, 2015 @ 11:41 PM | 8,968 Views
The decision was made to pump $110 into an Odroid XU4. Reality was a bit higher than the $75 price quote because of $10 shipping, $7 tax, $7 for an SD card, & $11 to ship & tax the SD card. It was already known the 'droid could only use the 4 Cortex-A15 cores, with the 4 Cortex-A7 cores inoperable.

There are no definitive comparisons between the PI 1, PI 2, & the Odroid XU4, or any clear descriptions of what benchmarks they used. You can only estimate a single core of the Odroid is 3x faster than a single core of the PI 2, which is 2x faster than the PI 1. With all 4 cores, the Odroid would probably give 24x the framerate of a single PI 1 core.

Based on the last optimizations, 160x120 would go at 72fps. 240x180 would go exactly at 30fps. 320x240 would go at 15fps. It brings back memories of spending a fortune on a 600MHz single core Gumstix, just to use a neural network. It was so fast for its size, but less than a PI.

The chroma key & line detection were confirmed as the slowest parts. Since the algorithm can't be done in parallel, a frame pipeline with a 100ms latency would be best. The mane benefit is from averaging multiple frames rather than lowest latency. Changes in lighting & camera angle may provide more benefit than higher resolution from a single position, so you want to maximize the framerate before maximizing resolution.

An attempt to use linear regression to get the chroma key pixel didn't work. The simplest average algorithm here continued to be the best.
Posted by Jack Crossfire | Oct 04, 2015 @ 05:42 PM | 7,854 Views
It's a good time to reveal to the world that the Accucell 8150 doesn't really balance anything. It appears to spend most of its time with the balancing resistor network on, then briefly float the balancing pins every few seconds, presumably to determine which cell is high. It can't determine the high cell when the resistor network is active, even though the LCD clearly shows it. The Astroblinky used the same cycling algorithm, for some reason. 1 cell usually ends up over 4.20V during the floating.

It might actually work if the difference is < 0.01V but lacks enough effectiveness at 0.05V. It would be worth trying the Astroblinky with a bunch of voltmeters to see if it was equally ineffective. The Accucell's cell readout revealed severe imbalance in a 2S & a 3S.

The mane cause of unbalancing seems to be the temperature differential between the cells. In high current applications, the inner cells get hotter than the outer cells & discharge faster. This causes the outer cells to discharge less & overcharge.

In low current applications in hot weather, the outer cells heat up from ambient air before the inner cell. This causes the inner cell to discharge less & overcharge.

In the 2S hand controller, body heat heats up 1 cell faster than the other. Ideally the current usage & enclosure would heat all the cells evenly.

The only way to manage the unbalancing is independently charging the cells instead of relying on a resistor network like modern chargers do. This would be very expensive, requiring independent MOSFETs.
Posted by Jack Crossfire | Oct 03, 2015 @ 09:40 PM | 7,207 Views
It was the longest any vehicle ever went on a single battery. Set it to 6mph. Had the headlights off. Speed was somewhat variable because of the low accuracy of the inverter. Not sure driving the ESC using I2C would make the speed more accurate, since the bottom line is the number of bits in the 8kHz waveform.

This achievement took 4Ah, or 90% of the remaining capacity in the last full discharge. It would have a hard time going any farther or any faster. A fresh 5Ah battery would probably go 17 miles.
Posted by Jack Crossfire | Oct 03, 2015 @ 01:41 PM | 6,972 Views
In the hardest test video, 2.9825% of the frames failed. Shadows are the hardest. Cloudy days are the easiest, even working quite well at 160x120. Higher resolution improved the results. 4k proved too slow to scan.

Normalization slightly degraded the results. Suspect the best results would come from manually setting the exposure & setting manually tweaked constants based on the exposure & path. The saturation is still manually set. However, the goal is capturing the most information, & the auto exposure is already choosing the exposure that captures the most information.

The key computational user is the chroma key step, with every other step manely free. A different chroma key needs to be applied for each color of the path. Then, the threshold needs to be manually tweaked until the path has the right shape.

Tried dynamically detecting the optimum threshold by applying many thresholds to the same image & measuring the average distance of every mask pixel from the mean mask pixel. The optimum path shape should always have a similar average distance of mask pixels from the center mask pixel. This was slow indeed, but always produced very tight masks. The problem was the threshold converged on a constant value & didn't change in any frames. The return on the investment of clock cycles was minimal.
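
A sketch of that threshold sweep. The expected spread is a hypothetical tuning constant & the distance image stands in for the per pixel chroma key distance:

import numpy as np

# Average distance of mask pixels from the mean mask pixel.
def mask_spread(mask):
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return float("inf")
    cy, cx = ys.mean(), xs.mean()
    return float(np.hypot(ys - cy, xs - cx).mean())

# Try many thresholds & keep the one whose mask spread is closest to
# the spread an actual path shape should have.
def best_threshold(distance_image, thresholds, expected_spread):
    best, best_error = None, float("inf")
    for t in thresholds:
        error = abs(mask_spread(distance_image < t) - expected_spread)
        if error < best_error:
            best, best_error = t, error
    return best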

Also tried finding the thresholds where the total number of masked pixels jumped a certain amount due to a sudden bleeding. This didn't produce consistent results.
Posted by Jack Crossfire | Oct 01, 2015 @ 11:08 PM | 7,163 Views
Testing continued to try to improve the algorithm & reduce the computation requirements. On some test videos, the algorithm successfully tracked over 99% of the frames. It was still a game of chance, with the threshold, search radius, maximum offset distance, & edge kernel size just happening to be the right values. More difficult lighting naturally degraded the results.

Camera noise contributed to variability even when the position didn't change. It would benefit from a high frame rate. Smaller frame size degraded results. The computational requirements are still way beyond what can be reasonably done on the vehicle.

Normalizing the YUV channels subjectively improved results. A false color image with the normalized channels has a priori information about the dynamic range which wasn't available when the camera took the image, but intuitively it should need the dynamic range of the color channels to be the same as reality.
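
A sketch of the per channel normalization, assuming the frame is already a YUV numpy array:

import numpy as np

# Stretch each YUV channel to the full 0..255 range before chroma keying.
def normalize_yuv(frame):
    out = np.empty_like(frame)
    for c in range(3):
        channel = frame[:, :, c].astype(np.float32)
        lo, hi = channel.min(), channel.max()
        if hi > lo:
            channel = (channel - lo) * (255.0 / (hi - lo))
        out[:, :, c] = channel.astype(np.uint8)
    return out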

It didn't work at all when either converted to RGB or using luminance only. It did best with all YUV channels.

It became possible to estimate the center of the path from the 2 sides. Tracking the center would still be more robust than tracking a side.
Posted by Jack Crossfire | Oct 01, 2015 @ 12:54 AM | 6,219 Views
Back at 200m intervals with the brushless motor, it took 2Ah on 3S to do this 6.2 mile drive, with ten 10mph sprints down & a steady 6.66mph drive back up, with headlights on. The ESC has proven less accurate than directly driving the H bridge, as expected. GPS rated the sprints at 10.5mph & the fastest sprint at 11mph. At 10mph downhill, the ESC errored high. At 6.66mph uphill, it was right on. Still, it could go 12 miles on 5Ah even at these demanding speeds. The current increases as the voltage decreases, so current usage needs to be padded.

Heartrate was 176, dropping to 173 as the intervals wore on.