Jack Crossfire's blog
Posted by Jack Crossfire | Nov 21, 2015 @ 07:52 PM | 1,419 Views
Funny watching 1 generation of Linux distributions get rid of core files years ago, only to have the next generation replace them years later with the .xsession-errors file & encounter the same old problem of crash logs filling the entire drive.

Within a matter of days of a video player spitting out h264 debug statements, the .xsession-errors file grew to fill all 750GB of the SSD it was stored on before Firefox started crashing more than usual. After the usual 50 comments on the multibillion dollar Stack sites going absolutely nowhere, the solution, at least on Ubuntu 14.04.2, was to change the .xsession-errors filename in /etc/X11/Xsession to /dev/null.
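On 14.04 the filename is set by the ERRFILE variable near the top of /etc/X11/Xsession, so the change amounts to something like the following; the exact line & surrounding context may differ by release:

ERRFILE=/dev/null    # was: ERRFILE=$HOME/.xsession-errors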

Log files have always been a tradition in UNIX. There are dozens more in /var/log. Every night, the latest ones are compressed & stored in a rotating history of log files. It wasn't a problem when hard drives could be rewritten forever & were slow, but modern SSDs can be filled up in a couple seconds & only take 1000 write cycles before they have to be thrown away. A particular debug statement which prints something for every pixel of an image could fry an SSD instantly.
Posted by Jack Crossfire | Nov 21, 2015 @ 12:41 AM | 909 Views
Just by selecting the right luminance, shadows can be detected. It might even work in variable lighting. It wouldn't be able to fill in missing areas of the path, since the shadow borders can't be detected. Its main use would be throwing out bad frames. The algorithm would be to detect when the edge of the path came within a certain distance of the shadow pixels. If enough shadow pixels were within range, the frame would be discarded. If a shadow crossed the edge, it would be discarded. If the color key was all shaded, it would be discarded.
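A minimal sketch of that frame rejection, assuming an 8 bit luminance buffer & a hypothetical path mask from the chroma keyer. The threshold, radius & count are placeholders, not tuned values:

// Reject frames where shadow pixels crowd the edge of the detected path.
// luma: grayscale image. path_mask: 1 where the chroma keyer saw path.
#define SHADOW_LUMA 40      // placeholder "this pixel is a shadow" threshold
#define EDGE_RADIUS 4       // placeholder distance in pixels
#define MAX_BAD     200     // placeholder count before discarding the frame

int frame_is_usable(const unsigned char *luma, const unsigned char *path_mask,
    int w, int h)
{
    int bad = 0;
    for(int y = EDGE_RADIUS; y < h - EDGE_RADIUS; y++)
        for(int x = EDGE_RADIUS; x < w - EDGE_RADIUS; x++)
        {
            if(luma[y * w + x] > SHADOW_LUMA) continue;   // not a shadow pixel
            // does the path edge pass within EDGE_RADIUS of this shadow pixel?
            for(int dy = -EDGE_RADIUS; dy <= EDGE_RADIUS; dy++)
                for(int dx = -EDGE_RADIUS; dx <= EDGE_RADIUS; dx++)
                    if(path_mask[(y + dy) * w + x + dx] != path_mask[y * w + x])
                    {
                        bad++;
                        goto next_pixel;
                    }
next_pixel:     ;
        }
    return bad < MAX_BAD;   // 0 means discard the frame
}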
Posted by Jack Crossfire | Nov 18, 2015 @ 12:24 AM | 1,049 Views
The best shotgun chroma keying algorithm still had a hard time isolating the path even in the right lighting, so there was no conviction that making a new board would work. The sun is so low in winter, it makes a lot more shadows, revealing more problems in machine vision. It was back to more notes on robot dogs.

In the old days, it was a tradition to stump Santa by asking for what wouldn't exist for another 30 years: LED TV screens, quad copters, watch TVs. Now the tradition continues. Despite Boston Dynamics introducing a robot that could run 3 years ago, no consumer version of one has appeared. 3 years is an eternity in modern terms.

A running robot dog is still a very difficult problem that preoccupies grad students for years, with no-one having miniaturized one. The Boston Dynamics robots used rotational hydraulics so powerful, they could hop. Their 6 year old consumer sized robot was limited to very slow movement by the speed of the servos.

It became quite clear the spring aided Swiss robot can't turn or support the weight of a battery. If a legged running robot is required, the best alternative would use 2 fans blowing air to support most of the weight & stabilize it. The legs would just provide sideways balance & forward motion. It would have really short battery life. Running robot dogs are still firmly in the imagination.
Posted by Jack Crossfire | Nov 17, 2015 @ 12:59 AM | 1,009 Views
The apartment had the required components to augment the car navigation: an MPU6050 & AK8975. Not sure what the best parts are nowadays, but a 6 DOF IMU instead of a 9 DOF is now ideal, since it's known the compass needs to be far from the power system.

The truck has done such a good job in its current form, as a training tool, the decision still hasn't been made to rebuild it to augment the navigation. There's still a desire to make a 2nd vehicle for the day job, since most of the time is now spent there. Another pair of radios & all the parts required to upgrade it are in the apartment.

A Losi Micro T would be easiest to fit in the office, but probably too small to negotiate the curbs. Prices are outrageous in 1/24 size. If only quad copters had enough range to do the job. They would need a tethered battery backpack.

Another idea was a robotic dog. Scale model RC cars aren't a big thing, if they exist at all. There's no scale model lunar rover. Scale model flying machines have long included ornithopters. If someone is fascinated enough with biological movement to pick an ornithopter over something more practical, surely a biological ground vehicle would be worthwhile.

There are no affordable robotic dogs which can go faster than 4mph. The last Boston Dynamics models before their buyout relied on electric hydraulic pumps to maintain a reservoir of static pressure fluid. The constantly pressurized reservoir could be called upon at any moment to provide instant, fast & powerful force. It was the only way enough force at enough speed could be produced to generate the required leg movement.

The small robotic dog relies on passive springs to augment servos. It uses 2 servos per leg, pulling cables which must be tightened. It's passively stable, driven by open loop servo commands, & goes 3mph.

There's certainly potential for scaling it up. Replicating the mechanical parts & making them strong enough to go 10mph would be an ordeal.
Posted by Jack Crossfire | Nov 15, 2015 @ 02:19 AM | 1,740 Views
There is an algorithm which generates a basket of possible paths, using a large subset of color keys. For any combination of lighting & shadows, it's guaranteed to find at least 1 frame which has the closest approximation to the true path. This reduces the problem to just finding which approximation is the best. In this case, prior knowledge of the absolute headings of the true path & the vehicle could be the only pieces required to close the loop. Once the ideal path from chroma keying is known, the edges can be found.

Getting a vehicle heading accurate enough to do the job is still tricky business, but if it worked, no further work would be required. It would start with a test program that superimposed heading on video. The compass would be attached to the Odroid.

A purely vision based filter is still ideal. An attempt to rank images based on edge detection failed. It calculated the best triangle in each frame, picking the frame with the highest number of masked pixels in the triangle. Another idea is to pick the frame where the masked pixels line up with the triangle with the least noise.
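A sketch of that ranking step, assuming the triangle is given as 3 corners & a hypothetical count_inside() scores each candidate frame. The names & the sign test are illustrative, not the code that failed:

// Score a frame by counting masked path pixels inside a candidate triangle.
// mask: 1 where the chroma keyer saw path. (x0,y0) (x1,y1) (x2,y2): corners.
static int side(int x, int y, int x1, int y1, int x2, int y2)
{
    return (x - x2) * (y1 - y2) - (x1 - x2) * (y - y2);
}

int count_inside(const unsigned char *mask, int w, int h,
    int x0, int y0, int x1, int y1, int x2, int y2)
{
    int score = 0;
    for(int y = 0; y < h; y++)
        for(int x = 0; x < w; x++)
        {
            if(!mask[y * w + x]) continue;
            int d0 = side(x, y, x0, y0, x1, y1);
            int d1 = side(x, y, x1, y1, x2, y2);
            int d2 = side(x, y, x2, y2, x0, y0);
            int has_neg = d0 < 0 || d1 < 0 || d2 < 0;
            int has_pos = d0 > 0 || d1 > 0 || d2 > 0;
            if(!(has_neg && has_pos)) score++;   // inside or on an edge
        }
    return score;   // the frame with the highest score wins
}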
Posted by Jack Crossfire | Nov 13, 2015 @ 12:23 AM | 1,630 Views
Started a new project to try to collect every video from the famous flight director until the Goog shuts down the party.

Wayne Hale Mars Society Speech (12 min 56 sec)

Wayne Hale Von Braun Symposium speech (23 min 56 sec)
Posted by Jack Crossfire | Nov 08, 2015 @ 10:53 PM | 1,798 Views
The cheap fisheye lens has its widest view in the corners & narrowest view in the center. Given this limitation, you want the widest part of the path on an edge & the narrowest part of the path in the center. It should expand the narrowest part & shrink the widest part.

You can't point the camera diagonally to increase the field of view of the path. The narrowest part would have to be down the exact center to fit in a corner, which it never is.

Pointing the camera down to get more of the path in view for finding color keys doesn't work. This puts the narrowest part in the widest field of view near the top of the frame, making it harder to find where the vanishing point is. It puts the widest part in the narrowest field of view near the center of the frame, reducing the amount of path edges visible.

Nevertheless, curiosity about the effects of camera pointing prevailed. It was while driving 10 miles to gather footage that the intuitive effects of the fisheye lens became clear. Tried aiming the camera down, but the suspension still pitched up on its own. There's no way to precisely aim it.

It yielded the 1st footage in cloudy weather. Machine vision was pretty awful, due to the lack of contrast.
Posted by Jack Crossfire | Nov 08, 2015 @ 12:47 AM | 1,338 Views
An unannounced Trident II missile test from a submarine no-one knew was there proved a bankrupt Navy can still shock & awe. Undoubtedly, it was meant to shock Putin as much as the book of face.
Posted by Jack Crossfire | Nov 04, 2015 @ 11:04 PM | 1,319 Views
A short drive with the camera pointed at 2 o'clock revealed the path would still occupy most of the frame with the most consistent color. There wouldn't be a reliable negative mask from looking at the sides. There was still an idea of throwing out colors which fall outside hard coded boundaries & throwing out frames which don't have a good lock on the path.
Posted by Jack Crossfire | Nov 01, 2015 @ 10:49 PM | 1,243 Views
It would be a shame to lose $100 on a computer to run a machine vision algorithm that didn't work. Tried out a more complete implementation of the Goog algorithm. It took color keys from a rectangle spanning several frames in time, for 48000 color keys. This was quite robust, but ground down to 1 frame every 4 seconds on the quad core 3.8GHz. If it ever got off the path, it was cactus for a while until the bad color fell off its history.

If the truck simply drove a short distance & stopped for several seconds to compute a new frame, it would be a victory. Another idea was to get a wider horizontal field of view, wide enough to make the path a small fraction of the view, & take color keys from areas known not to be the path. Lacking any fisheye lens for a webcam, it would use 2 cams pointing at 10 & 2 o'clock.
Posted by Jack Crossfire | Oct 31, 2015 @ 04:22 AM | 1,509 Views
It became quite clear it was time to put away the autonomous driving idea again. The shadow problem was insurmountable. It needs prior knowledge of what color keys reveal the path. It turns out, if you exhaustively try every color in the image, there is always a combination of color keys which yields the path, with high immunity to shadows. The trick is scoring every path revealed by a different combination of colors. Perhaps the possible paths can be limited to a narrow range in the center of the frame.

The next step is probably giving it a magnetic heading with a known direction of the path to follow. At least on a straight path, it would know where in the frame the vanishing point was, reducing the problem to just finding the edges. With a known vanishing point, it could get the color keys from a line between the vanishing point & the bottom center of the frame. Finding the edges becomes a lot easier. It also needs to know the horizon line.
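A sketch of that sampling step, assuming RGB frames & a hypothetical add_color_key() hook into the chroma keyer. The vanishing point coordinates would come from the heading; everything else here is a placeholder:

#include <stdlib.h>   // abs

void add_color_key(unsigned char r, unsigned char g, unsigned char b);  // hypothetical hook

// Take color keys along the line from the known vanishing point (vx, vy)
// to the bottom center of the frame.
void sample_keys(const unsigned char *rgb, int w, int h, int vx, int vy)
{
    int x1 = w / 2, y1 = h - 1;                     // bottom center
    int steps = abs(y1 - vy) > abs(x1 - vx) ? abs(y1 - vy) : abs(x1 - vx);
    if(steps < 1) steps = 1;
    for(int i = 0; i <= steps; i++)
    {
        int x = vx + (x1 - vx) * i / steps;
        int y = vy + (y1 - vy) * i / steps;
        const unsigned char *p = &rgb[(y * w + x) * 3];
        add_color_key(p[0], p[1], p[2]);
    }
}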
Posted by Jack Crossfire | Oct 28, 2015 @ 10:50 PM | 880 Views
Autonomous driving day 2 (4 min 9 sec)

Activated the lane following code, extending the successful runs but revealing all new problems. Lane following was a simple 2nd proportional feedback added to the vanishing point feedback. There was no rate damping. This kept it going down the path, but it still drifted anywhere from the center to the right side. This also revealed just how bad machine vision was at handling shadows.
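In sketch form, the loop at this point was just 2 proportional terms & a clamp. The gains & sign conventions below are placeholders, not the values in the truck:

// 2 proportional terms, no rate damping or lead compensation.
// vanish_x: vanishing point error in pixels from frame center.
// lane_offset: lateral offset from the path center in pixels.
float steering_command(float vanish_x, float lane_offset)
{
    const float KP_VANISH = 0.01f;    // placeholder gain
    const float KP_LANE = 0.005f;     // placeholder gain
    float steering = KP_VANISH * vanish_x + KP_LANE * lane_offset;
    if(steering > 1.0f) steering = 1.0f;      // clamp to full lock
    if(steering < -1.0f) steering = -1.0f;
    return steering;                          // -1..1 steering command
}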

It suffered a positive feedback not evident in simulations. A shadow would send it off course, which worsened the wrongly detected path, while the simulation always drove past the wrongly detected path. This was a deal breaker.

Did some 10mph runs with it. This revealed an underdamped oscillation. It didn't take long for it to stray off the path, even in the best vision conditions. Only a rare coincidence kept it on the path. It seemed to oscillate around the lane keeping feedback.

The autonomous driving ended after a 10mph run, when it flipped over, smashed the camera & crashed the computer. So even ground vehicles experience crash damage. The computer lost the last 2 minutes of video.

It's surprising just how accurate the machine vision has to be, compared to what intuitively should be enough. If only there was a way to determine how good the current data was & construct longer term feedback using just the good data. The computer adjusts heading 30 times a second, relying on a large number of data points to drown out the outliers & reveal a true heading. A human only needs to adjust heading every few seconds, with a single very good heading.
Posted by Jack Crossfire | Oct 28, 2015 @ 12:30 AM | 1,373 Views
Compared a simple implementation of temporally selected color keys with the tried & true spatially selected color keys. It took over 200 frames to get enough temporal color keys to see the path. The number could be reduced by throwing out overlapping colors & skipping frames, but for now, the rough implementation gave a good idea of the limitations of each method.

What probably happened was the exposure setting changed slightly between each frame, so the colors of past frames were a lousy match for the current frame. Another thing which can happen is the angle of reflected light changes as the surface moves closer, so it's not exactly the same color with decreasing distance. Of course, without a large buffer of past colors, the temporal algorithm starts dropping areas of the path as it forgets important colors.
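A sketch of the temporal bookkeeping, assuming quantized RGB keys & a fixed size history so old colors eventually fall off. None of these numbers are the ones that were tested:

// Rolling history of quantized color keys taken from the same rectangle
// in every frame. Overlapping (duplicate) colors are thrown out.
#define HISTORY 4096
#define QUANT   16              // quantize 8 bit channels to 16 levels

static unsigned short history[HISTORY];
static int head = 0;
static int total = 0;

void push_key(unsigned char r, unsigned char g, unsigned char b)
{
    unsigned short key = (r / QUANT) * 256 + (g / QUANT) * 16 + b / QUANT;
    for(int i = 0; i < total; i++)
        if(history[i] == key) return;        // overlapping color: skip it
    history[head] = key;                     // overwrite the oldest entry
    head = (head + 1) % HISTORY;
    if(total < HISTORY) total++;
}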

The idea for spatially selecting color keys was a single magic Summer moment which seems impossible to outdo. Not sure exactly where or when the idea appeared.
Posted by Jack Crossfire | Oct 27, 2015 @ 12:22 AM | 1,220 Views
Autonomous driving begins (3 min 56 sec)

As it was when copters 1st hovered on their own, it wasn't a sudden ability to assertively drive itself but an incremental journey towards being able to drive down the path for short distances. Recorded the webcam at 30fps. The SD card has enough room for 2 hours. The Odroid seems to have 2 hours of power from the 4Ah battery.

Quite nihilistic to see such a high frame rate come from such a cheap webcam, but it was made possible by the enormous computing power applied to it, compared to what was around in its day.

By the end of the session, it was assertively steering towards the vanishing point without any damping or lead compensation. It had a hard time with shadows. It couldn't stay laterally on the path. Software for lateral control is ready to go, during the next time off.

It slowly emerged that the super chroma key was similar to the chroma keying Goog had in 2005. The Goog used chroma keying in its 1st autonomous car. Like super chroma keying, it applied many different color keys to the same image. Unlike super chroma keying, it took the color keys from the same coordinates in many points in time, rather than taking many different coordinates in the current frame. It had to drive a certain distance to gather all the possible colors in the path, but it didn't gather colors from off the path.
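The difference comes down to where the sampling loop runs. A sketch of the spatial (super chroma key) side, assuming a grid of sample points in the lower middle of the current frame; the grid spacing & margins are placeholders:

void add_color_key(unsigned char r, unsigned char g, unsigned char b);  // hypothetical hook

// Spatial key selection: many coordinates in the current frame, 1 moment in time.
// The Goog style temporal variant samples 1 small rectangle in every past frame instead.
void sample_spatial_keys(const unsigned char *rgb, int w, int h)
{
    for(int y = h * 3 / 4; y < h; y += 4)              // lower quarter of the frame
        for(int x = w / 4; x < w * 3 / 4; x += 4)      // middle half, most likely path
        {
            const unsigned char *p = &rgb[(y * w + x) * 3];
            add_color_key(p[0], p[1], p[2]);
        }
}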

Like copter autopilots 10 years ago, there's no free cookbook for making an autonomous car, so every step takes a long period of discovery. It took a lot of experimenting with new edge detecting methods to discover old fashioned chroma keying was the best method. It would be impossible for 1 person to make it handle every situation, handle turns at 10mph, or avoid obstacles.

Like autonomous copters, an accepted way of doing it will eventually become an easy download. The accepted method is going to take a lot more sensors. The key sensor is LIDAR while with copters it was GPS. There's still something novel in being the 1st & doing it with just a camera.
Posted by Jack Crossfire | Oct 24, 2015 @ 10:08 PM | 1,143 Views
It was another day of rearranging components & debugging steering before any attempt at machine vision could happen. The massive steering changes required for autopilot were a long way from working. Had to move the Odroid to make room for its cables. Captured some video. Added a variable frame rate for recording the input video. Wifi was much more reliable from the macbook than the phone. Couldn't connect long enough to start the program from the phone. Once started, it didn't crash. Managed to access the STM32 just enough to do the required maintenance. Rick Hunter never had to deal with component accessibility. Such is the difference between Japanese cartoons & reality.
Posted by Jack Crossfire | Oct 24, 2015 @ 02:49 AM | 1,573 Views
After some fabrication & debugging, the 'droid was sending geometry to the autopilot & the autopilot was successfully taking configuration parameters from the phone to tune the path following. Tuning the vision algorithm would involve very difficult text editing over the unreliable wifi.

Despite every effort, there was no way to allow access to the STM32 without a major teardown. Only a serial port was extended. The vision algorithm got a slight improvement by converting the geometry detection to a polar line equation & using logarithmic searching.
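The polar form in question is the usual rho = x*cos(theta) + y*sin(theta). A sketch of scoring a candidate line against the path mask & narrowing theta with a coarse to fine sweep, which is where the logarithmic saving comes from. Step sizes & the pixel tolerance are placeholders:

#include <math.h>

// Count masked pixels lying within 1 pixel of the line rho = x*cos(t) + y*sin(t).
static int score_line(const unsigned char *mask, int w, int h, float rho, float theta)
{
    int hits = 0;
    float c = cosf(theta), s = sinf(theta);
    for(int y = 0; y < h; y++)
        for(int x = 0; x < w; x++)
            if(mask[y * w + x] && fabsf(x * c + y * s - rho) < 1.0f) hits++;
    return hits;
}

// Halve the theta step each pass instead of scanning every angle.
float find_theta(const unsigned char *mask, int w, int h, float rho)
{
    const float PI = 3.14159265f;
    float best = 0, lo = 0, hi = PI;
    for(float step = (hi - lo) / 8; step > 0.001f; step /= 2)
    {
        int best_score = -1;
        for(float t = lo; t <= hi; t += step)
        {
            int score = score_line(mask, w, h, rho, t);
            if(score > best_score) { best_score = score; best = t; }
        }
        lo = best - step;                    // narrow the bracket around the winner
        hi = best + step;
    }
    return best;
}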

Finally, there was a desire to record raw video from the camera, to reproduce the path detection offline. After struggling for many years to record 720x480 TV on dual Athlons, it was quite humbling to find the Odroid could record 640x480 in 90% quality JPEG at 30fps, in a single thread, while simultaneously performing path detection. The path detection still can't go above 160x120 without a massive slowdown.

Once fixed to the car, the Odroid has proven much more reliable than the PI. It wasn't reliable sitting on the bench because its 0201 caps were quite sensitive. It can stay idle forever & hasn't had any issues in extended machine vision runs using 300% CPU.

With the amount of money & time invested, there's every hope path following will work, but it probably won't.
Posted by Jack Crossfire | Oct 21, 2015 @ 09:06 PM | 1,607 Views
It came & went. Nothing happened. A lot of us were expecting someone from 1985 to show up in a flying car.
Posted by Jack Crossfire | Oct 18, 2015 @ 11:55 PM | 118,277 Views
After weeks of baby steps, the Odroid finally connected to the phone as an access point. It was just as unreliable as the Pi. This didn't involve a kernel recompile, since trying to compile the kernel failed.

The old RTL8192 wifi dongle once again reappeared for the access point role. This time, one must download the mysterious hostapd_realtek.tar from somewhere. This contains a mysterious version of hostapd which Realtek modified to actually work. Plugging in the dongle got the right kernel module, 8192cu, to load.

The hostapd command was:

/root/hostapd /root/rtl_minimal.conf&

The dhcp command was:

The hostapd config file was:

# wlan interface. Check ifconfig to find what's your interface

# Network SSID (Your network name)

# Channel to be used! preferred 6 or 11

# Your network Password

# Only change below if you are sure of what you are doing

Key to getting hostapd to work was not using encryption. Commenting out wpa=2 disabled it. Enabling encryption caused dhcp to fail. Another requirement was deleting /usr/sbin/NetworkManager. There was no other way to disable NetworkManager than removing the program.

The dnsmasq config file was /etc/dnsmasq.conf
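The actual values didn't survive the copy & paste, so purely for reference, a plausible minimal pair of configs & dhcp command might look like the following. The interface name, SSID, channel, driver line & address range are all assumptions, & wpa=2 is commented out as described above:

# /root/rtl_minimal.conf
interface=wlan0
driver=rtl871xdrv
ssid=cam_align
hw_mode=g
channel=6
# encryption disabled:
#wpa=2
#wpa_passphrase=changeme

# /etc/dnsmasq.conf
interface=wlan0
dhcp-range=192.168.5.2,192.168.5.20,12h

The dhcp side would then just be a static address on the interface & dnsmasq reading that file:

ifconfig wlan0 192.168.5.1 netmask 255.255.255.0
dnsmasq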



Since the access point's sole purpose was aligning the camera, there was no need for encryption.
Posted by Jack Crossfire | Oct 18, 2015 @ 04:20 PM | 116,856 Views
So the Odroid has a 2nd UART on its 30 pin header, the STM32 board has another unused UART, but it isn't exposed on any pads & no-one was in the mood to make a new board. Back to SPI it was. Based on goog searches, getting SPI to work on the Odroid is a big deal. It wasn't expected to be as easy as the PI because of the small userbase & expectations didn't disappoint. Where the PI has a gootube video on every minute subject, in plain English, the Odroid documentation consists solely of gibberish like "download code" or "just change the dtb". The PI would officially come nowhere close to the required processing power.

Finally, ended up bit banging the SPI using simple writes to the /proc filesystem to change GPIOs. This achieved 3.33kbit/sec while all 4 CPUs were processing video at 50%. Since each bit contains 3 sleep statements, it's a sign the kernel was switching tasks at 10kHz. Without sleep statements, it went at 38kbit/sec with spurious gaps of 100us. 100us corresponds to a 10kHz frequency.
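A sketch of what 1 bit of that bit banging looks like, assuming sysfs style GPIO value files & placeholder pin numbers; the real code & pin numbers differ:

#include <stdio.h>
#include <unistd.h>

// Hypothetical paths; the real GPIO numbers depend on the Odroid header pinout.
#define MOSI_PATH "/sys/class/gpio/gpio199/value"
#define SCK_PATH  "/sys/class/gpio/gpio200/value"

static void gpio_write(const char *path, int value)
{
    FILE *f = fopen(path, "w");
    if(!f) return;
    fputc(value ? '1' : '0', f);
    fclose(f);
}

// Clock out a single bit. The 3 sleeps per bit are what pinned the rate
// to the kernel's roughly 10kHz task switching.
static void send_bit(int bit)
{
    gpio_write(MOSI_PATH, bit);
    usleep(50);                 // data setup
    gpio_write(SCK_PATH, 1);
    usleep(50);                 // clock high
    gpio_write(SCK_PATH, 0);
    usleep(50);                 // clock low
}

void send_byte(unsigned char c)
{
    for(int i = 7; i >= 0; i--) send_bit((c >> i) & 1);
}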

38kbit/sec is too fast for the STM32, which also uses software defined SPI. 3.33kbit/sec can send only 13 bytes of information per frame at 30fps. All problems would be solved by making a new STM32 board with another UART. Just this small amount of bit banging made a 1% increase in CPU usage on all of the Cortex-A7's.

Finally, the GPIO voltage was only 1.8V, so level conversion transistors had to be soldered to bring it up to 3.3V. The main problem is now fixing the Odroid to the vehicle.
Posted by Jack Crossfire | Oct 10, 2015 @ 01:22 AM | 2,290 Views
With all its dependencies resolved, the Odroid finally underwent its most challenging task ever documented. The vision program compiled & processed the 160x120 test footage at slightly over 30fps, way below the predicted 72fps from nbench results. Fortunately, at 160x120, it had no problem consuming the webcam's maximum frame rate. The latency would be 4 frame periods at whatever frame rate it ran, since it used a pipeline to parallelize an algorithm that was otherwise not parallelizable.
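The pipeline idea in sketch form: each stage runs in its own thread & hands frames to the next through a 1 slot mailbox, so throughput matches the camera while each result lags by 1 frame per stage. The 4 stage split & these types are assumptions, not the real stage boundaries in the vision code:

#include <pthread.h>
#include <stddef.h>

// 1 slot mailbox between pipeline stages. Initialize with full = 0 &
// PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER.
typedef struct
{
    void *frame;
    int full;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} mailbox_t;

void mailbox_put(mailbox_t *m, void *frame)
{
    pthread_mutex_lock(&m->lock);
    while(m->full) pthread_cond_wait(&m->cond, &m->lock);
    m->frame = frame;
    m->full = 1;
    pthread_cond_broadcast(&m->cond);
    pthread_mutex_unlock(&m->lock);
}

void* mailbox_get(mailbox_t *m)
{
    pthread_mutex_lock(&m->lock);
    while(!m->full) pthread_cond_wait(&m->cond, &m->lock);
    void *frame = m->frame;
    m->full = 0;
    pthread_cond_broadcast(&m->cond);
    pthread_mutex_unlock(&m->lock);
    return frame;
}

// 1 stage: pull a frame, run its slice of the algorithm, pass the frame on.
// With 4 such stages on 4 cores, throughput is the camera rate but every
// result arrives 4 frame periods after capture.
typedef struct
{
    mailbox_t *in;
    mailbox_t *out;
    void (*work)(void *frame);
} stage_t;

void* stage_thread(void *arg)
{
    stage_t *s = (stage_t*)arg;
    while(1)
    {
        void *frame = mailbox_get(s->in);
        s->work(frame);
        mailbox_put(s->out, frame);
    }
    return NULL;
}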

It sucked 2.2A at 7.4V with the fan whirring at full speed & the webcam on. The Turnigy converter got scorching hot, even converting this meager voltage down to 5V.

At 320x240, it was hopelessly slow again at 4fps from the webcam. The CPUs maxed out & used 2.2A at 7.4V. Idle current was 0.65A at 7.4V or 1A at 5V. There was no sign of any frequency scaling. The A7's were always at 1.4GHz & the A15's were always at 2GHz.

As predicted, it only used 4 CPUs at a time, no matter how many the vision program was configured to use. The kernel automatically allocated the 4 Cortex-A15's to the vision program, only occasionally using the Cortex-A7's. It was unknown whether 4 CPUs was the maximum of both A7's & A15's in use simultaneously, or whether it had to stop the A15's to use any A7.

CFLAGS were hard to come by, but the following seemed to work well:

-O3 -pipe -mcpu=cortex-a15 -mfloat-abi=hard -mfpu=neon-vfpv4
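A hypothetical compile line using them, with the source file name as a placeholder:

gcc -O3 -pipe -mcpu=cortex-a15 -mfloat-abi=hard -mfpu=neon-vfpv4 -o vision vision.c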

It was still an astounding step up, compared to what the PI or the Gumstix boards brought. It was of course the power of billions of premium contracts subsidizing phones that led to such huge gains in the Odroid's Samsung processor. The set top box market that led to the PI's Broadcom processor wasn't nearly lucrative enough to generate such powerful chips. Set top boxes were a race to the bottom while phones were a race to the top. Because cell phones are a vital business expense, they benefited from tax writeoffs & corporate budgets while set top boxes were squarely funded by what meager budgets consumers had after taxes.