Jack Crossfire's blog
Posted by Jack Crossfire | Nov 26, 2015 @ 03:30 AM | 2,780 Views
It was the 1st new board in 11 months. The 1st truck board arrived on Dec 7, 2014. It had a lot of rework & a MOSFET for controlling the headlights. Hard to believe that old age has degraded memory so much, but I had no memory of the 1st board or even reworking it. The 2nd board arrived on Jan 18, 2015. It had no headlight control.

Memory of board etching times faded. The ideal times are exposure: 25 min, etching: 40 min. Left it etching for too long, which didn't kill anything, but it had a toner bubble which broke a trace. There's very little memory of working on boards in the last 15 years, despite all the time it took.

The 3rd board has a UART for the Odroid instead of SPI, I2C for a full IMU, a UART for GPS, but probably not enough power for GPS above 1Hz. A board for a 2nd vehicle's hand controller also emerged. Knowing whether that 2nd vehicle will ever be affordable requires checking the current rent.

Whether an IMU heading is accurate enough to keep it on a path is the 1st task. It will have some knowledge of the path heading from manual input & a rough heading from the IMU. Later on, the path heading could come from GPS. The heading from the IMU can help find the path on the camera & provide a boundary for feedback headings. Knowledge of the path edges from the camera can offset errors in the IMU heading. The camera would have a very diminished role, compared to the IMU heading.
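The bounding idea above can be sketched as a simple clamp: the camera heading is only trusted within a window around the IMU heading. Function names and the ±15° limit are illustrative assumptions, not values from the build.

```python
def wrap180(angle):
    """Wrap an angle in degrees to the range [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def bounded_heading(imu_heading, vision_heading, max_dev=15.0):
    """Clamp the camera-derived heading to within max_dev degrees of
    the IMU heading, so a single bad vision frame can't command a
    wild turn.  All angles are in degrees."""
    error = wrap180(vision_heading - imu_heading)
    error = max(-max_dev, min(max_dev, error))
    return wrap180(imu_heading + error)
```

With this scheme the camera only trims the IMU estimate, which matches its diminished role.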
Posted by Jack Crossfire | Nov 25, 2015 @ 11:13 PM | 2,788 Views
Does anything still show under all your advertisements?
Posted by Jack Crossfire | Nov 21, 2015 @ 06:52 PM | 3,543 Views
Funny watching 1 generation of Linux distributors get rid of core files years ago only to have the next generation replace them years later with the .xsession-errors file & encounter the same old problem of crash logs filling the entire drive.

Now, within a matter of days of a video player spitting out h264 debug statements, the .xsession-errors file grew to fill all 750GB of the SSD it was stored on before Firefox started crashing more than usual. After the usual 50 comments on the multibillion dollar stack sites going absolutely nowhere, the solution, at least on Ubuntu 14.04.2, was to change the .xsession-errors filename in /etc/X11/Xsession to /dev/null.
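A hedged sketch of that fix, run against a copy so the real file is untouched. On Ubuntu 14.04 the session log path is set by the ERRFILE variable in /etc/X11/Xsession; pointing it at /dev/null stops the log from growing (the fallback line below is only there so the demo works on machines without that file):

```shell
# Work on a copy; edit /etc/X11/Xsession itself to apply for real.
cp /etc/X11/Xsession /tmp/Xsession.test 2>/dev/null ||
    echo 'ERRFILE=$HOME/.xsession-errors' > /tmp/Xsession.test
# Redirect the session error log to /dev/null.
sed -i 's|ERRFILE=.*|ERRFILE=/dev/null|' /tmp/Xsession.test
grep 'ERRFILE' /tmp/Xsession.test
```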

Log files have always been a tradition in UNIX. There are dozens more in /var/log. Every night, the latest ones are compressed & stored in a rotating history of log files. It wasn't a problem when hard drives could be rewritten forever & were slow, but modern SSDs can be filled up in a couple seconds & only take 1000 write cycles before they have to be thrown away. A particular debug statement which prints something for every pixel of an image could fry an SSD instantly.
Posted by Jack Crossfire | Nov 20, 2015 @ 11:41 PM | 2,977 Views
Just by selecting the right luminance, shadows can be detected. It might even work in variable lighting. It wouldn't be able to fill missing areas of the path, since the shadow borders can't be detected. Its main use would be throwing out bad frames. The algorithm would be to detect when the edge of the path came within a certain distance of the shadow pixels. If enough shadow pixels were within range, the frame would be discarded. If a shadow crossed the edge, it would be discarded. If the color key was all shaded, it would be discarded.
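The frame-rejection rule above can be sketched like this. The luminance threshold, margin, and rejection fraction are illustrative assumptions, not tuned values:

```python
import numpy as np

def reject_shadowed_frame(luma, path_edge_cols, shadow_thresh=60,
                          margin=8, max_near=0.02):
    """Return True if the frame should be discarded.
    luma: 2D uint8 luminance image.
    path_edge_cols: per-row column index of the detected path edge.
    A pixel darker than shadow_thresh counts as shadow; if too many
    shadow pixels fall within `margin` columns of the edge, the
    frame is thrown out."""
    shadow = luma < shadow_thresh
    near = 0
    for row, edge in enumerate(path_edge_cols):
        lo = max(0, edge - margin)
        hi = min(luma.shape[1], edge + margin + 1)
        near += int(shadow[row, lo:hi].sum())
    return near > max_near * luma.size
```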
Posted by Jack Crossfire | Nov 17, 2015 @ 11:24 PM | 3,111 Views
The best shotgun chroma keying algorithm still had a hard time isolating the path in the right lighting, so wasn't convinced making a new board would work. The sun is so low in winter, it makes a lot more shadows, revealing more problems in machine vision. It was back to more notes on robot dogs.

In the old days, it was a tradition to stump Santa by wanting what wouldn't exist for another 30 years: LED TV screens, quad copters, watch TVs. Now the tradition continues. Despite Boston Dynamics introducing a robot that could run 3 years ago, no consumer version of one has appeared. 3 years is an eternity in modern terms.

A running robot dog is still a very difficult problem that preoccupies grad students for years, with no-one having miniaturized one. The Boston Dynamics robots used rotational hydraulics so powerful, they could hop. Their 6 year old consumer sized robot was limited to very slow movement by the speed of the servos.

It became quite clear the spring aided Swiss robot can't turn or support the weight of a battery. If a legged running robot is required, the best alternative would use 2 fans blowing air to support most of the weight & stabilize it. The legs would just provide sideways balance & forward motion. It would have really short battery life. Running robot dogs are still firmly in the imagination.
Posted by Jack Crossfire | Nov 16, 2015 @ 11:59 PM | 3,120 Views
The apartment had the required components to augment the car navigation: an MPU6050 & AK8975. Not sure what the best parts are nowadays, but a 6 DOF IMU instead of a 9 is now ideal, since it's now known the compass needs to be far from the power system.

The truck has done such a good job in its current form, as a training tool, the decision still hasn't been made to rebuild it to augment the navigation. There's still a desire to make a 2nd vehicle for the day job, since most of the time is now spent there. Another pair of radios & all the parts required to upgrade it are in the apartment.

A Losi Micro T would be easiest to fit in the office, but probably too small to negotiate the curbs. Prices are outrageous in 1/24 size. If only quad copters had enough range to do the job. They would need a tethered battery backpack.

Another idea was a robotic dog. Scale model RC cars aren't a big thing, if they exist at all. There's no scale model lunar rover. Scale model flying machines have long included ornithopters. If someone is fascinated enough with biological movement to pick an ornithopter over something more practical, surely a biological ground vehicle would be worthwhile.

There are no affordable robotic dogs which can go faster than 4mph. The last Boston Dynamics models before their buyout relied on electric hydraulic pumps to maintain a reservoir of static pressure fluid. The constantly pressurized reservoir could be called upon at any moment to provide instant, fast & powerful force. It was the only way enough force at enough speed could be produced to generate the required leg movement.

The small robotic dog relies on passive springs to augment servos. It uses 2 servos per leg, pulling cables which must be tightened. It's passively stable, driven by open loop servo commands, & goes 3mph. http://biorob.epfl.ch/cheetah

There's certainly potential for scaling it up. Replicating the mechanical parts & making them strong enough to go 10mph would be an ordeal.
Posted by Jack Crossfire | Nov 15, 2015 @ 01:19 AM | 3,727 Views
There is an algorithm which generates a basket of possible paths, using a large subset of color keys. For any combination lighting & shadows, it's guaranteed to find at least 1 frame which has the closest approximation to the true path. This reduces the problem to just finding which approximation is the best. In this case, previous knowledge of the absolute heading of the true path & vehicle could be the only pieces required to close the loop. Once the ideal path from chroma keying is known, the edges can be found.
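A minimal sketch of the basket-and-score idea: build one mask per subset of color keys, then score each mask by how much of it falls where the path is expected. Grayscale pixels, the central scoring band, and the tolerance are assumptions for illustration; the real implementation would use full color and a heading-derived region:

```python
import numpy as np

def key_mask(frame, keys, tol=12):
    """Binary mask of pixels within tol of any color key."""
    mask = np.zeros(frame.shape, dtype=bool)
    for k in keys:
        mask |= np.abs(frame.astype(int) - k) <= tol
    return mask

def best_candidate(frame, key_subsets, center_frac=0.5):
    """Score each candidate mask by the fraction of its pixels that
    fall in a central band where the true path is expected, and
    return the index of the best key subset."""
    h, w = frame.shape
    lo = int(w * (1 - center_frac) / 2)
    hi = int(w * (1 + center_frac) / 2)
    scores = []
    for keys in key_subsets:
        m = key_mask(frame, keys)
        total = m.sum()
        scores.append(m[:, lo:hi].sum() / total if total else 0.0)
    return int(np.argmax(scores))
```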

Getting a vehicle heading accurate enough to do the job is still tricky business, but if it worked, no further work would be required. It would start with a test program that superimposed heading on video. The compass would be attached to the Odroid.

A purely vision based filter is still ideal. An attempt to rank images based on edge detection failed. It calculated the best triangle in each frame, picking the frame with the highest number of masked pixels in the triangle. Another idea is to pick the frame where the masked pixels line up with the triangle with the least noise.
Posted by Jack Crossfire | Nov 12, 2015 @ 11:23 PM | 3,849 Views
Started a new project to try to collect every video from the famous flight director until the Goog shuts down the party.

Wayne Hale Mars Society Speech (12 min 56 sec)

Wayne Hale Von Braun Symposium speech (23 min 56 sec)
Posted by Jack Crossfire | Nov 08, 2015 @ 09:53 PM | 4,344 Views
The cheap fisheye lens has its widest view in the corners & narrowest view in the center. Given this limitation, you want the widest part of the path on an edge & the narrowest part of the path in the center. It should expand the narrowest part & shrink the widest part.

You can't point the camera diagonally to increase the field of view of the path. The narrowest part would have to be down the exact center to fit in a corner, which it never is.

Pointing the camera down to get more of the path in view for finding color keys doesn't work. This puts the narrowest part in the widest field of view near the top of the frame. It makes it harder to find where the vanishing point is. It puts the widest part in the narrowest field of view near the center of the frame, reducing the amount of path edges visible.

Nevertheless, curiosity about the effects of camera pointing prevailed. It was while driving 10 miles to gather footage that the intuitive effects of the fisheye lens became clear. Tried aiming the camera down, but the suspension still pitched up on its own. There's no way to precisely aim it.

It yielded the 1st footage in cloudy weather. Machine vision was pretty awful, due to the lack of contrast.
Posted by Jack Crossfire | Nov 07, 2015 @ 11:47 PM | 4,153 Views
An unannounced Trident II missile test from a submarine no-one knew was there proved a bankrupt Navy can still shock & awe. Undoubtedly, it was meant to shock Putin as much as the book of face.
Posted by Jack Crossfire | Nov 04, 2015 @ 10:04 PM | 4,083 Views
A short drive with the camera pointed at 2 o'clock revealed the path would still occupy most of the frame with the most consistent color. There wouldn't be a reliable negative mask from looking at the sides. There was still an idea of throwing out colors which are outside hard coded boundaries & throwing out frames which don't have a good lock on the path.
Posted by Jack Crossfire | Nov 01, 2015 @ 09:49 PM | 4,075 Views
It's a shame to lose $100 on a computer to run a machine vision algorithm that didn't work. Tried out a more complete implementation of the Goog algorithm. It took color keys from a rectangle spanning several frames in time, for a total of 48000 color keys. This was quite robust, but ground down to 1 frame every 4 seconds on the quad core 3.8GHz. If it ever got off the path, it was cactus for a while until the bad color fell off its history.

If the truck simply drove a short distance & stopped for several seconds to compute a new frame, it would be a victory. Another idea came to get a wider horizontal field of view, wide enough to make the path a small fraction of the view. It would take color keys from areas known not to be the path. Lacking any fisheye lens for a webcam, it would use 2 cams pointing at 10 & 2 o'clock.
Posted by Jack Crossfire | Oct 31, 2015 @ 03:22 AM | 4,364 Views
It became quite clear it was time to put away the autonomous driving idea again. The shadow problem was insurmountable. It needs prior knowledge of what color keys reveal the path. It turns out, if you exhaustively try every color in the image, there is always a combination of color keys which yields the path, with high immunity to shadows. The trick is scoring every path revealed by a different combination of colors. Perhaps the possible paths can be limited to a narrow range in the center of the frame.

The next step is probably giving it a magnetic heading with a known direction of the path to follow. At least on a straight path, it would know where in the frame the vanishing point was, reducing the problem to just finding the edges. With a known vanishing point, it could get the color keys from a line between the vanishing point & the bottom center of the frame. Finding the edges becomes a lot easier. It also needs to know the horizon line.
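Sampling color keys along the line from the vanishing point to the bottom center could look like this. The sample count and index handling are illustrative assumptions:

```python
import numpy as np

def line_color_keys(frame, vanish_x, vanish_y, n=32):
    """Sample color keys along the line from the vanishing point
    (vanish_x, vanish_y) to the bottom center of the frame, which
    should lie on the path if the heading estimate is right."""
    h, w = frame.shape[:2]
    xs = np.linspace(vanish_x, w // 2, n).round().astype(int)
    ys = np.linspace(vanish_y, h - 1, n).round().astype(int)
    xs = np.clip(xs, 0, w - 1)
    ys = np.clip(ys, 0, h - 1)
    return frame[ys, xs]
```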
Posted by Jack Crossfire | Oct 28, 2015 @ 09:50 PM | 3,767 Views
Autonomous driving day 2 (4 min 9 sec)

Activated the lane following code, extending the successful runs but revealing all new problems. Lane following was a simple 2nd proportional feedback added to the vanishing point feedback. There was no rate damping. This kept it going down the path, but it still drifted anywhere from the center to the right side. This also revealed just how bad machine vision was at handling shadows.
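The feedback described above reduces to two proportional terms and nothing else. Gains are hypothetical; the absence of a rate (derivative) term is deliberate in the sketch, since that is what the post describes:

```python
def steering_command(vp_error, lane_error, kp_vp=1.0, kp_lane=0.4):
    """Sketch of the loop: one proportional term steering toward the
    vanishing point, plus a 2nd proportional term for lane position.
    No rate damping, consistent with the oscillation seen at 10mph."""
    return kp_vp * vp_error + kp_lane * lane_error
```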

It suffered a positive feedback not evident in simulations. A shadow would send it off course, which worsened the wrongly detected path, while the simulation always drove past the wrongly detected path. This was a deal breaker.

Did some 10mph runs with it. This revealed an underdamped oscillation. It didn't take long for it to stray off the path, even in the best vision conditions. Only a rare coincidence kept it on the path. It seemed to oscillate around the lane keeping feedback.

The autonomous driving ended after a 10mph run, when it flipped over, smashed the camera & crashed the computer. So even ground vehicles experience crash damage. The computer lost the last 2 minutes of video.

It's surprising just how accurate the machine vision has to be, compared to what intuitively should be enough. If only there was a way to determine how good the current data was & construct longer term feedback using just the good data. The computer adjusts heading 30 times a second, relying on a large number of data points to drown out the outliers & reveal a true heading. A human only needs to adjust heading every few seconds, with a single very good heading.
Posted by Jack Crossfire | Oct 27, 2015 @ 11:30 PM | 4,279 Views
Compared a simple implementation of temporally selected color keys with the tried & true spatially selected color keys. It took over 200 frames to get enough temporal color keys to see the path. The number could be reduced by throwing out overlapping colors & skipping frames, but for now, the rough implementation gave a good idea of the limitations of each method.

What probably happened was the exposure setting changed slightly between each frame, so the colors of past frames were a lousy match for the current frame. Another thing which can happen is the angle of reflected light changes as the surface moves closer, so it's not exactly the same color with decreasing distance. Of course, without a large buffer of past colors, the temporal algorithm starts dropping areas of the path as it forgets important colors.

The idea for spatially selecting color keys was a single magic Summer moment which seems impossible to outdo. Not sure exactly where or when the idea appeared.
Posted by Jack Crossfire | Oct 26, 2015 @ 11:22 PM | 4,115 Views
Autonomous driving begins (3 min 56 sec)

As it was when copters 1st hovered on their own, it wasn't a sudden ability to assertively drive itself but an incremental journey towards being able to drive down the path for short distances. Recorded the webcam at 30fps. The SD card has enough room for 2 hours. The Odroid seems to have 2 hours of power from the 4Ah battery.

Quite nihilistic to see such a high frame rate come from such a cheap webcam, but it was made possible by the enormous computing power applied to it, compared to what was around in its day.

By the end of the session, it was assertively steering towards the vanishing point without any damping or lead compensation. It had a hard time with shadows. It couldn't stay laterally on the path. Software for lateral control is ready to go, during the next time off.

It slowly emerged that the super chroma key was similar to the chroma keying Goog had in 2005. The Goog used chroma keying in its 1st autonomous car. Like super chroma keying, it applied many different color keys to the same image. Unlike super chroma keying, it took the color keys from the same coordinates in many points in time, rather than taking many different coordinates in the current frame. It had to drive a certain distance to gather all the possible colors in the path, but it didn't gather colors from off the path.
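The temporal scheme described above can be sketched as a rolling history of colors sampled at fixed coordinates. The sample points, history length, and tolerance are assumptions for illustration:

```python
from collections import deque
import numpy as np

class TemporalKeys:
    """Sketch of temporal color-key selection: sample the same fixed
    coordinates (assumed to lie on the path, e.g. just ahead of the
    bumper) in every frame and keep a rolling history of the colors
    seen there.  Old colors age out of the deque, which is why a bad
    color eventually falls off the history."""
    def __init__(self, coords, history=200):
        self.coords = coords            # list of (row, col) samples
        self.keys = deque(maxlen=history)

    def update(self, frame):
        for r, c in self.coords:
            self.keys.append(int(frame[r, c]))

    def mask(self, frame, tol=10):
        m = np.zeros(frame.shape, dtype=bool)
        for k in set(self.keys):
            m |= np.abs(frame.astype(int) - k) <= tol
        return m
```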

Like copter autopilots 10 years ago, there's no free cookbook for making an autonomous car, so every step takes a long period of discovery. It took a lot of experimenting with new edge detecting methods to discover old fashioned chroma keying was the best method. It would be impossible for 1 person to make it handle every situation, handle turns at 10mph, or avoid obstacles.

Like autonomous copters, an accepted way of doing it will eventually become an easy download. The accepted method is going to take a lot more sensors. The key sensor is LIDAR while with copters it was GPS. There's still something novel in being the 1st & doing it with just a camera.
Posted by Jack Crossfire | Oct 24, 2015 @ 09:08 PM | 3,980 Views
It was another day of rearranging components & debugging steering before any attempt at machine vision could happen. The massive steering changes required for autopilot were a long way from working. Had to move the Odroid to make room for its cables. Captured some video. Added a variable frame rate for recording the input video. Wifi was much more reliable from the macbook than the phone. Couldn't connect long enough to start the program from the phone. Once started, it didn't crash. Managed to access the STM32 just enough to do the required maintenance. Rick Hunter never had to deal with component accessibility. Such is the difference between Japanese cartoons & reality.
Posted by Jack Crossfire | Oct 24, 2015 @ 01:49 AM | 4,363 Views
After some fabrication & debugging, the 'droid was sending geometry to the autopilot & the autopilot was successfully taking configuration parameters from the phone to tune the path following. Tuning the vision algorithm would involve very difficult text editing over the unreliable wifi.

Despite every effort, there was no way to allow access to the STM32 without a major teardown. Only a serial port was extended. The vision algorithm got a slight improvement by converting the geometry detection to a polar line equation & using logarithmic searching.

Finally, there was a desire to record raw video from the camera, to reproduce the path detection offline. After struggling for many years to record 720x480 TV on dual Athlons, it was quite humbling to find the Odroid could record 640x480 in 90% quality JPEG at 30fps, in a single thread, while simultaneously performing path detection. The path detection still can't go above 160x120 without a massive slowdown.

Once fixed on the car, the Odroid has proven much more reliable than the PI. It wasn't reliable when sitting on the bench because its 0201 caps were quite sensitive. It can stay on idle forever & hasn't had any issues in extended machine vision runs using 300% CPU.

With the amount of money & time invested, there's every hope path following will work, but it probably won't.
Posted by Jack Crossfire | Oct 21, 2015 @ 08:06 PM | 4,321 Views
It came & went. Nothing happened. A lot of us were expecting someone from 1985 to show up in a flying car.
Posted by Jack Crossfire | Oct 18, 2015 @ 10:55 PM | 120,988 Views
After weeks of baby steps, the Odroid finally connected to the phone as an access point. It was just as unreliable as the Pi. This didn't involve a kernel recompile, since trying to compile the kernel failed.

The old RTL8192 wifi dongle once again reappeared for the access point role. This time, one must download the mysterious hostapd_realtek.tar from somewhere. This contains a mysterious version of hostapd which Realtek modified to actually work. Plugging in the dongle got the right kernel module to load, the 8192cu.

The hostapd command was:

/root/hostapd /root/rtl_minimal.conf&

The dhcp command was:

The hostapd config file was:

# wlan interface. Check ifconfig to find what's your interface

# Network SSID (Your network name)

# Channel to be used! preferred 6 or 11

# Your network Password

# Only change below if you are sure of what you are doing
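Since the actual values weren't preserved, here is a hedged example of an open-network config matching those comments. The interface name, SSID, and channel are hypothetical placeholders, and the driver name is the one Realtek's patched hostapd typically expects:

```
interface=wlan0
driver=rtl871xdrv
ssid=truckcam
hw_mode=g
channel=6
# wpa=2 stays commented out: enabling encryption broke dhcp
```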

Key to getting hostapd to work was not using encryption. Commenting out wpa=2 disabled it. Enabling encryption caused dhcp to fail. Another requirement was deleting /usr/sbin/NetworkManager. There was no other way to disable NetworkManager than removing the program.

The dnsmasq config file was /etc/dnsmasq.conf
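The file's contents didn't survive, but a minimal dnsmasq config for handing out addresses on an access point typically contains only a couple of lines. The interface name and address range below are assumptions, not the values actually used:

```
interface=wlan0
dhcp-range=192.168.2.2,192.168.2.20,255.255.255.0,12h
```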



Since the access point's sole purpose was aligning the camera, there was no need for encryption.