Jack Crossfire's blog
Posted by Jack Crossfire | Oct 26, 2008 @ 01:24 AM | 3,616 Views
Step #1: Get rid of dependence on Taiwan slaves for landing gear.

Step #2: Reduce size of electronics tray to reduce turbulence.

Now some other ideas.

Neural network preprocessor: instead of using for loops & dynamic arrays like the current libraries, generate unrolled SIMD assembly language routines for networks defined at compile time.

Backpropagation through time: not as sexy as evolution & hard to

There is source code for backpropagation through time. It's all from 1993 & won't compile on modern systems.
Posted by Jack Crossfire | Oct 25, 2008 @ 06:04 PM | 3,241 Views
Finally wrote a way to animate flight recordings on THE GOOG.

Neural vs. PID feedback (0 min 41 sec)

That was PID vs neural feedback. Amazing how choppy even 4Hz GPS is. It doesn't matter if U try to smooth it with velocity. It's a small space & GPS only goes to 1m.

Since we're capturing Goog screens, have some thrilling moments in Goog Flight Simulator.

crazy landing in bergen NV (1 min 23 sec)
Posted by Jack Crossfire | Oct 24, 2008 @ 07:00 PM | 3,057 Views
Probably going to budget based on flights instead of repairs. That is, finish repairs right after the crashes but don't fly until the next credit card reset. That should keep the rent going toward storing repaired copters instead of smashed copters.

In other news, finally watched Lockheed heroine's complete speech & what a bore. Some aerospace majors younger than U have the most boring jobs in the world. The most boring job in the world is your boss, but all this reminds U of the 2nd most boring job in the world, teaching American history.

Holy mother of sleeping pills that was boring & the reason it was boring was because while European governments exercised total power & elevated leaders to God status, your government could only trade bonds, pay interest, ask permission, & be extremely boring. Fortunately those days R over.

Bring on the Uba buba of buma, the holiest & most godlike leader humans have exulted in over 100 years. Human nature liveth once more. They are once again governeth by God in human form & American history will finally join the human race's action packed legacy of coronations, divine power, & despotic royalty.
Posted by Jack Crossfire | Oct 23, 2008 @ 02:21 PM | 3,359 Views
Repacked the IMU & got more stable autopilot using PID mode. Static neural mode was also more stable, but not as much as PID mode.

Noted frequent glitches in auto & manual mode. It could only be defective wiring to the servos, a BEC malfunction, or a PIC malfunction. While landing in manual mode, went into a flat spin. Rudder was hard over after the crash, showing software stabilization may have been compensating or may have byte wrapped around.
Posted by Jack Crossfire | Oct 22, 2008 @ 07:31 PM | 2,982 Views
What U can glean from the GA Tech paper is that the neural network does not directly control the copter. It's added to the output of a reference model + the output of PD equations to produce pseudo control vectors. The pseudo controls are next reduced into even simpler controls by an inverted approximate model.

The use of a reference model is a technique called Model Reference Adaptive Control. Through some voodoo math, the output of the reference model helps tune the PD output. The neural network contribution is next applied to cancel flaws in the inverted approximate model.

outer loop
commanded velocity -> PD outputs + reference model corrections - NN corrections -> commanded acceleration (pseudo control)

commanded acceleration -> approximate inversion -> commanded attitude + angular rates

inner loop
commanded attitude + angular rates -> PD outputs + reference model corrections - NN corrections -> commanded angular acceleration (pseudo control)

commanded angular acceleration -> approximate inversion -> servo commands

That's the heart of the product. Hardly the biologically inspired system U envisioned.

Research seems to have peaked with this paper. His focus has gone more sideways rather than building on top of the adaptive paper. Using video instead of IMU input. Using hypersonic vehicles instead of copters.
Posted by Jack Crossfire | Oct 22, 2008 @ 02:48 AM | 3,352 Views
Looks like India is the aerospace champion today. Looks like we have another round of IMU alignment to do. PID equations did no better than the static neural networks. Lots of underdamped oscillation. Nasty rate damping oscillations on landing.

In other news, the video of the Lockheed head of spacecraft survivability & engineering drove home the point & the point is this. Knowing that the smart people in college would become accomplished spacecraft designers at Lockheed by the same age U were a flat broke, low paid programmer would have made no difference.

U still would have goofed off & daydreamed, & become a flat broke, low paid programmer.
Posted by Jack Crossfire | Oct 21, 2008 @ 02:13 AM | 3,439 Views
Looks like the inflight training problem is getting bigger & bigger. Most people probably ramp up the gains in flight, compare a small period of tracking with recorded movement, & ramp up the gains some more, until the guidance is within a certain target. Good enough for a fixed wing.

The mane problem is we're controlling angular rates, not the absolute angles the controller reads in. Humans go from seeing absolute angles to commanding angular rates, based on already knowing how fast they can turn the servos without losing control. Natural frequencies as it were.

Our feedforward impaired computer may just have to focus on angular rate feedback & ramping up different natural frequencies for the angular rates until it gets ideal tracking.

Finally found that paper on the neural network in GA Tech's copter. Eric N. Johnson and Suresh K. Kannan "Adaptive Flight Control for an Autonomous Unmanned Helicopter" Lots of crazy symbols. The author's resume looks like the aeronautics program of a small country & he's just an assistant professor. At least he's overweight. He got an MS from MIT. The top software architect of Sony's BluRay player also got an MS from MIT. Might also be a few heads of state in there.

Full professors at GA Tech R indeed a real small club, each of whom probably accounts for 1/5th of the current human knowledge of control systems. Maybe there's a way in from the private sector. Maybe not.

The "NN" seems to be substituting for the integral in a PID equation. He calls integral terms the "trim" & has the remaining feedback as just "PD". The neural network accepts detected position, velocity, linear acceleration, attitude, angular velocity, & angular acceleration. It outputs some kind of offset to convert the output from conventional PD equations to nonlinear output.

Our first use of NN's was assisting the navigation estimate. That seems 2 B what the GA Tech NN is doing.
Posted by Jack Crossfire | Oct 18, 2008 @ 08:33 PM | 3,698 Views
Well, did our mandatory test flight with 100% neural control but no adaptation. The networks were trained for linear feedback.

Attitude control was definitely underpowered & underdamped. Forgot what we were doing with the throttle & may have had throttle on a low value for stable air. No surprises.

Was just about to revert to PID equations for comparison when the nose dove. Starboard cyclic servo failed after under an hour. Definitely an act of Chinese terrorism.

Also, the MaxAmps 3.3Ah did puff slightly after flight & recede after it cooled off. What we're seeing may be a natural event most people live with. As for the servo budget, not flying again until an adaptation algorithm is in place.
Posted by Jack Crossfire | Oct 18, 2008 @ 01:58 AM | 3,096 Views
The delusions of being able to fix the shutter quickly vanished as we dug deeper into the 20D. Then the delusions of being able to reuse the sensor vanished when we saw the number of pins & that they were extremely fine pitch ribbon cabled.

As for reusing the hot shoe & lens mount on their own, the electrical contacts are integral parts of the plastic body. Would have to grind the body up & that still wouldn't give up the lens protocol or provide the plug for a flash extension cord. The lens mount is also for a 62% sized sensor.

U laugh at that dinky sensor nowadays & the sacrifice in coverage once required to get digital pictures out of 35mm lenses. Most people still buy those 62% sized sensors.

Going to miss the 20D. Have a lot of fond memories of it. It recorded a lot of great moments in history, an elegant camera for a more civilized age.
Posted by Jack Crossfire | Oct 17, 2008 @ 02:54 AM | 3,007 Views
After mandatory procrastinating, decided to put down the recurrent neural network & use less sexy feedforward neural networks. There will B neural networks 4 proportional & rate feedback & linear equations for very small integral feedback. Add the neural & integral output to get total feedback. The neural networks are modeled after a PID equation initially & use backpropagation to adapt in flight.

The weeks of recurrent neural networks were necessary to know they couldn't be done. Got all the way to a recurrent network which could give proportional feedback on a graph, was stable in real life, & showed some memory in real life, but could not offer effective proportional feedback in real life.

Getting the integral part to work seems to need a much longer training pattern than we can afford. The amount of computing time & hand tweaking to get evolution out of it wasn't justified. The implementation is on the hard drive for when a CUDA budget appears.
Posted by Jack Crossfire | Oct 16, 2008 @ 02:01 AM | 3,028 Views
Discovered X-Y charts in OpenOffice were broken & the error of the neural network should be a Euclidean distance instead of an absolute difference. With that, plus more evolution tweaks, fewer neurons, & more processing time, the genetic algorithm is staying alive.

With the evolved neural networks, there is an optimum neuron count which is not too high & not too low & an optimum range for the starting weights. The chance of hitting a dead end with this algorithm is too high to build extra capacity into the structure.

Tomorrow's news today:

CNN: Ubama wins election

Fox: Ubama's 3rd nephew's friend's cat's sister attacks hiker with hair ball.

CNN: President Ubama awards a free SUV to every home owner over $1 million underwater.

CNN: Ubama...

Fox: Hugo Chavez accuses Ubama of being too liberal & cuts off oil.

CNN: Treasury secretary Cramer pumps another $700 trillion into bank stocks.

Doomberg: Investors pull $700 trillion out of bank stocks & buy treasury notes.

Fox: Humans lose ability to walk without stepping on their own ****s.

CNN: Ubama promises bailout.

Fox: Ubama bans teaching evolution that requires carefully calibrated starting weights & neuron counts.
Posted by Jack Crossfire | Oct 15, 2008 @ 12:51 AM | 3,195 Views
Discovered rand() is not thread safe. U need to use rand_r(&seed) for threads. rand() was sucking huge amounts of CPU time updating a global variable. rand_r keeps the CPUs in their own memory spaces & is much faster.

Well, after many hours & a lot of electricity, concluded evolved neural networks seem to require inhuman amounts of computing power to be effective. Maybe our genetic algorithm isn't the best. It just randomizes weights. It doesn't do crossovers. Even if they were trained offline using flight recordings every hour that we didn't fly, they probably wouldn't evolve much.

Have some other ideas, all back to backpropagation in flight:

Feedforward network taking an integrated error as one of the inputs.

Feedforward network that generates changes in feedback instead of absolute feedback. Current state & target state would replace current error as the input.

Well PBS is defending the parallel universe theory. Our obsession with infinite universes began when we asked: if humans ever succeeded in simulating a human brain on a computer & the contents of a real human brain were transferred to the computer, would the real human have sensations from the computer?

That led to defining human sensation as a function of the rate of increase in complexity of a localised packet of information. Similar packets of evolving information in different brains & different universes could share sensations like ESP. Our packets of...
Posted by Jack Crossfire | Oct 13, 2008 @ 12:15 AM | 2,800 Views
Good news: got a genetically trained feedforward neural network to control attitude feedback. Took very carefully designed training data with no clipped regions. Not sure large populations add anything.

The recurrent network is still a disaster.
Posted by Jack Crossfire | Oct 11, 2008 @ 12:38 PM | 3,025 Views
The genetic algorithm has been a disaster. So far, been using 1 member of the population as the seed for the next population. Very cache efficient & fast, but can lead to entire worthless populations. Also been feeding random inputs to it for the training set. This hasn't worked at all.

The algorithms that work use constant sets of training data & large populations as the seed for the next populations. No cache efficiency, but training is faster anyway since the random number generator is slow & constant data avoids it. Would have to test each population member on the entire training set.
Posted by Jack Crossfire | Oct 10, 2008 @ 01:44 AM | 2,665 Views
Dual layer DVD+R DL:

a much cheaper step on the way to BD with 1/3 the capacity instead of 1/5. Have had 50% coasters with these. Very few dual layer burners actually support them & none of the first dual layer burners support them. The "RW" on the packaging does not mean rewritable.


finally had to face the music & admit it can't pay off Steve Appletree's executive mansion & 23,500 Idaho employees simultaneously. 3,525 Idaho workers must go. Another state fails to create jobs outside Calif*.

The end of the world:

The end of the world is Nov 5. Election complete, no need for more bailouts & no more money. Banks foreclose on everyone because their property is worth 10% of their loans. Paulson & Bernanke catch the last flight to Antarctica. McCain & Ubama take the escape pods to Canadia. Congress hides in the WV bunker. Dubya wanders around looking for a beer. Dogs & cats live together. Dead rise from the grave. 30 years of darkness, earthquakes, volcanoes.

Infinite universes:

Used to think there were infinite universes & our sensations were in a Gaussian curve of many related universes. That's depressing because it means no matter what U do & who U meet, there's always another version of U who succeeded or failed & the people change every week, so nothing matters. A more likely story is there is only 1 universe & our sensations R in a Gaussian curve of many beings in the same universe. Dying simply causes U to have sensations from another being in another part of the same universe.

Fl*rida timelapses:

Comca$t increased its upload bandwidth to 1.5Mbit. Finished cutting all the timelapse movies from Fl*rida & now U get to reap the rewards with Fl*rida timelapses in HD. This is the timelapse footage we collected on the Fl*rida farm, on the mighty EOS 5D.

Fl*rida timelapses (3 min 47 sec)

Posted by Jack Crossfire | Oct 09, 2008 @ 06:51 AM | 3,009 Views
Well, got a recurrent network for cyclic feedback trained to where it was pretty close to the PID equation in the training algorithm. Unfortunately, it was completely erratic when run on the airframe. Training the recurrent network is definitely hit or miss. Most of the time, the evolution gets stuck & U need to restart it.

Now on to algorithm 2, comparing a sequence of steps from each mutation to a sequence of PID equation steps. The 2.4GHz dual Opteron from 4 years ago is coming out much faster than the 2.6GHz Athlon X2's & Core Duos of today. Maybe the current stagnation in clock speed means more assembly language jobs.

There's a limit to how accurate these networks can be & a definite dependency on the structure of the network.
Posted by Jack Crossfire | Oct 08, 2008 @ 01:32 PM | 2,643 Views
The 4Ah packs we got for $94 in 2006 R now $150. 60% inflation. Even the 3.3Ah ones R $130, easily 100% over what they charged last year. Interest rates just fell to 1.5% so it's time to buy.
Posted by Jack Crossfire | Oct 07, 2008 @ 11:37 PM | 2,193 Views
Like any return of the neurons, it's not going well. The mission is to model a PID equation in a recurrent neural network. Once all our PID equations R modelled in neural networks & flying, the next step is to have the networks evolve from the starting PID equations in flight.

Training the recurrent neural network to act like a PID equation has 2 genetic routes.

1) Feed 1 random input into a PID equation & solve once using a random neural network. Throw the network out if the single solution deteriorates. The previous output of the PID equation is fed into the neural network & the integral in the PID equation is carried over to each step. This gives the result the network would have produced if it had been predicting correctly & carrying an integral. Very slow & less likely to give a good integral part.

2) Feed a sequence of random inputs into a PID equation & compare with a sequence of solutions from a random neural network. Reset the PID integral before every sequence. Throw the network out if the solution sequence deteriorates. Very very very very slow & more likely to work.

It's taking populations of about 1000000 & around 4000 generations for genetic algorithm #1 to arrive at reasonable error rates. Nothing to do but procrastinate & have some more Fl*rida.
Posted by Jack Crossfire | Oct 06, 2008 @ 01:01 PM | 2,356 Views
Fixed some more photos 4 U. Had to relearn how to survive in the dumpy apartment, what to do when heroineclock goes off, how to use the dumpy light switches, where to sit, where the shower head is. Waking up the first time after a long trip, U have to wait a while to figure out where U R. Especially confusing because it looks exactly like Fl*rida outside the dumpy window. U actually still think you're where U were. Who knows how a neural machine would relearn its long term memory....Continue Reading