Jack Crossfire's blog
Archive for April, 2015
Posted by Jack Crossfire | Apr 18, 2015 @ 12:00 AM | 6,299 Views
It certainly wasn't the biggest government bungle, but the giant welding machine began to tilt immediately after it was built. There were rumors that it was shedding bearings.

3 months after it was built, it was leaning .06 degrees, or 1/4" out of alignment at its highest point. The whole thing was torn down with plans to rebuild it, someday. In typical government contractor fashion, the Swedish contractor was supposed to reinforce the foundation & simply didn't. Tough beans.


http://spacenews.com/fix-in-works-fo...lding-machine/

Posted by Jack Crossfire | Apr 16, 2015 @ 11:09 PM | 5,464 Views
https://vid.me/i6o5


In a touching finale, the mangled engines & landing gear that someone worked his ass off to get working are revealed before the $60 million piece explodes.



There was a rumor that the mane engine can't throttle low enough to hover, so they have to start the burn close enough to the landing pad to reach 0 m/s at 0 altitude.
Posted by Jack Crossfire | Apr 15, 2015 @ 10:07 PM | 13,906 Views
CRS-6 First Stage Landing (0 min 23 sec)


After many ice obstructed videos from onboard the rocket, lost viewfinder videos from a chase plane, & partially visible GoPro videos from the barge, it was the 1st watchable footage of a landing ever captured. Most impressive was how hard it was to get the video & how far they got with no footage of a previous landing to go by.

The final landing attempt was the most aggressive out of control fall, followed by a last minute suicide burn. The legs deployed 7 seconds before touchdown, while in previous videos they deployed 10 & 9 seconds before touchdown. If they had used the more conservative approach of the past landings, it might have made it.

They obviously tried to stretch the fuel the farthest with the least stable approach they could get away with, but it wasn't stable enough to recover from. They need a customer with an even lighter payload than NASA, but so far, NASA is the only customer willing to throw away enough upmass to get this far.

There are 2 more landing attempts this year, in the form of 2 more NASA missions.
Posted by Jack Crossfire | Apr 12, 2015 @ 11:22 PM | 6,296 Views
The blurry, shaky cellphone cams reveal a world of dead ends, buzzwords, & few practical demos. It's like a frozen moment in time when Google Glass had just come out & everyone wanted to be bought out by copying it. 3 years after these concepts hit the kickstarter circuit, none ever became mass produced & all attention shifted to virtual reality goggles.



WEARABLE COMPUTER WITH HAND HELD KEYBOARD AND MOUSE (6 min 20 sec)


That guy loves joysticks & patents.

A reversion to the trackball, but with a pad instead of a ball, might get more mileage than either a joystick or a full pad.


ETAOI - a five key Android keyboard for phones and wearable computers (15 min 25 sec)
Posted by Jack Crossfire | Apr 12, 2015 @ 12:11 AM | 4,824 Views
So the idea came up of a simple commuter setup that would allow someone to type while standing up in a crowded train. The system would be compact & light enough to run down 1 mile of city streets. How could it be done as simply & cheaply as possible?

It amounts to 2 inventions that went absolutely nowhere: the virtual reality goggle & the wearable keyboard. Google Glass would have been the ideal form factor, but it only did 640x360. No virtual reality goggle has ever been mass produced. Google Cardboard might work, but would need a camera projecting outside video in a window. Maybe a new kind of goggle with no sides can be invented. The cheap phone is only 480x800.

The most effective keyboard leans toward a rubber thing containing a full keyboard & touchpad, with one worn on each side of the abdomen. If not a full keyboard on each side, the 2 halves could overlap by a few keys. Maybe it could be a single rubber keyboard on the abdomen.

This amounts to a lot of money. Maybe there's an incremental step from the bog standard phone.

The average commuter stares at a phone screen, desperately trying to be entertained by what meager, meaningless news bites it can download between tunnels. Don't know what they did 30 years ago.

The mane limitation with this scheme is that 1 handed or 1 fingered typing is extremely slow. What you need is a 3rd arm to hold the phone so both hands can type. If such an arm can be designed, it conceivably simplifies the problem quite a bit. A simple arm to hold the phone combined with the abdomen keyboard might do the job.

Searches for wearable computing show a wasteland of vaporware stretching 5 years into the past. There's much more vaporware than in the old days, showing how much longer vaporware can survive without making money than it used to.
Posted by Jack Crossfire | Apr 09, 2015 @ 10:57 PM | 4,858 Views
Free nuggets about lane following are few & far between. There are many nuggets about line following, but not lane following. This story was fascinating for 2 reasons:

http://www.roboticstrends.com/articl...riving_in_1995

It was work my generation did when we were in college, with the tools available at the time. It was so primitive to modern eyes, yet it was the bleeding edge for someone living in that time. There was no GPS. Capturing video on a computer was nearly impossible.

The full text of their path following algorithm costs money, but there's a rough description:

http://www.cs.cmu.edu/afs/cs/user/tj...haa/ralph.html

The key to their algorithm was a scanline intensity profile, computed on a 30x32 greyscale image, on a 486. The scanline intensity profile was also used to test lane curvature, but curvature can be neglected, since the current issue is a straight path.

They captured the entire lane width & converted the trapezoid shaped path to a rectangle by widening & shifting the farther rows horizontally. When making the rectangle shape, they made several images with the farther rows shifted left or right by different amounts, then summed each column in each rectangle image. The adjacent columns had the maximum differences when the shifting of rows matched the path's true position.


The sums of each column made up the scanline intensity profile. The scanline intensity profile when the vehicle was centered in the lane could be compared to the current scanline intensity profile to give its lateral offset. The current scanline intensity profile was iteratively shifted left or right until it matched the centered one.


They claimed better results this way than with edge detection. The key advantage was immunity to shadows, relying only on visual features running parallel to the road. This method does require training the algorithm with known scanline intensity profiles for a centered vehicle on different sections of road. There was another issue of cropping the image to where the path should be.
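The profile matching step described above can be sketched in a few lines. This is a minimal illustration, assuming numpy; the 32 column width matches their 30x32 image, but the search range & the synthetic profiles are my own choices, not from the paper:

```python
import numpy as np

def scanline_profile(img):
    # Sum each column of the (already unwarped) road image to get
    # the scanline intensity profile.
    return img.sum(axis=0).astype(float)

def lateral_offset(current, centered, max_shift=8):
    # Shift the current profile left/right & pick the shift that best
    # matches the profile recorded when the vehicle was centered.
    # The winning shift, in columns, is the lateral offset.
    best_shift, best_err = 0, np.inf
    n = len(current)
    for s in range(-max_shift, max_shift + 1):
        a = current[max(0, s):n + min(0, s)]
        b = centered[max(0, -s):n - max(0, s)]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```

The shadow immunity falls out of the column sums: a shadow darkens every row of a column roughly equally, so features running parallel to the road survive while the overall brightness change mostly cancels in the comparison.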
Posted by Jack Crossfire | Apr 07, 2015 @ 11:51 PM | 4,957 Views
2 years after building the 1st handheld 3 axis gimbal suitable for running

https://www.rcgroups.com/forums/show....php?t=1914772

they're now ubiquitous, in a much smaller form factor, for a lot more money.

Tested: Feiyu G3 Ultra 3-Axis GoPro Gimbal (6 min 59 sec)


The modern Chinese versions, of course, still haven't figured out that the pan needs to be manually controlled by a joystick. The #1 market for brushless gimbals was not the quad copters they were originally sold for, but the single handed use they're only now starting to be sold for.

The other dead ends were the 2 handed James Cameron design & having the camera under the gimbal. After 2 years, they finally discovered that the single handed stick design with the camera overslung was the way to go.

1 remarkable aspect is that the gyros now calibrate without being perfectly still. No-one reviewing a gimbal ever asks how these things are accomplished, but years ago, it was the biggest question. Perhaps modern gyros are stable enough that their centers can be calculated purely from temperature.
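If the center really is a pure function of temperature, the compensation could be as simple as a factory polynomial fit. A sketch of that idea, assuming numpy; the polynomial degree & the notion of a stationary factory calibration pass are assumptions, not anything a gimbal maker has published:

```python
import numpy as np

def fit_gyro_bias(temps, biases, degree=2):
    # Fit the gyro zero-rate offset as a polynomial of die temperature,
    # from stationary readings taken once at the factory.
    return np.polyfit(temps, biases, degree)

def compensate(raw_rate, temp, coeffs):
    # Subtract the predicted temperature-dependent bias at runtime,
    # with no power-on still period required.
    return raw_rate - np.polyval(coeffs, temp)
```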
Posted by Jack Crossfire | Apr 06, 2015 @ 01:43 AM | 4,869 Views

The construction is endless. Endless highrises as far as the eye can see are going up. The Yellen said let there be credit & at once there was infinite credit. It's the largest credit boom in all history, infinitely larger than 2007, infinitely larger than 1999. Dow 18,000 was just a dream when it hit 10,000 just 5 years ago & now 40,000 is just around the corner.

Rent here is now the highest in the world. $6000, $7000, $8000 & it rises every week. You could more cheaply build an apartment out of stacks of money than make the amount of money required to rent it. The Yellen decreed that without total employment, the money would continue to flow, forever.

Overnight, software became the new English. Almost every conceivable task now requires writing software, but instead of the software jobs spreading out to the industries that use them, the industries chased the software jobs to SOMA. Business minds have decreed software can only be written in SOMA, by formally trained programmers with Stanford degrees.

Every conceivable product is now being developed in SOMA. Food, medicine, clothing, pet furniture, mortgages, health insurance, payment schemes, cars, shoes, farms, movies, artwork, spaceships, everything that was once created in an entire country now requires software & can only be done in SOMA.

It's horribly inefficient, but up is the new down with infinite credit. Stanford graduates live in a different universe, now making over $400,000 their 1st year after college. That different universe is rapidly becoming the baseline to stay here. The time is nearing when those of us who can't keep up with the rent are going to have to move out. There's probably 2 more years left at 2010 era salaries, but if anything necessitates going back to school, it's the rising cost of living forcing us to look to bigger companies which can afford $400,000 & which need the Stanford degree more than experience.
Posted by Jack Crossfire | Apr 03, 2015 @ 11:58 PM | 5,072 Views
Took the path following gear off the truck. The truck was too unstable & too fast for the rate of the machine vision algorithm. The hot weather was making daylight drives rare. Getting the test footage that proved the algorithm required lots of manual steering. A slow machine vision autopilot would need to be nearly perfectly on target from the beginning to have a chance.

The combination of sonar & compass once again emerges as a leading idea. It needs another microcontroller. The initial design keeps a constant heading with the athlete directly behind. The next design keeps a constant heading with the athlete a fixed distance beside.
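A control step for the constant-heading idea can be sketched as 2 proportional loops: steer on the compass error, throttle on the sonar range to the athlete. A minimal sketch; the gains & the normalized steering/throttle ranges are placeholder assumptions, not values from any actual build:

```python
def follow_step(heading, target_heading, sonar_range, target_range,
                steer_gain=0.02, speed_gain=0.5):
    # Proportional controller: hold the compass heading & hold the
    # sonar distance to the athlete behind the vehicle.
    # Wrap heading error into [-180, 180) so 350 -> 10 steers right, not left.
    err = (target_heading - heading + 180.0) % 360.0 - 180.0
    steering = max(-1.0, min(1.0, steer_gain * err))
    # Close the gap when the athlete falls behind, stop when caught up.
    throttle = max(0.0, min(1.0, speed_gain * (sonar_range - target_range)))
    return steering, throttle
```

The heading wrap is the part that actually needs the extra microcontroller thought; everything else is 2 multiplies & 2 clamps per cycle.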
Posted by Jack Crossfire | Apr 02, 2015 @ 12:01 AM | 5,684 Views
Playing 4k video in Linux is a big deal. The days of software playback are decidedly over. The XVMC interface was the 1st method of hardware decoding. In 2007, it was replaced by the VAAPI interface on Intel cards & the VDPAU interface on NVidia cards. Support varies from card to card.

Integrating hardware decoding in an editing program requires intimately dissecting each codec, replacing the specific functions the card supports with hardware calls. The mane codecs to support would be H.264 & JPEG. It's still in the realm of purpose built demos, nothing that could reach a wide user base.
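Short of dissecting a codec, the hardware path can at least be exercised from the command line to see what a card supports. A sketch assuming an ffmpeg build with VAAPI enabled; the device path & filename here are hypothetical:

```python
import subprocess

# Decode a clip through VA-API & discard the output, as a pure
# decode-speed benchmark. On NVidia, "vaapi" would become "vdpau".
cmd = ["ffmpeg", "-hwaccel", "vaapi",
       "-hwaccel_device", "/dev/dri/renderD128",
       "-i", "clip_4k.mp4",
       "-f", "null", "-"]

# subprocess.run(cmd, check=True)  # uncomment to run against a real clip
```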

Suppose you had hardware decoding in an editing program. What would you watch in 4k? The format most often viewed is still 640x360. Only rarely is it ever worth downloading something in 720p. The bandwidth to download 720p now costs $110/month & rising.

The old timers were on to something when they designed the 1st TV resolution, in the 1930s. Motion blur & compression artifacts make most scenes look like 640x360.

It's a lot different than the serious cinema going days of 1999. Most movies are watched on phone screens, in a window, 15 minutes at a time. No-one shuts down for 2 hours to do nothing in front of a big screen, except for single women. They're always shut down.

4k is useful for archives. Probably in the next 10 years, everything is going to start as 4k. Today's 1920x1440 starting footage is still reduced to 1280x720 to make stabilization...