Jack Crossfire's blog
Posted by Jack Crossfire | Sep 14, 2014 @ 02:34 AM | 2,707 Views

Friction stir welding hits mammoth heights in a machine that will spit out the world's largest fuel tanks in a single piece, pending funding. Without that funding, the welding tool could become another A3 test stand.

There is a concept drawing of a completed LH2 tank in the machine. The completed core stage would be much taller than the machine. A lot of pieces still have to be built.

The thing is, if the core stage is just a shuttle external tank to minimize the cost of new tooling, how did they end up spending 10 years building new tooling to build exactly the same tank? The internet has said there's no point in reusing existing rocket parts, because there will always be requirements creep.

The shuttle tank already used friction stir welding. The 1st 6 tanks used TIG welding. Most of the tanks used variable polarity plasma arc welding. The final tanks were friction stir welded. They never said how the VAC improves upon what was done before, though it seemed it could make the tanks in fewer pieces.
Posted by Jack Crossfire | Sep 12, 2014 @ 09:10 PM | 2,562 Views

After working in the set top box industry from its meteoric rise to its disintegration, these stories had some interest. By the time digital TV was finally mandated in 2009, the idea of towers blasting gigawatts of scheduled programming was already obsolete. Digital TV was amazing when its 1st blips appeared in 2002. Watching HD movies off the airwaves in 2005 was amazing. By 2009, the party was over.

Never actually watched anything on a commercial digital TV. Only watched some Olympics by cobbling together mplayer & pcHDTV, or using custom software for day jobs.

5 years after mandating it, the FCC is already phasing out the digital TV spectrum. TV stations are already abandoning the transmitter in favor of the router. The solutions are wider, more packetized, but strictly private.

Radio spectrum is allocated like the 1920's when everything was private & the government decreed which private business got what. Even bridges were private in those days. Not sure it would all be private if radio was invented today. There would be cries for spectrum neutrality.
Posted by Jack Crossfire | Sep 10, 2014 @ 08:55 PM | 2,598 Views

It was sold out in 3 hours, despite being only an outdoor tour of the unrestricted area. They would not get into the giant wind tunnel, though they would get into the cafeteria, see the bench wind tunnels & RC planes. Surprising more hobbyists don't build bench wind tunnels. The cafeteria is probably the main attraction, because that's where you actually see people who work on spaceships every day, after a very long line.

The disappointment is of course that the voters fill up the event in 3 hours, yet when asked to fund a space program, the same voters consistently choose expanded mortgage programs instead. You always hope the voters finally decide their granite countertops are worth enough, but they never do. There's always more money to be made from 1 more remodeling program, 1 more medicare program & 1 less space program.

Lockheed hopes to finally launch an unmanned test of Orion, after 11 years of program cancellations & budget cuts. It's scheduled for 2014, but probably won't happen until 2016. A 2nd unmanned test was scheduled for 2018, 15 years after the program started, with nothing funded afterwards. SpaceX has no schedule for Dragon 2 unless NASA can restore commercial crew funding, which is still 1/2 of the required amount.

A final downselect of commercial crew to 1 contractor was planned for August. When that was canceled, September became the moment. SpaceX will be the prime contractor, with a starvation ration given to Boeing, & SNC defunded. There's not much guesswork involved.
Posted by Jack Crossfire | Sep 09, 2014 @ 06:19 AM | 2,264 Views
So the Linux derivatives seem to have become stable enough in the last 5 years that something compiled on 2010 era Fedora 13 with just enough of the libraries static actually runs on 2014 era Ubuntu 14.04.1. Binaries linked against libc-2.12 run on libc-2.19, but binaries linked against libc-2.19 don't run on libc-2.12.

With that, the next Cinelerra release reintroduced the concept of a precompiled binary for the 1st time in decades. Cinelerra only works because most of its libraries are static. Libraries as mundane as JPEG & TIFF still undergo constant redesigns, many times per year.

The idea has always been to make as many libraries static as possible. The C library could not be static because of a problem with runtime linking modules. Other libraries seemed stable enough to dynamically link. Even then, it's not as self contained as Firefox.
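The compatibility described above can be checked directly: dumping a binary's dynamic symbols shows which GLIBC_* symbol versions it requires, & the highest one is the newest libc feature it depends on. A rough sketch, with /bin/ls standing in for the binary being checked:

```shell
# List the GLIBC_* symbol versions a dynamically linked binary
# requires. The highest version printed is the newest libc feature
# the binary depends on, so any libc at least that new can run it.
# /bin/ls is only a stand-in for the binary being checked.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

Building on an older distribution keeps these versions low, which is why a Fedora 13 binary can still load on Ubuntu 14.04.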

It's a small miracle that Firefox/Mozilla/Netscape has always worked in binary form. They just put all of its libraries in a self contained directory & tested against every single operating system. Google Earth tries the same thing with less success. It would be a good idea to put the rest of Cinelerra's libraries except the C library in its own directory.

Sadly, this kind of software is not economically viable. Computer science these days is about learning new languages, not using the languages.
Posted by Jack Crossfire | Sep 07, 2014 @ 08:58 PM | 2,997 Views
After some restless nights of ghost problems, it was finally time to nail down the problem. It turned out, only 1.5 miles from the apartment, hidden from view by centuries of real estate development, was a very very old house, the oldest building in San Ramon.

It was a true relic of Donner party lore. After giving up on the idea that the Donner party was remotely connected to the valley, it turned out this connection was just around the bend.

Once above the fence, your phone reveals exactly what came out of a mind which drove the 1st wagon train through Hastings Cutoff, hand pulled a wagon down the virgin Weber Canyon & drove oxen across the salt lake desert for 80 miles without water.

After arriving in 1846 & making a fortune in the gold rush, Joel Harlan built it in 1852 farther south, moved it here in 1856 after property taxes went up, & died here in 1875 at the ripe old age of 46. The ordeal of Hastings Cutoff undoubtedly contributed to that.

...Continue Reading
Posted by Jack Crossfire | Sep 07, 2014 @ 02:15 AM | 2,363 Views
Tried apt-get install freeglut3-dev on the mac & it showed the OpenGL shading language extension was there. After a few segmentation faults & another round of hardening bcpbuffer, the mac did the OpenGL business: 60fps with it on & 44fps with it off. There might be a bit somewhere limiting it to 60fps. It was doing 120fps, decades ago. A 60fps limit may have been added to simplify video programming.

It hadn't been tried on anything besides NVidia since 2005. Getting it to work on NVidia required using NVidia's own header files, but at least they complied with the OpenGL spec. ATI was their only competitor at the time & they only supported the OpenGL ARB extension. It would have taken a 2nd branch to support ATI. ATI did nothing to support shading in the standard Linux base.

Decades later, Intel took the laptop market from ATI & got it done. The shading language was finally included in the Ubuntu header files, & Intel finally exposed shading in OpenGL.
Posted by Jack Crossfire | Sep 05, 2014 @ 08:41 PM | 2,428 Views

So heroine's hard drive began failing after only 4 years, so it was time to start backing things up on whatever other smaller hard drives were available until funds appeared for buying a new one. Looking through 10 years of files to decide what to keep is a good reminder of how little you've accomplished.

Posted by Jack Crossfire | Sep 05, 2014 @ 12:01 AM | 2,972 Views

Being raised by a well connected parent in Washington DC can definitely get you a very high paying career with no college education & a life without financial worries.

The big thing now is ultrarunning. Everyone's running 100 miles. All the fitness gadgets are marketed towards these guys. They now make more money from people running 100 miles than people running 5 miles. Onward & upward human biology goes.

Compared to 30 years ago, technology is everywhere. Take your pick from traditional networking equipment jobs to now fitness tracking/telematics/telemedicine/smart homes/smart workspaces/smart toys/wearable computing/quantum computing/near field computing/cloud computing/3D printing/3D scanning/speech recognition/gesture recognition/virtual reality/augmented reality/big data/cryptography/cryptocurrency/computer security/drones/internet of things jobs.

It's just a case of your side of the mountain getting discovered by the cool kids. Technology is no longer a niche for geeks. It's the new football. All the cool kids write hashing algorithms while the loners play ball, & it's harder than hell to make money at it.
Posted by Jack Crossfire | Sep 01, 2014 @ 09:24 PM | 2,257 Views
Once stuck to cvs as long as possible before surrendering to svn. Stuck to svn during the bitkeeper & perforce craze, but after 2 years of employers demanding 50 years of git experience, the mac migration was finally a good time to migrate to git. Having now lived through cvs, svn, bitkeeper, perforce, & git, git is the current summer blockbuster, an entertaining step forwards in some ways & backwards in other ways until next season. Now the cheat sheet.

svn: configure svnserve, create fake user, create main repository, look up how to fix broken configuration files & broken permissions

git: every checkout is a repository accessed through ssh. Create an ssh key & copy the public key into .ssh/authorized_keys to allow a user to check out.

It's 1 less copy of the source code to worry about. A checkout is not bound to a single repository. If the repository goes away, the checkout can use another one.
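The ssh setup can be sketched roughly as follows; the git user, repository path, & key file names are all placeholders:

```shell
# On the server: create a bare repository & authorize a user's key.
# The "git" user, repository path, & key file are hypothetical.
git init --bare /home/git/project.git
mkdir -p /home/git/.ssh
cat user_key.pub >> /home/git/.ssh/authorized_keys
chmod 600 /home/git/.ssh/authorized_keys

# On the client: every clone is itself a full repository.
git clone git@server:/home/git/project.git
```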

svn: import spaghetti commands
git: init spaghetti commands, then git add .

svn: ci
git: commit

svn: co
git: clone

svn: up
git: pull

The killer with git is of course

cvs: can't add symbolic links
git: can't add empty directories
svn: what you got out was exactly what you put in

so we're back to fudging build systems & spending hours tracking down what the revision system left out.
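The usual fudge for git's empty directory limitation is to drop a placeholder file into each empty directory before adding; the .gitkeep name is only a convention, not a git feature:

```shell
# Create a placeholder file in every currently empty directory
# so git will track the directory structure.
find . -type d -empty -exec sh -c 'touch "$1/.gitkeep"' _ {} \;
git add .
```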

cvs: always corrupted
svn: constant berkeley DB & version incompatibility errors
git: seems stable

svn: obscure G, C, U flags when merging
git: nifty +- progress bar when merging

The usual workflow with SVN:

svn ci -mx guicast cinelerra plugins
svn up guicast cinelerra plugins

The workflow with GIT appears to be:

git status -> shows what files changed
git add -> stages changed files
git commit -mx guicast cinelerra plugins
doesn't do anything unless files are added

git fetch
git checkout FETCH_HEAD

It's a bit more laborious. Working on individual directories requires specifying just those files to git add & specifying just those directories to git checkout. Replacing something with the repository version:

svn up filename
git checkout HEAD filename
Posted by Jack Crossfire | Aug 31, 2014 @ 08:55 PM | 2,279 Views

The Asus batteries each lasted only 8 months. Then it was time for the final laptop: the macbook. After much debate, it was decided to dual boot it instead of erasing MacOS completely. Still remember walking out of Stonehenge mall one night, with that thing neatly packaged in its pristine cardboard box, the perfect packaging, the new mac smell. It's a very strange laptop, more like a stone tablet.

An SD card Ubuntu installation failed with MMC driver crashes. Trying to burn the DVD on the mac would always wrap the iso file in another iso file. Finally got it to install from a DVD burned on a Linux box. The installer couldn't initialize the network at all, but it ended up not being necessary.

Booting from the DVD requires holding down option to get into the firmware bootloader. Not sure those rEFIt or rEFInd bootloaders are required, since they just go into grub. Once installed, any of the EFI options seems to go into grub, which can then go into Ubuntu.

With a terminal program finally installed, it was possible to load the b43 wifi driver, see the error message, load the required firmware from an SD card, & configure the network manually.
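That manual bring-up looks roughly like this; the firmware location & network name are examples, & the b43 firmware itself has to be extracted separately (e.g. with b43-fwcutter):

```shell
# Load the Broadcom driver & read the missing-firmware error.
modprobe b43
dmesg | tail

# Copy the extracted firmware where the kernel looks for it,
# then reload the driver. The b43/ directory is from the SD card.
cp -r b43/ /lib/firmware/
modprobe -r b43 && modprobe b43

# Bring the interface up & configure the network by hand.
# "home_network" is a placeholder SSID.
ip link set wlan0 up
iwconfig wlan0 essid "home_network"
dhclient wlan0
```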

The macbook's audio, suspend mode, & 2D graphics seemed to work, a rarity for Linux. Wifi was intermittent. The keyboard & single button mouse are a bust. The current commands which create alternate mouse buttons:

xmodmap -e "keycode 108 = Pointer_Button3"
xmodmap -e "keycode 134 =
...Continue Reading
Posted by Jack Crossfire | Aug 29, 2014 @ 04:12 AM | 2,633 Views





It's finally gone beyond the traditional 2 man startup industry, even if it's still just hope. Also gone are jobs for hobbyists & self taught engineers. Masters degrees are required to have any shot at these places. Whether useful products emerge, at least for now, designing autonomous systems for a living is as typical as designing routers once was. The average engineer of today is judged by how long his quad copter stays aloft as much as someone 30 years ago was judged by how long his router stayed online.
Posted by Jack Crossfire | Aug 28, 2014 @ 02:44 AM | 3,534 Views

The $1400 beast finally blanked out after 7 years. The power light came on, but it would no longer detect a signal. Another monitor died earlier in the year, failing to come on at all. It was attributed to a main processor burnout. Components can now be optimized precisely enough to hit a minimum required lifespan.

Thus began a period of despair over a future of just laptops or losing $400.

...Continue Reading
Posted by Jack Crossfire | Aug 26, 2014 @ 09:16 PM | 2,376 Views
15 Qassam Rockets intercepted At Once By The Iron Dome (1 min 34 sec)

The modern world war is like a technological version of WWII. The people just watch automated systems blasting each other, 30,000 feet above in endless dogfights that used to be performed by humans in P-38's & B-17's.

To be sure, the modern war is consuming lives faster than WWII. The modern missiles that get through the robotic defenses are far more powerful & accurate than the V-2 & torpedo. There's no allied invasion coming to rescue the refugees in Ukraine, Iraq, & Libya like there was in WWII. They just fight & die forever.
Posted by Jack Crossfire | Aug 25, 2014 @ 11:48 PM | 2,383 Views

So after a week & 3 hours of a technician refurbishing the phone line, the empire came in at only 2.8 megabits down, 0.4 megabits up. Either that's the actual throttling level of the 3 megabit plan or the phone line physically can't deliver anything higher, for any plan. DSL is still a science project, compared to DOCSIS. The phone line hadn't been used for 14 years. 3 presidents, many road repairs, & many remodelings happened.
Posted by Jack Crossfire | Aug 23, 2014 @ 01:56 AM | 2,526 Views
So you wanted to move all the GPS logs from a corporate datacenter to a private datacenter on the phone. The phone has no downtime, it's a lot faster to access, it's not scanned by the government, it's always with you. It's the perfect cloud server.

It could be done in a few minutes with a hand coded Java web server & a custom protocol, but you wanted to parlay it into a profitable skill. The most current strategy is using node.js as the server, but there's no fully functional node.js which can run on Android & no interest. Cloud servers are supposed to be in corporate datacenters, not phones.

The best solutions have required a complete install of debian on a dedicated partition, with node run inside chroot. Node.js requires too many libraries not part of Android.
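That debian-in-chroot arrangement is usually along these lines; every path here is an assumption, & the device has to be rooted:

```shell
# Bind-mount the pseudo filesystems the chroot needs, then run
# node inside the debian image. /data/debian & the script path
# are hypothetical; substitute the real partition & server file.
mount -o bind /dev /data/debian/dev
mount -t proc proc /data/debian/proc
chroot /data/debian /bin/bash -c "node /root/server.js"
```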

The honeymoon with node.js already seems to be over, only months after it began. Server side Go is the new thing if you want to be current.

Chasing these rapidly changing server side languages starts to seem irrelevant if you don't foresee ever getting a web development job. In the end, a simple hand coded Java web server could do all the required functionality. The only buzzwords used were a JSON query & some jquery commands.

It wasn't eye candy compliant, but it was the beginning of a phone cloud. All the workouts could be easily viewed.

In Android's current form, the phone screen has to be on for the web server to work.

Building this into a complete social network with accounts, sharing, permissions, advertising, eye candy, & spam would be a huge undertaking. It makes you appreciate what mapmywalk has done. A new version of such a thing would never be discovered in the sea of web apps.
Posted by Jack Crossfire | Aug 21, 2014 @ 12:31 AM | 2,401 Views
Not much can be done over 128kbit with modern software. The frustration of waiting 1 week for a router to arrive by mail in order to access data in a few milliseconds shows how bits live in 2015 while atoms still live in 1945. Same day shipping joins self government & happy marriage in the triangle humans will never achieve.

There is hope for drone delivery, along with infinite batteries, 3D printing, & Steam Valve. Though technically possible, the manual labor involved in drone delivery is still the same as doing it with R-22's. The trick is automating it to the point of no oversight.

There are 6 bay area drone startups. More people work in drone technology than mainframes. They range from agriculture to follow cams to drone delivery. The ages-old market of persistent surveillance is gone, but still seems to be the only viable market.

Ended up getting interviews from 2, with no results. Quite a contrast to when jobs at small drone startups were fairly easy to get. 10 years ago, there was just 1. Now there are 6, but there are 6,000,000 more applicants. Like web development, it's become a huge industry with an even huger following, making it impossible to get in.

There is surprisingly little attention to safety. In the old days, pilots only bantered about safety. Now, thinking anything could go wrong is an attitude problem.
Posted by Jack Crossfire | Aug 19, 2014 @ 08:55 PM | 2,525 Views
So it took only 2 days to reach 3GB of data usage, at which point Dick Branson throttled it again to 128kbit. Traffic shaping at 1 megabit didn't make any difference, though the algorithm still might have been based on usage exceeding a certain amount in a certain number of hours. Given the normal usage of any modern web app, 2.5GB per month is nowhere near enough.

Was surprised to find facebook & eclipse now use 100% of your network capacity, at all times. The stigma of avoiding busy waits in software has yet to reach network usage. Facebook constantly loads content preemptively. Eclipse constantly uploads your code to static analysis tools.

The jump from GPRS to EDGE in 2005 was as fast as wireless ever got. There have been many unlimited plans, but all eventually had to throttle back to EDGE after paltry amounts. It's another area with no practical improvement in 10 years & no further research. The death of wireless research has been blamed on easy money.

It's back to twisted pair copper as the deathstar claims another one. The empire charges $443/year with a $174 down payment for 3 megabits. Pray it doesn't change its mind.

Google fiber remains vaporware for its 5th year, with 1 affluent neighborhood in Kansas City the only place it was ever implemented & technical challenges proving insurmountable.
Posted by Jack Crossfire | Aug 18, 2014 @ 08:09 PM | 3,181 Views
Dreamed about converting the entire home network to IPv6. Manually typed IP addresses gave way to either copying 128 bit addresses from a file or giving everything a hostname. The IP masquerading mess was gone. Everything was a live address on the internet again, with some kind of firewall. All the private data which had to be uploaded to a corporate cloud server could be stored on a private server, yet still accessible from anywhere. There was no more datamining, government scanning, employer scanning, SQL injection, heartbleed, amazon EC2 downtime, or advertising for amazing refinance rates. It was all locally stored & free again. It was like 1999 again.

There is a flood of users to cloud data storage just as arrests carried out by data mining are exploding. It's the same strange flocking toward control that humans have demonstrated for all time. So far, there was the famous arrest of a pedophile based on his gmail content & the arrest of a guy who searched for ways to get rid of his roommate. Who knows how many arrests resulting from data mining aren't making the news or when they're going to target content related to taxes, student loans, speed limits, & mobile data usage.
Posted by Jack Crossfire | Aug 17, 2014 @ 06:30 PM | 2,948 Views

The long sought after bluetooth GPS hack was finally tested.

It was a complete failure.

...Continue Reading
Posted by Jack Crossfire | Aug 16, 2014 @ 07:50 PM | 3,004 Views
The latest theory was that bandwidth limiting was the result of total usage + current bitrate going above a certain amount. If the current bitrate was always below a certain amount, maybe the total usage wouldn't be capped. The iwconfig rate command doesn't do anything anymore. The only way to limit your bitrate is now traffic shaping.

Traffic shaping in Linux is a very long, involved process, requiring in-depth knowledge of the kernel. It's not supported on the phone itself. There is a tool which hard codes the most useful configuration. When run on the pi router, it successfully limits bandwidth between wlan0 & eth0, but not bandwidth between wlan0 & another station. It has to be run on every station to limit its own wlan interface.

Traffic shaping is not bulletproof. It can't limit the rate packets come from the internet, so it tries to limit the rate of ACK packets. Bandwidth still often goes above the limit, then settles below the limit once the window is full. The problem is easier on Virgin's side, since they're on the giving end of most of the data.
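For reference, the kind of hard coded configuration such a tool wraps boils down to a single tc queueing discipline; the interface & rate here are examples, & it only shapes what the station itself transmits:

```shell
# Cap outgoing traffic on wlan0 to roughly 1 megabit with a
# token bucket filter. Inbound traffic can only be influenced
# indirectly, by delaying the ACKs this station sends back.
tc qdisc add dev wlan0 root tbf rate 1mbit burst 32kbit latency 400ms

# Remove the limit.
tc qdisc del dev wlan0 root
```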