Once stuck to cvs as long as possible before surrendering to svn. Stuck to svn during the bitkeeper & perforce craze, but after 2 years of employers demanding 50 years of git experience, the mac migration was finally a good time to migrate to git. Having now lived through cvs, svn, bitkeeper, perforce, & git, git is the current summer blockbuster, an entertaining step forwards in some ways & backwards in other ways until next season. Now the cheat sheet.
svn: configure svnserve, create fake user, create mane repository, look up how to fix broken configuration files & broken permissions
git: every checkout is a repository accessed through ssh. Create & copy .ssh/id_rsa.pub to .ssh/authorized_keys to allow a user to check out.
It's 1 less copy of the source code to worry about. A checkout is not bound to a single repository. If the repository goes away, the checkout can use another one.
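The whole server setup can be sketched in a few commands. A local path stands in for the ssh URL here so it can be run anywhere; over the network the clone URL would be user@server:/path/demo.git, after appending the client's .ssh/id_rsa.pub to the server's .ssh/authorized_keys:

```shell
# on the server side: a bare repository, no working copy
git init --bare /tmp/gitdemo.git
# on any client: the clone is itself a full repository, not a working
# copy bound to the server
git clone /tmp/gitdemo.git /tmp/gitdemo-checkout
```

If /tmp/gitdemo.git ever goes away, the checkout still has the full history & can push it to any other remote.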
The Asus batteries each lasted only 8 months. Then it was time for the final laptop: the macbook. After much debate, it was decided to dual boot it instead of erasing MacOS completely. Still remember walking out of Stonehenge mall one night, with that thing neatly packaged in its pristine cardboard box, the perfect packaging, the new mac smell. It's a very strange laptop, more like a stone tablet.
An SD card Ubuntu installation failed with MMC driver crashes. Trying to burn the DVD on the mac would always wrap the iso file in another iso file. Finally got it to install from a DVD burned on a Linux box. The installer couldn't initialize the network at all, but it ended up not being necessary.
Booting from the DVD requires holding down option to get into the firmware bootloader. Not sure those rEFIt or rEFInd bootloaders are required, since they just go into grub. Once installed, any of the EFI options seems to go into grub, which can then go into Ubuntu.
With a terminal program finally installed, it was possible to load the b43 wifi driver, see the error message, load the required firmware from an SD card, & configure the network manually.
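The b43 sequence was probably something like the following; the package names are the stock Debian/Ubuntu ones & loading requires the hardware, so this is a hedged reconstruction:

```shell
# fetch & install the proprietary firmware the b43 driver can't ship
sudo apt-get install firmware-b43-installer
# or extract it from a vendor driver by hand with b43-fwcutter
# reload the driver so it picks up the firmware
sudo modprobe -r b43
sudo modprobe b43
# check whether it came up or printed the missing-firmware error again
dmesg | grep b43
```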
The macbook's audio, suspend mode, & 2D graphics seemed to work, a rarity for Linux. Wifi was intermittent. The keyboard & single button mouse are a buster. The current commands which create alternate mouse buttons:
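The commands themselves seem to have fallen out of the post. A hedged reconstruction, using the usual trick of turning spare keys into mouse buttons with xmodmap & mouse keys (the keycodes are assumptions & vary by keyboard; find yours with xev):

```shell
# map 2 spare modifier keys to middle & right click
xmodmap -e "keycode 134 = Pointer_Button2"
xmodmap -e "keycode 108 = Pointer_Button3"
# Pointer_Button* keysyms only work with mouse keys enabled
xkbset m
```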
It's finally gone beyond the traditional 2 man startup industry, even if it's still just hope. Also gone are jobs for hobbyists & self taught engineers. Masters degrees are required to have any shot at these places. Whether useful products emerge, at least for now, designing autonomous systems for a living is as typical as designing routers once was. The average engineer of today is judged by how long his quad copter stays aloft as much as someone 30 years ago was judged by how long his router stayed online.
The $1400 beast finally blanked out after 7 years. The power light came on, but it would no longer detect a signal. Another monitor died earlier in the year, failing to come on at all. It was attributed to a mane processor burnout. Components can now be optimized precisely enough to hit a minimum required lifespan.
Thus began a period of despair over a future of just laptops or losing $400.
The modern world war is like a technological version of WWII. The people just watch automated systems blasting each other, 30,000 feet above in endless dogfights that used to be performed by humans in P-38's & B-17's.
To be sure, the modern war is consuming lives faster than WWII. The modern missiles that get through the robotic defenses are far more powerful & accurate than the V-2 & torpedo. There's no allied invasion coming to rescue the refugees in Ukraine, Iraq, & Libya like there was in WWII. They just fight & die forever.
So after a week & 3 hours of a technician refurbishing the phone line, the empire came in at only 2.8 megabits down, .4 megabits up. Either that's the actual throttling level of the 3 megabit plan or the phone line physically can't deliver anything higher, for any plan. DSL is still a science project, compared to DOCSIS. The phone line hadn't been used for 14 years. 3 presidents, many road repairs, & many remodelings happened in that time.
So you wanted to move all the GPS logs from a corporate datacenter to a private datacenter on the phone. The phone has no downtime, it's a lot faster to access, it's not scanned by the government, it's always with you. It's the perfect cloud server.
It could be done in a few minutes with a hand coded Java web server & a custom protocol, but you wanted to parlay it into a profitable skill. The most current strategy is using node.js as the server, but there's no fully functional node.js which can run on Android & no interest in making one. Cloud servers are supposed to be on corporate datacenters like amazon.com, not phones.
The best solutions have required a complete install of debian on a dedicated partition, with node run inside chroot. Node.js requires too many libraries not part of Android.
The honeymoon with node.js already seems to be over, only months after it began. Server side Go is the new thing if you want to be current.
Chasing these rapidly changing server side languages starts to seem irrelevant if you don't foresee ever getting a web development job. In the end, a simple hand coded Java web server could do all the required functionality. The only buzzwords used were a JSON query & some jquery commands.
It wasn't eye candy compliant, but it was the beginning of a phone cloud. All the workouts could be easily viewed.
Building this into a complete social network with accounts, sharing, permissions, advertising, eye candy, & spam, would be a huge undertaking. It makes you appreciate what mapmywalk has done. A new version of such a thing would never be discovered in the sea of web apps.
Not much can be done over 128kbit with modern software. The frustration of waiting 1 week for a router to arrive by mail in order to access data in a few milliseconds shows how bits live in 2015 while atoms still live in 1945. Same day shipping joins self government & happy marriage in the triangle humans will never achieve.
There is hope for drone delivery, along with infinite batteries, 3D printing, & Steam Valve. Though technically possible, the manual labor involved in drone delivery is still the same as doing it with R-22's. The trick is automating it to the point of no oversight.
There are 6 bay area drone startups. More people work in drone technology than maneframes. They range from agriculture to follow cams to drone delivery. The ages old market of persistent surveillance is gone, but still seems to be the only viable market.
Ended up getting interviews from 2, with no results. Quite a contrast to when jobs at small drone startups were fairly easy to get. 10 years ago, there was just 1. Now there are 6, but there are 6,000,000 more applicants. Like web development, it's become a huge industry with an even huger following making it impossible to get in.
There is surprisingly little attention to safety. In the old days, pilots only bantered about safety. Now, thinking anything could go wrong is an attitude problem.
So it took only 2 days to reach 3GB of data usage, at which point Dick Branson throttled it again to 128kbit. Traffic shaping at 1megabit didn't make any difference, though the algorithm still might have been based on usage exceeding a certain amount in a certain number of hours. Given the normal usage of any modern web app, 2.5GB per month is nowhere near enough.
Was surprised to find facebook & eclipse now use 100% of your network capacity, at all times. The stigma of avoiding busy waits in software has yet to reach network usage. Facebook constantly loads content preemptively. Eclipse constantly uploads your code to static analysis tools.
The jump from GPRS to EDGE in 2005 was as fast as wireless ever got. There have been many unlimited plans, but all eventually had to throttle back to EDGE after paltry amounts. It's another area with no practical improvement in 10 years & no further research. The death of wireless research has been blamed on easy money.
It's back to twisted pair copper as the deathstar claims another one. The empire charges $443/year with a $174 down payment for 3 megabits. Pray it doesn't change its mind.
Google fiber remanes vaporware for its 5th year, with 1 affluent neighborhood in Kansas City the only place it was ever implemented, the technical challenges proving insurmountable.
Dreamed about converting the entire home network to IPv6. Manually typed IP addresses gave way to either copying 128 bit addresses from a file or giving everything a hostname. The IP masquerading mess was gone. Everything was a live address on the internet again, with some kind of firewall. All the private data which had to be uploaded to a corporate cloud server could be stored on a private server, yet still accessible from anywhere. There was no more datamining, government scanning, employer scanning, SQL injection, heartbleed, amazon EC2 downtime, or advertising for amazing refinance rates. It was all locally stored & free again. It was like 1999 again.
There is a flood of users to cloud data storage just as arrests carried out by data mining are exploding. It's the same strange flocking to control humans have demonstrated for all time. So far, there was the famous arrest of a pedophile based on his gmail content & the arrest of a guy who searched for ways to get rid of his roommate. Who knows how many arrests resulting from data mining aren't making the news or when they're going to target content related to taxes, student loans, speed limits, & mobile data usage.
The latest theory was bandwidth limiting was the result of total usage + current bitrate going above a certain amount. If the current bitrate was always below a certain amount, maybe the total usage wouldn't be capped. The iwconfig rate command doesn't do anything anymore. The only way to limit your bitrate is now traffic shaping.
Traffic shaping in Linux is a very long, involved process, requiring in depth knowledge of the kernel. It's not supported on the phone itself. There is a tool, wondershaper (http://lartc.org/wondershaper/), which hard codes the most useful configuration. When run on the pi router, it successfully limits bandwidth between wlan0 & eth0, but not bandwidth between wlan0 & another station. It has to be run on every station to limit its own wlan interface.
Traffic shaping is not bulletproof. It can't limit the rate packets come from the internet, so it tries to limit the rate of ACK packets. Bandwidth still often goes above the limit, then settles below the limit once the window is full. The problem is easier on Virgin's side, since they're on the giving end of most of the data.
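Under the hood, wondershaper is just a canned tc script. A minimal sketch of the same idea, assuming wlan0 & a 1 megabit ceiling (needs root & the actual interface, so it can't run here):

```shell
# egress: token bucket filter caps what leaves the interface
tc qdisc add dev wlan0 root tbf rate 1mbit burst 32kbit latency 400ms
# ingress: can't slow the sender directly, so police what arrives &
# drop the excess, which stalls the sender's TCP window
tc qdisc add dev wlan0 handle ffff: ingress
tc filter add dev wlan0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 1mbit burst 32k drop flowid :1
# undo both
tc qdisc del dev wlan0 root
tc qdisc del dev wlan0 ingress
```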
So at 4am, after hitting 6.45GB & a stretch of 9 megabit downloading, bandwidth was finally cut down to 128kbit with an email saying 2.5GB was exceeded & the party was over until the next billing cycle. It was definitely strange timing.
The internet screams bloody murder whenever someone complains about unlimited plans not really being unlimited & how it's the corporation's right to cut you off for degrading the network performance, but look. Who's suffering from degraded network performance at 4am?
The solution to network performance is a priority queue. If the quota is exceeded, you put the user on the bottom of the queue. If the network is idle, they get the full bandwidth. If the network is full, they get reduced bandwidth. Voice calls have been prioritized higher than data for all time without requiring the data to be limited to 128kbit.
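In Linux terms the priority queue is 1 qdisc & 1 filter. A sketch, assuming the carrier's accounting system marks over-quota users' packets with fwmark 1 (via iptables -j MARK or similar); needs root, so illustrative only:

```shell
# 2 bands: band 1:1 drains first, band 1:2 only drains when 1:1 is idle
tc qdisc add dev eth0 root handle 1: prio bands 2 \
    priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
# marked (over-quota) packets fall to the low band; everyone else
# keeps full priority, & the low band still gets full bandwidth
# whenever the network is idle
tc filter add dev eth0 parent 1: protocol ip handle 1 fw classid 1:2
```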
Bandwidth throttling has nothing to do with improving network performance, but is meant to force you to pay for a higher end plan. The idea makes sense in terms of profits, if you offer a higher end plan, but Virgin doesn't. They just offer 2.5GB plans.
The Goog shows Virgin trying many experiments in network management, from temporary throttling to upload throttling to peak hour throttling, but never the obvious priority queue. Maybe it's technologically unfeasible to achieve such prioritization, but Kiwipedia says it's a tried & true practice.
Virgin obviously can't figure out a solution or doesn't have any incentive to do the obvious. It's another time when easy solutions abound but easy money doesn't make them worth it, much like 20 years ago when IIS crashed constantly & no-one could be bothered with the easy solution of using apache & Linux.
Well, you're probably more productive in the current situation, but come the next interview or online course, it's going to be DSL.
So you have a pi router with wlan0 & eth0. The internet is on a wlan0 address 10.0.1.11. The home network is on an eth0 address 10.0.0.11. Getting from a network on eth0 to a phone on wlan0 is a big deal. There are many ways to do it. One is using a bridge device on the pi to make eth0 & wlan0 the same device.
apt-get install bridge-utils
In /etc/hostapd/hostapd.conf add
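The directive itself seems to have been lost from the post; the standard one, which makes hostapd put wlan0 into the bridge, is:

```
bridge=br0
```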
The IP masquerading is the same. The IP addresses of wlan0 & eth0 need to be 0.0.0.0. Setting br0 sets both to the address of br0. Everyone on the wired network uses 10.0.0.11 as their gateway, which forwards everything to the phone. Bridging falls apart when you put more than 1 access point on the network.
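By hand, the bridge amounts to the following (addresses from above; hostapd attaches wlan0 itself, & root is required):

```shell
brctl addbr br0
brctl addif br0 eth0
# the member interfaces lose their addresses to the bridge
ifconfig eth0 0.0.0.0
ifconfig wlan0 0.0.0.0
ifconfig br0 10.0.0.11 netmask 255.255.255.0 up
```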
Instead of a bridge, you need the wireless parts & the wired parts on different subnets. Every access point is a different subnet, while the wired part is the same. On the phone & every wireless computer, create a routing entry from the wireless subnet to the wired subnet.
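With the addresses from above, the entry on the phone & each wireless station would be something like (needs root):

```shell
# reach the wired 10.0.0.0/24 subnet via the pi's wlan0 address
ip route add 10.0.0.0/24 via 10.0.1.11
```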
It may seem strange to use a fragile optical fiber to move 1.4 megabits of data in this world of gigabit wireless, but it was cheaper than buying a new amplifier. Merely enable IEC958 on Mix2005 & the raw digital audio buffer goes straight to the amplifier. The amplifier automatically switches to its own DAC. There's no longer any level control on the computer, no more ground loop, & no more AC hum. The highs jump out a lot more. The amplifier seems to do the full 48khz.
It's probably the best sounding computer in the world, since no-one ever bothers connecting their computer to a decent speaker, let alone digitally. Computers were still supporting the 30 year old optical audio standard until 2010. There was never any need to bother with RCA cables, since the optical cable was $3. Nowadays, they all use bluetooth.
The next logical step was of course digital audio from the cp33 -> amplifier.
Time once again to open it up to flash a smaller fragment size. With the fragment sizes optimized as small as possible, the delay was estimated at 7ms. Unfortunately, it was a waste. The delay was noticeable. There was no difference in the amount of hiss. It probably wouldn't be robust enough to record anything. The experiment did reveal that Zone 1 changed amplitude in the digital stream, so all the recordings were overloaded. It still might be useful to eliminate ground loops.
Having never played with bluetooth audio besides the crappy old phone headsets, would assume bluetooth audio had the worst latency. USB standards compliant audio probably has the latency of a soundcard, with toslink still the only thing down to single sample latency.
For the 1st time in history, someone overlaid the supermoon on a previous photo of the moon, using the same lens & camera. The only previous photo was the last lunar eclipse. The supermoon actually was slightly bigger. No-one experiments anymore. They just photograph it, with no interest in a science experiment to see the size difference for themselves, with their own camera, from the same location.
Google & Yahoo jumped on the email encryption bandwagon last year, then promptly got a lot more aggressive about turning over anyone who stored any suspicious content on their account. Google's mane victory was a pedophile who was arrested after a content ID algorithm applied to all gmail content found naked kid photos in his account.
After the pedophile was arrested, Google went on a renewed campaign advertising complete "end to end" encryption between it & Yahoo's servers. Their email service was not only as private as a hard drive, but so was Yahoo's.
Is the hype about email encryption just a modern dragnet that's trying to leverage ignorance to get criminals, or is there hope for someday having a completely private link between 2 points? Technically, private webmail is impossible.
The message has to be encrypted in the sender's browser using the receiver's public key, then decrypted in the receiver's browser. The decryption can't happen anywhere besides the receiver's browser. Wherever the receiver wants to run a different browser, the private key has to be entered in the browser & it has to be decrypted in the new browser.
The key pair is usually a hash of the password or something that can be changed. If the user changes keys, every email has to be downloaded from the server, decrypted with the old keypair, reencrypted with the new keypair, & uploaded again. It's completely impractical if the user has 1 gig of stored emails, & 1 gig of online storage was originally what sold gmail.
At most, the "end to end" encryption being advertised can only encrypt the transfer over the wire. It has to be decrypted on the final server that dispatches it to the reader.
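For comparison, what real end to end encryption looks like, sketched with gpg (the address & filename are made up, & a keyring with the receiver's keys is assumed):

```shell
# encrypt on the sender's machine with the receiver's public key
gpg --encrypt --recipient reader@example.com message.txt
# only the receiver's private key, which never touches any server,
# can open the resulting message.txt.gpg
gpg --decrypt message.txt.gpg
```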
So skype finally pulled the plug & required everyone to upgrade their 2010 binaries. They got rid of the static binaries & the ALSA support. It now requires 64 libraries, which must all be tracked down, manely the complete 32bit Qt set. Mercifully, it still runs on 2010 era ld-linux.so.
After 20 years of audio problems, the final solution to audio configuration is now considered to be pulseaudio. Skype also no longer retains your password unless explicitly forced by a hidden button.
Sound configuration previously required creating a .asoundrc file with a bunch of routes & some carefully calculated buffer sizes. It automatically worked. The new procedure requires 1st running pulseaudio, then running pavucontrol, then setting the input source in Mix2005 to Rear mic & setting a monitoring level, then manually unmuting the output & input devices in pavucontrol, manually setting the levels pavucontrol can see, then finally running skype.
Pavucontrol can neither store its configuration nor access all the registers. No mixer besides Mix2005 can, so some parameters have to be written by Mix2005 & some have to be written by pavucontrol. Every adjustment to Mix2005 causes pavucontrol to mute again. There's no way to adjust the monitoring level during a call, since that's only available on Mix2005. The current ALSA driver maps the mic to rear mic. Only Mix2005 can access all the registers for monitoring the mic level.
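The pavucontrol clicking can at least be scripted with pactl, since pavucontrol is just a GUI over the same daemon (the levels here are arbitrary & a running pulseaudio is assumed):

```shell
# unmute & set the output & input devices pavucontrol can see
pactl set-sink-mute @DEFAULT_SINK@ 0
pactl set-sink-volume @DEFAULT_SINK@ 100%
pactl set-source-mute @DEFAULT_SOURCE@ 0
pactl set-source-volume @DEFAULT_SOURCE@ 75%
```

The registers only Mix2005 can reach still have to be set there.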
The quest to remove Commie cast revealed Sprint has a transparent proxy transcoding all the jpgs at the network level to reduce the file size. The quality reduction is 75%. The same result happens whether you try a proxy server or your own private server.
It is possible to bypass the processing, using https, but very few servers actually support https. Proxy servers that advertise anonymous browsing don't actually support https or don't support wget.
Looks like timelapse webcam movies are going to be the next victim of the recession. With the amount of outrage over Commie throttling Netflix or Netflix charging penalties to Commie customers, people only get outraged when they're told to & Sprint degrading jpg images is just an area they haven't been told to be outraged over.