So you wanted to move all the GPS logs from a corporate datacenter to a private datacenter on the phone. The phone has no downtime, it's a lot faster to access, it's not scanned by the government, it's always with you. It's the perfect cloud server.
It could be done in a few minutes with a hand coded Java web server & a custom protocol, but you wanted to parlay it into a profitable skill. The most current strategy is using node.js as the server, but there's no fully functional node.js that can run on Android & no interest in making one. Cloud servers are supposed to be in corporate datacenters like amazon.com, not on phones.
The best solutions have required a complete install of debian on a dedicated partition, with node run inside chroot. Node.js requires too many libraries not part of Android.
The honeymoon with node.js already seems to be over, only months after it began. Server side Go is the new thing if you want to be current.
Chasing these rapidly changing server side languages starts to seem irrelevant if you don't foresee ever getting a web development job. In the end, a simple hand coded Java web server could do all the required functionality. The only buzzwords used were a JSON query & some jquery commands.
It wasn't eye candy compliant, but it was the beginning of a phone cloud. All the workouts could be easily viewed.
In Android's current form, the phone screen has to be on for the web server to work.
Building this into a complete social network with accounts, sharing, permissions, advertizing, eye candy, & spam, would be a huge undertaking. It makes you appreciate what mapmywalk has done. A new version of such a thing would never be discovered in the sea of web apps.
Not much can be done over 128kbit with modern software. The frustration of waiting 1 week for a router to arrive by mail in order to access data in a few milliseconds shows how bits live in 2015 while atoms still live in 1945. Same day shipping joins self government & happy marriage in the triangle humans will never achieve.
There is hope for drone delivery, along with infinite batteries, 3D printing, & Steam Valve. Though technically possible, the manual labor involved in drone delivery is still the same as doing it with R-22's. The trick is automating it to the point of no oversight.
There are 6 bay area drone startups. More people work in drone technology than maneframes. They range from agriculture to follow cams to drone delivery. The ages old market of persistent surveillance is gone, but still seems to be the only viable market.
Ended up getting interviews from 2, with no results. Quite a contrast to when jobs at small drone startups were fairly easy to get. 10 years ago, there was just 1. Now there are 6, but there are 6,000,000 more applicants. Like web development, it's become a huge industry with an even huger following, making it impossible to get in.
There is surprisingly little attention to safety. In the old days, pilots only bantered about safety. Now, thinking anything could go wrong is an attitude problem.
So it took only 2 days to reach 3GB of data usage, at which point Dick Branson throttled it again to 128kbit. Traffic shaping at 1megabit didn't make any difference, though the algorithm still might have been based on usage exceeding a certain amount in a certain number of hours. Given the normal usage of any modern web app, 2.5GB per month is nowhere near enough.
Was surprised to find facebook & eclipse now use 100% of your network capacity, at all times. The stigma of avoiding busy waits in software has yet to reach network usage. Facebook constantly loads content preemptively. Eclipse constantly uploads your code to static analysis tools.
The jump from GPRS to EDGE in 2005 was as fast as wireless ever got. There have been many unlimited plans, but all eventually had to throttle back to EDGE after paltry amounts. It's another area with no practical improvement in 10 years & no further research. The death of wireless research has been blamed on easy money.
It's back to twisted pair copper as the deathstar claims another one. The empire charges $443/year with a $174 down payment for 3 megabits. Pray it doesn't change its mind.
Google fiber remanes vaporware for its 5th year, with 1 affluent neighborhood in Kansas City the only place it was ever implemented & the technical challenges proving insurmountable.
Dreamed about converting the entire home network to IPv6. Manually typed IP addresses gave way to either copying 128 bit addresses from a file or giving everything a hostname. The IP masquerading mess was gone. Everything was a live address on the internet again, with some kind of firewall. All the private data which had to be uploaded to a corporate cloud server could be stored on a private server, yet still accessible from anywhere. There was no more datamining, government scanning, employer scanning, SQL injection, heartbleed, amazon EC2 downtime, or advertising for amazing refinance rates. It was all locally stored & free again. It was like 1999 again.
There is a flood of users to cloud data storage just as arrests carried out by data mining are exploding. It's the same strange flocking toward control humans have demonstrated for all time. So far, there was the famous arrest of a pedophile based on his gmail content & the arrest of a guy who searched for ways to get rid of his roommate. Who knows how many arrests resulting from data mining aren't making the news, or when they're going to target content related to taxes, student loans, speed limits, & mobile data usage.
The latest theory was bandwidth limiting was the result of total usage + current bitrate going above a certain amount. If the current bitrate was always below a certain amount, maybe the total usage wouldn't be capped. The iwconfig rate command doesn't do anything anymore. The only way to limit your bitrate is now traffic shaping.
Traffic shaping in Linux is a very long, involved process, requiring in depth knowledge of the kernel. It's not supported on the phone itself. There is a tool called wondershaper (http://lartc.org/wondershaper/) which hard codes the most useful configuration. When run on the pi router, it successfully limits bandwidth between wlan0 & eth0, but not bandwidth between wlan0 & another station. It has to be run on every station to limit its own wlan interface.
Traffic shaping is not bulletproof. It can't limit the rate packets come from the internet, so it tries to limit the rate of ACK packets. Bandwidth still often goes above the limit, then settles below the limit once the window is full. The problem is easier on Virgin's side, since they're on the giving end of most of the data.
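For reference, a minimal sketch of the kind of egress shaping & ingress policing wondershaper hard codes, assuming the pi router's eth0 & a 1 megabit target (needs root; the interface & rate here are made up for illustration):

```shell
# limit outbound traffic on eth0 to ~1 megabit with a token bucket filter
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# inbound can only be policed: drop packets arriving faster than the limit
# so the sender's TCP window backs off
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 1mbit burst 32k drop flowid :1
```

The ingress police is why the limit overshoots: packets already in flight get through until the drops take effect & the window settles.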
So at 4am, after hitting 6.45GB & a stretch of 9 megabit downloading, bandwidth was finally cut down to 128kbit, with an email saying 2.5GB was exceeded & the party was over until the next billing cycle. It was definitely strange timing.
The internet screams bloody murder whenever someone complains about unlimited plans not really being unlimited & how it's the corporation's right to cut you off for degrading the network performance, but look. Who's suffering from degraded network performance at 4am?
The solution to network performance is a priority queue. If the quota is exceeded, you put the user on the bottom of the queue. If the network is idle, they get the full bandwidth. If the network is full, they get reduced bandwidth. Voice calls have been prioritized higher than data for all time without requiring the data to be limited to 128kbit.
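The priority queue can be sketched with the same Linux tc tool, assuming a hypothetical gateway interface & a hypothetical over-quota user at 10.0.0.99 (needs root; an illustration of the idea, not Virgin's gear):

```shell
# 3 band priority queue: band 0 drains 1st (voice/interactive), the last
# band drains only when the higher bands are idle
tc qdisc add dev eth0 root handle 1: prio bands 3

# demote the over-quota user to the last band instead of hard capping them:
# full bandwidth when the network is idle, reduced when it's busy
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip src 10.0.0.99 flowid 1:3
```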
Bandwidth throttling has nothing to do with improving network performance, but is meant to force you to pay for a higher end plan. The idea makes sense in terms of profits, if you offer a higher end plan, but Virgin doesn't. They just offer 2.5GB plans.
The Goog shows Virgin trying many experiments in network management, from temporary throttling to upload throttling to peak hour throttling, but never the obvious priority queue. Maybe it's technologically unfeasible to achieve such prioritization, but Kiwipedia says it's a tried & true practice.
Virgin obviously can't figure out a solution or doesn't have any incentive to do the obvious. It's another time when easy solutions abound but easy money doesn't make them worth it, much like 20 years ago when IIS crashed constantly & no-one could bother with the easy solution of using apache & Linux.
Well, you're probably more productive in the current situation, but come the next interview or online course, it's going to be DSL.
So you have a pi router with wlan0 & eth0. The internet is on a wlan0 address, 10.0.1.11. The home network is on an eth0 address, 10.0.0.11. Getting from a network on eth0 to a phone on wlan0 is a big deal. There are many ways to do it. There's using a bridge device on your pi to make eth0 & wlan0 the same device.
apt-get install bridge-utils
In /etc/hostapd/hostapd.conf add a bridge=br0 line, so hostapd attaches wlan0 to the bridge.
The IP masquerading is the same. The IP addresses of wlan0 & eth0 need to be 0.0.0.0. Setting br0 sets both to the address of br0. Everyone on the wired network uses 10.0.0.11 as their gateway, which forwards everything to the phone. Bridging falls apart when you put more than 1 access point on the network.
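The bridge bringup can be sketched with Debian's bridge-utils, using the addresses from above (hostapd enslaves wlan0 itself when its config points at br0):

```shell
# create the bridge & enslave the wired interface
brctl addbr br0
brctl addif br0 eth0

# member interfaces carry no address of their own
ip addr flush dev eth0

# the bridge takes the old eth0 address & everyone uses it as the gateway
ip addr add 10.0.0.11/24 dev br0
ip link set br0 up
```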
Instead of a bridge, you need the wireless parts & the wired parts on different subnets. Every access point is a different subnet, while the wired part is the same. On the phone & every wireless computer, create a routing entry from the wireless subnet to the wired subnet.
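With the subnets from above, the routing entry on each wireless station is 1 line, assuming the pi's wlan0 address is the next hop & /24 masks (the masks are an assumption):

```shell
# on the phone & every wireless computer: reach the wired subnet via the pi
ip route add 10.0.0.0/24 via 10.0.1.11 dev wlan0
```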
It may seem strange to use a fragile optical fiber to move 1.4 megabits of data in this world of gigabit wireless, but it was cheaper than buying a new amplifier. Merely enable IEC958 on Mix2005 & the raw digital audio buffer goes straight to the amplifier. The amplifier automatically switches to its own DAC. There's no longer any level control on the computer, no more ground loop, & no more AC hum. The highs jump out a lot more. The amplifier seems to do the full 48khz.
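Mix2005 is a custom mixer; on a stock ALSA setup the equivalent switch is roughly the following (the control name 'IEC958' is an assumption & varies by card; list the real names with amixer scontrols):

```shell
# unmute the S/PDIF passthrough so the raw digital buffer hits the optical out
amixer set 'IEC958' unmute
```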
It's probably the best sounding computer in the world, since no-one ever bothers connecting their computer to a decent speaker, let alone digitally. Computers were still supporting the 30 year old optical audio standard until 2010. There was never any need to bother with RCA cables, since the optical cable was $3. Nowadays, they all use bluetooth.
The next logical step was of course digital audio from the cp33 -> amplifier.
Time once again to open it up to flash a smaller fragment size. With the fragment sizes optimized as small as possible, the delay was estimated at 7ms. Unfortunately, it was a waste. The delay was noticeable. There was no difference in the amount of hiss. It probably wouldn't be robust enough to record anything. The experiment did reveal that Zone 1 changed amplitude in the digital stream, so all the recordings were overloaded. It still might be useful to eliminate ground loops.
Having never played with bluetooth audio besides the crappy old phone headsets, would assume bluetooth audio had the worst latency. USB standards compliant audio probably has the latency of a soundcard, with toslink still the only thing down to single sample latency.
For the 1st time in history, someone overlaid the supermoon on a previous photo of the moon, using the same lens & camera. The only previous photo was the last lunar eclipse. The supermoon actually was slightly bigger. No-one experiments anymore. They just photograph it with no interest in a science experiment to see for themselves a size difference with their own camera, from the same location.
Google & Yahoo jumped on the email encryption bandwagon last year, then promptly got a lot more aggressive about turning over anyone who stored any suspicious content on their account. Google's mane victory was a pedophile who was arrested after a content ID algorithm applied to all gmail content found naked kid photos in his account.
After the pedophile was arrested, Google went on a renewed campaign advertizing complete "end to end" encryption between it & Yahoo's servers. Their email service was not only as private as a hard drive, but so was Yahoo's.
Is the hype about email encryption just a modern dragnet that's trying to leverage ignorance to get criminals, or is there hope for someday having a completely private link between 2 points? Technically, private webmail email is impossible.
The message has to be encrypted in the sender's browser using the receiver's public key, then decrypted in the receiver's browser. The decryption can't happen anywhere besides the receiver's browser. Whenever the receiver wants to run a different browser, the private key has to be entered in that browser & the decryption has to happen there.
The key pair is usually derived from a hash of the password, or from something else the user can change. If the user changes keys, every email has to be downloaded from the server, decrypted with the old keypair, reencrypted with the new keypair, & uploaded again. It's completely impractical if the user has 1 gig of stored emails, & 1 gig of online storage was originally what sold gmail.
At most, the "end to end" encryption being advertized can only encrypt the transfer over the wire. It has to be decrypted on the final server that dispatches it to the reader.
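The browser-side half of real end to end encryption can be sketched with openssl standing in for the browser (the filenames & message are made up; a real mail client would fetch the receiver's published public key):

```shell
# receiver generates a keypair; only the public half ever leaves the machine
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

# sender encrypts with the receiver's public key
echo "meet me at noon" > mail.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in mail.txt -out mail.enc

# only the receiver's private key can recover it; any server in the middle
# sees nothing but ciphertext
openssl pkeyutl -decrypt -inkey priv.pem -in mail.enc
```

The point of the sketch is that the server only ever stores mail.enc, which is exactly what webmail can't do without re-encrypting the whole mailbox on every key change.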
So skype finally pulled the plug & required everyone to upgrade their 2010 binaries. They got rid of the static binaries & the ALSA support. It now requires 64 libraries, which must all be tracked down, manely the complete 32bit Qt set. Mercifully, it still runs on 2010 era ld-linux.so.
After 20 years of audio problems, the final solution to audio configuration is now considered to be pulseaudio. Skype also no longer retains your password unless explicitly forced by a hidden button.
Sound configuration previously required creating a .asoundrc file with a bunch of routes & some carefully calculated buffer sizes, after which it automatically worked. The new procedure requires 1st running pulseaudio, then running pavucontrol, then setting the input source in Mix2005 to Rear mic & setting a monitoring level, then manually unmuting the output & input devices in pavucontrol, manually setting the levels pavucontrol can see, then finally running skype.
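The pulseaudio half of that procedure can at least be scripted, assuming the default source & sink aliases (an assumption; check pactl list short sources if they're missing on an older pulseaudio):

```shell
# start the daemon & undo the default muting; the Mix2005 steps
# (input source, monitoring level) still have to be done by hand
pulseaudio --start
pactl set-sink-mute @DEFAULT_SINK@ 0
pactl set-source-mute @DEFAULT_SOURCE@ 0
pactl set-source-volume @DEFAULT_SOURCE@ 75%
```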
Pavucontrol can neither store its configuration nor access all the registers. No mixer besides Mix2005 can, so some parameters have to be written by Mix2005 & some have to be written by pavucontrol. Every adjustment to Mix2005 causes pavucontrol to mute again. There's no way to adjust the monitoring level during a call, since that's only available on Mix2005. The current ALSA driver maps the mic to rear mic. Only Mix2005 can access all the registers for monitoring the mic level.
The quest to remove Commie cast revealed Sprint has a transparent proxy transcode all the jpgs at the network level to reduce the file size. The quality reduction is 75%. The same result happens whether you try a proxy server or your own private server.
It is possible to bypass the processing, using https, but very few servers actually support https. Proxy servers that advertize anonymous browsing don't actually support https or don't support wget.
Looks like timelapse webcam movies are going to be the next victim of the recession. With the amount of outrage over Commie throttling Netflix or Netflix charging penalties to Commie customers, people only get outraged when they're told to & Sprint degrading jpg images is just an area they haven't been told to be outraged over.
So there was an interview at a follow cam startup. At 1st sight, the guy made up his mind. There was a courtesy lunch & then what was originally planned as a 5 hour tour was terminated after 1 hour.
They weren't interested in anyone over 30, but there was also a large experience gap. They were all just starting out in UAV design & may have wanted the novel ideas that people with minimal experience give, rather than the clouding of years of bad experiences. They may have wanted absolutely no skepticism that the idea was going to work.
They were way behind the other follow cams. Since everyone relies on the same 3D Robotics load with minor changes, it probably wasn't a big deal to languish. Like most every other idea getting the most funding, it was very unlikely to work. Would still say there is enough time before the sh*t hits the fan for the follow cam startups to make significant progress towards the goal.
It was disappointing that a place with no prototypes, minimal experience, & a very hard idea to realize was able to get significant funding, while the other ventures in toys & video editing were unable to get any funding after many prototypes, massive experience, & despite ideas that were equally feasible.
The interview guy had a PhD in CS from the U of T at Austin. The CEO had an MBA from Stanford. The education is definitely a consistent factor among those who got funded, while the unfunded consistently had very little formal education.
After running out of IP addresses, Comca$t disabled all the old routers. After a few days without service, an upgraded router with private subnet appeared. The upgraded router was almost able to saturate the home network, but completely unaffordable. It was surprisingly the same speed as the $100 plan, but they charged only $88.
It was finally time to start experimenting with shutting down the cable guy.
The replacement is a 3 megabit plan for $73 instead of $88. It's manely a test to see if the cable modem can be done away with.
With the growing amount of red on the war map, it's amazing the economy hasn't been affected. The oil keeps flowing & the stock market keeps going up. As long as you don't fly over 1/3 of the world, travel is unimpeded. If conflicts in the last 5 years are all counted, the entire map is red.
It would be convenient if the news provided a summary of all the fighting or referred to it all as "the war", but it's not officially a world war. It can't be officially called "the war" because the US president is completely in over his head.
Despite the media focus on code.org, massive spending on CS in primary school, & massive expansion of H-1B visas, there's no demand for people who can just program. There is demand for people with formal CS education & massive experience in very specific languages like Hack, Go, & Swift: languages invented by 1 company specifically for 1 application.
It's a new way of doing business. Languages based on industry standards, that could be transferred between many jobs are all but obsolete. Each company now develops their own language, specifically for their needs. Google invented Go. Apple invented Swift. Facebook invented Hack. There may soon be a time when every single application begins with developing a new language.
The key to the 1 language per application model was eliminating the need for every new language to have new libraries for accessing the system. Instead of requiring new libraries, all new languages access the system through HTML. They just communicate to the system through a character stream & might have some libraries for parsing the DOM tree.
A positive sign is that while interpreted languages were the rage from 2000-2010, all the new languages are natively compiled again. It's more secure to have the output of a trusted compiler on the development machine sent directly to the hardware on the consumer machine than to rely on an interpreter on the consumer machine which could have been compromised. Developers have famously breached the Dalvik...
No, it doesn't work, but it has been willed into working by massive numbers of reposts saying it works. It's a fine example of how very little of the internet is real, from the economic boom to the self driving cars, but has been willed into reality by the number of reposts. People are happy about the illusions, which is probably good enough. Someday we may never need know a day outside a simulated world.
Read about the quantum vacuum plasma thruster years ago & it wasn't any more credible then with an audience of 1 than it is now with an audience of 50 billion. The theory is by bouncing microwaves in a container, some of them move forwards & some move backwards. The mass of the photons in the microwaves creates inertia. Since 1 end reflects fewer microwaves than the other, more photons push in 1 direction & since the waves reflect many times, a single wavelength produces many bounces.
Of course, it does rely on expelling mass like every other engine. The mass of the photons leaked from the end with lower reflection is what propels it forwards. Bouncing the microwaves doesn't increase the thrust, but reduces it. The waveguide isn't 100% efficient, so photons are leaked from both ends. If the entire wave was expelled from 1 end with no reflections, it would be more efficient.
Like a modern dot com applying thousands of megabytes of a multitude of programming languages to print hello world, it's a case of success achieved by extreme overhead & enough complexity to sound convincing. So no. You can't buy a self driving car, there are fewer jobs than there were in 2007, newtonian physics hasn't been broken, but who knows if living in reality is still necessary.
Going to Bakersfield would have been a total disaster. Don't think there was ever another time when something more disastrous was about to happen, outside of politicians. There were definitely crooked times & times which felt like they caused disaster, but nothing that extraordinarily, overtly, outright bad. Fortunately, wasn't wrong about marriage.