
Think about what a Chromebook would be capable of with always-on 700 Mbps.

They could sell that thing with practically nothing inside -- no hard drive, barely any RAM, no processor to speak of, just a screen and a keyboard -- and it could outperform the best desktops that money can buy, just by offloading computation to Google's massive data centers in real time.

Of course, in the short term latency will still be an issue. So it would be more realistic to just get rid of the hard drive (why bother? It's practically the same delay to access the nearest data center's SSD as your own), but keep the RAM and GPU, and enough of the CPU to keep things running smoothly. But I definitely expect to see computation offloaded to the cloud as Internet speeds ramp up. It would be super cool to be writing a Python script to test something out, type in a call to MapReduce or whatever on gigs of data, and have it Just Work in the cloud in real time.
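
A sketch of the kind of call I'm imagining -- cloud_map() is made up here, and the multiprocessing pool just stands in for the datacenter, but the shape of the API is the point:

    # Hypothetical sketch: cloud_map() is an imagined helper that would ship
    # the function and its inputs off to the datacenter. Here it just fans
    # out to local cores via multiprocessing so the snippet actually runs.
    from multiprocessing import Pool

    def count_words(chunk):                  # the "map" step
        return len(chunk.split())

    def cloud_map(fn, chunks):               # imagined offload point
        with Pool() as pool:
            return pool.map(fn, chunks)

    if __name__ == "__main__":
        chunks = ["some text", "a lot more text", "ideally gigs of this"]
        print(sum(cloud_map(count_words, chunks)))   # the "reduce" step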



And then something like Plan 9 could finally be for the masses. Just less elegantly implemented by Google than what Bell Labs did back in the 80s.

Come to think of it: perhaps Rob Pike and Ken Thompson are already working on this in Google's secret lab? Maybe using and improving more of their old design than anybody will realize. As PG discovered in 1995 (see Beating the Averages), when your product runs on your servers, you can use whatever technology works best to implement it.


I so hope they're hacking on a proof of concept right now.


Re: your Python comment, we already have this ;) http://www.picloud.com/ - I don't usually write comments like this, but I used it about a month ago and was thoroughly impressed (no affiliation). Great for stuff where you just want to run a bunch of Python in parallel and not worry about wrangling instances or sysadmin stuff.

When I tried it, jobs using up to 40 of their virtual cores (they load-share across EC2 instances) would execute immediately, while anything beyond that triggered job queuing.
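
For anyone curious what that looks like in practice -- this is from memory, so treat the exact names as approximate, but their client library was a "cloud" module along these lines:

    # From memory of PiCloud's client library (exact names approximate):
    # call() queues a job on their workers, result() blocks for the answer,
    # and map() fans a function out over a sequence in parallel.
    import cloud

    def crunch(n):
        return sum(i * i for i in range(n))

    jid = cloud.call(crunch, 10**7)          # single job on their workers
    print(cloud.result(jid))

    jids = cloud.map(crunch, [10**6] * 40)   # ~40 parallel jobs
    print(sum(cloud.result(jids)))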


I fear latency is always going to be the issue that stops this working across arbitrary networks, with the amount of buffering that goes on. How many hops is Google Fiber to the nearest cloud services provider?


I would imagine very few; everyone wants to peer with Google.


I imagine some form of prediction combined with intelligent buffering could minimize the effects of latency for many tasks.
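
Even plain read-ahead would go a fair way -- something like this, where fetch_block() is just a stand-in for a remote read:

    # Crude "intelligent buffering": while the app works on block N, the
    # next block is already being fetched in the background, so the network
    # round trip overlaps with useful work. fetch_block() is a stand-in.
    from concurrent.futures import ThreadPoolExecutor

    def fetch_block(n):                      # pretend this is a remote read
        return b"\0" * 4096

    pool = ThreadPoolExecutor(max_workers=1)
    pending = pool.submit(fetch_block, 0)    # speculative prefetch
    for n in range(1, 10):
        block = pending.result()             # usually already in flight/done
        pending = pool.submit(fetch_block, n)
        # ... do something with block ...
    pool.shutdown()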


You're still bound by the laws of physics. Nothing real-time is ever going to get to you faster than a piece of fiber can carry it.
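
Rough numbers, assuming light in fiber at about two-thirds of c and ignoring routing and queuing entirely:

    # Hard floor set by physics: light in fiber covers ~200,000 km/s
    # (about 2/3 of c), so distance alone bounds the round-trip time,
    # before any switching, routing or buffering is added on top.
    SPEED_IN_FIBER_KM_S = 200_000

    for km in (50, 500, 2500):               # rough distances to a datacenter
        rtt_ms = 2 * km / SPEED_IN_FIBER_KM_S * 1000
        print(f"{km:>5} km away -> at least {rtt_ms:.1f} ms round trip")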


Latency to access an SSD is measured in nanoseconds, whereas latency across networks is typically measured in milliseconds. Order of magnitude difference here.

That being said, to an end user, the difference between 100 nanoseconds and 100 milliseconds is probably very small.


The difference is small for a single file. Then they try to start an app that loads 1000 files on startup (say, a game or Rails...), and those milliseconds turn into seconds, while the nanoseconds turn into still mostly imperceptible microseconds.
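
With round numbers (the per-file latencies here are assumed for illustration, not measured), the multiplication looks like this:

    # 1000 small reads done back to back, at assumed per-file latencies.
    files = 1000
    for label, per_file_s in (("nanosecond-ish  (100 ns/read)", 100e-9),
                              ("millisecond-ish (  5 ms/read)", 5e-3)):
        print(f"{label}: {files * per_file_s:.4f} s total")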


A nanosecond is 1 billionth of a second. A millisecond is 1 thousandth of a second.

1/1000000000 vs. 1/1000

Six orders of magnitude.

Edit: your point may hold true as I'm not sure SSD access or seek times are in the 10ns range.


Hmm, I thought it was more like 10s to 100s of microseconds of latency for an SSD.

I've seen estimates that Google Fiber latency (on a wired connection, to the nearest Google datacenter) could be anywhere from microseconds to 10s of milliseconds, but I don't have any reliable sources and I don't have any expertise here myself.

I'm hoping that someone who does know might chime in?


> why bother? It's practically the same delay to access the nearest data center's SSD as your own

Except it's not. And 700 Mbps is still only about a third of the throughput of current cheap-ish laptop SSDs, and an order of magnitude less than higher-end server-grade SSD setups...

We've got quite a way to go yet, and for many types of applications, latency will remain a reason for local storage "forever" for the simple reason of the speed of light.
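
Putting rough numbers on that throughput gap (the SSD figures are ballpark assumptions, not benchmarks):

    # 700 Mbps in disk terms, vs. assumed ballpark SSD throughput figures.
    fiber_MBps = 700 / 8                     # 87.5 MB/s
    laptop_ssd_MBps = 270                    # assumed cheap-ish SATA SSD
    server_ssd_MBps = 1000                   # assumed server-grade setup

    print(f"fiber: {fiber_MBps:.1f} MB/s "
          f"(~{fiber_MBps / laptop_ssd_MBps:.0%} of the laptop SSD, "
          f"~{fiber_MBps / server_ssd_MBps:.0%} of the server setup)")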


You can already do everything you've described (assuming you're in Kansas).


Right, but not on $50 worth of hardware. I guess what I'm picturing is companies like Google selling super cheap keyboard+screen combos along with a "cloud" plan, where you get access to so many terabytes of memory and so many cores 24/7, with perhaps leeway to use more cores as desired (but you pay for what you use).


If you want to compute a lot, the cloud is always more expensive than your own hardware.

The cloud only pays off if you're doing something major short-term (your load spikes and then drops off for a while), or you're just plain lazy.
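
A break-even sketch with purely illustrative prices (none of these numbers are real quotes, and power, cooling and admin time are ignored):

    # Illustrative only -- the rental price and box cost are made-up
    # round numbers, just to show where the break-even point sits.
    rent_per_core_hour = 0.05                # hypothetical cloud price, $
    box_cost = 2000.0                        # hypothetical 16-core server, $
    box_cores = 16

    breakeven_h = box_cost / (rent_per_core_hour * box_cores)
    print(f"break-even after ~{breakeven_h:.0f} hours "
          f"(~{breakeven_h / 24:.0f} days) of sustained full load")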



