[PLUG-TALK] Sandynet Re: Google Fiber .. not

Keith Lofstrom keithl at kl-ic.com
Thu Oct 27 07:48:11 UTC 2016


For details on the SandyNet backhaul, see:

https://ilsr.org/wp-content/uploads/2015/11/sandynet-2015.pdf

The Institute for Local Self-Reliance also offers a
video about SandyNet:

http://www.youtube.com/watch?v=fBztjr2uCzg

At about 5:50 into that video, a whiteboard briefly
depicts a twin fiber connection ring around Clackamas
County, 2x10 Gbps terminating at NWAX, the NorthWest
Access Exchange in the Pittock.  

NWAX serves a number of institutions like OHSU, and has
perhaps 100 Gbps connectivity to the rest of the world.
See http://www.nwax.net/topology.php  Their front page
http://www.nwax.net/index.php shows a graph of access
doubling over the last year, with peaks creeping up
towards 70 Gbps.  I imagine some of that bandwidth is
local, moving medical images between hospitals, for
example.

10 Gbps sounds like a lot.  Netflix HD averages 4 Mbps,
and that would be 2500 movies on a 10 Gbps fiber, 5000
movies if they saturated both directions on the Clackamas
County loop.  Probably plenty for a city of 10,000 people.

Except ... internet usage (and probably video download)
is fractal.  Peaks of 5x to 10x average speed can happen.
Look at https://en.wikipedia.org/wiki/Poisson_distribution
As the system gets close to the limit, packets fail and
retries happen, causing more packets to fail.
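
For a toy picture of why the last bit of headroom matters,
model the number of simultaneous 4 Mbps streams as Poisson
and ask how often demand overflows the 2500 streams that a
10 Gbps link carries.  Real traffic is burstier than
Poisson, so these overflow odds are optimistic:

  from math import exp, lgamma, log

  def poisson_tail(mean, k):
      """P(X > k) for X ~ Poisson(mean), pmf summed in log space."""
      cdf = sum(exp(i * log(mean) - mean - lgamma(i + 1))
                for i in range(k + 1))
      return max(0.0, 1.0 - cdf)

  capacity = 2500                    # simultaneous 4 Mbps streams
  for mean in (1250, 2000, 2350, 2450):
      print(mean, poisson_tail(mean, capacity))

Overflow is vanishingly rare at half load and climbs fast
as the average approaches capacity; retries then push the
average itself higher.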

Users will find more things to do with internet service,
especially if they can take "as much as they want".  If
there are (WAG) a thousand 1 gigabit connections to the
internet in Sandy, they will eventually all talk at once.
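
The numbers behind "all talk at once", using the WAG above:

  # Hypothetical: a thousand 1 Gbps customers on the 2x10 Gbps loop.
  demand_gbps = 1000 * 1
  backhaul_gbps = 2 * 10
  print(demand_gbps / backhaul_gbps)   # 50x oversubscribed at full burst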

Functional internet speed will always be limited, not 
only by managed caps, but by suckage: "Nobody goes
there anymore. It's too crowded." (Yogi Berra)

That is happening to Russell Senior right now, with his
supposedly gigabit ethernet sometimes failing to deliver
20 Mbps, and to me, with my 15 Mbps service failing to
deliver 3 Mbps.  Oversubscription == suckage.

Higher peak bandwidth pipes mostly mean that packets can
be emitted more quickly, so that some make it to the head
of the queue first.  The seemingly clever strategy is for
software to send more redundant packets, just in case,
leading to even more saturation and packet failures.
Another "tragedy of the commons" scenario.
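
A crude feedback model shows how retries inflate the
traffic actually on the wire.  The loss curve here is an
assumption, purely for illustration:

  def loss_rate(offered, capacity):
      # assumed: lossless below 80% utilization, then loss rising
      # linearly toward 50% at double capacity
      u = offered / capacity
      return 0.0 if u < 0.8 else min(0.5, (u - 0.8) / 2.4)

  def wire_load(demand, capacity, rounds=100):
      # every lost packet is resent: offered = demand / (1 - loss);
      # iterate to a fixed point
      offered = demand
      for _ in range(rounds):
          offered = demand / (1.0 - loss_rate(offered, capacity))
      return offered

  capacity = 10.0                    # Gbps
  for demand in (5.0, 8.0, 9.0, 9.5, 9.9):
      off = wire_load(demand, capacity)
      print(demand, round(off, 2), "collapsed" if off > capacity else "ok")

Delivering 9 Gbps of useful data already takes 9.7 Gbps of
transmissions; at 9.5 Gbps of demand the retries no longer
fit in the pipe at all.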


I'd take a different approach.  Buffer movies locally,
right here in many Portland neighborhoods.  If the 
studios refuse to permit that, pass laws to permit it
anyway in Oregon.  Much like we have laws that permit
recreational marijuana, federal regulations be damned. 
With local buffering, Sandy can move movies to a local
server one time;  perhaps two movies per second during
low usage hours.  Charge for the movies, and compensate
the studios AS IF they were serving them from California. 
In court, studios will not be able to demonstrate economic
harm, merely loss of control.  Yes, there would be
enormous legal battles, but they would put the real
issues (a ridiculous copyright system, ridiculous
customer abuse, and ridiculous misuse of internet
bandwidth) into the national spotlight.
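
A sanity check on "two movies per second", with assumed
file sizes (mine, not SandyNet's):

  # Idle-hours cache filling over the 2x10 Gbps loop.
  idle_gbps = 2 * 10                 # both ring directions, idle hours
  for label, gigabytes in (("SD, ~1.5 GB", 1.5), ("HD, ~4 GB", 4.0)):
      gigabits = gigabytes * 8
      print(label, round(idle_gbps / gigabits, 2), "movies per second")

Roughly 1.7 SD movies or 0.6 HD movies per second, so
"perhaps two per second" is the right order of magnitude.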

With that done, and the internet freeing itself of 
"remote serve Netflix" and movie presentation monopolies,
it will function a LOT faster, using the backhaul
bandwidth already in place.  That will make community
internet more profitable and a lot easier to roll out.
It will encourage new uses for the reliable "new"
bandwidth that is freed up.  In the long run, local
creativity flourishes, because anyone can make and
distribute performances globally.

I've watched expectations for "acceptable bandwidth"
increase by a factor of 10 million in 40 years: about
1.5x per year, a factor of 1000 every 17 years.  I can
imagine applications for another factor of 10 million,
10 petabits per second (Pb/s) per user.  The barriers
to this future growth will be legal, not technical
or organizational, because laws adapt too slowly.
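
The compounding checks out; the ~100 bps starting point
(a 1970s acoustic modem) is my assumption:

  print(1.5 ** 17)               # ~986: call it 1000x in 17 years
  print(1.5 ** 40)               # ~1.1e7: call it 10 million x in 40
  print(100 * 1e7 * 1e7)         # 100 bps -> 1 Gbps -> 10 Pb/s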

The global internet currently conveys about 50 exabits per
month (according to Cisco, http://tinyurl.com/cisc2015 ),
a mere 19 Tb/s average, doubling every 16 months.  However,
please remember that peak bitrates are fractal, Poisson
distributed, and retries increase as limits near, so the
world needs a LOT more than 19 Tb/s of PEAK data capacity.
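
The unit conversion, for the skeptical:

  seconds_per_month = 30 * 24 * 3600             # ~2.6 million
  avg_tbps = 50e18 / seconds_per_month / 1e12    # exabits/month -> Tb/s
  print(round(avg_tbps, 1), "Tb/s")              # ~19.3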

In the lab, single core optical fiber tops out at 10 Tb/s
using exotic electronics and doped fiber amplifiers, see
http://tinyurl.com/spectrumkeck .  Next-decade backhaul
fibers may contain arrays of hundreds of single mode
cores, driven by VERY exotic electronics.
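
Order of magnitude, assuming "hundreds" means a few hundred
cores at the 10 Tb/s single-core limit:

  print(300 * 10 / 1000, "Pb/s per multi-core cable")   # 3.0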

My guess is that Sandy will upgrade their backhaul and
the optical network terminals connecting to customers
half a dozen times before they finish servicing their
20-year debt, but the fiber deployed to homes won't
need upgrading over that time.  The bureaucrats might.

We cannot move 10 Pb/s to the home with optical fiber,
but we will figure out /something/ in the next 40 years. 
Financing exponential increases will be "interesting".

Keith
-- 
Keith Lofstrom          keithl at keithl.com
-----
Don't waste your vote in 2016!  Give it to the Republicans
and Democrats, and they will gladly waste it for you!


