I'm all for the market in most instances, but I really hope Internet access starts to be treated like a utility in the United States. It is approaching the level of necessity that electricity (or water, gas, roadways, etc.) had a century ago. The market had zero incentive to invest so heavily in these utilities for such a drawn-out return before the government financed the basic infrastructure. I know the analogy isn't perfect for a few reasons, but imagine where we would be if all these other necessities had been privatized.
Actually, the USF fee that's assessed on every phone bill you pay (in the U.S.) was originally supposed to be used to connect schools, libraries and municipal buildings in both rural and urban areas, with the understanding that once the fiber capacity was in place to service those institutions, the providers would supplement their income by connecting businesses and private residences.
In the mid-to-late '90s, I was part of a team that built significant fiber-based distance learning systems in Kansas, Nebraska, Wisconsin, Texas and smaller installations in other states. The schools, a local fiber provider and our company cooperated to submit the grant proposal and the USF funds ended up offsetting enough of the equipment/infrastructure costs that the schools and SP could agree on a rate that the schools could afford.
The last of our systems (that I was involved with) was installed in about 2000, but shortly after that the organization overseeing those services stopped processing applications for (from memory) about 18 months. I'm not aware of how the funds are distributed now (if at all), but I know for a while some of the schools we'd connected complained that they were no longer receiving their subsidies. I do know that the fee remained on my telephone bill.
So the question is: why didn't the infrastructure installed using USF funds lead to lower prices for consumers? I think the answer is that Internet pricing for home service in the U.S. is too lucrative - no guarantees on service (yes, you're quoted maximum throughputs, but those can be affected by your neighbors) and relatively large bills.
The USF funds are distributed (at least partially) through the e-rate program that school districts can use to help fund communication infrastructure. The amount varies but is primarily influenced by how "poor" the district is. I've been involved with some districts in some extremely poor areas that had full gigabit (or more) fiber links between their schools and the district office and then out to the Internet. They get this money by applying through the e-rate grant process. I know of cases where the funding paid for 90% of the cost of installation.
It sounds like that portion of the program continued (after a period when no funds were disbursed), but I had moved on to another company before that happened. And yes, poor districts had an advantage ... we built systems primarily in rural and urban areas, but very few in suburban areas. Those rural areas were poor due to low population density, while the urban areas were poor due to demographics.
The problem with thinking of internet as a utility is that it doesn't behave the same way as any of the other utilities.
Water, natural gas, and electricity are essentially fungible, with simple-to-express standards. An internet connection is much more complex to specify: you may have a gigabit fiber connection to the provider's local network node, but what connections do they have to everywhere else? Medium, location, bandwidth, etc. If all of my traffic is routed through [some other city on the other side of the country], that's not going to be a great experience regardless of the bandwidth.
Absolutely, internet access is less fungible than electric power. In my area, I can purchase my electricity from specific generators, but they don't run a line to my house from the wind farm; instead, all of the generators are connected to the grid, and so is my house, and I get a nice sinusoidal 220-240 VAC @ 60 Hz.
In contrast, if I choose between different ISPs, ignoring the inherent differences in latency and bandwidth of the medium and protocols, my packets could have a very different path through the internet depending on how my ISP is interconnected with the rest of the network.
If they embrace local peering, then my bits may not need to travel over very many networks when connecting to relatively local destinations. If they shun local peering, but have good local transit, then my bits may have to travel over more networks, but not travel far. If they shun local peering, and their transit is also not local, then my bits will have to travel a long way to get where they want to go. If the peering or transit connections are inadequately sized, then there will be congestion there that's dependent on my choice of ISPs.
There is fungibility in bringing my bits to a specific carrier neutral location; but bringing my bits to the internet at large is not fungible. If you want to regulate as a utility connecting homes to the nearest internet exchange point, for some definitions of nearest and internet exchange point, that's probably the limit of coherent regulation.
> If you want to regulate as a utility connecting homes to the nearest internet exchange point, for some definitions of nearest and internet exchange point, that's probably the limit of coherent regulation.
Regulating the guys with the physical plant to be nothing more than a physical plant would be a massive improvement over where the US is today.
The biggest problem today is that the guys with the physical plant have aspirations of doing more, and that gives them incentive to degrade quality of service for general internet access (which competes with their aspirations). Thus we get things like ~300 GB/month bandwidth caps, which are enough to impact a typical family of four trying to cut the cord and use alternatives like Netflix.
In order for that to be true you have to accept vertical integration of ISPs as inevitable, rather than artificial. Structural separation could be implemented tomorrow, and would be a Very Good Thing.
Given that today's ISP market is dominated by vertically integrated players, I assumed "regulate it as a utility" did not implicitly include structural separation. Many other utility examples do not require structural separation to be regulated like a utility.
If the point is that structural separation would be a Very Good Thing, then that's what we need to say, and not 'let's regulate it like a utility.' Bear in mind that AT&T already has a history of subverting structural separation with line-shared DSL, so don't expect the existing incumbents to turn it on tomorrow without kicking and screaming.
Most people don't care how it was generated. They want 120V @ 60Hz. That's it. Power companies don't secretly throttle you and then claim they didn't. The poster is talking about broadband access, not 'bits'.
You are off your rocker if you think this is a market problem. Where I live now and where I lived previously (downtown Baltimore), there is some sort of regulation that keeps Verizon/FIOS from being able to provide service, so the ONLY option is Comcast, besides some shit 3G/Clear service nobody should use.
Just because the internet is important doesn't mean the same laws of economics don't apply to it, or that it will somehow be better provided by government.
They are still wrapping up the details on a couple of long-term rollouts, but they decided they would rather spend their money on the more lucrative wireless oligopoly. They did manage to get the DoJ to sign off on a deal to collude with Comcast so as not to poach each other's customers.
You don't know what you are talking about. Just so you get the point, I am going to repeat myself: you don't have a clue about the situation if you think that the story is anything other than the Baltimore City Council erecting regulatory barriers to FIOS in Baltimore City. Yeah, they might call it a "business move" now, but the unstated fact is that it isn't worth it to Verizon to pay the city for a license.
There can be more than one problem. In your area regulation may be a significant part of the problem but in other places (e.g. Europe) regulation can really help in places to enable competition.
Most economic "laws" apply in competitive markets with limited barriers to entry. It is hard to see how they apply even in an unregulated situation where infrastructure costs are so high and one market participant already has infrastructure in place.
Moving into an existing market to challenge an incumbent is rarely a wise move, whatever their current prices. The incumbent is in a position to reduce prices and make your entry unviable; they know it and you know it, so they can continue charging high monopoly rates until a challenger actually announces themselves.
Government can do better (doesn't mean they always will) because:
1) They can look at community benefits, not just the ROI for the provider. In a way, they win if the incumbent simply reduces prices and improves service, even if they lose some of the money they have invested in a competitor.
2) They can look long term.
3) They can set rules such as requiring the infrastructure owner to allow other providers to use their lines and regulate the pricing in that monopoly market (happens in the UK). [Don't know what level of government in the US could be capable of this.]
I'm sorry you feel insulted when someone points out that you believe something which is patently false.
>Most economic "laws" apply in competitive markets with limited barriers to entry. It is hard to see how they apply even in an unregulated situation where infrastructure costs are so high and one market participant already has infrastructure in place.
The problem here is certain city councils putting a price tag on the license to lay cable. Government creates a barrier to entry in order to aid Comcast's monopoly. All the laws of economics are intact and this isn't a problem of infrastructure costs.
>There can be more than one problem. In your area regulation may be a significant part of the problem but in other places (e.g. Europe) regulation can really help in places to enable competition.
Government is not a business. It does not enable competition, ever. Maybe you feel better telling yourself such fairy tales, but there is no example you can point to. People also generally believe things like "IP aids innovation," yet there is not a single study showing this and a bunch of studies showing it hampers innovation. IP is a state-granted monopoly over an idea/pattern/concept.
When it is favorable to certain corrupt city councils, they aid Comcast's monopoly. I didn't mention it, but the place I live now, which also has no FIOS, is Harrisburg, PA. This is the Detroit you never heard of, with 7x Detroit's public debt per capita. The long-time mayor took out a bunch of loans for stuff like a Wild West museum and left the city crippled with debt. They are in the process of selling off assets to avoid bankruptcy. Same deal here as Baltimore: too expensive to play ball with the city council, so it isn't worth it to get a license, but FIOS is everywhere in the surrounding municipalities.
I didn't feel insulted, but you said that someone must be off their rocker to believe something you think is wrong, which is completely unnecessary.
I don't disagree that government can make things worse or even that they may be the main problem in Baltimore and many areas. However I also believe that there can often be problems in unregulated infrastructure markets and also that government can help.
Almost everyone in the UK has a choice of ADSL providers (and most now have the option of VDSL/FTTC from many providers at higher prices). It isn't perfect, but the situation could be much worse. This is enabled by the regulation of the wholesale prices of BT Openreach, the arm of the former telephone monopoly responsible for the lines. I offer this as an example of government enabling competition, and a much better scenario than if the incumbent infrastructure owner could charge monopoly rents and only face competition in the highest-density, most lucrative areas, at the cost of significant disruption as more companies dig up the roads.
Bad government is bad, and there are places where even good government doesn't help much, but that doesn't mean that government always makes things worse and less competitive.
Don't forget that even in pure economic theory there are two "efficient" outcomes: perfect competition, where consumers obtain the entire surplus, and (perfectly price-discriminating) monopoly, where the monopolist obtains it.
I don't think price fixing qualifies as enabling competition. You mention a choice of providers, versus, I guess, previously only having the choice of BT's monopoly? So government already creates this telecom monopoly and then breaks it up a bit? Uh, no, that doesn't work at all, sorry. Absent regulation, I have all the choice in the world.
I think if a company wanted to compete with BT Openreach they could; it is just impractical in most scenarios. There is also a cable company with its own infrastructure covering about half the country, but they aren't interested in increasing coverage.
Because the local loop is regulated, you can then have competition at the level above that, with different pricing and service offers: you can choose BT, or, at a very similar price, a number of others, or, for a little more, really good ones offering IPv6 and good support (within the limits of the Openreach-managed lines).
Is it perfect? Definitely not, but it is a hell of a lot better than the situation in much of the States, and, I believe, better than a purely unregulated market in this sector.
"some sort of regulation" is entirely devoid of information. If there indeed is a regulatory hurdle, what precisely is it? Nobody that I can find has ever pointed out any regulatory hurdles for Baltimore FIOS, so if you make that claim, it would be nice to actually back it up.
>In the state of Maryland, Verizon currently offers FiOS service in Anne Arundel, Baltimore, Charles, Harford, Howard, Montgomery and Prince George’s counties. Thus, the only major jurisdiction in the state that is not yet wired for FiOS is Baltimore City. Verizon recently told members of the City Council that they have no plans to apply for a franchise agreement with the Baltimore City Office of Cable and Communications.
I'm not sure of all the ins and outs of it. Feel free to dig more yourself, but basically Verizon needs some sort of permit to lay cable, and the city council or this office has set a price for it that makes Verizon see providing the service there as unprofitable, unlike in most other areas.
I have no issue believing that there are occasionally issues with permits, and that they might sometimes prevent neighborhoods from getting a certain kind of service.
But given that Verizon is not rolling out any more FIOS nationwide[1], I really have trouble believing it's a regulatory issue in this case. And they slowed down way before 2010[2]. Verizon gave up on FIOS, plain and simple. Everywhere.
From the article: "This doesn't have to mean providing public financing. One way is to offer regulatory flexibility, speeding up the process of acquiring permits."
Seems Ars is similarly citing unspecified regulations. It's often the case that regulations that vary by city aren't the most straightforward topic to describe.
"The old model was 'everybody has to have service,' which is where cable and telephone came from," Smith said. "This is a model that says, 'we can be patient while demand builds.' We'd like to see some of our most disadvantaged served, but we're not starting out with 'everybody must get service immediately.'"
My understanding is that Chattanooga's gigabit service was rolled out to everyone within the service area as quickly as funding allowed, and that they are the only city in the US with such complete gigabit coverage.
I've read that they are now selling consulting services to other communities looking to do gigabit rollouts. I'd like to know what their take is on that philosophy since it seems to be the opposite of what they did and they seem to be significantly exceeding revenue targets.
From what I recall, EPB initially planned for the fiber network to be used for their smart grid, which encompasses all of their service area, and then after that was built, they realized they had the capacity to run an ISP. They had the benefit of much of the infrastructure needed to deploy the fiber network already being in place, and of not having to deal with access rights.
Maybe the municipalities should start asking Google why they were rejected. I'm extremely skeptical that cities are opening up to competitive fiber offerings -- a lot of it is political wrangling that most aren't even willing to deal with.
Cut the red tape and they will come. Fuck the exclusivity wheeling and dealing.
My understanding of the Louisville RFI is that it is to be an open access network. The prospect of a local fiber plant providing last mile connectivity between various ISPs and customers excites me greatly.
Once gigabit is in perhaps 25% of homes and businesses, everyone else will want it, too, and a tipping point will have been reached.
I'm guessing it will take that large a market for the really cool apps to become practical and well known: remote desktops and thin clients that serve up high-def TV/movies, and real-time, high-quality video telephony (finally). This will be when television and computers finally converge and we stop thinking of them as separate and different products.
I have to admire Google. They not only envisioned this, but (as usual) very smartly went about implementing a demo. The "fiberhood" approach makes sense; it's the right way to roll out the service (or any non-essential service, really). Make money on neighborhood #1, and that pays for the rollout to neighborhood #2, and so forth.
The whole "either everyone gets it or no one gets it" approach goes out the window, and suddenly, every town wants it. Certainly it will have to be provided to "under-served" areas but the important thing is for there to be a solid business model, preferably with competing providers, else all the problems of a utility-quasi-monopoly come into play: indifferent customer service and support, uneven service quality, etc.
I can't wait until this comes to my town, just outside of Boston, but for various reasons I suspect Mass. will be among the last places to get it. Anyway--congratulations to those who have it, and just don't rub it in!!
TrueCar uses the city's fiber network to connect its buildings with dark fiber, get a gigabit cross-connect to its colo in downtown LA, and bring 500 megabits of nLayer bandwidth to the offices. The service has been fantastic and is cheaper by far than anything else we found.
Those Time Warner prices are cheap. I was quoted $6500/month for 50/50 for a Venice-based install. There was already a fiber drop at our building from them, so there was no install cost for Time Warner.
When I was in Seattle, I had CondoInternet fiber, which offered 100 Mbps for $60/mo and gigabit for $120/mo.
Then I moved to Santa Clara, and my options are Comcast (currently paying $50/mo for 50 Mbps and actually receiving 10 Mbps) or 24 Mbps AT&T U-verse for $60/mo.
Why does Silicon Valley, of all places, have such slow and anachronistic Internet options?
As an Aggie and former Bryan / College Station resident (of 15 years), it's fantastic to see them taking a lead on this.
BCS has always been pretty good for high-speed internet (I had 2 Mbps cable in 1999), but the price disparity between residential and business is insane, and speed increases have stalled somewhat in recent years. Suddenlink is pretty good, but that area has been the target of rather frequent cable company acquisitions and mergers (TCA -> Cox -> Suddenlink just while I was there), so it's not a foregone conclusion that things will stay "ok".
I love living in SF, but I have to admit I kind of like the idea that my "little Texas town" will probably get gigabit and a sensible fiber infrastructure long before SF or most parts of Silicon Valley will.
The electric grid was built on subsidized money and is regulated as a natural monopoly. A large part of that grid is made up of utility poles.
If you want to be a cable provider or ISP, the easiest way is to put your lines on those utility poles. They are already carrying power lines to people's homes anyway. Run some cables and you can avoid the extremely expensive proposition of building secondary infrastructure to support more cables running to houses.
When the local municipalities set up agreements with the cable providers, a part of that agreement states nobody else can be on the utility pole. This is why there is a monopoly.
If cities are really fed up with pricey internet, they need to re-evaluate these agreements they've made with all the incumbents.
Having had the same DSL connection for the last several years, my price has gradually risen from $26 to $40 per month. I'd say something is really wrong with a technology sector if it has such negative price/performance dynamics.
And in Australia we were getting gigabit (rollout was happening), then a new government came to power and said we don't need it, and that copper is fine.
I would love me some gigabit. I'm in Independence, MO, just east of Kansas City, MO. I just switched to Comcast (puke) for $50/month for 50 Mbps. I was using AT&T U-verse, where I was paying $60/month for 16 Mbps (and actually getting 2 Mbps, hence the switch). Meanwhile, 10 miles west in KC they're paying Google $120 for gigabit + cable. I wish I could get in on that.
Is it really all about the speed race? Why don't I ever hear about quality of service, lower latencies, etc.? Is it because it is less marketable? I am sure ISPs can ultimately do something about it.
Cloud sync for your entire hard drive. Thin clients. Faster spin up of environments that are sourced from the Internet (virtual machine configuration, bundle install, docker images). High(er) definition video. Multi-party video conferencing.
Videos, videoconferencing, uploading data, backups, peer-to-peer (legal and otherwise). Usually the proportions are something like an 8 Mbit downlink and a 100 kbit uplink. Sharing data from your own nodes is hard.
Setting up a new workspace generally is a "go out to lunch and it'll hopefully be mostly done with the Perforce checkout by the time you're back" affair on our internal LAN, never mind if you wanted to work from home. Downloading the built assets through our internal tools takes a similar length of time, so you hopefully can wait until the end of the day to kick that off as you head out.
Various SDKs are delivered by download - my last download was 15GB, compressed. These often require frequent updates, which we save to local shares as a simple matter of expediency when shared widely.
Delivering complete, self-contained milestones in specific formats means 12-50 GB .zips being uploaded via FTP for hours... with the semi-frequent resort to sneakernet due to quota limitations, service issues, or just plain not having hours between when development for a build was stopped and when the build was needed someplace for a demo or whatever else.
I have an archive of some of those deliveries on my local machine. Even with a good 30%+ of my archives missing due to lack of drive space, the archives for the single project I have archives for total over a terabyte. They're useful for reproducing bugs exactly on occasion. Between all the other projects, other employees, and monstrous shared resources like complete Perforce history, build machines, and other servers... a complete backup would probably be somewhere around the 0.1 petabyte mark... or close to a full petabit of data. And we're considered only a "small" business, under 50 employees!
Fortunately for us, backups are fairly static, and the nightly rsync to offsite storage (which brings all our services to a grinding halt while it runs) is usually done by the time work starts the next day. But it's sufficiently slow that those are our secondary backups, with primaries and restoration being done through sneakernet as the rule, not the exception, despite the "beefy" pipe. A second internet backup would push backup times into our workday, and if we're ever forced to do a major restoration through the internet, it's going to be extremely painful... days of waiting with employees unable to do their jobs. Expensive!
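To put rough numbers on "days of waiting", here's a back-of-envelope sketch (using my ~0.1 PB guess from above and some assumed, ideal sustained line rates; real restores would be slower):

    # Back-of-envelope restore times for a ~0.1 PB restoration at a few
    # assumed sustained line rates (ideal conditions, no protocol overhead).
    SIZE_BYTES = 0.1e15  # ~0.1 petabytes, per the estimate above

    for label, bits_per_sec in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
        seconds = SIZE_BYTES * 8 / bits_per_sec
        print(f"{label}: {seconds / 86400:.1f} days")

    # Prints roughly: 100 Mbps -> ~93 days, 1 Gbps -> ~9 days, 10 Gbps -> ~1 day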
Wow I'm surprised game development is still like that. I haven't done game development in almost 9 years, but the "Perforce saturating the LAN" problem sounds very familiar!
So I'm actually working on a (binary) software distribution system, and I think it would work very well for this use case. This problem can be solved with software -- it doesn't require an upgrade to fiber.
The (straightforward) idea is to use content-addressed storage and differential compression, pretty much like Git does. Then you can actually sync data from your neighbor's machine, in a BitTorrent-ish fashion (possibly getting tiny amounts of metadata from a central server for consistency). Perforce definitely has problems with spurious lock contention, and just a few team members syncing from the same machine can easily clog its pipes if you're not careful.
I'm not up to date, but I'm pretty sure you can do better than Perforce's delta compression. You can probably do it with a single generic algorithm, but one way to really improve would be to use file-type-specific compression, e.g. bsdiff or Chrome's Courgette for executables, and other heuristics for game data, raw audio, video, etc. You actually want to avoid single-file compression in the repository so you can take advantage of the differential compression.
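To make the content-addressed idea concrete, here's a minimal sketch of the storage side (my own illustration, not the actual system; it uses naive fixed-size chunks and a made-up local "chunk_store" directory, where a real tool would use content-defined chunking with delta encoding on top):

    import hashlib
    import os

    CHUNK_SIZE = 1 << 20   # 1 MiB fixed-size chunks (a real tool would use rolling-hash chunking)
    STORE = "chunk_store"  # hypothetical local store of chunks, keyed by content hash

    def store_file(path):
        """Split a file into chunks, store each under its SHA-256, and return the manifest."""
        os.makedirs(STORE, exist_ok=True)
        manifest = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                digest = hashlib.sha256(chunk).hexdigest()
                dest = os.path.join(STORE, digest)
                if not os.path.exists(dest):  # dedup: identical chunks are stored once
                    with open(dest, "wb") as out:
                        out.write(chunk)
                manifest.append(digest)
        return manifest

    def missing_chunks(manifest):
        """Only these hashes need to be fetched, from the server or from a peer's store."""
        return [h for h in manifest if not os.path.exists(os.path.join(STORE, h))]

Syncing a new build then means exchanging the (small) manifest and pulling only the missing chunks, from whichever nearby machine already has them.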
I can understand that the dynamics of game development teams means that this will never get written in house (although perhaps in the years since I've left, people started placing more emphasis on tools).
But for all the entrepreneurs out there, I wouldn't dismiss the market of selling tools to game developers. Perforce is a tiny company that I know made an absolute killing in that market, and I'm somewhat surprised that after a decade they're still the state of the art.
> Wow I'm surprised game development is still like that.
Some of the problems are fundamental. We're forced to use certain compressed and signed packaging schemes which aren't sanely delta-compressible. We distribute in self-contained blobs because not everyone has older versions lying around, or they won't have a worthwhile pipe when they get to where they need to install things and will further sneakernet on their end. Even if we did assume sane deltas are possible, simple information density puts a lower bound on patch size based on how much content changed, and even that theoretical lower bound can frequently be too much and take too long for relatively "beefy" pipes. Heck, even with fiber, people will still be saving these things out to USB drives for good reason.
> Then you can actually sync data from your neighbor's machine, in a BitTorrent-ish fashion (possibly getting tiny amounts of metadata from a central server for consistency). Perforce definitely has problems with spurious lock contention, and just a few team members syncing from the same machine can easily clog its pipes if you're not careful.
That helps if the bottleneck is the server. Our perforce machines are beefy enough that our bottlenecks are frequently the client and the LAN pipe, neither of which will be helped by BitTorrent style networking. Work from home would be 100% pipe bottlenecked, as the work pipe is able to saturate it quite easily. FTP is still bottlenecked by either the developer's or the publisher's pipe, and wouldn't be helped by p2p either.
> I'm not up to date, but I'm pretty sure you can do better than Perforce's delta compression.
There's always room for incremental improvements... but not enough to beat out fiber.
> You actually want to avoid single-file compression in the repository so you can take advantage of the differential compression.
Not always possible due to format requirements. Windows 8 .appx packages, for example, are basically self-contained signed .zips containing an application and all its resources. Want to provide three builds? You just compressed and signed three entire copies of all your built resources. For bonus points, those were then zipped again. Insanity! But software cannot magically solve the social problems that let such designs reach the marketplace.
And since things break unless you use the exact "correct" certificate for signing, any script that generated the .appx from one set of resources would require distributing the actual private key, which is a non-starter...
I had an Apple IIe (with 64KB of RAM) when the PC came out (with 640KB of RAM), and I once asked, "What is anyone going to do with that much RAM?" It sounds silly now, but it taught me a lesson about automatically rejecting new technologies, and 5 years later I was part of the company that was first to send CATV signals over fiber optics.
What I see is an obsession with getting it for no marginal cost. If you're already paying $70/month for 20 Mbps, of course you wouldn't turn down gigabit for the same price (even if Web pages or Netflix won't load any faster).
The $200 Billion Rip-Off: Our broadband future was stolen
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070810_0026...