“I survived the TCP transition” (2013) (blog.google)
175 points by agomez314 on Aug 22, 2022 | 130 comments


Some background: new leadership at ARPANET demanded that all hosts switch from the old protocol (NCP) to the new one developed by Cerf and Kahn (TCP). This change caught many by surprise, and the migration was a long and painful one for teams. "The transition from NCP to TCP was done in a great rush...occupying virtually everyone's time 100% in the year 1982. Nobody was ready" (Janet Abbate, Inventing the Internet, MIT Press 1999).


I started my coding career in the early 90s, and at that time there were a buttload of non-TCP network protocols running on small computers (Macs, PCs, etc.): NetWare, LANtastic, AppleTalk, NetBIOS stuff... Even though they all had something going for them, I'm glad they have been steamrollered by TCP/IP.


I worked at Novell in the late 80s, early 90s. In the LAN world, IPX/SPX worked pretty well but it was certainly steamrolled by TCP/IP.

Since I started out in the testing department, I not only had to deal with a bunch of protocols (IPX, NetBios, etc.) but I also had to deal with a bunch of stuff at the physical layer. Instead of everything being Ethernet, we had Token-ring and ArcNet cables running everywhere.


Yup, I worked on a PC-to-Mac networking server product and we tested with TokenRing as well. Massive hardware but good performance. We also had a lot of coax wire in those days. To this day I still look down on twisted pair and the garbage Ethernet connectors we use now.


Electrically, there's not much reason to look down on Twisted Pair. It is an ingenious way to achieve what coax does as an unbalanced transmission line with, well, a twisted pair of wires that form a balanced (differential) transmission line. You might need to add some shielding in some situations, but that's just a piece of foil.

Most importantly, the two wires that make up the pair really are just common single-ended wires, not elaborate coax or anything else.

A single coax transmission line supporting 10Gbps Ethernet would likely be much more expensive than the little bundle of twisted pairs we typically use nowadays.

In many ways, for its applications, twisted pair and RJ45 connections are better than coax wiring with BNC.


> Electrically, there's not much reason to look down on Twisted Pair.

Yeah, but mechanically, the RJ45 plug with its finicky easily breakable plastic tab can be an annoyance. And it's easy to see that the pin ordering is not ideal, with the pair in the middle splitting another pair. AFAIK, there exists a more robust connector (the M12 connector), but it doesn't seem to be that common.


True, the plastic tab can be annoying, but I guess the sheer abundance of patch cables nowadays means the cheapness of the connector, while retaining pretty good ease of use (better than many others), makes up for that drawback. Maybe something slightly more resilient could have been designed within the same parameters, had people known just how ubiquitous that connector would become.

Maybe M12 is that, but it looks way more expensive at first glance. Possibly more laborious to connect/disconnect, too, with its screw-locking? Seems to be better for applications where a secured connection is more important (transportation is mentioned).

And yeah, the 1000BASE-T pin ordering seems unusual. I'm curious about the history there, because even 10BASE-T (where I thought Ethernet over twisted pair began) had this really weird pinout, which does not support my initial theory that it was because Ethernet kept progressively adding more differential pairs: https://www.arcelect.com/10baset.htm It may well be because they added the original two pairs to a pinout that already carried something else, but the diagrams don't say what those other lines were for, so if anyone knows...

According to those same diagrams, though, it seems to be more common to split up the pairs than not, which now makes me wonder if there is any benefit to that?


> but the diagrams don't say what those other lines were for, so if anyone knows...

Telephones. Telephones are why. Those other two pairs were often used for voice communication. If you had four-pair station cabling, the pairs were provisioned on the modular jack from the inside out. So line one was the blue/blue-white pair on the inner pins, line two was the orange/orange-white pair on the next two pins, and so on.

Ethernet comes along and lots of places where you'd want a network connection already had a phone jack with two pairs unused, so for signal integrity reasons those are moved to the outside and used for data, leaving the inner two pairs where they were to be used for voice.

But why 4 pairs in the first place?

Just about the time that Ethernet was transitioning from coax to twisted pair, the digital PBX was taking over from key systems (1A2) and reduced the number of wires required for a business telephone from 25 pairs (or more ... secretarial sets often had 100 or more pairs) per station down to 4 (for HORIZON[0]) and later two pairs (DIMENSION and eventually Merlin, Definity, etc.). So if you're wiring a new building, you can just run one CAT-3[1] cable to each desk and use the first two pairs for voice and the second two for data[2].

[0] OK, for the pedants out there, HORIZON wasn't ever very popular and really pre-dated Ethernet, but the telecom world moves kinda slow

[1] Wasn't really CAT-3 until the early 90s

[2] Not on the same jack, but by using pins 1, 2, 7, and 8 for data, you can plug the wrong cable in without risk of hurting the phone or your computer's network card
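
For reference, a rough cheat sheet of the pin layouts being described (Python notation just for compactness; pin numbers are the 8P8C positions 1-8, and the data pins shown are the standard 10/100BASE-T assignment rather than the site-specific 1/2/7/8 scheme from footnote [2]):

    # Telephone (USOC-style) provisioning: pairs nested inside-out,
    # as described above.
    USOC_PHONE = {       # line number -> pins
        1: (4, 5),       # blue pair, innermost
        2: (3, 6),       # orange pair, next out
        3: (2, 7),
        4: (1, 8),       # outermost
    }

    # T568B, the common Ethernet-era assignment: pin -> wire color.
    T568B = {
        1: "white/orange", 2: "orange",
        3: "white/green",  4: "blue",
        5: "white/blue",   6: "green",
        7: "white/brown",  8: "brown",
    }

    # Standard 10/100BASE-T data pins; 4, 5, 7, 8 are left free for voice.
    ETHERNET_10_100 = {"TX": (1, 2), "RX": (3, 6)}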


>Ethernet comes along and lots of places where you'd want a network connection already had a phone jack with two pairs unused, so for signal integrity reasons those are moved to the outside and used for data, leaving the inner two pairs where they were to be used for voice.

But 10BASE-T uses pins 1, 2, 3, and 6, i.e. the green and orange pairs.


That makes perfect sense now, thanks.

> so for signal integrity reasons those are moved to the outside and used for data

I'm not sure about that bit, though. Would keeping the pair together not help with signal integrity?


I think the advantage of the Ethernet pinout vs always having adjacent pairs is that it can also be used for a two-line phone or Token Ring, which both use the two inner pairs nested.

I've seen somewhere that a pair made of the two outer lines didn't have sufficient performance, so the outer pairs needed to be wired side by side instead, but I don't have a reference. Also, there's a reasonable question of why use one inner pair and one outer pair, and not both inners or both outers.


> finicky easily breakable plastic tab

That is not really the fault of the RJ45 specifications. The choice is available between cheap breakable connectors or reliable well-designed connectors: it isn’t the fault of the specification that cheap is often chosen.

> the pin ordering is not ideal

A very minor nitpick. And designed that way for specific reasons.

I like that it works well, was backwards compatible, and the connectors, wiring, and tools are cheap, available, and abundant. 1000Base-T is amazing technology (even if we are blasé about it!)


I've been using multi-gig [1] over short runs of cheap Cat 5 cable just fine. Actually, I only have one span that links at 5G. The rest are short enough for 10G with Cat 5e, with 100m achievable with Cat 6! Talk about incredible!

1. https://community.fs.com/blog/what-is-multigig-ethernet.html


> And designed that way for specific reasons.

Do you know that reason? I was wondering in my other reply.


It's some legacy from the scheme used in RJ connectors in telephony, where the first pair was at the connector's center and subsequent pairs continued outwards (like this, where each digit is a pair: 4 3 2 1 1 2 3 4). T568 only retains this scheme for two pairs; maybe they realized that splitting the last pairs across the entire 8-pin connector would be unwieldy.

Nothing stops you from wiring connectors a different way though, to the annoyance of anybody splicing that cable in the future :)


> Nothing stops you from wiring connectors a different way though

You can only swap the wires in a pair (eg swap solid orange and striped white/orange), or swap pairs (eg blue pair with orange pair).

You must keep each twisted pair matched, so that the electrical signal travels correctly, otherwise you run into problems at length. For very short runs I would guess you could ignore pairs, even though it would increase crosstalk (between signals within a cable) and interference (aerial like transmission or reception, especially with different cables run close together).

Great picture of different twists per centimetre comparing each pair within a cable (compare orange against blue in one cable), which also shows the different twists for different Cats (less for Cat 5, more for Cat 6): https://store.chipkin.com/articles/differences-between-categ... — if the twist lengths of all pairs were exactly the same within a cable, there would be a lot more crosstalk between pairs.


You can plug an RJ-11 plug into an RJ-45 jack and get two phone lines.


M12 is quite common in "dirty" industrial environments and military apps. It's vibration-proof, resistant to corrosion, can meet low/no-spark requirements, and you can manipulate the connectors with gloves on, which sounds silly but is not to be discounted when you are in a foxhole being shot at.


Huh? Rigid coax and overtightened or damaged shielding is so much more annoying and more common.


In the 90s, coax was way cheaper than UTP.


I'm glad IP took over everything, but I wish that TCP hadn't become practically mandatory. There are some other really useful transport layer protocols, like SCTP that are great to use on a LAN, but good luck getting them to work on the internet. The only way to do anything other than TCP is to layer/tunnel it over UDP, and even that has less support than TCP.


Fortunately that era is coming to an end. With QUIC (layered on top of UDP) being the basis for HTTP/3, very few networks will outright block QUIC traffic as many have done with UDP.

And my experience with QUIC so far has been delightful — it's everything I've wanted for decades when TCP was too restrictive and UDP too anaemic.


I hope so. QUIC gives us so much of what SCTP should have given us. But we'll see how the middleboxes deal with things. If they want to block QUIC, like they block UDP, they can.


Wait, who blocks UDP? I've worked many different angles of the technology world and never heard of anyone blocking something so ubiquitous.


"who blocks UDP?"

In the world of virtual machines, UDP is always the first thing to die. Xen on Amazon, for example, would often (and almost always, if there was any load at all?) silently discard most UDP traffic, and this was true for years. VMware (back when it was still being actively developed and supported) had fairly atrocious support for UDP as well, and the "atrocious support" level was only if you explicitly enabled it and carefully configured it in the first place. In one VM implementation (15+ years ago, so I can't remember which, but probably VMware or Xen), we determined that the UDP buffer size was a total of one packet (~1500 bytes). So if you wrote a test that sent or received a burst of more than one packet, only one packet out of the entire burst would get through and the rest were lost.

Back then, I was testing with UDP on EC2 before EC2 was open to the public. I was working with UDP-based server clustering tech (previously used by Amazon, since acquired by Oracle). Still have the scars.

It's been many years, so my guess is that reasonably decent UDP support has slowly crept into EC2 and other common cloud and virtualization environments, and may even be decently supported by now. It needs to be very good to support the transition to HTTP/3 (which coincidentally I'm currently working on).
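
For anyone curious, the sort of test that exposes this is tiny. A minimal Python sketch (host, port, and burst size are made-up placeholders; run the receiver first, then the sender):

    import socket, sys

    HOST, PORT, BURST = "127.0.0.1", 9999, 32   # hypothetical endpoint

    def receiver():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((HOST, PORT))
        s.settimeout(2.0)
        got = 0
        try:
            while True:
                s.recvfrom(2048)
                got += 1
        except socket.timeout:
            pass
        print(f"received {got}/{BURST} datagrams")  # far below BURST means drops

    def sender():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"x" * 1400                    # near-MTU-sized datagrams
        for _ in range(BURST):
            s.sendto(payload, (HOST, PORT))      # no pacing: worst case for tiny buffers

    if __name__ == "__main__":
        receiver() if sys.argv[1:] == ["recv"] else sender()

On a one-packet buffer like the one described above, the receiver reports something like 1/32.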


It's "useful" trick to force clients to pay extra for "multiplayer gaming support"


Isn't Youtube using UDP to transfer their video data?


Nope, HTTPS: from https://rr6---sn-xxxxnel.googlevideo.com/videoplayback?expir...&sig=xxxx&lsig=xxxx&n=xxxx, either MP4 or two separate audio/video streams.


YouTube will at the very least fall back to HTTP over TCP for watching (no idea about live streaming).

I did a lot of funky things to get around the "drop all UDP" firewall at my first student dorm :)


They are, and there are old firewalls out there that report YouTube traffic as a UDP DoS attack.


Many network owners block UDP at their borders. It helps a lot in the case of amplification DDoS attacks.


I'm part way through adding support for SCTP to the NetBSD firewall. Have done the basic filtering stuff, still working on doing NAT for it.


IPX worked very well for LAN games; it required no configuration. Compared to how difficult it is to play together now (Steam friends, Xbox...), it was much better.

Of course it had drawbacks, but for that it was great.


Way back when I first started working for a small networking outfit, we were informally split into 'Team Red' and 'Team Blue'. Everyone agreed that Novell was on the way out, and the younger guys with their MCSEs made up most of Team Blue. I had unofficially started 'Team Yellow' and was sticking Linux boxes in when I could.

Anyway...one afternoon I was at a law office installing some legal library software (or something), and one of the younger lawyers asked me into his office. He had a couple copies of Warcraft and Command & Conquer, he had installed them on a couple of the office computers but couldn't get network play going.

Not really knowing what I was doing, I opened up the properties dialog for the network adapter, added the IPX/SPX protocol, and started the game up on two computers.

It worked! It was that simple. I remember the guy pulling a $50 out of his wallet and handing it to me. And, since they were within walking distance of our office, I got invited back over a couple times and we played a lot of games (and drank a lot of beer) over there.


That's just LAN play dying off in favor of (routeable) Internet play. If developers wanted to they could add IP LAN play to games, but there's just not enough demand.


The assumption is that if you're on an IP network, you already have addresses, etc (because you're routable to the internet).

IPX/SPX worked without that assumption; it was bog simple to find some IPX cards, shove them in the computers, connect them, and go, even if you knew nothing.

The closest for TCP/IP would be to support gaming over link-local links (those 169.254.* addresses) but everything is assumed to be on the internet now.

And if you have TCP/IP for the internet, rarely do you care or need anything else for local comms.


You could also do it over multicast.

The big downside is that if some people are on WiFi then they'll be reduced down to 802.11b speed.

A better solution would be to do server/player discovery via multicast and then stitch up unicast links for the actual gameplay.
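
Something like that is only a few lines in practice. A rough Python sketch of the discovery half (the group address, port, and message format are invented for illustration; gameplay would then move to ordinary unicast sockets):

    import socket, struct

    GROUP, PORT = "239.1.2.3", 5004    # hypothetical multicast group/port

    def announce_server(game_port: int):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the LAN
        s.sendto(f"GAME {game_port}".encode(), (GROUP, PORT))

    def find_server():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, (host, _) = s.recvfrom(1024)
        return host, int(data.split()[1])   # stitch up a unicast link to this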


That's all IPX/SPX was: link-local multicast. I see all the "no configuration required" love for it in surrounding comments, but I suppose few remember the failure states when it didn't work as expected, including drowning an entire switch (or worse, Token Ring) in multicast noise. I know I hit IPX/SPX config hell a few times over the years in home LAN gaming, and I can't believe I was alone in it, so I'm assuming the nostalgia goggles are in play in some of these "it just worked" memories.

> A better solution would be to do server/player discovery via multicast and then stitch up unicast links for the actual gameplay.

That's basically what most mDNS applications do today (mDNS being the modern, standards-compliant name for what used to be called Bonjour): use .local multicast for service discovery and then often use that to bootstrap unicast links. It's not a bad way to go, with the only caveat that to get good mDNS support on Windows I believe you still have to dig into WinRT components rather than the old-school Win32 sockets APIs, which especially seems to cramp many games from even trying to use it for LAN discovery today, despite it being a mostly reliable standard in 2022.


Doom's first networking implementation was very inconsiderate and had a habit of killing whole networks: https://doom.fandom.com/wiki/Doom_in_workplaces

>The first version of the Doom IPX network code transmitted its data as broadcast data. As a result of this, all machines on a network where a game of Doom was being played would receive the data, even if the machine was not involved in the game. The degrading effect on network performance forced the system administrators for many office networks to ban Doom.


Most people did the "two cards connected" setup and let it work - or already had an IPX/SPX network setup and running and used that (Doom could crash them IIRC).

Few people actually built IPX networks, let alone routed them, etc.


IMHO Bonjour/mDNS adds a lot of points of failure and doesn't really buy you much. It's so easy to just open a multicast listener on a specific address and port and then just send out UDP packets to communicate.


mDNS [0] itself is "just" sending/listening to multicast UDP packets on a specific address (224.0.0.251 and/or ff02::fb) and a specific port (5353). The only real complication to mDNS itself is that the format of the packets you send/receive is designed to resemble (regular, unicast) DNS, and it's a complex (but well known) binary packet format.

The complications most people associate with Bonjour/Avahi and its "points of failure" are most often issues with optional add-on standards such as DNS-SD [1], UPnP [2], and other related add-on standards on top of mDNS that make up so-called "zero-conf networking". You don't need DNS-SD or UPnP to use mDNS for basic hostname discovery.

[0] https://en.wikipedia.org/wiki/Multicast_DNS

[1] https://en.wikipedia.org/wiki/Zero-configuration_networking#...

[2] https://en.wikipedia.org/wiki/Universal_Plug_and_Play
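
For illustration, a bare-bones mDNS query really is just a few dozen bytes of UDP. A minimal Python sketch ("somehost.local" is a placeholder, and a real client would parse the DNS-format answer instead of returning raw bytes):

    import socket, struct

    def mdns_query(name="somehost.local"):
        # DNS header: ID 0, flags 0, one question, no other records.
        header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
        # QNAME as length-prefixed labels, then QTYPE=A, QCLASS=IN.
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        question = qname + b"\x00" + struct.pack("!HH", 1, 1)

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        s.sendto(header + question, ("224.0.0.251", 5353))  # the well-known group
        s.settimeout(2.0)
        # Querying from an ephemeral port, so responders reply unicast to us.
        return s.recvfrom(1500)   # (raw DNS-format reply, responder address)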


Yeah, anything actually doing it today will do something like multicast/bonjour and then do direct links.

Though I have seen games that apparently use an internet service to coordinate direct links ...


I seem to have never had a reliable and working mDNS on any OS. Would not recommend.


Is it possible your network disables multicast?

On a typical home network I've never really had any issues with mDNS. It works great on Mac and I've got it running fine on various BSDs and Linux. I found mdnsd worked surprisingly well on FreeBSD, and took less configuration than Avahi. I will admit that Windows doesn't seem to play that well with it.

I'm curious what you tried that didn't work.


The only problem I've ever observed is not letting it "calm down", as some mDNS implementations (maybe no longer) used to try to cache/answer for other machines.


I can see why it died off. There were precious few ways to have LAN play over a distance without weird connection issues. At some point I recall that Hamachi did work fairly well, but that meant you still had to rely on a third party in the end.

Even now it's only somewhat doable to do it without relying on 3rd parties by using wireguard. So I can see why relying on a third party became the default.


I remember back in the 90’s and early 00’s my cousin and his neighbors had a neighborhood LAN going. They had stretched I believe Ethernet cables across the street and from house to house. It might actually have been coax cables in a ring network of some sorts. Anyway. They had an IRC server going and shared files and played games. Seemed like good times.


It all comes down to NAT and dynamic IPs. Let's say Anu and Bob and Chen are trying to play a game. Anu runs the server and wants Bob and Chen to connect to her server. Anu follows some guide to get her IP and gives it to the others, but it's not routable publicly because of some form of NAT, or her IP is dynamic and changes, and so packets become unroutable to Anu. Now all Anu needs to do is press "Start Game": a remote server either hosts the game metadata or just acts as a glorified STUN/TURN server, and the game gets a friendly name that Bob and Chen can use to connect.

If everyone had IPv6 and NAT were not a thing, then IP LAN games would probably have had no trouble. Factorio still does multiplayer through direct UDP/IP, so you can definitely play it online purely P2P over IP. I've played Factorio games served using ZeroTier before with no issue, and it's allowed me to host Factorio on my laptop from both home and a hotel.
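
The "glorified STUN" part is genuinely small. A toy Python sketch of the rendezvous step (port and message format are invented; real NAT traversal has many more corner cases):

    import socket

    def rendezvous_server(port=7000):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", port))
        peers = []
        while len(peers) < 2:
            _, addr = s.recvfrom(64)        # each peer sends a hello;
            if addr not in peers:           # addr is its *public* ip:port
                peers.append(addr)
        a, b = peers
        s.sendto(f"{b[0]}:{b[1]}".encode(), a)   # tell each peer about
        s.sendto(f"{a[0]}:{a[1]}".encode(), b)   # the other one
        # Each peer now sends UDP toward the address it received; the
        # outbound packets open NAT mappings ("hole punching"), after
        # which traffic flows peer-to-peer without the server.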


Bonjour / mDNS / broadcasting doesn’t play nice on all routers.

Scanning your /24 network works, but many people are on 10.x.x.x these days.

And I don't know how this works on IPv6.


TCP LAN games were also easy to set up.


The difference between then and now is less protocol and more pervasive authentication.


Not to mention the Real Person All Grown Up Protocol Stack, OSI, which of course was going to displace this ARPANET childishness with protocols like the X.212 data link layer that, like all data link layer protocols, provides checksumming and resending and distinguishes between connection-oriented and connectionless communication, plus X.400 email which, naturally, uses the simple, comprehensible, easy-to-implement X.500 directory service, for email addressing inherently tied to your employer and physical address.

Or OSI will crash and burn and we'll all pretend it was just a model from day one, and insist that TCP/IP is best understood using precisely the kind of strict layering the IETF explicitly rejected in RFC 3439. Y'know, whatever reinforces the notion that we never lose.

https://www.rfc-editor.org/rfc/rfc3439


Heh, I remember attending trade shows in a time when X.400 and X.500 were all the rage.

Always a bad sign when another protocol comes along and calls itself "lightweight", as in LDAP, the "lightweight directory access protocol", which is merely based on X.500.

SMTP is also the "simple" mail transfer protocol, but it's not based on X.400 in any way and was apparently replacing... FTP! (Only for one particular use case obviously.)


To this day I think ATM was an interesting approach. Virtual circuit switching with quality of service capabilities designed into the protocol. If nothing else, it is a great example of a complex and optimized protocol losing vs a ubiquitous and simple protocol.


It was "interesting", all right. For instance, its packet payloads were 48 bytes long. Why? Because 32 was too short for large streams (with a 5 byte header representing a 13% overhead), and 64 was "too long" (!!!) for real-time voice connections.

The virtual circuit stuff looked very attractive if you worked for Ma Bell and wanted to charge by the connection, but thankfully lost out to packet-switched networking.


The hardware was garbage too. I ripped that crap out of a place that went all in with some IBM/Lucent ATM solution.

I remember they would have a failure a couple of times a week where some switches would freeze up. Someone would send an alert and a data center operator would plug a null modem cable into the switch and remove it. That would un-fubar the switch.


I have AppleTalk compiled into the kernel on the machine I'm using to type this, have also done some work on adding CHAOSNET to it.


Needs a (2013).

The article was posted on January 1, 2013, the 30-year anniversary of the deadline for ARPANET nodes to switch over to TCP. The next New Year's Day will thus be the 40th anniversary.


added. Thanks!


Off topic:

I met Vint Cerf at a Keck Institute for Space Studies [1] workshop on computing infrastructure in deep space. He was knowledgeable, energetic, funny, and volunteered to take notes for an all-day working session. The goal was to lay out requirements and benefits of flying servers to orbit around distant bodies for on-site analysis. You can get a lot of data from cameras, but you can't send nearly any of it back, so do interactive data reduction on site, right?

He was at Google Loon at the time, working on their delay-tolerant networking & dynamic routing for their balloon-internet architecture. He's been super active in the NASA community working on their delay-tolerant networking architecture. The whole stack is really beautiful. In space, you know when nodes are coming over the horizon because they are in regular orbits, so you can plan routes for the future using "contact-graph routing", and use store-and-forward to massively increase throughput (e.g., orbiters hold data automatically until they are in sight of the next hop). Nothing you can do about latency, with speed of light and all that though :) JPL has an open-source implementation maintained by Scott Burleigh, another really neat person, and I think JHU/APL does too. [2]

Anyway. The guy is smart, sure, but he's also immediately influential: You can't help but agree with him when he pushes these simple, effective ideas naturally.

1. https://kiss.caltech.edu/

2. https://sourceforge.net/projects/ion-dtn/
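
The core of the contact-graph idea fits in a few lines. A toy Python sketch with an invented contact plan (real implementations like ION handle contact volumes, priorities, and far more):

    import heapq

    # Contact plan: (from_node, to_node, window_start, window_end,
    # one_way_light_time), all times in arbitrary units. Windows are
    # known ahead of time because the orbits are predictable.
    CONTACT_PLAN = [
        ("lander",  "orbiter", 10, 20, 0.01),
        ("orbiter", "relay",   30, 40, 2.0),
        ("relay",   "earth",   35, 60, 8.0),
    ]

    def earliest_arrival(src, dst, t0=0.0):
        """Earliest time a bundle from src can reach dst, storing at
        each hop until the next contact window opens."""
        best = {src: t0}
        pq = [(t0, src)]
        while pq:
            t, node = heapq.heappop(pq)
            if node == dst:
                return t
            for a, b, start, end, owlt in CONTACT_PLAN:
                if a != node:
                    continue
                depart = max(t, start)     # store-and-forward: wait for the window
                if depart > end:
                    continue               # window already closed
                arrive = depart + owlt
                if arrive < best.get(b, float("inf")):
                    best[b] = arrive
                    heapq.heappush(pq, (arrive, b))
        return None                        # unreachable with this plan

    print(earliest_arrival("lander", "earth"))   # 43.0 with the plan above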


He gave a talk at a company I used to work at on the history of the internet and his thoughts on its future, it was really cool to listen to him talk on these subjects!


I'd love to see a video of this. Unfortunately archive.org is drawing a blank on this name. Do you perhaps have a link you can share to slides or video?


It was an internal lecture at IOHK, a crypto company [0]. I don’t work in the field anymore but IOHK was very fun for the chance to listen to people like Cerf, Leslie, Wolfson, etc.

[0] https://youtu.be/PvmreRAlHMg


Thanks!


> Nothing you can do about latency, with speed of light and all that though

Why do we take this for granted? I understand the laws of physics and all, but 120 years ago we didn't think humans could fly through the air, and now we have a million+ humans flying every day, and occasionally one goes to outer space.

Why do we consider communication faster than the speed of light so unbreakable?


"The speed of light" is probably not what you think it is.

This constant, c, is actually about how time (one of our four dimensions, often labelled t) is related to the three spatial dimensions (often x, y, z).

Light goes that fast (in a vacuum) because, from the light's point of view, that's how those dimensions are related; it's not really a "speed limit" it's up against, any more than you'd consider it a "time limit" that hours have sixty minutes in them. The light is just moving through time as well as space, and that's how it has to work.

So, because it's about the relationship between time and space, what you're talking about with "faster than light" is actually a time machine.

Now, you might notice that before the aeroplane there were birds (and bats, and insects, but let's focus on birds). Clearly flying is possible; a sparrow can do it. But you may have noticed from the lack of time-travelling visitors that time travel does not seem to be possible.


I knew that about the speed of light (but thank you for writing it out). My knowledge of entanglement is limited, but haven't we observed entangled particles seemingly communicate faster than light?

While time travel may not be possible, maybe time traveling data is?


No, we have not.

We have observed that we can generate a pair of particles and separate them, and when we look at the close one, we now know that the far one has the complementary property. You can't use that to send information. You could use it as a shared secret, but you still had to move the particle out where your recipient is for them to use it.

You can take a flashlight and shine it at the moon, and if you sweep the beam back and forth, you can make the notional front of illumination move faster than the speed of light -- but you can't modulate the signal faster than the propagation velocity c.

Time travel into the future is easy. Time travel into the past doesn't work in this universe.


Entanglement doesn't involve any form of communication.

Only the imaginary "wave function collapse" is faster than light. But collapse isn't actually part of quantum mechanics: there's no formula that would tell you when collapse is triggered. The many-worlds interpretation doesn't have any wave function collapse at all, and it's a valid interpretation of the underlying maths. Any "wave function collapse" is merely an interpretation trick to map the quantum world back to the classical world as experienced by humans. You can't build technology out of imaginary physics.


Nope, it’s a common misunderstanding. While the particles are entangled regardless of distance and the action is instantaneous (at least, that’s one way of interpreting it) there’s no way to actually transmit information.

You may try to come up with clever encodings for electron spins, but you'll see that you end up having to know a priori what the other end had. It's a long topic to discuss in an HN thread, but a quick YouTube search will get you interesting videos.


Time travelling data leads to paradoxes. It’s unlikely to be possible.

Think of entanglement as a pair of boots. You take one boot with you and unwrap it on Mars. Voilà, you know the one left on Earth is the left one. This analogy is wrong but good enough to understand why entanglement does not help with communication.


Personally, I think it's easiest to just think of the speed of light as actually being the maximum speed of information propagation.


If time travel isn't possible how come we're all moving in to the future right now?


“If flying is impossible for me after I have jumped off this tall building, how come I am currently moving through the air towards the ground at high speed?”



The lack of time travelling visitors may only indicate that 'backward' time travel is not possible. It could be that 'forward' time travel will be possible sometime in the future. (And by 'forward', I mean faster than the normal movement through time we all do every nanosecond)


That's relatively trivial by going at relativistic speed.


This is even a thing at non-relativistic speeds.

Proper operation of GPS requires a time correction [1] because the system's satellites are moving at significant speed from the perspective of ground observers. Their onboard clocks are therefore moving relatively faster through space, and thus relatively slower through time.

This is measurable at the nanosecond scale, and must be taken into account every time something uses GPS.

1. https://www.astronomy.ohio-state.edu/pogge.1/Ast162/Unit5/gp...
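
For scale, the two standard corrections are easy to check on the back of an envelope. A quick Python sketch using textbook constants (the numbers here are mine, not from the linked page):

    GM = 3.986004e14         # Earth's gravitational parameter, m^3/s^2
    c  = 2.99792458e8        # speed of light, m/s
    r_earth = 6.371e6        # mean Earth radius, m
    r_gps   = 2.656e7        # GPS orbital radius, m
    day = 86400.0            # seconds

    v  = (GM / r_gps) ** 0.5                          # orbital speed, ~3.9 km/s
    sr = -(v * v) / (2 * c * c) * day                 # velocity: satellite clock runs slow
    gr = GM * (1 / r_earth - 1 / r_gps) / c**2 * day  # weaker gravity: runs fast

    print(f"velocity effect:      {sr * 1e6:+.1f} us/day")        # ~ -7.2
    print(f"gravitational effect: {gr * 1e6:+.1f} us/day")        # ~ +45.7
    print(f"net drift:            {(sr + gr) * 1e6:+.1f} us/day") # ~ +38.5

Tens of microseconds per day sounds small until you multiply by c: uncorrected, that works out to kilometers of position error per day.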


I mean, technically what's going on is that satellites move at relativistic speed.

"Relativistic speed" doesn't mean some particular speed, like 0.1c or something, it just means any speed where your measurements don't make sense without accounting for relativity, if your measurements were good enough this just doesn't have to be that fast really.

However, do you remember simple school spring-balance experiments which demonstrate Newtonian mechanics? Greg Egan's "Incandescence" is a novel about some people who live somewhere where that type of experiment needs relativity to explain it, so when they develop physics they go from "I dunno, it just always does that" straight to relativity, because Newtonian mechanics doesn't explain what they already know about the world.


Entering and awakening from a coma comes pretty close...


> Why do we consider communication faster than the speed of light so unbreakable?

Because it's the definition of the simulator we inhabit. c isn't some random thing to do with light that we observe and find curious; it's literally the nature of the universe. The universe is "a place where the maximum speed at which you can propagate information is c". The speed of light follows from that, not the other way around.

So if that's breakable, then we made some very big invalid assumptions over the past 200 years.

Also, it's questionable that "we didn't think humans could fly through the air". Obviously some people did think that was possible, otherwise they wouldn't have tried to do so. We had birds and bats as existence proofs too. And balloons.


As an aside, the PBS space time video The Speed of Light is NOT About Light : https://youtu.be/msVuCEs8Ydo

As an aside to the aside - as I rewatch it I quickly notice how young he looks (and then note the date is 2015 on there - one of the early ones and the production is less refined).

You may also like The Geometry of Causality https://youtu.be/1YFrISfN7jo


A combination of pragmatism and hubris.

Pragmatism: our best current theories about the universe suggest that the speed of light is a constant. Until someone proposes a theory with more explanatory power that suggests otherwise, we might as well do our work with the assumption that it's correct.

Hubris: our best theories are clearly not complete (see dark matter, conflicts between general relativity and quantum mechanics, and similar), yet we mostly treat them not as provisional theories subject to change, but as ironclad laws by which we may live our lives. Humans don't do well with uncertainty.

(Disclaimer: not everyone lives that way. As far as I can tell most who do have something like this combination of ideas in their heads.)


People had flown before the first powered flights, so 120 years isn't a good measure for that. You probably have to go back a lot further to find natural philosophers or physicists asserting that manned flight was totally impossible. Maybe claims that a heavier-than-air vehicle couldn't fly would be more recent.

Hot air balloons had been around since the 1700s, and gliders were developed in the 1800s. Those were the first "heavier than air" aircraft, and a manned glider was flown by the end of the 19th century. Powered flight was an extension of that model.

We have no model of faster than light communication (or travel) that holds up to scrutiny, let alone has been demonstrated.


Also birds.


Because of relativity. The speed of light is also the speed of causality. Assuming the theory of relativity isn't totally wrong, then if faster-than-light communication is possible, then so must be time travel. http://www.physicsmatt.com/blog/2016/8/25/why-ftl-implies-ti...
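
The compressed version of that argument, for reference (standard special relativity, nothing specific to the linked post): send a signal at speed $u > c$ over a distance $L$, so it arrives at $(t, x) = (L/u, L)$. In a frame moving at velocity $v < c$, the arrival time is

    $$ t' = \gamma\left(t - \frac{vx}{c^2}\right)
          = \gamma\,\frac{L}{u}\left(1 - \frac{uv}{c^2}\right), $$

which is negative whenever $v > c^2/u$, and $u > c$ guarantees $c^2/u < c$. So a perfectly ordinary sub-light observer sees the signal arrive before it was sent; relay the reply the same way and it reaches you before you transmitted. FTL signalling is a time machine.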


Why would the information from an event, e.g. a light bulb lit by closing a breaker, decide the causality? Can't you just backtrack from a model where the breaker closes and then the bulb lights up?


The bulb lights up _after_ you close the breaker.


We've been flying with hot air balloons for over 200 years, and we've seen birds (heavier than air) fly. It's always been considered possible; we just didn't know how to apply it to humans.

We still haven't seen anything in nature that even hints at the possibility of faster-than-light travel.


120 years ago we knew that some things can fly, because we saw birds. We just had to figure out how to do the same with humans.

On the other hand, we have never encountered anything in nature that goes faster than the speed of light. That's a pretty good hint that it's impossible to do so.


Considering it breakable probably doesn't get you much. Ok, it's breakable. Now we just have no idea what to do. If anyone could demonstrate a proof of concept, I'm sure we'd be considering it much more broadly.


It's not so much that we take it for granted; the issue is that there is so far no contradicting evidence. Humans could see other animals flying, but we don't see things going faster than the speed of light.


> we don’t see things going faster than the speed of light.

That would be physically impossible to see in the first place, wouldn't it?


We would see some side effect of it, depending on the exact nature of reality and time. Since we don't see things from the future randomly appearing now, nor do we have Cherenkov radiation occurring in places it shouldn't in open space, it seems unlikely FTL is occurring.


Even if we could go back 120 years, just knowing it's possible to create aircraft doesn't do much without the domain knowledge to build one.

FTL may or may not be possible via physics we don't understand. Until we have that physics and a system to exploit it, FTL is a very real constraint to work around. Don't mistake "nothing we can do" for "nothing we can ever do".


Never mind any down-votes, this is a reasonable question, and many here at HN would relish the opportunity to answer.


The only reason to post this is to troll.


Based on the downvotes I'm getting, it would appear a bunch of people agree with you, but I promise it is not. I genuinely want to understand why we consider this an unbreakable limit when we have in the past broken previously thought "unbreakable" limits.


OK, I take your bona fides, but why not just search? There are 186,283 sites covering this, so a quick <https://html.duckduckgo.com/html?q=why%20speed%20light%20lim...> would have helped avoid irritating people. I mean, you now have more info at your fingertips than any 10,000 people had collectively up until, say, 1980, and you don't even type a query?


There's an angle on all of this which I always thought was pretty interesting (especially considering they only managed to publish a consolidated TCP RFC in the past week):

"At one of the quarterly meetings Vint Cerf came in and dropped a bombshell on us: he said TCP had become a standard. Our immediate reaction, or at least my reaction, was 'Wait: it's not done yet. We have this long list of things we still have to figure out'," Haverty recalled in his speech.

The technology was out of developers' hands before they felt ready to let it go.

Haverty said the teams he worked with always expected to fix and polish their work – not just build on top of what he referred to as an "experiment."

"There's all sorts of operational issues that we went through and developed but they haven't made it into the real world,” he said. [1]

[1] https://www.theregister.com/2022/03/01/the_internet_is_so_ha...


Meanwhile, Windows 95 didn't install TCP/IP by default when setting up a new network card. It was still such a problem in the late '90s / early '00s that it was an interview question for my university dial-up support job.


Ah, fond memories http://www.hawaii.edu/its/micro/pc/tcpip9x.html

It sucked less than fooling around with Trumpet Winsock on Win3.1 though!


That was right around the time of the famous Internet memo. At the time '95 came out, it was HIGHLY argued what would take off (and for a while it seemed AOL/CompuServe were winning).

But the writing was already on the wall.


In thanks for his decades of work getting the Internet going, Postel spent the months before he died getting trashed by anonymous government officials in the Washington Post and elsewhere.


Why was that?


There was a huge controversy over governance of the Internet, in particular the DNS, because it had become clear that Network Solutions had been handed a licence to print money as the monopoly controller of the DNS, and they were providing very poor service (filling in forms over email, very slow response times, $100 fees) and inconsistent enforcement of decency rules.

Part of the response was the IAHC which came up with the template for the fix: break up the monopoly by splitting registries and registrars, force Network Solutions to relinquish some of its TLDs, and create more TLDs.

https://en.m.wikipedia.org/wiki/IAHC

This was not immediately successful, until Postel (as IANA) instructed the root DNS operators (other than Network Solutions) to get the root zone from IANA instead of from NetSol. This caused an epic shitfest, as a result of which Postel reverted the root zone change, and the US NTIA got moving and started the foundation of ICANN.

The company now known as Verisign is the direct successor of NetSol, and they still control .com and .net, and feed the root zone to the other root server operators.


Does anyone else remember when Verisign ran that sketchy ringtone company (think Crazy Frog, Schnuffel Bunny, etc.)? [1]

[1] https://en.wikipedia.org/wiki/Jamba!


"Hi, I'd like to have a TCP transition."

"Hello, would you like to have a TCP transition?"

"Yes, I'd like to have a TCP transition."

"OK, I'll get you a TCP transition."

"Ok, I will have a TCP transition."

"Are you ready to have a TCP transition?"

"Yes, I am ready to have a TCP transition."

Network Error (tcp_Error)


We will have to abandon TCP eventually. We can't keep using the same protocol for 100 years. We should think about the next generation stack, what we would ideally like to solve, and not just tiny little incremental changes that preserve legacy design quirks. But sadly it's in the hands of industry rather than researchers and practitioners.


Why? IPv6 fixed the address-range problem, and in the TCP space alternatives exist (UDP/QUIC). I do not believe TCP will be replaced.

The only thing which surely will be replaced are the routing exchange protocols like BGP with all the shenanigans happening there.

If you believe in space colonization, then there will be some kind of message-oriented layer on top to deal with the latencies (think universal SMTP or something).


There are a lot of limitations within those protocols that just make things unnecessarily difficult and limit functionality: port numbers and the lack of logical service names, lack of routing across disparate unroutable routes, lack of carrying forward/back diagnosis of network segments, etc. HTTP as an additional transport protocol is a shitty hack that people use to work around some of those issues, but more remain.


Makes sense, but not enough to migrate away from TCP. Half of these things can also be solved by services above TCP (and below HTTP).



(dig dig dig...)

The original tweet for any who want that link is https://twitter.com/webfoundation/status/1105425858913816576

(scroll, scroll... oh neat)

The back of the shirt is https://twitter.com/vgcerf/status/1105467776477679616


Never thought I'd see Cerf in a t-shirt. I just assumed he was born and would die in a 3-piece. The man made POTUS look like a schlub when he accepted his Medal of Freedom.


It was almost a non-event for us. I was at a university computer science lab. We were already running TCP/IP before the transition on our VAXes. We had a TOPS-20 system, but rather than transition it, we just retired it. We made very little use of the relay services.


I vaguely recall at the time an email was sent out at Wesleyan mentioning they had cut over to the new system. IIRC they switched TOPS-20 to VMS around that time but I don't think it was related.


So when is everybody transitioning to QUIC?


HTTP/3 was just finalized as a standard this year, so I'd guess we'll start to see it adopted maybe in 2028 or so.

It will be easy to add support on the back end (at least for any software that already supports HTTP/2). So adoption will be limited by the front end. Adoption may come sooner, but only if telcos and phone makers both agree that it will help them. (Mobile currently represents ~75% of web usage, and still growing.)


[Cue 32-bit IP snark] blah


The transition to IPv6 basically spanned my entire career (in networking, I had a previous career as a hardware engineer). My first task was to participate in the IPNg mailing list because the company I had just joined had an OSI stack, and one of the NG proposals was to bolt TCP on top of OSI's lower layers (TUBA). And this morning I spent some time on the phone to my ISP asking when/if they will roll out IPv6 in my area. 30 year span.


My latest internet provider (AT&T GigE) definitely supports IPv6. I also noticed that every device in my house that I care about (i.e., all the laptops, phones, TV streamers, desktops, and servers) supported it. AT&T gives me something like a /64. So I decided to finally do IPv6 at home, and even turn IPv4 off.

Shortly after, I turned IPv4 back on and disabled the extra IPv6 support in my router. What I found was that everything in my house worked more or less fine, but that IPv6 to the outside was a problem because few websites support IPv6. So you need to be dual stack or run a proxy.


In 1994 I was talking to my colleagues about IPv6 and they asked how soon we would need to start transitioning. "Not this year," I said. "Maybe think about it next year in the budgeting process."


I believe that was around the time BGP migrated from v3 to v4 (CIDR support). It's pretty neat that change got pushed through so quickly. Granted, the Internet was much smaller then (< 1500 ASes and 20K routes). Makes you think: if work on IPv6 had started earlier, everyone could have migrated over in one swoop.


IPv6 will roll out finally in 2049 or something, and immediately be replaced with IPv10.


Why not IPv8?


IPv7-9 will meet the same fate (or worse!) as IPv5 did.


The parenthesis in the title confused me. What transition happened in 2013? Oh, the article was from 2013 but was about something that happened in 1982.


That's a HN convention. In titles, "(XXXX)" at the end means "published in XXXX".



