This sounds like an interesting topic, but the article itself was more confusing than enlightening to me.
It seems that the actual explanation (using Hamming distances) could have been used instead of the bubble-wrap analogy, in the same amount of space and without making more assumptions about the reader. I felt the analogy didn't represent the trade-off (or rather the strict improvement, in this case) very well. In fact, it seems to suggest something that is false: that the two methods are fundamentally different.
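To be concrete about what I mean by "the actual explanation": it's just nearest-codeword decoding under Hamming distance. A minimal sketch with a toy codebook (my own example, not anything from the article):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions where two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Toy codebook: the two codewords are at Hamming distance 3, so any
# single bit flip leaves the received string closer to the original
# codeword than to the other one, and decoding can correct it.
CODEBOOK = ["000", "111"]

def decode(received: str) -> str:
    """Return the codeword closest to `received` in Hamming distance."""
    return min(CODEBOOK, key=lambda c: hamming_distance(received, c))

print(decode("010"))  # -> "000": the flipped middle bit is corrected
```

As I read it, the bubble-wrap picture is describing exactly this: codewords spaced far enough apart that small corruptions still land closest to the original.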
They also start using graphs without even an informal definition.
I didn't know about tree codes before, so this could have been interesting, but I still don't know much about them. The article alludes to some kind of uniqueness theorem ("but remarkably Leonard showed there is actually one out there that's useful"), yet the end suggests that we do not actually know the optimal strategy (so I guess it's just an existence proof?).
The description "a set of structured binary strings, in which the metric space looks like a tree" doesn't tell me much either. How should I interpret "looks like"? Do I approximately embed the space in R^n? Do they mean that it's close to a tree metric?
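For what it's worth, my best guess at the property being gestured at is the standard tree-code distance condition (this comes from Schulman's definition, not from the article): the encoder maps each prefix of the input to an output symbol, and any two inputs that diverge must keep their encodings far apart from the point of divergence onward:

```latex
% Tree-code distance property (standard definition, stated informally):
% if two length-t input strings x and y agree on their first i-1
% symbols and differ at position i, their encodings must disagree on
% a constant fraction of the positions from i through t.
\[
  \Delta\bigl(E(x)_{i..t},\, E(y)_{i..t}\bigr) \;\ge\; \alpha\,(t - i + 1)
  \quad \text{whenever } x_{1..i-1} = y_{1..i-1} \text{ and } x_i \neq y_i,
\]
% where \Delta is Hamming distance and 0 < \alpha < 1 is the distance
% parameter of the tree code.
```

So "looks like a tree" presumably refers to the tree of input prefixes on which the code is defined, not to an approximate embedding in R^n.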
Finally, the article also didn't mention how little we actually know about, say, the Shannon capacity of many small, fixed graphs. The impression I got is that we already know all there is to know about "classical" Shannon capacity, which I believe is false.
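A concrete instance of that gap (standard facts, not claims from the article): the Shannon capacity of a graph is defined via independence numbers of strong powers, and it is still open even for the 7-cycle:

```latex
% Shannon capacity of a graph G:
\[
  \Theta(G) \;=\; \lim_{n \to \infty} \alpha\!\left(G^{\boxtimes n}\right)^{1/n},
\]
% where \alpha is the independence number and G^{\boxtimes n} is the
% n-fold strong product. Lovász (1979) showed \Theta(C_5) = \sqrt{5},
% but \Theta(C_7) remains unknown.
```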
Thank you all for your thoughtful comments. You've hit on the heart of the challenge in any kind of science writing: trying not to frighten off those unfamiliar with a field, and therefore using metaphors such as bubble wrap, while at least touching on some common ground for those who are familiar with it (the Hamming distances).
My respectful suggestion for those truly interested in following up is to contact either Leonard or Amit directly. Of necessity, any math that is translated into prose is going to be imprecise, and thus unsatisfying.
To answer a few of the specific questions that were brought up:
Yes, it is essentially an existence proof. At least for now.
Yes, that paper is a good starting point. Also take a look at Amit's work.
Yes, buses inside chips are one application where they think it could be useful.
(Sigh) I wish there were a better introduction to tree codes at a lay level. Believe me, I tried very hard to find one. FWIW, both scientists vetted the explanations.
There has to be a better introduction to tree codes than this, one that doesn't assume you work in error-correcting codes but can actually explain things in terms of basic math and information entropy.
So the advantage of this scheme seems to be that you don't have to wait for the end of a transmission block before doing error correction and extracting your data. But we already have error-correction schemes that work well with fairly small blocks of data. It seems to me that this isn't a saving in network comms, where the delay of a few hundred bit-times is outweighed by other delays. Maybe it will be useful for buses inside chips.
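Rough numbers behind that claim (my own back-of-the-envelope; the link speeds are assumptions):

```python
# Back-of-the-envelope: how long is "a few hundred bit-times" of
# block-buffering delay compared with typical network delays?
BLOCK_BITS = 500  # buffering delay of a smallish block code

for name, bits_per_sec in [("1 Gb/s LAN", 1e9), ("10 Gb/s link", 1e10)]:
    block_delay_us = BLOCK_BITS / bits_per_sec * 1e6
    print(f"{name}: {BLOCK_BITS}-bit block adds {block_delay_us:.2f} us")

# Compare with a typical WAN round-trip time of tens of milliseconds,
# i.e. ~10,000-100,000 us: the block-buffering delay is negligible at
# network scale, which is why the streaming setting matters more
# inside chips, where every delay is measured in nanoseconds.
```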