I am led to this chain of thought as I ponder the layers of a typical WCF implementation. Mind, I *like* WCF -- it's one of those "Microsoft does something right" moments, and it implements a lot of genuinely good ideas. (It's the first really complete WS-* implementation.)
But consider how a fairly typical binding is built. At the bottom you have, essentially, UDP -- unreliable, unordered datagrams, little more than raw IP packets. On top of that is TCP, turning those packets into an undifferentiated, ordered stream. On top of that you have HTTP, imposing a common request/response protocol over that stream. And on top of that, WCF takes the HTTP stream and turns it into -- packets again. (WCF is a strictly message-oriented architecture, where everything is a "Message".) And the stack begins to loop from there: if you want, you can add the WS-ReliableMessaging protocol, which adds TCP-style ordering and delivery guarantees one level up. (Yes, that sounds pointless, but it makes an odd sort of sense when you understand the rest of the architecture.) It's all rather odd-looking.
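To see what I mean by "turns the stream into packets again," here's a toy sketch (not WCF's actual wire format -- just the general idea of message framing): once TCP has flattened everything into one continuous stream, any message layer on top has to re-impose boundaries, for instance by length-prefixing each message.

```python
import struct

def frame(messages):
    """Flatten discrete messages into one byte stream, each one
    length-prefixed -- roughly what happens when messages are
    pushed down onto a TCP connection."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def unframe(stream):
    """Recover the message boundaries from the undifferentiated
    stream -- the re-packetizing step a message layer performs
    on top of the stream layers beneath it."""
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages

msgs = [b"GetQuote", b"PlaceOrder", b"Ack"]
assert unframe(frame(msgs)) == msgs
```

The irony, of course, is that the network started with discrete packets in the first place; the framing layer is undoing work the stream layer did.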
So I'm left wondering: if we had known where we were trying to end up at the top level, would we have constructed this network stack underneath it? Probably not: in particular, I more and more think that TCP streams are an unfortunate side-track in the middle of the stack, the wrong layer at which to implement ordering and reliability. (And using HTTP mainly as a firewall-traversal workaround is just silly.)
I don't dispute the success of this stack, and I understand perfectly well how it got this way, but the end result is a bit inelegant. It's a classic case of technological evolution: sufficiently well-adapted to its niche to function adequately, and useful for interoperability, but impressively bloated and over-complex for all that...