More on 802.11 wireless
As part of my top secret plans to try to make a space-age new wireless router, I've decided to wade through the official IEEE 802.11 specification.
Now okay, I decided that before I actually found out the thing is 1233 pages long, so I might yet change my mind. And let me tell you, IEEE reading isn't quite as captivating as IETF reading. There are at least a couple dozen pages of definitions, like "STA" as the abbreviation for "station," because there is apparently a worldwide shortage of lowercase letters.
Word to the wise: if you're interested in this spec, you might want to start at section 5, which actually gives a decent architectural overview. I actually read the entire definitions section before I got there, which was confusing and maybe non-optimal, but I do feel like I recognize a lot more of the unnecessary acronyms now.
My major goal when I started reading the spec was to find the answers to these two questions:
- Is there any reason I can't join more than one wireless network at once?
- If I forward packets from one network to another, will it cause interference and packet loss? And if so, can that be avoided somehow?
Before you ask, the answer to the first question is definitely not "you can join as many networks as you have antennas." I know enough electrical engineering to know why that's nonsense, since I was somehow granted a Bachelor's degree in such things; just enough knowledge to be dangerous. But even if the details are fuzzy, let's try this thought experiment:
Back in the before-times, people used to have these analog powered whatzits called "televisions" which would receive "signals" from the "airwaves" using "antennas." Some of these antennas had cutesy sounding names like "rabbit ears," presumably so that people would be allowed to bring them in the house and ruin the careful job the Local Interior Design Authority had done arranging the furniture.
But if you really got fancy, you could get a big fancy antenna and mount it outside somewhere. Then you could put a splitter on the wire from that antenna, and run its signal to more than one TV at a time. Or to your "VCR," which could record one channel even while you watched a totally different one!!! All with a single antenna!!!
I know, it sounds like science fiction, but bear with me, because I clearly remember it, in an abstract sort of way, from my childhood.
(If you used a splitter, you ended up getting less signal strength to each of your receiving devices. But let's ignore that factor here; that's just an analog artifact, similar to what happens when you copy from one "video tape" to another (another item of science fiction; someday, we'll uninvent DRM and get this sort of stuff back). If the antenna is connected to just one digital signal processor, we should be able to mangle it a million different ways and not worry about analog losses.)
So anyway, as much as technology changes, it still mostly seems to stay the same. 802.11 channels are a lot like TV channels; each one gets its own little band of the frequency spectrum. (I was a little surprised that the channels are fixed bands instead of doing Bluetooth-style frequency hopping across the whole spectrum; there's spread-spectrum coding within each channel, but a network mostly stays put on whatever channel you give it.) Thus, just like with your old TV network, you should be able to use a single antenna and receive as many channels as you want.
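For concreteness, here's a quick sketch of the 2.4 GHz channel layout. The center-frequency formula and the ~22 MHz channel width come from the spec's 2.4 GHz channelization; the rest is just illustrative arithmetic:

```python
# Rough sketch of the 2.4 GHz 802.11 channel plan.
def channel_center_mhz(ch):
    """Center frequency of a 2.4 GHz 802.11 channel, in MHz."""
    if ch == 14:          # Japan-only special case
        return 2484
    if 1 <= ch <= 13:
        return 2407 + 5 * ch
    raise ValueError("no such 2.4 GHz channel: %r" % ch)

def channels_overlap(a, b, width_mhz=22):
    """True if two channels' ~22 MHz bands overlap."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz

print(channel_center_mhz(1))    # 2412
print(channel_center_mhz(6))    # 2437
print(channels_overlap(1, 6))   # False: hence the classic 1/6/11 trio
print(channels_overlap(1, 3))   # True: adjacent channels stomp on each other
```

Channels are only 5 MHz apart but 22 MHz wide, which is why only a few of them (famously 1, 6, and 11) can coexist without interfering.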
Relatedly, it seems that 802.11n gains most of its speed by using multiple channels at once. I haven't gotten to that part of the spec yet; I read it elsewhere. But I notice from my online browsing that there are 802.11n "lite" routers with only one antenna, and 802.11n "real" routers with two or three. I think this is pretty theoretically bogus - one antenna ought to be enough for anyone - but probably practically does make a difference.
Why? Because I have a feeling the chipset manufacturers are still in the past. The problem is, sending/receiving on multiple channels at once is kind of hard to do, even if you're working in a purely digital world. At the very least, you need a much higher clock frequency on your DSP to handle multiple full-rate baseband signals simultaneously. But worse, I don't know how much of this stuff is purely digital; they're probably still using analog modulators/demodulators and whatnot. If so, it's probably hard to modulate/demodulate multiple channels at once without using an analog splitter and multiple analog modulators... which would degrade the signal, just like it did with your old TV antenna.
It sounds to me like a solvable problem, but without having yet looked at the hardware/software that implements this stuff, I'm guessing it hasn't been solved yet. This is some pretty leading-edge signal processing stuff, and cheapskates like you are only willing to pay $50-$75 for it, which makes it extra hard. So it was probably just easier to mount multiple antennas and include multiple DSP cores and modulators - in fact, maybe just throw in the same Broadcom chip more than once on the motherboard - and just run them simultaneously. Not optimal, but easier, which means they got to market faster. Expect single-antenna, full rate 802.11n boxes eventually.
So from the above reasoning - all unconfirmed for now - I conclude that, even still, you ought to be able to send/receive on as many channels as you have antennas. And if there's more than one wireless network (SSID) on a single channel, you should be able to join all those wireless networks at once using only one antenna.
As it happens, already by page 42 of the spec I've read the part where it says you absolutely must not join more than one network (literally, "associate with more than one AP") at a time. Party poopers.
But why? The stated reason for the rule is that otherwise, alas, the poor helpless network infrastructure won't know which AP to route through when it looks for your MAC address and multiple APs respond that they're connected to it. But that can't really be true, because shortly afterward, the spec says you "must attempt to send a disassociate message" when leaving an AP, while admitting that this is kind of impossible to do reliably: the reason you're leaving might be that you went out of signal range, and how would you know that in advance? So if you're carrying your laptop around, move out of range of one AP and into range of another, and never get to disassociate from the first one, the network must be able to handle it. And by extension, it can handle it if you deliberately join more than one network, since the network can't tell the difference.
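The "network must handle it anyway" argument is easy to sketch: imagine the AP side keeps a station table with a timeout, so an explicit disassociate message is just a hint that makes cleanup happen sooner. This is a hypothetical illustration of the crash-only view, not how any particular AP is implemented:

```python
import time

# Hypothetical sketch: an AP-side association table where entries simply
# expire if we stop hearing from a station. A polite disassociate message
# has the same effect as the timeout, just sooner.
class AssociationTable:
    def __init__(self, timeout=30.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_seen = {}   # MAC address -> last time we heard from it

    def heard_from(self, mac):
        """Any frame from a station refreshes its entry."""
        self.last_seen[mac] = self.clock()

    def disassociate(self, mac):
        """The polite version: an optimization, not a requirement."""
        self.last_seen.pop(mac, None)

    def associated(self, mac):
        t = self.last_seen.get(mac)
        if t is None:
            return False
        if self.clock() - t > self.timeout:
            del self.last_seen[mac]   # stale: the station wandered off
            return False
        return True
```

If the infrastructure works like this, a laptop that silently walks out of range and one that deliberately associates elsewhere look identical from the AP's side.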
Apparently the guys down at the IEEE 802.11 working group have never heard of crash-only programming; there never should have been a disassociate command in the first place, just like having a DHCP "release my IP address" command was a stupid idea.
Anyway, question #1 looks promising; it looks like a software hack could let us join multiple networks at once. And systems with multiple antennas could even join multiple networks on multiple channels, perhaps.
For my second question, about forwarding packets from one network to another, things are much more screwy. I suspect that forwarding packets between two networks on the same channel will be a problem unless you're careful (ie. you receive a packet on A and retransmit it on B, but meanwhile someone sends the next packet on A, and the two transmissions interfere), because the APs on the two networks can't easily coordinate any collision control. On separate non-interfering channels it should be okay, of course. But I'll need to read much more before I can conclude anything here.
Interestingly, the standard has accrued a whole bunch of QoS stuff, supposedly designed for real-time audio and video. I doubt that will go anywhere, because overprovisioning is much simpler, especially on a LAN. But the otherwise-probably-pointless QoS stuff includes some interesting timeslot-oriented transmit algorithms (don't expect the 802.11 guys to ever say "token ring") that might be fudgeable for this kind of forwarding. We could just reserve alternate timeslots on alternate networks, thus avoiding overlap. Maybe.
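Here's roughly what I mean, as a toy sketch (entirely hypothetical scheduling; nothing like this is mandated by the spec): a repeater joined to networks A and B divides time into slots and only transmits on one network per slot, so its own forwarding traffic can never collide with itself across the two.

```python
# Toy sketch of "reserve alternate timeslots on alternate networks."
def network_for_slot(slot, networks=("A", "B")):
    """Round-robin: which network may we transmit on during this slot?"""
    return networks[slot % len(networks)]

def schedule(packets, start_slot=0):
    """Assign each queued (network, payload) packet the next slot in
    which its network is allowed to transmit."""
    out = []
    slot = start_slot
    for net, payload in packets:
        while network_for_slot(slot) != net:
            slot += 1
        out.append((slot, net, payload))
        slot += 1
    return out

queue = [("A", "p1"), ("B", "p1-forwarded"), ("A", "p2")]
print(schedule(queue))
# [(0, 'A', 'p1'), (1, 'B', 'p1-forwarded'), (2, 'A', 'p2')]
```

The catch, of course, is getting everyone else on both networks to respect those reservations, which is exactly what the QoS timeslot machinery might (maybe) provide.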
I bet nobody implements the QoS stuff correctly, though, which is why every router I've seen lets you turn it off.
Other interesting things about 802.11
You might know that WEP stands for "wired equivalent privacy." After reading the spec - which mentions in a few places that WEP is deprecated, by the way, which is wise since it was hacked long ago - I think I see where they got that strange name. See, they correctly noted that all IEEE 802 networks (like ethernet) are pretty insecure; if you can plug in, you can see packets that aren't yours. And the world gets along even so; that's why they invented ssh, which is why I invented sshuttle, and so on. You don't need ethernet-layer security to have application-layer security.
However, they didn't want to make it even worse. The theory at the time they were inventing 802.11 must have been this: the security requirement that "they must be able to physically plug in a wire" isn't very strong, but it's strong enough; it means someone has to physically access our office, and by the time they can do that, they can steal paper files too. So most people are happy with wired-level security. Unsecured wireless goes one step too far: someone standing outside our locked office door could join our office network. That's not good enough, so we have to improve it.
And they decided to improve it: exactly to the same level (they thought) as a wired network. Which is to say, pretty crappy, but not as crappy as wide-open wireless.
From what I can see, WEP is simply this: everybody on your network takes the same preshared key to encrypt and decrypt all the packets; thus everybody on the network can see everybody else's packets; thus it's exactly as good as (and no better than) a wire. Knowing the digital key is equivalent to having the physical key to the office door, which would let you plug stuff in.
And actually that would have been fine. Wired-equivalent security really is good enough, mostly, on a private network. (If you're in an internet cafe, well, mere wires wouldn't save you, and neither will WEP or WPA2. Imagine that someone has hacked the router.) Unfortunately WEP ended up having some bugs (aka "guess we should have hired a better security consultant") that made it not as good as wired. Reading between the lines of the spec, I gather that one major flaw in WEP is replay attacks: even if someone doesn't have the key, they can replay old packets, which can trick hosts into doing various things even if you yourself can't read the packet contents. You can't do that on a wired network, and therefore WEP isn't "wired-equivalent privacy" at all.
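To make the shared-key point concrete, here's a toy sketch of WEP-style encryption (heavily simplified: real WEP also appends a CRC integrity value and has its own IV formatting rules). It's RC4 keyed with the per-packet IV prepended to the shared key, XORed over the payload. Everyone holding the key can decrypt everything, and nothing in the scheme itself rejects a replayed ciphertext:

```python
def rc4_keystream(key, n):
    """Standard RC4: key scheduling, then n bytes of keystream."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = []
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_like_encrypt(shared_key, iv, payload):
    # Everyone on the network uses the same shared_key, so everyone can
    # decrypt everyone else's packets -- "exactly as good as a wire."
    ks = rc4_keystream(iv + shared_key, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

# Encryption and decryption are the same XOR operation:
key, iv = b"secret", b"\x01\x02\x03"
ct = wep_like_encrypt(key, iv, b"hello")
assert wep_like_encrypt(key, iv, ct) == b"hello"
# ...and nothing stops an attacker from retransmitting ct verbatim later.
```

Note there's no sequence number or timestamp anywhere in that sketch: the receiver has no way to tell a fresh packet from a replayed one, which is the flaw described above.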
So anyway, all that was interesting because I hadn't realized that WEP wasn't even supposed to be good. The only problem was it was even worse than it was supposed to be, which put it over the edge. The result was the massive overcorrection that became WPA, which as far as I can tell ends up being overkill and horrendously complex, reminiscent of IPsec.
Admittedly I haven't read all the way ahead to WPA though, and the fact that lots of people have implemented it successfully (and interoperably!) kind of implies that it's a better standard than IPsec. (Still: see my previous post for an example of how either dd-wrt or Apple Airport Express apparently still *doesn't* implement it correctly.)
The WEP thing is also a good example of a general trend I'm observing while reading the spec: 802.11 does a lot of stuff that really doesn't belong at the low-level network layer. Now, the original "OSI protocol stack" has long been discredited - despite still being taught in my horrible university courses in 2001 and maybe beyond - but the overall idea of your network stack being a "stack" is still reasonable. The whole debate about network stacks comes down to this: higher layers always end up needing to assume things about lower layers, and those assumptions always end up causing your "stack" to become more of a "mishmash."
Without necessarily realizing it, this happened with the world's most common network stack: ethernet + IP + TCP.
First, people have been assuming that ethernet is "pretty secure" (ie. if you're on a LAN, encryption isn't needed). Second, TCP implicitly assumes that ethernet has very low packet loss - packet loss is assumed to mean Internet congestion, which is not true on a wireless network. And third, most IP setups assume that a given ethernet address will always be on the same physical LAN segment, which is how we should route to a particular IP address.
The 802.11 guys - probably correctly - decided that it's way too late to fix those assumptions; they're embedded in pretty much every network and every application on the Internet. So instead, they hacked up the 802.11 standard to make wireless networks act like ethernet. That means wired-equivalent (and with WPA, better-than-wired-equivalent) encryption to bring back the security; device-level retransmits before TCP ever sees a lost packet; association/disassociation madness to let your MAC address hop around, carrying its IP address with it.
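The device-level retransmit layer, for instance, amounts to something like this stop-and-wait sketch (hypothetical API; real 802.11 retry behavior involves per-frame retry limits, ACK timeouts, and backoff):

```python
# Toy sketch of a link-layer stop-and-wait retransmit: the kind of thing
# 802.11 does underneath TCP so that TCP never sees most radio losses.
def send_with_retries(transmit, frame, max_retries=7):
    """transmit(frame) returns True once the receiver ACKs.
    (Hypothetical interface, purely for illustration.)"""
    for attempt in range(max_retries + 1):
        if transmit(frame):
            return attempt  # how many retries it took
    raise IOError("link layer gave up; only now does the loss become visible above")

# Simulate a radio that drops the first two transmissions:
state = {"tries": 0}
def flaky_radio(frame):
    state["tries"] += 1
    return state["tries"] >= 3

print(send_with_retries(flaky_radio, b"frame"))  # 2: TCP never saw the losses
```

The hidden cost is visible right there: each retry adds latency that TCP (and your application) perceives as random jitter rather than as the congestion signal a loss would have been.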
It's kind of sad, really, because it means my network now has two retransmit layers, two encryption layers, and two routing layers. All three of those decrease debuggability, increase complexity (and thus the chance of bugs), increase the minimum code size for any router, and increase the amount of jitter that might be seen by my application for a random packet.
Would the world be a better place if we turned off all this link-layer stuff and just reimagined TCP and other protocols based on the new assumptions? I don't know. I suppose it doesn't matter, since I'm pretty sure we're stuck with it at this point.
Oh, there was one bit of good news too: 802.11 looks like it's designed well enough to be used for all sorts of different physical wireless transports. That is, it looks like they can switch frequencies, increase bandwidth, reduce power usage, etc. without major changes to the standard, in the same way that ethernet standards have been recycled (with changes, but surprisingly small ones) up to a gigabit (with and without optical fibre) and beyond.
So all this time developers have spent getting their 802.11 software stacks working properly? It won't be wasted next time we upgrade. 802.11 is going to be around for a long, long time.
Update 2010/11/09: Note that a perfectly legitimate reason to have more than one antenna is to improve signal reception. I don't know if that's what routers are actually doing - I half suspect that the venerable WRT54G, for example, just has them to give the impression of better reception - but it's at least possible. The idea of multiple antennas to allow subtracting out the noise goes all the way back to the old days of TV rabbit ears, which generally had two separate antenna arms. Or ears, I guess. The math is a bit beyond me, but I can believe it works. My point was that you shouldn't, in theory, need multiple antennas to use multiple channels.