
Wifi: "beamforming" only begins to describe it

[Note to the impatient: to try out my beamforming simulation, which produced the above image, visit my beamlab test page - ideally in a browser with very fast javascript, like Chrome. You can also view the source.]

I promised you some cheating of Shannon's Law. Of course, as with most things in physics, even the cheating isn't really cheating; you just adjust your model until the cheating falls within the rules.

The types of "cheating" that occur in wifi can be briefly categorized as antenna directionality, beamforming, and MIMO. (People usually think about MIMO before beamforming, but I find the latter to be easier to understand, mathematically, so I switched the order.)

Antenna Directionality and the Signal to Noise Ratio

Previously, we discussed the signal-to-noise ratio (SNR) in some detail, and how Shannon's Law can tell you how fast you can transfer data through a channel with a given SNR.

The thing to notice about SNR is that you can increase it by increasing amplification at the sender (where the background noise is fixed but you have a clear copy of the signal) but not at the receiver. Once a receiver has a copy of the signal, it already has noise in it, so when you amplify the received signal, you just amplify the noise by the same amount, and the SNR stays constant.

(By the way, that's why those "amplified" routers you can buy don't do much good. They amplify the router's transmitted signal, but as we just saw, amplifying at the receiver can't improve the SNR in the other direction. The maximum range is still limited by the transmit power of your puny unamplified phone or laptop.)

On the other hand, one thing that *does* help is making your antenna more "directional." The technical term for this is "antenna gain," but I don't like that name, because it makes it sound like your antenna amplifies the signal somehow for free. That's not the case. Antenna gain doesn't so much amplify the signal as ignore some of the noise. Which has the same net effect on the SNR, but the mechanics of it are kind of important.

You can think of an antenna as a "scoop" that picks up both the signal and the noise from a defined volume of space surrounding the antenna. The shape of that volume is important. An ideal "isotropic" antenna (my favourite kind, although unfortunately it doesn't exist) picks up the signal equally in all directions, which means the region it "scoops" is spherical.

In general we assume that background noise is distributed evenly through space, which is not exactly true, but is close enough for most purposes. Thus, the bigger the volume of your scoop, the more noise you scoop up along with it. To stretch our non-mathematical metaphor well beyond its breaking point, a "bigger sphere" will contain more signal as well as more noise, so just expanding the size of your region doesn't affect the SNR. That's why, and I'm very sorry about this, a bigger antenna actually doesn't improve your reception at all.

(There's another concept called "antenna efficiency" which basically says you can adjust your scoop to resonate at a particular frequency, rejecting noise outside that frequency. That definitely works - but all antennas are already designed for this. That's why you get different antennas for different frequency ranges. Nowadays, the only thing you can do by changing your antenna size is to screw up the efficiency. You won't be improving it any further. So let's ignore antenna efficiency. You need a good quality antenna, but there is not really such a thing as a "better" quality antenna these days, at least for wifi.)

So ok, a bigger scoop doesn't help. But what can help is changing the shape of the scoop. Imagine if, instead of a sphere, we scoop up only the signal from a half-sphere.

If that half-sphere is in the direction of the transmitter - which is important! - then you'll still receive all the same signal you did before. But, intuitively, you'll only get half the noise, because you're ignoring the noise coming in from the other direction. On the other hand, if the half-sphere is pointed away from the incoming signal, you won't hear any of the signal at all, and you're out of luck. Such a half-sphere would have 2x the signal to noise ratio of a sphere, and in decibels, 2x is about 3dB. So this kind of antenna (which also doesn't really exist) is called 3dBi, where dBi means "decibels better than isotropic"; an isotropic (spherical) receiver is defined as 0dBi.

Taking it a step further, you could take a quarter of a sphere, ie. a region 90 degrees wide in any direction, and point it at your transmitter. That would double the SNR again, thus making a 6dBi antenna.
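If you want to check that arithmetic, the idealized version is easy to compute: the gain is just 10·log10 of (the whole sphere divided by the fraction of it your antenna actually covers). A quick sketch in Python - the function name and the coverage fractions are mine, purely for illustration:

    import math

    def gain_dbi(sphere_fraction):
        """Idealized antenna gain when you scoop up signal from only
        `sphere_fraction` of the full sphere (1.0 = isotropic)."""
        return 10 * math.log10(1.0 / sphere_fraction)

    print(gain_dbi(1.0))    # isotropic: 0 dBi
    print(gain_dbi(0.5))    # half sphere: ~3.01 dBi
    print(gain_dbi(0.25))   # quarter sphere: ~6.02 dBi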

Real antennas don't pick up signals in a perfectly spherical shape; math makes that hard. So real ones tend to produce a kind of weirdly-shaped scoop with little roundy bits sticking out all over, and one roundy bit larger than the others, in the direction of interest. Essentially, the size of the biggest roundy bit defines the antenna gain (dBi). For regulatory purposes, the FCC mostly assumes you will use a 6dBi antenna, although of course the 6dBi will not be in the shape of a perfect quarter sphere.

Now is that the most hand-wavy explanation of antenna gain you have ever seen? Good.

Anyway, the lesson from all this is that if you use a directional antenna, you can get improved SNR. A typical improvement is around 6dB, which is pretty good. But the downside of a directional antenna is that you have to aim it. With a wifi router, that can be bad news. It's great outdoors if you're going to set up a long-distance link and aim very carefully; you can get really long range with a very highly directional antenna. But indoors, where distances are short and people move around a lot, it can be trouble.

One simple thing that does work indoors is hanging your wifi router from the ceiling. Then, if you picture eg. a quarter-sphere pointing downwards, you can imagine covering the whole room without really sacrificing anything (other than upstairs coverage, which you don't care about if you put a router up there too). Basically, that's as good as you can do, which is why most "enterprise" wifi deployments hang their routers from the ceiling. If you did that at home too - and had the right kind of antennas with the right gain in the right direction - you could get up to 6dB of improvement on your wifi signal, which is pretty great.

(Another trick some routers do is to have multiple antennas, each one pointing in a different direction, and then switch them on and off to pick the one(s) with the highest SNR for each client. This works okay but it interferes with MIMO - where you want to actively use as many antennas as possible - so it's less common nowadays. It was a big deal in the days of 802.11g, where that was the main reason to have multiple antennas at all. Let's talk about MIMO later, since MIMO is its own brand of fun.)

Beamforming

So okay, that was antenna directionality (gain). To summarize all that blathering: you point your antenna in a particular direction, and you get a better SNR in that particular direction.

But the problem is that wifi clients move around, so antennas permanently pointed in a particular direction are going to make things worse about as often as they help (except for simple cases like hanging from the ceiling, and even that leaves out the people upstairs).

But wouldn't it be cool if, using software plus magic, you could use multiple antennas to create "virtual directionality" and re-aim a "beam" automatically as the client device moves around?

Yes, that would be cool.

Unfortunately, that's not what beamforming actually is.

Calling it "beamforming" is not a *terrible* analogy, but the reality of the signal shape is vastly more complex and calling it a "beam" is pretty misleading.

This is where we finally talk about what I mentioned last time, where two destructively-interfering signals result in zero signal. Where does the power go?

As an overview, let's say you have two unrelated transmitters sending out a signal at the same frequency from different locations. It takes a fixed amount of power for each transmitter to send its signal. At some points in space, the signals interfere destructively, so there's no signal at that point. Where does it go? There's a conservation of energy problem after all; the transmitted power has to equal the power delivered, via electromagnetic waves, out there in the wild. Does it mean the transmitter is suddenly unable to deliver that much power in the first place? Is it like friction, where the energy gets converted into heat?

Well, it's not heat, because heat is vibration, ie. the motion of physical particles with mass. The electromagnetic waves we're talking about don't necessarily have any relationship with mass; they might be traveling through a vacuum where there is no mass, but destructive interference can still happen.

Okay, maybe the energy is re-emitted as radiation? Well, no. The waves in the first place were radiation. If we re-emitted them as radiation, then by definition, they weren't cancelled out. But we know they were cancelled out; you can measure it and see.

The short and not-very-satisfying answer is that in terms of conservation of energy, things work out okay. There are always areas where the waves interfere constructively that exactly cancel out the areas where they interfere destructively.

The reason I find that answer unsatisfying is that the different regions don't really interact. It's not like energy is being pushed, somehow, between the destructive areas and the constructive areas. It adds up in the end, because it has to, but that doesn't explain *how* it happens.

The best explanation I've found relates to quantum mechanics, in a lecture by Richard Feynman that I read at some point. The idea is that light (and all electromagnetic waves, which is what we're talking about) does not really travel in straight lines. The idea that light travels in a straight line is just an illusion caused by large-scale constructive and destructive interference. Basically, you can think of light as travelling along all the possible paths - even silly paths that involve backtracking and spirals - from point A to point B. The thing is, however, that for almost every path, there is an equal and opposite path that cancels it out. The only exception is the shortest path - a straight line - of which there is only one. Since there's only one, there can't be an equal but opposite version. So as far as we're concerned, light travels in a straight line.

(I offer my apologies to every physicist everywhere for the poor quality of that explanation.)

But there are a few weird experiments you can do (look up the "double slit experiment" for example) to prove that in fact, the "straight line" model is the wrong one, and the "it takes all the possible paths" model is actually more like what's really going on.

So that's what happens here too. When we create patterns of constructive and destructive radio interference, we are simply disrupting the rule of thumb that light travels in a straight line.

Oh, is that all? Okay. Let's call it... beam-un-forming.

There's one last detail we have to note in order to make it all work out. The FCC says that if we transmit from two antennas, we have to cut the power from each antenna in half, so the total output is unchanged. If we do that, naively it might seem like the constructive interference effect is useless. When the waves destructively interfere, you still get zero, but when they constructively interfere, you get - assuming, wrongly, that half the power means half the amplitude - 2 * ½ * cos(ωt), which is just the original signal. Might as well just use one antenna with the original transmit power, right?

Not exactly. Until now, I have skipped over talking about signal power vs amplitude, since it hasn't been that important so far. The FCC regulates *power*, not amplitude. The power of A cos(ωt) turns out to be ½A². I won't go over all the math, but the energy of f(x) during a given period is defined as ∫ f²(x) dx over that period, and power is energy divided by time. It turns out (via trig identities again) that the average power of cos(x) is 0.5, and the rest flows from there.

Anyway, the FCC limit requires a *power* reduction of ½. So if the original wave was cos(ωt), then the original power was 0.5, and we need the new transmit power (for each antenna) to be 0.25. Setting ½A² = 0.25 gives A² = 0.5, and thus A = sqrt(0.5) = 1/sqrt(2) = 0.7 or so.

So the new transmit wave is 0.7 cos(ωt). Two of those, interfering constructively, give about 1.4 cos(ωt). The resulting power is thus around ½(1.4)² ≈ 1, or double the original (non-reduced, with only one antenna) transmit power.

Ta da! Some areas have twice the power - a 3dB "antenna array gain" or "tx beamforming gain" - while others have zero power. It all adds up. No additional transmit power is required, but a receiver, if it's in one of the areas of constructive interference, now sees 3dB more signal power and thus 3dB more SNR.

We're left with the simple (ha ha) matter of making sure that the receiver is in an area of maximum constructive interference at all times. To make a long story short, we do this by adjusting the phase between the otherwise-identical signals coming from the different antennas.

I don't really know exactly how wifi arranges for the phase adjustment to happen; it's complicated. But we can imagine a very simple version: just send from each antenna, one at a time, and have the receiver tell you the phase difference right now between each variant. Then, on the transmitter, adjust the transmit phase on each antenna by an opposite amount. I'm sure what actually happens is more complicated than that, but that's the underlying concept, and it's called "explicit beamforming feedback." Apparently the 802.11ac standard made progress toward getting everyone to agree on a good way of providing beamforming feedback, which is important for making this work well.
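To make the concept a bit more concrete, here's a toy numpy sketch of the feedback idea - emphatically not real 802.11 math, just the principle. Complex numbers stand in for phase purely as a computational convenience, and the per-antenna amplitudes are scaled by 1/sqrt(N) to keep the total transmit power fixed, as discussed above:

    import numpy as np

    # Toy model: N antennas; the path from each antenna to the receiver
    # applies some unknown phase shift, which feedback has measured.
    rng = np.random.default_rng(42)
    N = 2                                   # try 9, like the simulation
    path_phase = rng.uniform(0, 2 * np.pi, N)

    amp = 1.0 / np.sqrt(N)    # per-antenna amplitude: total power fixed

    # Without beamforming, the paths combine with random phases.
    naive_amp = abs(np.sum(amp * np.exp(1j * path_phase)))

    # With beamforming, pre-shift each antenna by the opposite phase, so
    # every path arrives at the receiver in phase and adds coherently.
    weights = np.exp(-1j * path_phase)
    formed_amp = abs(np.sum(amp * weights * np.exp(1j * path_phase)))

    print("power without beamforming:", naive_amp ** 2)   # anywhere in [0, N]
    print("power with beamforming:   ", formed_amp ** 2)  # N, ie. +3dB for N=2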

Even more weirdly, the same idea works in reverse. If you know the phase difference between the client's antenna (we're assuming for now that he has only one, so we don't go insane) and each of your router's antennas, then when the client sends a signal *back* to you, you can extract the signal from the different antennas in a particular way that gets you the same amount of gain as in the transmit direction, and we call that rx beamforming. At least, I think you can. I haven't done the math for that yet, so I don't know for sure how well it can work.

Relatedly, even if there is no *explicit* beamforming feedback, in theory you can calculate the phase differences by listening to the signals from the remote end on each of your router's antennas. Because the signals should be following exactly the same path in both directions, you can guess what phase difference your signal arrived with by seeing which difference *his* signal came back with, and compensate accordingly. This is called "implicit beamforming feedback." Of course, if both ends try this trick at once, hilarity ensues.

And finally, I just want to point out how little the result of "beamforming" is like a beam. Although conceptually we'd like to think of it that way - we have a couple of transmitters tuning their signal to point directly at the receiver - mathematically it's not really like that. "Beamforming" creates a kind of on-off "warped checkerboard" sort of pattern that extends in all directions. To the extent that your antenna array is symmetrical, the checkerboard pattern is also symmetrical.

Beamforming Simulation

Of course, a checkerboard is also a flawed analogy. Once you start looking for a checkerboard, you start to see that in fact, the warping is kind of directional, and sort of looks like a beam, and you can imagine that with a hundred antennas, maybe it really would be "beam" shaped.

After doing all the math, I really wanted to know what beamforming looked like, so I wrote a little simulation of it, and the image at the top of this article is the result. (That particular one came from a 9-antenna beamforming array.)

You can also try out the simulation yourself, moving around up to 9 antennas to create different interference patterns. I find it kind of fun and mesmerizing, especially to think that these signals are all around us and if you could see them, they'd look like *that*. On my computer with Chrome, I get about 20 frames per second; with Safari, I get about 0.5 frames per second, which is not as fun. So use a browser with a good javascript engine.

Note that while the image looks like it has contours and "shadows," the shadows are entirely the effect of the constructive/destructive interference patterns causing bright and dark areas. Nevertheless, you can kind of visually see how the algorithm builds one level of constructive interference on top of another, with the peak of the humpiest bump being at the exact location of the receiver. It really works!

Some notes about the simulation:

  • It's 2-dimensional. Real life has at least 3 dimensions. It works pretty much the same though.
  • The intensity (brightness) of the colour indicates the signal strength at that point. Black means almost no signal.
  • "Blue" means cos(ωt) is positive at that point, and red means it's negative.
  • Because of the way phasors work, "blue plus red" is not the only kind of destructive interference, so it's a bit confusing.
  • Click on the visualization to move around the currently-selected transmitter or receiver.
  • When you move the receiver around, it auto-runs the beamforming optimization so you can see the "beam" move around.
  • The anti-optimize button is not very smart; a smarter algorithm could achieve an even less optimal result. But most of the time it does an okay job, and it does show how you can also use beamforming to make a receiver *not* hear your signal. That's the basis of MU-MIMO.

MIMO

The last and perhaps most exciting way to cheat Shannon's Law is MIMO. I'll try to explain that later, but I'm still working out the math :)

Syndicated 2014-07-29 06:41:59 from apenwarr

27 Jul 2014 (updated 29 Jul 2014 at 06:02 UTC)

Wifi, Interference and Phasors

Before we get to the next part, which is fun, we need to talk about phasors. No, not the Star Trek kind, the boring kind. Sorry about that.

If you're anything like me, you might have never discovered a use for your trigonometric identities outside of school. Well, you're in luck! With wifi, trigonometry, plus calculus involving trigonometry, turns out to be pretty important to understanding what's going on. So let's do some trigonometry.

Wifi modulation is very complicated, but let's ignore modulation for the moment and just talk about a carrier wave, which is close enough. Here's your basic 2.4 GHz carrier:

    A cos (ω t)

Where A is the transmit amplitude and ω corresponds to 2.4e9 Hz (2.4 GHz). (Strictly speaking, ω ought to be the angular frequency, which has an extra factor of 2π; let's play fast and loose with the 2π and treat ω as the frequency itself.) The wavelength, λ, is the speed of light divided by the frequency, so:

    λ = c / ω = 3.0e8 / 2.4e9 = 0.125m

That is, 12.5 centimeters long. (By the way, just for comparison, the wavelength of light is around 400-700 nanometers, or about 250,000 times shorter than a wifi signal. That comes out to 600 Terahertz or so. But all the same rules apply.)

The reason I bring up λ is that we're going to have multiple transmitters. Modern wifi devices have multiple transmit antennas so they can do various magic, which I will try to explain later. Also, inside a room, signals can reflect off the walls, which is a bit like having additional transmitters.

Let's imagine for now that there are no reflections, and just two transmit antennas, spaced some distance apart on the x axis. If you are a receiver also sitting on the x axis, then what you see is two signals:

    cos (ω t) + cos (ω t + φ)

Where φ is the phase difference (between 0 and 2π). The phase difference can be calculated from the distance between the two antennas, r, and λ, as follows:

    φ = 2π r / λ

Of course, a single-antenna receiver can't *actually* see two signals. That's where the trig identities come in.

Constructive Interference

Let's do some simple ones first. If r = λ, then φ = 2π, so:

    cos (ω t) + cos (ω t + 2π)
    = cos (ω t) + cos (ω t)
    = 2 cos (ω t)

That one's pretty intuitive. We have two antennas transmitting the same signal, so sure enough, the receiver sees a signal twice as tall. Nice.

Destructive Interference

The next one is weirder. What if we put the second transmitter 6.25cm away, which is half a wavelength? Then φ = π, so:

    cos (ω t) + cos (ω t + π)
    = cos (ω t) - cos (ω t)
    = 0

The two transmitters are interfering with each other! A receiver sitting on the x axis (other than right between the two transmit antennas) won't see any signal at all. That's a bit upsetting, in fact, because it leads us to a really pivotal question: where did the energy go?

We'll get to that, but first things need to get even weirder.

Orthogonal Carriers

Let's try φ = π/2.

    cos (ω t) + cos (ω t + π/2)
    = cos (ω t) - sin (ω t)

This one is hard to explain, but the short version is, no matter how much you try, you won't get that to come out to a single (Edit 2014/07/29: non-phase-shifted) cos or sin wave. Symbolically, you can only express it as the two separate terms, added together. At each point, the sum has a single value, of course, but there is no formula for that single value which doesn't involve both a cos ωt and a sin ωt. This happens to be a fundamental realization that leads to all modern modulation techniques.

Let's play with it a little and do some simple AM radio (amplitude modulation). That means we take the carrier wave and "modulate" it by multiplying it by a much-lower-frequency "baseband" input signal. Like so:

    f(t) cos (ω t)

Where ω is much bigger than the highest frequency in f(t), so that for any given cycle of the carrier wave, f(t) can be assumed to be "almost constant."

On the receiver side, we get the above signal and we want to discover the value of f(t). What we do is multiply it again by the carrier:

    f(t) cos (ω t) cos (ω t)
    = f(t) cos² (ω t)
    = f(t) (1 - sin² (ω t))
    = ½ f(t) (2 - 2 sin² (ω t))
    = ½ f(t) (1 + (1 - 2 sin² (ω t)))
    = ½ f(t) (1 + cos (2 ω t))
    = ½ f(t) + ½ f(t) cos (2 ω t)

See? Trig identities. Next we do what we computer engineers call a "dirty trick" and, instead of doing "real" math, we'll just hypothetically pass the resulting signal through a digital or analog filter. Remember how we said f(t) changes much more slowly than the carrier? Well, the second term in the above answer changes twice as fast as the carrier. So we run the whole thing through a Low Pass Filter (LPF) with a cutoff above the bandwidth of f(t) but well below the carrier frequency, removing the high frequency terms and leaving us with just this:

    (...LPF...)
    → ½ f(t)

Which we can multiply by 2, and ta da! We have the original input signal.

Now, that was a bit of a side track, but we needed to cover that so we can do the next part, which is to use the same trick to demonstrate how cos(ω t) and sin(ω t) are orthogonal vectors. That means they can each carry their own signal, and we can extract the two signals separately. Watch this:

    [ f(t) cos (ω t) + g(t) sin (ω t) ] cos (ω t)
    = [f(t) cos² (ω t)] + [g(t) cos (ω t) sin (ω t)]
    = [½ f(t) (1 + cos (2 ω t))] + [½ g(t) sin (2 ω t)]
    = ½ f(t) + ½ f(t) cos (2 ω t) + ½ g(t) sin (2 ω t)
    [...LPF...]
    → ½ f(t)

Notice that by multiplying by the cos() carrier, we extracted just f(t). g(t) disappeared. We can play a similar trick if we multiply by the sin() carrier; f(t) then disappears and we have recovered just g(t).

In vector terms, we are taking the "dot product" of the combined vector with one or the other orthogonal unit vectors, to extract one element or the other. One result of all this is you can, if you want, actually modulate two different AM signals onto exactly the same frequency, by using the two orthogonal carriers.
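If you'd rather see that numerically than symbolically, here's a small Python/numpy sanity check of the whole trick: modulate f(t) and g(t) onto the two orthogonal carriers, multiply by one carrier or the other, and low-pass filter with a crude moving average. (All the numbers and names here are mine, and the frequencies are toy-scale rather than 2.4 GHz:)

    import numpy as np

    fs = 20_000                        # samples per second
    t = np.arange(fs) / fs             # one second of time
    fc = 50.0                          # "carrier" frequency, toy scale
    f_t = np.cos(2 * np.pi * 1.0 * t)  # slow baseband signal f(t)
    g_t = np.sin(2 * np.pi * 2.0 * t)  # another slow signal g(t)

    carrier_cos = np.cos(2 * np.pi * fc * t)
    carrier_sin = np.sin(2 * np.pi * fc * t)
    signal = f_t * carrier_cos + g_t * carrier_sin

    def lpf(x, n=400):
        """Crude low pass filter: a moving average long enough to kill
        the 2*fc terms, but short relative to the baseband signals."""
        return np.convolve(x, np.ones(n) / n, mode="same")

    f_rec = 2 * lpf(signal * carrier_cos)    # multiply by cos: recovers f(t)
    g_rec = 2 * lpf(signal * carrier_sin)    # multiply by sin: recovers g(t)

    mid = slice(2_000, 18_000)               # ignore the filter's edge effects
    print(np.max(np.abs(f_rec[mid] - f_t[mid])))   # small
    print(np.max(np.abs(g_rec[mid] - g_t[mid])))   # small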

QAM

But treating it as just two orthogonal carriers for unrelated signals is a little old fashioned. In modern systems we tend to think of them as just two components of a single vector, which together give us the "full" signal. That, in short, is QAM, one of the main modulation methods used in 802.11n. To oversimplify a bit, take this signal:

    f(t) cos (ω t) + g(t) sin (ω t)

And let's say f(t) and g(t) at any given point in time each have a value that's one of: 0, 1/3, 2/3, or 1. Since each function can have one of four values, there are a total of 4*4 = 16 different possible combinations, which corresponds to 4 bits of binary data. We call that encoding QAM16. If we plot f(t) on the x axis and g(t) on the y axis, that's called the signal "constellation."
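As a sketch (using the simplified 0..1 levels above; real 802.11 constellations use symmetric levels, plus Gray coding, which I'm ignoring), the bits-to-symbol mapping looks something like this:

    # Map 4 bits to one QAM16 symbol: 2 bits choose f's level, 2 choose g's.
    LEVELS = [0.0, 1/3, 2/3, 1.0]

    def qam16_symbol(bits):
        """bits: a sequence of four 0s and 1s, eg. (1, 0, 1, 1)."""
        f = LEVELS[2 * bits[0] + bits[1]]
        g = LEVELS[2 * bits[2] + bits[3]]
        return f, g          # the amplitudes for the cos and sin carriers

    print(qam16_symbol((1, 0, 1, 1)))   # (0.666..., 1.0)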

Anyway we're not attempting to do QAM right now. Just forget I said anything.

Adding out-of-phase signals

Okay, after all that, let's go back to where we started. We had two transmitters both sitting on the x axis, both transmitting exactly the same signal cos(ω t). They are separated by a distance r, which translates to a phase difference φ. A receiver that's also on the x axis, not sitting between the two transmit antennas (which is a pretty safe assumption) will therefore see this:

    cos (ω t) + cos (ω t + φ)
    = cos (ω t) + cos (ω t) cos φ - sin (ω t) sin φ
    = (1 + cos φ) cos (ω t) - (sin φ) sin (ω t)

One way to think of it is that a phase shift corresponds to a rotation through the space defined by the cos() and sin() carrier waves. We can rewrite the above to do this sort of math in a much simpler vector notation:

    [1, 0] + [cos φ, sin φ]
    = [1+cos φ, sin φ]

This is really powerful. As long as you have a bunch of waves at the same frequency, and each one is offset by a fixed amount (phase difference), you can convert them each to a vector and then just add the vectors linearly. The result, the sum of these vectors, is what the receiver will see at any given point. And the sum can always be expressed as the sum of exactly one cos(ω t) and one sin(ω t) term, each with its own magnitude.
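Here's a quick numeric check of that claim, summing a few same-frequency waves both ways: directly in the time domain, and as vectors, where each wave cos(ωt + φ) becomes [cos φ, sin φ] and [A, B] stands for A cos(ωt) - B sin(ωt):

    import numpy as np

    phis = [0.0, 1.0, 2.5]        # arbitrary phase offsets, in radians
    w = 2 * np.pi * 10.0          # any frequency works; it cancels out
    t = np.linspace(0, 1, 10_000)

    # Time domain: just add up the actual waves.
    direct = sum(np.cos(w * t + phi) for phi in phis)

    # Vector domain: add componentwise, then expand back into a wave.
    A = sum(np.cos(phi) for phi in phis)
    B = sum(np.sin(phi) for phi in phis)
    from_vectors = A * np.cos(w * t) - B * np.sin(w * t)

    print(np.max(np.abs(direct - from_vectors)))   # ~1e-15: identical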

This leads us to a very important conclusion:

    The sum of reflections of a signal is just an arbitrarily phase shifted and scaled version of the original.

People worry about reflections a lot in wifi, but because of this rule, they are not, at least mathematically, nearly as bad as you'd think.

Of course, in real life, getting rid of that phase shift can be a little tricky, because you don't know for sure by how much the phase has been shifted. If you just have two transmitting antennas with a known phase difference between them, that's one thing. But when you add reflections, that makes it harder, because you don't know what phase shift the reflections have caused. Not impossible: just harder.

(You also don't know, after all that interference, what happened to the amplitude. But as we covered last time, the amplitude changes so much that our modulation method has to be insensitive to it anyway. It's no different than moving the receiver closer or further away.)

Phasor Notation

One last point. In some branches of electrical engineering, especially in analog circuit analysis, we use something called "phasor notation." Basically, phasor notation is just a way of representing these cos+sin vectors using polar coordinates instead of x/y coordinates. That makes it easy to see the magnitude and phase shift, although harder to add two signals together. We're going to use phasors a bit when discussing signal power later.

Phasors look like this in the general case:

    A cos (ω t) - B sin (ω t)
    = [A, B]

      Magnitude = M = (A² + B²)^½

      tan (Phase) = tan φ = B / A
      φ = atan2(B, A)

    = M∠φ

or the inverse:

    M∠φ
    = [M cos φ, M sin φ]
    = (M cos φ) cos (ω t) - (M sin φ) sin (ω t)
    = [A, B]
    = A cos (ω t) - B sin (ω t)
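In code form, the round trip is just rectangular↔polar conversion, and math.hypot and math.atan2 do all the work. (A quick sketch, using the [A, B] convention above:)

    import math

    def to_phasor(A, B):
        """[A, B], ie. A cos(wt) - B sin(wt), as magnitude and phase."""
        return math.hypot(A, B), math.atan2(B, A)

    def from_phasor(M, phi):
        """M∠φ back to [A, B] rectangular components."""
        return M * math.cos(phi), M * math.sin(phi)

    M, phi = to_phasor(1.0, 1.0)
    print(M, phi)               # 1.414..., 0.785...: sqrt(2) at a 45 degree shift
    print(from_phasor(M, phi))  # (1.0000..., 1.0000...): round trip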

Imaginary Notation

There's another way of modeling the orthogonal cos+sin vectors, which is to use complex numbers (ie. a real axis, for cos, and an imaginary axis, for sin). This is both right and wrong, as imaginary numbers often are; the math works fine, but perhaps not for any particularly good reason, unless your name is Euler. The important thing to notice is that all of the above works fine without any imaginary numbers at all. Using them is a convenience sometimes, but not strictly necessary. The value of cos+sin is a real number, not a complex number.

Epilogue

Next time, we'll talk about signal power, and most importantly, where that power disappears to when you have destructive interference. And from there, as promised last time, we'll cheat Shannon's Law.

Syndicated 2014-07-26 01:11:15 (Updated 2014-07-29 06:02:04) from apenwarr

19 Jul 2014 (updated 19 Jul 2014 at 23:07 UTC)

Wifi maximum range, lies, and the signal to noise ratio

Last time we talked about how wifi signals cross about 12 orders of magnitude in terms of signal power, from +30dBm (1 watt) to -90dBm (1 picowatt). I mentioned my old concern back in school about splitters causing a drop to 1/n of the signal on a wired network, where n is the number of nodes, and said that doesn't matter much after all.

Why doesn't it matter? If you do digital circuits for a living, you are familiar with the way digital logic works: if the voltage is over a threshold, say, 1.5V, then you read a 1. If it's under the threshold, then you read a 0. So if you cut all the voltages in half, that's going to be a mess because the threshold needs to get cut in half too. And if you have an unknown number of nodes on your network, then you don't know where the threshold is at all, which is a problem. Right?

Not necessarily. It turns out analog signal processing is - surprise! - not like digital signal processing.

ASK, FSK, PSK, QAM

Essentially, in receiving an analog signal and converting it back to digital, you want to do one of three things:

  • see if the signal power is over/under a threshold ("amplitude shift keying" or ASK)
  • or: see if the signal is frequency #0 or frequency #1 ("frequency shift keying" or FSK)
  • or: fancier FSK-like schemes such as PSK or QAM (look it up yourself :)).
Realistically nowadays everyone does QAM, but the physics are pretty much the same for FSK and it's easier to explain, so let's stick with that.

But first, what's wrong with ASK? Why toggle between two frequencies (FSK) when you can just toggle one frequency on and off (ASK)? The answer comes down mainly to circuit design. To design an ASK receiver, you have to define a threshold, and when the amplitude is higher than the threshold, call it a 1, otherwise call it a 0. But what is the threshold? It depends on the signal strength. What is the signal strength? The height of a "1" signal. How do we know whether we're looking at a "1" signal? It's above the threshold ... It ends up getting tautological.

The way you implement it is to design an "automatic gain control" (AGC) circuit that amplifies more when too few things are over the threshold, and less when too many things are over the threshold. As long as you have about the same number of 1's and 0's, you can tune your AGC to do the right thing by averaging the received signal power over some amount of time.

In case you *don't* have an equal number of 1's and 0's, you can fake it with various kinds of digital encodings. (One easy encoding is to split each bit into two halves and always flip the signal upside down for the second half, producing a "balanced" signal.)
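(That split-and-flip scheme is better known as Manchester encoding. A minimal sketch, with my own function name:)

    def manchester_encode(bits):
        """Split each bit into two half-bits and invert the second half.
        The output always has an equal number of 1s and 0s ("balanced")."""
        out = []
        for b in bits:
            out += [b, 1 - b]    # 1 -> [1, 0], 0 -> [0, 1]
        return out

    print(manchester_encode([1, 0, 1, 1]))   # [1, 0, 0, 1, 1, 0, 1, 0]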

So, you can do this of course, and people have done it. But it just ends up being complicated and fiddly. FSK turns out to be much easier. With FSK, you just build two circuits: one for detecting the amplitude of the signal at frequency f1, and one for detecting the amplitude of the signal at frequency f2. It turns out to be easy to design analog circuits that do this. Then you design a "comparator" circuit that will tell you which of two values is greater; it turns out to be easy to design that too. And you're done! No trying to define a "threshold" value, no fine-tuned AGC circuit, no circular reasoning. So FSK and FSK-like schemes caught on.
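Here's a toy sketch of that receiver in Python, in case it helps: "detect the amplitude at each frequency" becomes correlating the incoming samples against each tone, and the "comparator" is just a comparison. (The numbers and names are all mine; a real receiver would use analog filters or fancier DSP:)

    import numpy as np

    fs, f1, f2 = 10_000, 400, 800      # sample rate and the two tones, Hz
    t = np.arange(fs // 100) / fs      # one 10ms symbol

    def tone_power(signal, f):
        """Amplitude of `signal` at frequency f: correlate against both
        the cos and sin carriers at f, then take the magnitude."""
        i = np.sum(signal * np.cos(2 * np.pi * f * t))
        q = np.sum(signal * np.sin(2 * np.pi * f * t))
        return np.hypot(i, q)

    def detect_bit(signal):
        # The "comparator": whichever frequency is louder wins.
        return 1 if tone_power(signal, f2) > tone_power(signal, f1) else 0

    rng = np.random.default_rng(1)
    noisy_one = np.cos(2 * np.pi * f2 * t) + 0.5 * rng.standard_normal(len(t))
    print(detect_bit(noisy_one))   # 1, despite the noise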

SNR

With that, you can see why my original worry about a 1/n signal reduction from cable splitters didn't matter. As long as you're using FSK, the 1/n reduction doesn't mean anything; your amplitude detector and comparator circuits just don't care about the exact level, essentially. With wifi, we take that to the extreme with tiny little FSK-like signals down to a picowatt or so.

But where do we stop? Why only a picowatt? Why not even smaller?

The answer is, of course, background noise. No signal exists in perfect isolation, except in a simulation (and even in a simulation, the limitations of floating point accuracy might cause problems). There might be leftover bits of other people's signals transmitted from far away; thermal noise (ie. molecules vibrating around which happen to be at your frequency); and amplifier noise (ie. inaccuracies generated just from trying to boost the signal to a point where your frequency detector circuits can see it at all). You can also have problems from other high-frequency components on the same circuit board emitting conflicting signals.

The combination of limits from amplifier error and conflicting electrical components is called the receiver sensitivity. Noise arriving from outside your receiver (both thermal noise and noise from interfering signals) is called the noise floor. Modern circuits - once properly debugged, calibrated, and shielded - seem to be good enough that receiver sensitivity is not really your problem nowadays. The noise floor is what matters.

It turns out, with modern "low-noise amplifier" (LNA) circuits, we can amplify a weak signal essentially as much as we want. But the problem is... we amplify the noise along with it. The ratio between signal strength and noise turns out to be what really matters, and it doesn't change when you amplify. (Other than getting slightly worse due to amplifier noise.) We call that the signal to noise ratio (SNR), and if you ask an expert in radio signals, they'll tell you it's one of the most important measurements in analog communications.

A note on SNR: it's expressed as a "ratio" which means you divide the signal strength in mW by the noise level in mW. But like the signal strength and noise levels, we normally want to express the SNR in decibels to make it more manageable. Decibels are based on logarithms, and because of the way logarithms work, you subtract decibels to get the same effect as dividing the original values. That turns out to be very convenient! If your noise level is -90dBm and your signal is, say, -60dBm, then your SNR is 30dB, which means 1000x. That's awfully easy to say considering how complicated the underlying math is. (By the way, after subtracting two dBm values we just get plain dB, for the same reason that if you divide 10mW by 2mW you just get 5, not 5mW.)

The Shannon Limit

So, finally... how big does the SNR need to be in order to be "good"? Can you just receive any signal where SNR > 1.0x (which means signal is greater than noise)? And when SNR < 1.0x (signal is less than noise), all is lost?

Nope. It's not that simple at all. The math is actually pretty complicated, but you can read about the Shannon Limit on wikipedia if you really want to know all the details. In short, the bigger your SNR, the faster you can go. That makes a kind of intuitive sense I guess.

(But it's not really all that intuitive. When someone is yelling, can they talk *faster* than when they're whispering? Perhaps it's only intuitive because we've been trained to notice that wifi goes faster when the nodes are closer together.)

The Shannon limit even calculates that you can transfer some data when the signal power is lower than the noise, which seems counterintuitive or even impossible. But it's true, and the global positioning system (GPS) apparently actually does this, and it's pretty cool.
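For reference, the formula at the heart of it is surprisingly compact. If B is the channel bandwidth in Hz and SNR is the plain ratio (not in decibels), then the maximum data rate C, in bits per second, is:

    C = B log2 (1 + SNR)

So, for example, a 20 MHz wifi channel with 30dB (1000x) of SNR gives 20e6 × log2(1001) ≈ 200 Mbits/sec of theoretical capacity. And at SNR = 0.5 - signal weaker than the noise - log2(1.5) ≈ 0.58, which is still nonzero; that's the loophole GPS lives in.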

The Maximum Range of Wifi is Unchangeable

So that was all a *very* long story, but it has a point. Wifi signal strength is fundamentally limited by two things: the regulatory transmitter power limit (30dBm or less, depending on the frequency and geography), and the distance between transmitter and receiver. You also can't do much about background noise; it's roughly -90dBm or maybe a bit worse. Thus, the maximum speed of a wifi link is fixed by the laws of physics. Transmitters have been transmitting at around the regulatory maximum since the beginning.

So how, then, do we explain the claims that newer 802.11n devices have "double the range" of the previous-generation 802.11g devices?

Simple: they're marketing lies. 802.11g and 802.11n have exactly the same maximum range. In fact, 802.11n just degrades into 802.11g as the SNR gets worse and worse, so this has to be true.

802.11n is certainly faster at close and medium range. That's because 802.11g tops out at an SNR of about 20dB. That is, the Shannon Limit says you can go faster when you have >20dB, but 802.11g doesn't try; technology wasn't ready for it at the time. 802.11n can take advantage of that higher SNR to get better speeds at closer ranges, which is great.

But the claim about longer range, by any normal person's definition of range, is simply not true.

Luckily, marketing people are not normal people. In the article I linked above they explain how. Basically, they define "twice the range" as a combination of "twice the speed at the same distance" and "the same speed at twice the distance." That is, a device fulfilling both criteria has double the range as an original device which fulfills neither.

It sounds logical, but in real life, that definition is not at all useful. You can do it by comparing, say, 802.11g and 802.11n at 5ft and 10ft distances. Sure enough, 802.11n is more than twice as fast as 802.11g at 5ft! And at 10ft, it's still faster than 802.11g at 5ft! Therefore, twice the range. Magic, right? But at 1000ft, the same equations don't work out. Oddly, their definition of "range" does not include what happens at maximum range.

I've been a bit surprised at how many people believe this "802.11n has twice the range" claim. It's obviously great for marketing; customers hate the limits of wifi's maximum range, so of course they want to double it, or at least increase it by any nontrivial amount, and they will pay money for a new router if it can do this. As of this writing, even wikipedia's table of maximum ranges says 802.11n has twice the maximum range of 802.11g, despite the fact that anyone doing a real-life test could easily discover that this is simply not the case. I did the test. It's not the case. You just can't cheat Shannon and the Signal to Noise Ratio.

...

Coming up next, some ways to cheat Shannon and the Signal to Noise Ratio.

Syndicated 2014-07-18 02:23:26 (Updated 2014-07-19 23:07:34) from apenwarr

14 Jul 2014 (updated 14 Jul 2014 at 07:02 UTC)

Wifi and the square of the radius

I have many things to tell you about wifi, but before I can tell you most of them, I have to tell you some basic things.

First of all, there's the question of transmit power, which is generally expressed in watts. You may recall that a watt is a joule per second. A milliwatt is 1/1000 of a watt. A transmitter generally can be considered to radiate its signal outward from a center point. A receiver "catches" part of the signal with its antenna.

The received signal power declines with the square of the radius from the transmit point. That is, if you move from distance r to distance 2r, the received signal power at 2r is 1/4 as much as it was at r. Why is that?

Imagine a soap bubble. It starts off at the center point and there's a fixed amount of soap. As it inflates, the same amount of soap is stretched out over a larger and larger area - the surface area. The surface area of a sphere is 4 π r².

Well, a joule of energy works like a millilitre of soap. It starts off at the transmitter and gets stretched outward in the shape of a sphere. The amount of soap (or energy) at one point on the sphere is proportional to 1 / (4 π r²).

Okay? So it goes down with the square of the radius.

A transmitter transmitting constantly will send out a total of one joule per second, or a watt. You can think of it as a series of ever-expanding soap bubbles, each expanding at the speed of light. At any given distance from the transmitter, the soap bubble currently at that distance will have a surface area of 4 π r², and so the power will be proportional to 1 / that.

(I say "proportional to" because the actual formula is a bit messy and depends on your antenna and whatnot. The actual power at any point is of course zero, because the point is infinitely small, so you can only measure the power over a certain area, and that area is hard to calculate except that your antenna picks up about the same area regardless of where it is located. So although it's hard to calculate the power at any given point, it's pretty easy to calculate that a point twice as far away will have 1/4 the power, and so on. That turns out to be good enough.)

If you've ever done much programming, someone has probably told you that O(n^2) algorithms are bad. Well, this is an O(n^2) algorithm where n is the distance. What does that mean?

    1cm -> 1 x
    2cm -> 1/4 x
    3cm -> 1/9 x
    10cm -> 1/100 x
    20cm -> 1/400 x
    100cm (1m) -> 1/10000 x
    10,000cm (100m) -> 1/100,000,000 x

As you get farther away from the transmitter, that signal strength drops fast. So fast, in fact, that people gave up counting the mW of output and came up with a new unit, called dBm (decibels relative to one milliwatt), that expresses the signal power logarithmically:

    n dBm = 10^(n/10) mW

So 0 dBm is 1 mW, and 30 dBm is 1W (the maximum legal transmit power for most wifi channels). And wifi devices have a "receiver sensitivity" that goes down to about -90 dBm. That's nine orders of magnitude below 0; a billionth of a milliwatt, ie. a trillionth of a watt. I don't even know the word for that. A trilliwatt? (Okay, I looked it up, it's a picowatt.)
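Since we're going to be flinging dBm around constantly, here's the conversion spelled out in code, plus the handy rule of thumb that falls out of the 1/r² law. (A sketch; the function names are just mine:)

    import math

    def dbm_to_mw(dbm):
        return 10 ** (dbm / 10)

    def mw_to_dbm(mw):
        return 10 * math.log10(mw)

    print(dbm_to_mw(30))     # 1000 mW, ie. 1 watt
    print(dbm_to_mw(-90))    # 1e-9 mW: a picowatt
    print(mw_to_dbm(1))      # 0 dBm

    # Doubling the distance quarters the power, and 10*log10(1/4) = -6dB:
    print(10 * math.log10(1 / 4))   # about -6.02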

Way back in university, I tried to build a receiver for wired modulated signals. I had no idea what I was doing, but I did manage to munge it, more or less, into working. The problem was, every time I plugged a new node into my little wired network, the signal strength would be cut down by 1/n. This seemed unreasonable to me, so I asked around: what am I doing wrong? What is the magical circuit that will let me split my signal down two paths without reducing the signal power? Nobody knew the answer. (Obviously I didn't ask the right people :))

The answer is, it turns out, that there is no such magical circuit. The answer is that 1/n is such a trivial signal strength reduction that essentially, on a wired network, nobody cares. We have RF engineers building systems that can handle literally a 1/1000000000000 (from 30 dBm to -90 dBm) drop in signal. Unless your wired network has a lot of nodes or you are operating it way beyond distance specifications, your silly splitter just does not affect things very much.

In programming terms, your runtime is O(n) + O(n^2) = O(n + n^2) = O(n^2). You don't bother optimizing the O(n) part, because it just doesn't matter.

(Update 2014/07/14: The above comment caused a bit of confusion because it talks about wired networks while the rest of the article is about wireless networks. In a wireless network, people are usually trying to extract every last meter of range, and a splitter is a silly idea anyway, so wasting -3 dB is a big deal and nobody does that. Wired networks like I was building at the time tend to have much less, and linear instead of quadratic, path loss and so they can tolerate a bunch of splitters. For example, good old passive arcnet star topology, or ethernet-over-coax, or MoCA, or cable TV.)

There is a lot more to say about signals, but for now I will leave you with this: there are people out there, the analog RF circuit design gods and goddesses, who can extract useful information out of a trillionth of a watt. Those people are doing pretty amazing work. They are not the cause of your wifi problems.

Syndicated 2014-07-10 05:15:49 (Updated 2014-07-14 07:02:06) from apenwarr

The Curse of Vicarious Popularity

I had already intended for this next post to be a discussion of why people seem to suddenly disappear after they go to work for certain large companies. But then, last week, I went and made an example of myself.

Everything started out normally (the usual bit of attention on news.yc). It then progressed to a mention on Daring Fireball, which was nice, but okay, that's happened before. A few days later, though, things started going a little overboard, as my little article about human nature got a company name attached to it and ended up quoted on Business Insider and CNet.

Now don't get me wrong, I like fame and fortune as much as the next person, but those last articles crossed an awkward threshold for me. I wasn't quoted because I said something smart; I was quoted because what I said wasn't totally boring, and an interesting company name got attached. Suddenly it was news, where before it was not.

Not long after joining a big company, I asked my new manager - wow, I had a manager for once! - what would happen if I simply went and did something risky without getting a million signoffs from people first. He said something like this, which you should not quote because he was not speaking for his employer and neither am I: "Well, if it goes really bad, you'll probably get fired. If it's successful, you'll probably get a spot bonus."

Maybe that was true, and maybe he was just telling me what I, as a person arriving from the startup world, wanted to hear. I think it was the former. So far I have received some spot bonuses and no firings, but the thing about continuing to take risks is my luck could change at any time.

In today's case, the risk in question is... saying things on the Internet.

What I have observed is that the relationship between big companies and the press is rather adversarial. I used to really enjoy reading fake anecdotes about it at Fake Steve Jobs, so that's my reference point, but I'm pretty sure all those anecdotes had some basis in reality. After all, Fake Steve Jobs was a real journalist pretending to be a real tech CEO, so it was his job to know both sides.

There are endless tricks being played on everyone! PR people want a particular story to come out so they spin their press releases a particular way; reporters want more conflict so they seek it out or create it or misquote on purpose; PR people learn that this happens so they become even more absolutely iron-fisted about what they say to the press. There are classes that business people at big companies can take to learn to talk more like politicians. Ironically, if each side would relax a bit and stop trying so hard to manipulate the other, we could have much better and more interesting and less tabloid-like tech news, but that's just not how game theory works. The first person to break ranks would get too much of an unfair advantage. And that's why we can't have nice things.

Working at a startup, all publicity is good publicity, and you're the underdog anyway, and you're not publicly traded, so you can be pretty relaxed about talking to the press. Working at a big company, you are automatically the bad guy in every David and Goliath story, unless you are very lucky and there's an even bigger Goliath. There is no maliciousness in that; it's just how the story is supposed to be told, and the writers give readers what they want.

Which brings me back to me, and people like me, who just write for fun. Since I work at a big company, there are a bunch of things I simply should not say, not because they're secret or there's some rule against saying them - there isn't, as far as I know - but because no matter what I say, my words are likely to be twisted and used against me, and against others. If I can write an article about Impostor Syndrome and have it quoted by big news organizations (to their credit, the people quoting it so far have done a good job), imagine the damage I might do if I told you something mean about a competitor, or a bug, or a missing feature, or an executive. Even if, or especially if, it were just my own opinion.

In the face of that risk - the risk of unintentionally doing a lot of damage to your friends and co-workers - most people just give up and stop writing externally. You may have noticed that I've greatly cut back myself. But I have a few things piling up that I've been planning to say, particularly about wifi. Hopefully it will be so technically complicated that I will scare away all those press people.

And if we're lucky, I'll get the spot bonus and not that other thing.

Syndicated 2014-07-08 23:05:01 from apenwarr

The Curse of Smart People

A bit over 3 years ago, after working for many years at a series of startups (most of which I co-founded), I disappeared through a Certain Vortex to start working at a Certain Large Company which I won't mention here. But it's not really a secret; you can probably figure it out if you bing around a bit for my name.

Anyway, this big company that now employs me is rumoured to hire the smartest people in the world.

Question number one: how true is that?

Answer: I think it's really true. A surprisingly large fraction of the smartest programmers in the world *do* work here. In very large quantities. In fact, quantities so large that I wouldn't have thought that so many really smart people existed or could be centralized in one place, but trust me, they do and they can. That's pretty amazing.

Question number two: but I'm sure they hired some non-smart people too, right?

Answer: surprisingly infrequently. When I went for my job interview there, they set me up for a full day of interviewers (5 sessions plus lunch). I decided that I would ask a few questions of my own in these interviews, and try to guess how good the company is based on how many of the interviewers seemed clueless. My hypothesis was that there are always some bad apples in any medium to large company, so if the success rate was, say, 3 or 4 out of 5 interviewers being non-clueless, that's pretty good.

Well, they surprised me. I had 5 out of 5 non-clueless interviewers, and in fact, all of them were even better than non-clueless: they impressed me. If this was the average smartness of people around here, maybe the rumours were really true, and they really had something special going on.

(I later learned that my evil plan and/or information about my personality *may* have been leaked to the recruiters who *may* have intentionally set me up with especially clueful interviewers to avoid the problem, but this can neither be confirmed nor denied.)

Anyway, I continue to be amazed at the overall smartness of people at this place. Overall, very nearly everybody, across the board, surprises or impresses me with how smart they are.

Pretty great, right?

Yes.

But it's not perfect. Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.

Everybody rationalizes. We all want the world to be a particular way, and we all make mistakes, and we all want to be successful, and we all want to feel good about ourselves.

We all make decisions for emotional or intuitive reasons instead of rational ones. Some of us admit that. Some of us think using our emotions is better than being rational all the time. Some of us don't.

Smart people, computer types anyway, tend to come down on the side of people who don't like emotions: programmers, who do logic for a living.

Here's the problem. Logic is a pretty powerful tool, but it only works if you give it good input. As the famous computer science maxim says, "garbage in, garbage out." If you know all the constraints and weights - with perfect precision - then you can use logic to find the perfect answer. But when you don't, which is always, there's a pretty good chance your logic will lead you very, very far astray.

Most people find this out pretty early on in life, because their logic is imperfect and fails them often. But really, really smart computer geek types may not ever find it out. They start off living in a bubble, they isolate themselves because socializing is unpleasant, and, if they get a good job straight out of school, they may never need to leave that bubble. To such people, it may appear that logic actually works, and that they are themselves logical creatures.

I guess I was lucky. I accidentally co-founded what turned into a pretty successful startup while still in school. Since I was co-running a company, I found out pretty fast that I was wrong about things and that the world didn't work as expected most of the time. This was a pretty unpleasant discovery, but I'm very glad I found it out earlier in life instead of later, because I might have wasted even more time otherwise.

Working at a large, successful company lets you keep your isolation. If you choose, you can just ignore all the inconvenient facts about the world. You can make decisions based on whatever input you choose. The success or failure of your project in the market is not really that important; what's important is whether it gets canceled or not, a decision which is at the whim of your boss's boss's boss's boss, who, as your only link to the unpleasantly unpredictable outside world, seems to choose projects quasi-randomly, and certainly without regard to the quality of your contribution.

It's a setup that makes it very easy to describe all your successes (project not canceled) in terms of your team's greatness, and all your failures (project canceled) in terms of other people's capriciousness. End users and profitability, for example, rarely enter into it. This project isn't supposed to be profitable; we benefit whenever people spend more time online. This project doesn't need to be profitable; we can use it to get more user data. Users are unhappy, but that's just because they're change averse. And so on.

What I have learned, working here, is that smart, successful people are cursed. The curse is confidence. It's confidence that comes from a lifetime of success after real success, an objectively great job, working at an objectively great company, making a measurably great salary, building products that get millions of users. You must be smart. In fact, you are smart. You can prove it.

Ironically, one of the biggest social problems currently reported at work is lack of confidence, also known as Impostor Syndrome. People with confidence try to help people fix their Impostor Syndrome, under the theory that they are in fact as smart as people say they are, and they just need to accept it.

But I think Impostor Syndrome is valuable. The people with Impostor Syndrome are the people who *aren't* sure that a logical proof of their smartness is sufficient. They're looking around them and finding something wrong, an intuitive sense that around here, logic does not always agree with reality, and the obviously right solution does not lead to obviously happy customers, and it's unsettling because maybe smartness isn't enough, and maybe if we don't feel like we know what we're doing, it's because we don't.

Impostor Syndrome is that voice inside you saying that not everything is as it seems, and it could all be lost in a moment. The people with the problem are the people who can't hear that voice.

Syndicated 2014-06-30 07:05:56 from apenwarr

Reflections on Marketing and Humanity

Today I have new evidence that the human brain is made up of multiple interoperating, loosely connected components. Because I was out buying dryer sheets and there's one with "Fresh Linen" scent. And while one part of my mind was saying, "That's rather tautological," another part was saying, "That's what I always wanted my linen to smell like!" So I bought it, and now you know which one wins.

In the same aisle I found a new variant of soap with the tagline "inspired by celtic rock salt." Now, inspiration can be a hard thing to pin down, but this soap contains nothing celtic and no salt. I'm not even sure there is such a thing as celtic rock salt, or if there is, that it differs in any way from other rock salt, or other salt for that matter. Moreover, the whole purpose of soap is to wash off the generally salty sweaty smelly mess you produced naturally, so we'd probably criticize them if it *were* salty, for the same reason people criticize shampoos for stripping your hair of its natural oils only to sell the oils back to you in the form of conditioner. Also, how long has soap had an "Ingredients" section on the package? Why not a nutritional content section? And is it bad when the first ingredient is (literally) "soap"? But I bought it anyway, because Irish. Salt. Mmmm.

Finally, a note to you people who would argue that I'm overanalyzing this. You might define overanalyzing as analyzing beyond the point required to make a decision. Since the analysis figured not one bit into my purchasing decision, by that definition, any analysis at all would be considered overanalysis. And frankly, that just doesn't seem fair.

Syndicated 2013-06-30 19:38:22 from apenwarr - Business is Programming

28 Jun 2013 (updated 14 Jul 2014 at 07:02 UTC)

2013-06-28

A quote from "The Trouble With Computers" about usability studies at Apple back in the Apple II days:

Apple interface guru Bruce Tognazzini tells this story. The in-box tutorial for novices, "Apple Presents... Apple," needed to know whether the machine it was on had a color monitor. He and his colleagues rejected the original design solution, "Are you using a color TV on the Apple?" because computer store customers might not know that they were using a monitor with the color turned off. So he tried putting up a color graphic and asking, "Is the picture above in color?" Twenty-five percent of test users didn't know; they thought maybe their color was turned off.

Then he tried a graphic with color names each displayed in its own color, GREEN, BLUE, ORANGE, MAGENTA, and asked, "Are the words above in color?" Users with black-and-white or color monitors got it right. But luckily the designers tried a green-screen monitor too. No user got it right; they all thought green was a fine color.

Next he tried the same graphic but asked, "Are the words above in more than one color?" Half the green-screen users flunked, by missing the little word "in". Finally, "Do the words above appear in several different colors?"

Success.

That was what Apple did for a single throwaway UX question in a non-core part of the product - before its first release. It apparently took about five iterations of UX design, each followed by UX research, before they finally converged on the right answer.

The lesson I learned from that: usability studies are important. But you can't just take the recommendations of a usability study; you have to implement the recommendations, do another study, be prepared to be frustrated that the new version is just as bad as the old one, and do it all again. And again. If that's not how you're doing usability studies, you're doing it wrong.

Maybe I should re-buy that book. I gave mine away at some point. It's kind of indispensable as a tool for explaining software usability research, if only for its infamous "Can you *not* see the cow?" photo.

Syndicated 2013-06-28 03:47:19 (Updated 2014-07-14 07:02:06) from apenwarr - Business is Programming

Cheap Thrills

I've heard it said that you can just alternate between two UI themes once a week, and every time you switch, the new one will feel prettier, newer, and more exciting than the old one.

This is a natural tendency. The human mind is intrigued by change. That's where fashion comes from, and fads. It gives you a little burst of some chemical, maybe adrenaline (fear of the unknown?), or endorphins (appreciation of the unexpected?), or perhaps some other kind of juice I heard of somewhere but I don't really know what it does.

In tech, this kind of unlimited attraction to the unexpected is the main characteristic of the first phase of the Technology Adoption Lifecycle, the so-called "Innovators."

Perhaps people are happy to be included in the Innovator category. But Innovation isn't just doing something different for the sake of being different. Real innovation is the *willingness* to take the *risk* to do something different, because you know that difference is expensive, but that it will pay off in some way that more conservative sorts will fail to recognize until later.

In fashion, the end goal is to catch people's attention; if you do that, you are innovative. That's why fashion repeats itself every few years: because you can be innovative over and over again with the same ideas, rehashed forever.

In technology, we can hold you to a higher standard. Innovation requires difference, but it also requires a vision of usefulness. Change is expensive. Staying the same is cheap. Make it worth my while. Or if I'm an Innovator, or even an Early Adopter, at least give me a hint about how it's worth my while so I can exploit it while others are too afraid.

Every needless change creates expensive fragmentation. Microsoft ruled their market by being change averse. So did IBM. So did Intel. Even Apple. Whenever they forgot this, they stumbled.

Change aversion works because what makes a platform successful isn't so much the platform as the complementary products.

For a phone, that means third-party power adapters, car chargers, headphones with integrated volume controls, alarm clocks with a connector to charge your phone *and* play your music at the same time. For a PC, it could be something as simple as maintaining the same power supply connector across many years' worth of models, so that anyone who standardizes on your brand will have an ever-growing investment in leftover power supplies plugged in wherever they might want them. For an operating system, it means keeping the same approximate style of UI for a long time, so that apps can learn to optimize for it, and a really great app made two years ago can keep on selling well, perhaps with bugfixes and new features but no need for rewrites, because it still looks like it's perfectly integrated into your OS experience.

That sort of consistency allows developers to focus on quality instead of flavour, and produces an overall feeling of well-integratedness. It makes people feel like when they buy your thing, they're paying for quality. And yes, people - moving beyond the innovators into the more profitable market segments of the curve - will definitely pay for quality.

Real design genius lies in the ability to make something look pretty, and to keep it looking modern with gentle updates, without causing huge disruption to your whole ecosystem every couple of years. Following fashion trends, while not caring about disruption, does not require genius at all. All it requires is a factory in a third-world country and some photos of what you want to copy.

Ironically, even app developers mostly fail to recognize just how bad it is for them when a platform changes out from under them unnecessarily. Instead, they get excited by it. Finally, I get to rewrite that UI code I really hated, and while I'm there, I can fix all those interaction bugs I knew we had but could never justify repairing! Because now I *have* to rewrite it!

Redesigning things to match a moving-target platform is really comforting, because it's a ready-made strategy for your company. The truth is, you don't have to think about what customers want, or how to make the workflow smoother, or how to eliminate one more click from that common operation, or how to fix that really annoying network bug that only happens 1 in 1000 times. Those bugs are hard; this feels like freedom. We'll just dedicate our team to "refreshing" the UI, again, for another few months, and nobody can complain because it's obviously necessary. And it is, obviously, necessary. Because your platform has screwed you. Your platform changed for no reason, and that's why your users can't have what they really need. They'll get a UI refresh instead.

And although they are less productive, they will love it. Because of endorphins, or sodium, or whatever.

And so you will feel good about yourself in the morning.

Syndicated 2013-06-12 07:35:30 from apenwarr - Business is Programming

9 Jun 2013 (updated 30 Jun 2014 at 06:02 UTC)

A Modest Proposal for the Telephone Network

You might not realize it, but there's an imminent phone number shortage. It's been building up for a while, but the problem has been mitigated by people using "PBXes", which basically add a 4- or 5-digit extension to the end of your phone number to expand the available range. The problem with PBXes is that they don't work right with caller ID (it makes it look like a bunch of people near each other all have the same phone number) and you can't easily direct-dial PBX extensions from a phone's integrated address book, unless your phone has some kind of special "PBX penetration" technology. (PBX penetration is pretty well-understood, but not implemented widely.)

Even worse: it's no longer possible to route phone calls hierarchically by the first few digits. Nowadays any 10-digit U.S. phone number could be registered anywhere in the U.S., and area codes change all the time.

So here's my proposal. Let's fix this once and for all! We'll double the number of digits in a Canada/U.S. phone number from 10 to 20. No, wait, that might not be enough to do fully hierarchy-based call routing; let's make it 40 digits. But that could be too much typing, so instead of using decimal, we can add a few digits to your phone dialpad and let you use hexadecimal instead. Then it should only be 33 digits or so, with the same numbering capacity as 40 decimal digits! Awesome!
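
(If you want to check that arithmetic, here's a quick back-of-the-envelope sketch in Python - mine, not part of any official numbering plan - computing how many hex digits it takes to match the capacity of 40 decimal digits:)

    import math

    # Smallest n such that 16**n >= 10**40, i.e. how many hexadecimal
    # digits are needed to match the capacity of 40 decimal digits.
    decimal_digits = 40
    hex_digits = math.ceil(decimal_digits * math.log(10) / math.log(16))
    print(hex_digits)         # 34
    print(16**33 >= 10**40)   # False: 33 hex digits fall just short
    print(16**34 >= 10**40)   # True

(Strictly speaking it comes out to 34 hex digits, so "33 or so" is rounding generously. Close enough.)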

It'll still be kind of a pain to remember numbers that long, but don't worry about it, nobody actually dials directly by number anymore. We have phone directories for that. And modern smartphones can just autodial from hyperlinks on the web or in email. Or you can send vcards around with NFC or infrared or QR codes or something. Okay, those technologies aren't really perfect and there are a few remaining situations where people actually rely on the ability to remember and dial phone numbers by hand, but it really shouldn't be a problem most of the time and I'm sure phone directory technology will mature, because after all, it has to for my scheme to work.

Now, as to deployment. For a while, we're going to need to run two parallel phone networks, because old phones won't be able to support the new numbering scheme, and vice versa. There's an awful lot of phone software out there hardcoded to assume its local phone number will be a small number of digits that are decimal and not hex. Plus caller ID displays have a limited number of physical digits they can show. So at first, every new phone will be assigned both a short old-style phone number and a longer new-style phone number. Eventually all the old phones will be shut down and we can switch entirely to the new system. Until then, we'll have to maintain old-style phone number compatibility on all devices, because obviously a phone network doesn't make any sense if everybody can't dial everybody else.

Actually you only need to keep an old-style number if you want to receive *incoming* calls. As you know, not everybody really needs this, so it shouldn't be a big barrier to adoption. (Of course, now that I think of it, if that's true, maybe we can conserve numbers in the existing system by just not assigning a distinct number to phones that don't care to receive calls. And maybe charge extra if you want to be assigned a number. As a bonus, people without a routable phone number won't ever have to receive annoying unsolicited sales calls!)

For outgoing calls, we can have a "carrier-grade PBX" sort of system that maps from one numbering scheme to the other. Basically we'll reserve a special prefix in the new-style number space that you'd dial when you want to connect to an old-style phone, as in the sketch below. And then your new phone won't need to support the old system, even if not everyone has transitioned yet! I mean, unless you want to receive incoming calls.
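
(To make that concrete, here's a minimal sketch of the gateway logic. The "99" prefix, the function name, and the outputs are all made up for illustration - no real numbering plan works this way:)

    # Hypothetical gateway: new-style numbers starting with a reserved
    # prefix get routed to the old network; everything else stays new.
    LEGACY_PREFIX = "99"  # made-up escape prefix for old-style numbers

    def route(dialed):
        """Decide which network a number dialed on the new network belongs to."""
        if dialed.startswith(LEGACY_PREFIX):
            return "old-network:" + dialed[len(LEGACY_PREFIX):]
        return "new-network:" + dialed

    print(route("994165551234"))  # -> old-network:4165551234
    print(route("a1b2c3d4e5"))    # -> new-network:a1b2c3d4e5

(A real gateway would also have to worry about area codes and billing and so on, but the point is just that one reserved prefix makes the mapping trivial.)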

...

Or, you know. We could just automate connecting through a PBX.

Syndicated 2013-06-09 16:08:21 (Updated 2014-06-30 06:02:03) from apenwarr - Business is Programming
