Older blog entries for apenwarr (starting at number 101)

More Compromises

I've written before about the importance of finding non-compromise solutions to problems. Well, here are a few more trick questions to ponder.

Who can you trust?

You will probably find that your answer to this question puts you in one of the following two groups. (You might be surprised to find that the other group really does exist, completely disagrees with your group, and is just as popular as yours.)

  1. Trust must be earned. You won't trust someone to do something correctly until they've demonstrated their ability to do it. Once they've earned your trust, you'll usually trust them automatically to do that job in the future. But that doesn't mean you automatically trust them to do a different job, and so on.

  2. Trust is transitive. If a person is in a position of authority, you trust them implicitly. If a person is trusted by a person with authority over you, then you trust that person as well. For example, someone who works for a manager that your own manager trusts would be trusted by you automatically, so you don't have to worry about whether they'll do a good job; that's someone else's problem, and you can safely assume they're handling it.

The second solution is short-term efficient. The first solution is long-term resilient.

I think I see the non-compromise solution to this one. Do you?

Brilliance vs. Repeatability

Another question that has come up recently has been the question of repeatable processes versus the flash of brilliance.

In many cases, there isn't much of a compromise between the two. A really great novel is never written using a repeatable process (although pretty good mass-produced junk fiction can be and is). Meanwhile, a widget factory runs on repeatable processes, and doesn't require much brilliance once it gets going. (Hopefully there was some brilliance in the original widget design.)

Software, however, just makes a mess of things.

You could summarize the mess as the never-ending conflict between "Software Engineering" and "Computer Science." Software Engineering claims that software can be created faster, better, and more reliably if you can just define repeatable processes and hire a bunch of code monkeys to implement those processes. For some things, like banking applications and warehouse management software, it works; or at least, it works better than trying to do the same thing without your repeatable processes.

Computer Science, on the other hand, is all about the flashes of brilliance and the "aha!" moments. (You might say the same about science in general, but most scientific research nowadays, outside of universities at least, is more about the repeatable processes than the brilliance.)

The really great stuff that we love about computing requires brilliance. The guy who figured out how to do the iPod user interface, or the person who realized that data structures could have functions encapsulated in them, or the person who invented the superposition operator in perl: those people are the ones who really make all our lives better, even as the rest of us try to repeatably write working programs in perl.

Unfortunately, it seems that two totally different social structures are needed to produce repeatability versus brilliant insight. A structure that encourages repeatability will stifle creative insight; but a structure that encourages creativity produces non-repeatable results, almost by definition.

I don't know the non-compromise answer to this one, but I do know the form of the solution: you want a process that repeatably produces brilliant results. If you could do that, you would be unstoppable.

But what on earth would you do with that much output?

3 Feb 2006 (updated 3 Feb 2006 at 08:20 UTC) »
Company Policy, Revisited

Now these guys have the right idea.

And more on design

The same guys have some really interesting writing on various design concepts, including non-software UI design. (That's just one article; browse around to find others.)

UPDATE: Oh man, this stuff is addictive. The Intel "Yes" campaign in Russia and "To be quite candid, it was the first time I was sitting in a car with the Settings menu."

1 Feb 2006 (updated 3 Feb 2006 at 03:37 UTC) »
It Takes Two to Fail

It struck me today how management success is kind of an OR operation. So many things in life are AND: if you do this and this properly, then it works! But if you miss that one step, then boom!

But management is different. On the one hand, if you're a good manager and you align things just the right way, you can take a reasonably good group of people and make them massively successful. (Example: most big companies.) But on the other hand, even if you suck like crazy at management, if the people you manage are really great, they can succeed despite the way you've kind of set them up to fail. And of course, there's every point in between those two extremes.

But what's interesting is that because of this effect, when you do manage to fail (or your project is late, or buggy, or whatever), then there's always someone to blame it on: the person doing the work. Because obviously the person doing the work didn't rise to the challenge, blah blah, and look, it sucks!

I think a proper manager's job is to reduce the challenge, so nobody will fail to rise to it. But maybe not eliminate the challenge entirely, or else it's too boring.

UPDATE: I didn't like the old title. Sorry.

21 Jan 2006 (updated 21 Jan 2006 at 05:02 UTC) »
Choices make things worse

My company just created a new pricing combination called the "small business protection package." The idea is that it includes basically all the features we sell, with an inflated number of user licenses, along with an extended warranty, all at a price that's slightly discounted from the total of all those things put together. Now, it turns out that our resellers and customers just love it, even though actually what they would have bought before was probably less stuff at a lower price.

You've seen this before. It's the fast food "combo" model. People just hate adding up a bunch of small prices and making a big one; it makes them worry that they're not getting a good deal. But when there's one "standard" package with a lot of stuff and a simple price and they save money versus buying things separately, then everything seems easy. (The "save money" thing is mostly just an illusion. They're buying virtual stuff - software licenses - that they wouldn't have bought before anyway, so when we sell more of it and then give a discount, it's mostly the same to us, and the customer often spends more money.)

Anyway, simplicity is worth paying for. Who knew?

On a similar note is a mistake I made a couple of years ago in introducing a concept I called "dial-a-vacation" for developers. The idea is that, on top of your basic vacation allowance, you could trade away part of your salary for an equivalent number of vacation days. It's basically like taking unpaid vacation, only spreading the "cost" of it throughout the year, so you don't go broke when you're actually taking your vacation. Since "normal" vacation is obviously mathematically the same, but less flexible, I thought this would obviously be a benefit.

It's not. People like getting more vacation days because it's an employment "perk" and it's "free." Trading away your salary for vacation days just hurts: you feel like you're losing something, not gaining something, even though technically it's equivalent.

Almost all job benefit programmes work on the same principle. You can give programmers a $20 t-shirt and raise morale way more than a $20 annual salary increase would do. You can buy a $2000 foosball table, spread among 20 people, and seem like a way cooler company than one that got everyone a $100 raise. Everyone wants eyeglasses added to our health insurance plan, even after we explain that eyeglasses aren't really insurance (either you just always need them, or you just don't) and so the "insurance" overhead is just giving money - money that could be your salary - away.

Just like all those examples, "free" vacation is worth a lot more happiness than the same amount in dollars. This literally means that you can be paid less and not have dial-a-vacation, and you'll be happier than if you get paid more and do get dial-a-vacation.

Psychology is strange. Oh well, live and learn.

And I won't even try to explain my plan for buying monthly subway passes in order to feel rich. But trust me, it's exactly the same idea.

Historical vs. Model-based Predictions

We've had some discussions at work lately about time predictions, and how we never seem to learn from our prediction mistakes. We have two major examples of this: first, our second-digit (4.x.0) releases tend to be about 4.5 months apart. And second, our manual QA regression tests (most tests are automated nowadays) take about 2 weeks to complete each cycle, with several cycles per release.

Those two numbers - 4.5 months and 2 weeks - are what I call historical predictions; that is, you simply measure how long an operation has taken in the past, and then predict that it will take that long in the future. As long as you aren't making major changes to your processes (eg. doing more/less beta testing, doubling the number of tests in a test cycle, etc), this is an extremely accurate method of prediction. Just do the same thing every time, and you'll predict the right timeline for next time. Since accurate predictions are extremely valuable for things like synchronizing advance marketing and sales/support training with your software release, being able to predict so accurately is a big benefit.

But historical predictions have a big problem, which is that they're empty. All I can say is it'll take about 4.5 months to make the release - I can't say why it'll take so long, or what we'll be doing during that time. So it's very easy for people to negotiate you down in your predictions. "Come on! Can't you do it in only 4.0 months instead of 4.5? That's not much difference, and 4.5 months is too long!"

To have a serious discussion like this about reducing your timelines - which of course, is also beneficial, for lots of reasons - you can't use historical prediction. Trying to bargain with broad statistics is just expecting that wishing will make your dreams come true. I'd like it to take less than 4.5 months to make a release; that won't make it happen.

To make it actually happen, you have to use a totally different method, which I'll call model-based prediction. As you might guess from the name, the model-based system requires you to construct a detailed model of how your process works. Once you have that model, you can try changing parts of the model that take a lot of time, and see the difference it makes to the model timeline. If it makes the prediction look better, you can try that change and see if it works in real life.

Of course, successfully using that technique requires that your model actually be correct. Here's the fundamental problem: it's almost never correct. Worse, the model almost always includes only a subset of what happens in real life, which means your model-based predictions will always predict less time than a historical prediction.

Now back to the discussion where someone is trying to bargain you down. Your model predicts it'll take 2.5 months to make a release. History says it'll take 4.5 months. You did change some stuff since last time; after all, you're constantly improving your release process. So when someone comes and wants you to promise 4.0 months instead of 4.5 months, you might plausibly believe that you really maybe did trim 2 weeks off the schedule; after all, the right answer must be between 2.5 months and 4.5 months somewhere. And you give in.

We make this mistake a lot. Now, I'm not going to feel too guilty about saying so, since I think pretty much every software company makes that mistake a lot. But that doesn't mean we shouldn't try to do something about it. Here's what I think is a better way:

  • Remember, your model-based prediction is worthless (ie. definitely wrong and horribly misleading) unless it predicts the same answer as your historical prediction. Tweak it until it does.

  • Each time you run the process, if your model-based prediction was wrong, look back at real life and note what parts you were wrong about, then update the model for next time. This might require taking more careful notes when you do things... maybe using some sort of... uh, schedulation system.

  • If you change your processes, start with your previous model (which must match with history!), then adjust it to match your new plan. Use the prediction from that. Remember: almost all changes to the model will not significantly affect the timeline. Most actions are not on the critical path. So you know your model is wrong if almost every change you make has some kind of significant effect. You can suspect your model is getting close if most changes don't have an effect.

  • If, after changing your processes, your model-based prediction was wrong (most common failure: it took just as long as history said it always does), then your model is wrong. Compare the new and old models, and figure out why your change was not actually on the critical path even though you thought it was. Try again.
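The discipline in that list can be sketched in code. This is a toy, not anyone's actual tooling: the phase names and numbers are invented, and a real model would track parallel work and the critical path instead of just summing sequential phases. The point is rule one - scale the model until it agrees with history before you trust any "what if" you run through it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy model-based predictor: a release is modeled as named phases,
// each with an estimated duration in weeks. All names and numbers here
// are hypothetical examples.
public class ReleaseModel {
    private final Map<String, Double> phaseWeeks = new LinkedHashMap<>();

    public void setPhase(String name, double weeks) {
        phaseWeeks.put(name, weeks);
    }

    // Simplest possible model: phases run sequentially, so the predicted
    // time is just the sum. A real model would handle parallelism.
    public double predictWeeks() {
        return phaseWeeks.values().stream()
                .mapToDouble(Double::doubleValue).sum();
    }

    // Rule one from the list above: a model that disagrees with history
    // is worthless. Scale every phase until the totals match.
    public void calibrateTo(double historicalWeeks) {
        double factor = historicalWeeks / predictWeeks();
        phaseWeeks.replaceAll((name, weeks) -> weeks * factor);
    }

    public static void main(String[] args) {
        ReleaseModel m = new ReleaseModel();
        m.setPhase("feature work", 6.0);
        m.setPhase("manual QA cycles", 3.0);
        m.setPhase("beta + fixes", 2.0);
        // The naive model says 11 weeks, but history says about 4.5
        // months (roughly 19.5 weeks), so calibrate before using it:
        m.calibrateTo(19.5);
        System.out.printf("calibrated prediction: %.1f weeks%n",
                m.predictWeeks());
    }
}
```

Once the calibrated model matches history, you can change one phase and see whether the prediction moves at all - which is how you find out whether that phase was ever on the critical path.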

I'm sure lots of people have written entire books about this sort of thing, but neither I, nor most programmers, and certainly not most managers, have read them. What they do instead is just ignore statistics completely and try random things, leading to a horrible pattern: "Darn it, you guys keep releasing stuff and it takes too long! We'd better drop all the weird parts of your process and do something more traditional." Then that doesn't improve the process, and it might even make things worse. But nobody will ever say, "Darn it, we're doing it just like everyone else, but it takes too long! We'd better try something randomly different!"

If you don't understand your underlying model and do what you do for a clear reason, eventually someone will talk you into doing things the formal, boring, and possibly painful and inefficient way, whether it actually helps or not. And you won't have any firepower to talk them out of it.

And that's why you can't just use historical predictions, no matter how accurate they might be. Of course, you can't just use model-based predictions either, because we know they're inaccurate. Instead, the one true answer is the one that keeps both kinds of prediction correct - and agreeing with each other - at the same time.

Assorted commentary

Markham/Richmond Hill still has the best bubble tea.

Blue Mountain is surprisingly non-bad, although I get the impression I must be kinda dumb, because everyone there kept telling me, "But you have better hills out near Montreal, don't you?" Yes, we do. But I'm not there, now am I? And the weather was nice.

Also, I needed an excuse to drive the Pontiac Pursuit some more. Mmm, pulsating. And I have to give it back soon. Sigh.

My failed attempt to defend Java

Flying bejeezus, pcolijn. There are some things I just didn't want to know. I quit.

Also, I think I'm getting a bit too technology dependent. I had to ask Google how to spell bejeezus.

Elections

No, I don't know who you should vote for. But I do know the best campaign tagline ever. "At least in Quebec, there's the Bloc." The rest of you poor saps don't really have much to choose from, do you?

On getting smarter

Since August sometime I've been a very different person. You might not have noticed. But now that I understand everything so clearly, it's very weird to keep talking to and dealing with people who don't understand, or who can't understand, or who refuse to understand. It's particularly eerie to argue with people who were so thoroughly brainwashed by the previous me that they actually thought I was right and now use my own fallacious arguments to explain why I'm wrong. It's like talking to a ghost.

But I'm not crazy. I'm sure of it. It's everyone else who's crazy. I may have to do something actually crazy just to make sure I can tell the difference.

Observations from my Trip to London

1. You can't say "Trip to London" without people assuming you mean London, England, even though I actually mean London, Ontario. For some reason this doesn't happen with Waterloo or even, irony of ironies, New England.

2. The Pontiac Pursuit is the first car I've driven in a long time that continually makes me think "awesome." Apparently due to my complete lack of media input lately, I'd never even heard of it before yesterday. Their web site even talks about its pulsating performance - as if it were a chicken, of all things! I wish I needed a car.

3. After two days spent essentially in tech support for someone else's product, I now hate programmers even more than ever.

3b. Also, Windows sucks much more than I usually give it credit for.

3c. So does everything else, particularly commercial database servers and anyone who has ever written "sleep(10)" for a "wait for operation to complete" function. I suspect those people are often one and the same.

Java constness and GC

pcolijn wrote about some things he likes/doesn't like in Java. I'm going to disagree with one from each category:

constness: I think you have your inheritance backwards. An immutable instance isn't a subclass of a mutable instance where some operations (gag) throw exceptions by surprise. No, in fact, a mutable instance extends an immutable one. In other words, start with the immutable interface, which only has getThis(), getThat(), etc. From that, derive an interface that adds setThis(), setThat(), etc. People require one of the two interfaces on their objects, and all is well, and you didn't really need const after all. The method you suggested - pretending to implement the mutable interface and then throwing exceptions - is like doing exactly the same thing in C++, and just as evil. On the other hand, in C++, a parent class can talk about a function with "const" whatever, while a child can implement the "non-const" whatever, and things will work sensibly. But if you think about it, that's just the same as what you can do in Java.
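Here's roughly what I mean, as a compilable sketch. The Point names are invented for illustration; the layering is the point.

```java
// The immutable view comes first: getters only.
interface Point {
    int getX();
    int getY();
}

// Mutability is the *extension*, not the base case.
interface MutablePoint extends Point {
    void setX(int x);
    void setY(int y);
}

// One concrete class can serve both roles.
class SimplePoint implements MutablePoint {
    private int x, y;
    SimplePoint(int x, int y) { this.x = x; this.y = y; }
    public int getX() { return x; }
    public int getY() { return y; }
    public void setX(int x) { this.x = x; }
    public void setY(int y) { this.y = y; }
}

public class ConstDemo {
    // Callers that only need to read declare the immutable interface, so
    // "constness" is enforced at compile time - no surprise
    // UnsupportedOperationException at runtime.
    static int manhattan(Point p) {
        return Math.abs(p.getX()) + Math.abs(p.getY());
    }

    public static void main(String[] args) {
        MutablePoint p = new SimplePoint(3, -4);
        System.out.println(manhattan(p)); // 7; manhattan() can't mutate p
    }
}
```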

Now for the part about Java you do like: garbage collection. I'm actually a big fan of GC as a concept, because the idea of not crashing all the time because of randomly overwriting memory kind of appeals to me. However, I've learned two important things about GC:

1) Not having to explicitly delete things doesn't actually mean you don't have to explicitly think about object deletion; it just makes you think you don't have to think about it, which is much worse. Hence, where C++ programs tend to have explosions, Java programs (seemingly universally) have nasty memory leaks and no good way to find them (because they're not "leaks" in the usual sense; if nobody had a pointer to them anymore, they would be auto-deleted). They also have non-deterministic destructor behaviour, which is very restrictive. A GC language is good as long as people actually think about object lifecycles, but my experience is they don't, and so GCs don't solve anything. (If you think about object lifecycles to the extent that you have to, then you turn out not to need GC, because your smart pointers will all do the right thing.)

2) I heard that Java object creation/deletion is so slow that people tend to use "pools" of object types that are created/deleted a lot, and explicitly place those objects into and out of the pools. You know what that is? It's bypassing the GC so you can get explicit memory allocation/deallocation. Snicker. Again, this is nothing against GC specifically (which can be quite fast), but it's definitely a sign that all is not right with the world.
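For the record, a minimal version of such a pool looks something like this (class and method names invented). Note that acquire() and release() are just malloc() and free() wearing a disguise:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal object pool of the kind described above: callers explicitly
// acquire and release buffers instead of letting the GC churn through
// short-lived allocations. Not thread-safe; a real one would be.
public class BufferPool {
    private final Deque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    public byte[] acquire() {
        byte[] b = free.poll();
        // Allocate only on a pool miss; otherwise recycle.
        return (b != null) ? b : new byte[bufferSize];
    }

    public void release(byte[] b) {
        free.push(b);  // the explicit "free": the GC never sees this object die
    }

    public int idleCount() {
        return free.size();
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(4096);
        byte[] b = pool.acquire();   // fresh allocation
        pool.release(b);
        byte[] c = pool.acquire();   // recycled: the very same object
        System.out.println(b == c);  // true
    }
}
```

And once you're doing this, you've reinvented explicit allocation, complete with the classic explicit-allocation bug: forget to release() and you leak, release() twice and two callers share a buffer.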

31 Dec 2005 (updated 31 Dec 2005 at 20:28 UTC) »
UI Design and iPod Movie Conversion

guspaz is working on a Windows program called iPodDrop. I don't run Windows and don't need any such thing, but he was looking for suggestions on how to get more people to use it, since the alternatives apparently suck (which is not really very surprising).

Some suggestions:

First, drag and drop has largely been a UI failure. Dragging a document from one open folder to another folder - okay, makes sense. Dragging it from an open folder to a folder *icon*: suspicious, but okay. Dragging it from an open folder to a *program*: forget it, you've just blown your metaphor and lost 99% of your audience. The problem is that people don't take real-life documents and, say, smack them against a toaster to make them do things. When you have a toaster, you operate the toaster, not the toast.

Try adding an option to the Explorer right-click menu instead. (It's easier than you think.) The nice thing about right-click menus is that there's no real-life metaphor for them at all, which seems to upset Steve Jobs but means that people have an open slot in their minds to absorb the concept (eventually). Now that they've absorbed the idea that "right click = give me a list of actions to do on this object," adding new right-click menu items is the way to go. (Supporting drag-and-drop in *addition* to that is perfectly okay.) Right-click is a metaphor I'd actually love to extend to real life. Imagine right-clicking your bread and telling it to "toast." Saves me running around looking for my toaster.
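For reference, the registry edit for a static right-click verb is roughly this. It's a hedged sketch: the menu text and install path are invented, and normally your installer would write these keys rather than shipping a .reg file.

```reg
Windows Registry Editor Version 5.00

; Hypothetical example: adds a "Convert for iPod" entry to the Explorer
; right-click menu for all file types. "%1" is the selected file.
[HKEY_CLASSES_ROOT\*\shell\Convert for iPod]

[HKEY_CLASSES_ROOT\*\shell\Convert for iPod\command]
@="\"C:\\Program Files\\iPodDrop\\ipoddrop.exe\" \"%1\""
```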

Second, Windows users *hate* programs that have no GUI. Haven't you noticed? Millions of people will need to recode videos for their iPod, but maybe 0.1% of them want to use a command line program to do it. This goes beyond the badness of drag-and-drop; if you want users to be happy, you'll need to pop up a window with a few toggles, go and cancel buttons, and most importantly, a completion bar.

Third, as for getting more users for your software, you should see about getting it bundled with some spyware. Spyware is the new DirectX. People download and install spyware on their systems like crazy for some reason, and all you need to do is tag along by being included in the same installer package. Lots of other popular programs, especially ones dealing with "media sharing," are being distributed the same way.

Irony

What Unix does well isn't what people want.

-- Rob Pike

Transcendental Philosophical Musing

wlach: I see the difference between the two views of "idealism" you posted, but they don't seem incompatible, only different. That is, a particular ideal is an idea that (as a philosopher might say) can exist separate from reality; in fact, if ideals are "how things ought to be", reality often doesn't really enter the picture. But there's nothing stopping us from having an idea that is also real, or from having an ideal that can be achieved. The two things aren't identical (not all ideas are ideal), but perhaps one is a subset of the other (are all ideals ideas?).

Anyway, I would argue that the article I linked to earlier is at least relevant to the particular metaphysical goop I was spouting at the time. :)

Quality Assurance

We were terrible, but we thought we were great. It doesn't matter how terrible you are if you think you're great.

-- Yanni

I actually quietly attended the Desktop Architects conference in Portland, but avoided catching up on the mailing list about it until now. Thanks to Burgundavia for linking to the Linus thing. Here are some of my favourite quotes from the discussion.

Non-compromise

The majority of end-users want a simple printer dialog. In fact most people will just hit the Print button without changing any settings. These users are not 'idiots' they just have better things to do than futz around with printer settings. On the flip side I'm sure there are many pre-press publishers who want to tweak and change every setting. The two design goals do not have to be at odds with one another. A good design will satisfy both.

-- Gregory Raiz, Windows developer

Focus/Multifocus

To put it in mathematical terms: "The Intersection of all Majorities is the empty set", or its corollary: "The Union of even the smallest minorities is the universal set".

It's a total logical fallacy to think that the intersection of two majorities would still be a majority. It is pretty damn rare, in fact, because these things are absolutely not correlated.

-- Linus Torvalds, kernel developer
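Just to make Linus's set arithmetic concrete, here's a tiny worked example (feature names invented): three users who each want a majority of the features, yet no single feature pleases everyone.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

// Three users each want 2 of 3 features - a majority - but the
// intersection of those three majorities is the empty set.
public class MajorityDemo {
    public enum Feature { SIMPLE_DIALOG, EXPERT_SETTINGS, SCRIPTING }

    // True if at least one feature appears in every user's wishlist.
    public static boolean anyFeaturePleasesAll(List<Set<Feature>> users) {
        Set<Feature> common = EnumSet.allOf(Feature.class);
        for (Set<Feature> wants : users) {
            common.retainAll(wants);  // intersect, one user at a time
        }
        return !common.isEmpty();
    }

    public static void main(String[] args) {
        Set<Feature> alice = EnumSet.of(Feature.SIMPLE_DIALOG, Feature.EXPERT_SETTINGS);
        Set<Feature> bob   = EnumSet.of(Feature.EXPERT_SETTINGS, Feature.SCRIPTING);
        Set<Feature> carol = EnumSet.of(Feature.SIMPLE_DIALOG, Feature.SCRIPTING);
        // Every user holds a majority position, yet nothing survives
        // the intersection:
        System.out.println(anyFeaturePleasesAll(List.of(alice, bob, carol))); // false
    }
}
```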

Amplify

When user interfaces means that something CANNOT BE DONE, it's not about "usable design" any more. At that point, it's about UNusable design.

Any Gnome people who argue that it's about "usability" have their heads up their asses so far that it's not funny. I've argued with them about this before, and I know others have too, and mostly given up.

"Usability" is an issue only if you can do something at all. But if you can't do the thing at all, it's pointless to talk about usability: the thing is BY DEFINITION not usable if it cannot be used for a specific task.

-- Linus Torvalds, again

Non-Simplify

If I have an overall point here, it's that all of us who are maintainers should be willing to make choices and take the heat, and that it breaks our ability to make good software if we start thinking "all things to all people," or "I can't do anything, so I'll just punt or bow to the flames." Either we have something to contribute due to our professional skills, or we don't. Users (like Linus) vote with their feet on whether we contributed the right things for them personally, or focused on someone else instead, or just failed to do anything useful for anyone at all.

If nobody uses my software, I want it to be my fault. And the same if they do use it. Why else would I bother trying to be better or worse at my profession?

-- Havoc Pennington, designer

Not Taking Things Too Seriously

Seriously, this has been entertaining. And I think Linus has made a lot of good points that certain people in the GNOME world should make an effort to take to heart. I found myself nodding rhythmically as I read most of his mails, even if he was being a big jerk half the time.

-- Nat Friedman, person who doesn't take things too seriously
