Older blog entries for apenwarr (starting at number 482)

20 Feb 2009 (updated 23 Feb 2009 at 00:06 UTC) »

Apparently, nobody writes tokenizers like this

Call me crazy, but I've never really seen the point of so-called "parser generators" and "lexical analyzer generators" in real life. Almost any file has syntax that's so simple, it's easy to just parse it yourself. And languages that are more complicated or have tight parser performance requirements - like C++ compilers or Ruby interpreters - tend to have hand-rolled parsers because the automatic parser generators can't do it.

So who benefits from automatic parser generators? I don't know. I feel like I'm missing something.

This feeling came up again the other day when I found I had to parse and produce some XML files at work - in Delphi. Having seen lots of advice in lots of places that "the first thing a new programmer does with XML is to try, and fail, to write his own XML parser," I was hesitant. Okay, I thought. Why not look into one of those well-known, fancy-pants XML parsers that will surely solve my problem in two seconds flat?

Well, I looked into them. Much more than two seconds later, I emerged, horrified. How can you guys possibly make parsing a text file so complicated? Why, after adding your tool, does my project now seem more difficult than it did when I started?

I still don't know. Look guys, I really tried. But I just don't understand why I'd want to use a DTD. Or twelve layers of abstraction. Or the "structured" way you completely reject (with confusing error messages) almost-but-not-quite valid XML files, in clear violation of Jon Postel's robustness principle.

So I broke down and wrote an XML parser myself. In Delphi. In about 500 lines. In an afternoon. I'm sure I left out major portions of the XML spec, but you know what? It parses the file the customer sent me, and the Big Fancy Professional XML Library didn't, because it said the file was invalid.

I guess that makes me a clueless newbie.

But back to tokenizers

As an odd coincidence, someone I know was doing some (much less redundant) work on parsing a different file format at around the same time. As anyone who has done parsers should know, most parsers are divided into two main parts: lexical analysis (which I'll call "tokenizing") and parsing.

I agree with this distinction. Unfortunately, that seems to be where my formal education ends, because I just can't figure out why lexical analysis is supposed to be so difficult. Almost all the lexical analyzers I've seen have been state machines driven by a single main loop, with a whole bunch of if statements and/or a switch/case statement and/or function pointers and/or giant object inheritance hierarchies.

Sure enough, the person I was talking to was writing just such a tokenizer in python - with lambdas and all the rest.

The problem is I just don't understand why all that stuff should be necessary. Traditional lexical analysis seems to be based on the theory that you need to have a single outer main loop, or you'll be inefficient / redundant / impure. But what I think is that loop constructs are generally only a single line of code; it doesn't cost you anything to put loops in twelve different places. So that's what I did.

I suppose that makes me a newbie. But it works. And my code is more readable than his. In fact, when I showed him a copy, he was amazed at how simple it is. He actually called it brilliant. Seriously.

To be honest, I still feel like I must be missing something. And yet here we are.

...so without further ado, my XML tokenizer, in 62 lines of Pascal. For your convenience, I have highlighted the blasphemous parts.
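
(A note on the excerpt: it uses a TPwToken record and a set of tt* token kinds whose declarations aren't shown. Reading the names off the code, they'd be roughly the following; treat this as a sketch, not the exact declaration.)

type
   TPwTokenType = (ttEof, ttLt, ttGt, ttEquals, ttSlash, ttQuestionMark,
                   ttString, ttWhitespace);

   TPwToken = record
      ttype: TPwTokenType;   { kind of token }
      val: string;           { raw text; left empty for single-character tokens }
   end;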

function iswhite(c: char): boolean;
begin
   result := c in [' ', #10, #13, #9];
end;

function important(c: char): boolean;
begin
   result := c in ['<', '>', '=', '/', '?', '"', ''''];
end;

function next_token(s: string; var i: integer): TPwToken;
var
   start, max: integer;
begin
   start := i;
   max := length(s)+1;
   result.val := '';

   if i >= max then begin
      result.ttype := ttEof;
   end else if important(s[i]) then begin
      if s[i] = '<' then begin
         result.ttype := ttLt;
         inc(i);
      end else if s[i] = '>' then begin
         result.ttype := ttGt;
         inc(i);
      end else if s[i] = '=' then begin
         result.ttype := ttEquals;
         inc(i);
      end else if s[i] = '/' then begin
         result.ttype := ttSlash;
         inc(i);
      end else if s[i] = '?' then begin
         result.ttype := ttQuestionMark;
         inc(i);
      end else if s[i] = '"' then begin
         result.ttype := ttString;
         inc(i);
         while (i<max) and (s[i] <> '"') do inc(i);
         inc(i);
         result.val := copy(s, start, i-start);
      end else if s[i] = '''' then begin
         result.ttype := ttString;
         inc(i);
         while (i<max) and (s[i] <> '''') do inc(i);
         inc(i);
         result.val := copy(s, start, i-start);
      end else begin
         assert(false, 'char isimportant but unrecognized?');
      end;
   end else if iswhite(s[i]) then begin
      result.ttype := ttWhitespace;
      while (i<max) and iswhite(s[i]) do inc(i);
      result.val := copy(s, start, i-start);
   end else begin
      result.ttype := ttString;
      while (i<max) and (not important(s[i])) and (not iswhite(s[i])) do inc(i);
      result.val := copy(s, start, i-start);
   end;
end;
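
Calling it is just another loop wherever you happen to need one. A minimal sketch of a caller, where handle_token is a hypothetical stand-in for whatever the parser actually does with each token:

procedure parse_demo(s: string);
var
   i: integer;
   tok: TPwToken;
begin
   i := 1;                       { Pascal strings are 1-based }
   repeat
      tok := next_token(s, i);
      if not (tok.ttype in [ttWhitespace, ttEof]) then
         handle_token(tok);      { hypothetical: do something with the token }
   until tok.ttype = ttEof;
end;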

Update (2009/02/22): Reddit discussion of this article.

Syndicated 2009-02-20 20:06:11 (Updated 2009-02-23 00:06:04) from apenwarr - Business is Programming

Storage so reliable you don't need backups

My friend from high school and one-time employer, David Slik, made a presentation about the company he founded and still works for. Bycast makes high-end clustered "cloud" storage systems that are apparently so reliable that some of their enterprise customers have stopped making backups altogether... after thoroughly testing Bycast's fault recovery mechanisms, of course.

Watch him claim this and other amazing things in his presentation about Bycast at the SNIA Cloud Storage Summit.

Syndicated 2009-02-14 22:56:13 from apenwarr - Business is Programming

The price of money in China

People keep posting articles like Why China Needs US Debt. I think most of us know enough to disregard random opinion pieces written by lobbyists, and most of us who read such an article will get a "wait, that can't be right" feeling. But what exactly is wrong? China obviously does need us, right? Or why would they trade with us?

I first started to understand the problem while I was watching the Canadian federal election debates and someone (it might have been the Green Party leader) said something like, "We have to cut back on our oil exports! It's killing Canada's manufacturing industry!"

...and I did a double take. Wait. What?

I had to look into it a bit before I understood. What was happening was that the increased oil prices were causing a flood of activity into Canada's Oil Sands projects, and thus a massive increase in oil exports. Increased exports were raising the value of the Canadian dollar (which, importantly, is not pegged to any other currency). A higher Canadian dollar makes it harder for people from other countries to buy Canadian stuff: not just oil, but anything. And unlike with oil, our other industries didn't have a massive natural (read: Canada's really big) competitive advantage. Which means that if our oil exports expand massively, it kills our manufacturing sector.

The success of one industry, unrelated except by trading in the same currency,(1) can harm another industry. And that realization, to me, was a new and important one.

Now, back to China. Their currency, by virtue of being pegged to the US currency, is essentially the same as the US currency. What does that mean? Success in exports in one sector (China manufacturing) can damage the market in another sector (US manufacturing) even if they manufacture totally different things, simply because the successful market artificially raises the prices of the unsuccessful market.

Now, pegging your currency can be kind of expensive. China does it by stockpiling truckloads of US dollars. Well, more precisely, they buy US debt, which is essentially the same thing. What this really means is that China takes much of the profit from its exports and mails it back to the US (as "debt"), so that the US can afford to buy more Chinese stuff.

In the article I linked to above, the claim is that China needs US debt to keep increasing, because there's simply nothing else in the world big enough to spend all those US dollars on. And that's true, in a sense, if you believe that money has intrinsic value. Of course, China is smart enough to know that it doesn't.

...which is where it gets even stranger.

Even though China knows money is worthless, they keep shipping their perfectly valuable manufactured goods to us in exchange for worthless pieces of paper.(2) How dumb is that?

Not dumb. Brilliant.

Our whole theory of economics is based on two axioms, one of which is that human wants are unlimited. But we're starting to figure out that's not really true. As a society, we're slowly realizing that more consumption doesn't lead to more happiness. So what does?

For a lot of people, maybe the secret to daily happiness is just this: a stable job and the feeling that you're doing it well and helping society.

By exporting stuff to us by the crapload - and "oh darn, poor us, we're such victims" denominating their wealth in US dollars - they ensure that they have jobs and happiness. We're the helpless, unproductive, soulless consumers.

Call it victory by superior philosophy.(3)

Footnotes

(1) Of course, our manufacturing industry also uses a lot of energy, and high energy prices are bad for them too. But that's true for everyone's manufacturing industry, so it's not automatically a competitive disadvantage.

(2) Thought experiment: imagine China as a black box with inputs and outputs. From the point of view of China, sending us useful goods (which we'll use up and then dump in our landfills) is a lot like just taking those goods and dumping them in the ocean. As far as China is concerned, nothing would be very different if all the ships just sank before they arrived here.

(3) It's a strange war, though: you don't have to worry about them invading us if they win. What would they steal? Our consumers?

Syndicated 2009-02-14 21:25:23 from apenwarr - Business is Programming

Reading your email is not a matter of *luck*

I just got a notification from Air Canada telling me that my flight had been changed. Okay, whatever, flights change sometimes, although I note that Air Canada flights from Toronto to Thunder Bay change a little bit more often than normal levels of sanity might imply.

But what annoyed me about their email was this:

    Unfortunately, replies to this e-mail will not be read.

Oh, really? Unfortunately? It isn't unfortunate at all. Unfortunate is something like this: "Unfortunately, we had to change your flight." You know, to explain away bad luck, whether it be bad luck in flight scheduling technicalities or bad luck in choosing your management team.

What they really mean is, "Because we don't like you and don't care about the quality of our customer service and feel that our emails to you are much more important than the opposite, replies to this email will not be read." It would also be accurate to continue with, "And if you try to contact us by phone, we'll make you wait on hold for half an hour and then, unfortunately, drop the call because we and/or our communications devices are incompetent."

(Every time I complain about Air Canada, I feel bad, because Westjet is much better in every way. I don't know if they read their emails either, but I do know that they answer the phone. Unfortunately, there's no sane way for me to get from London, Ontario, to Thunder Bay, Ontario via Westjet.)

Syndicated 2009-02-11 17:07:44 from apenwarr - Business is Programming

Setting up a default/global mail account in database mail

Hi, Google. You kind of failed to help me out earlier when I was asking about "how to set a global mail profile for database mail in Microsoft SQL 2005." Here's what I wish you had said:

First of all, "Database mail" ("DBMail" or "Sysmail") is not the same as "SQL mail" ("SQLMail"). They're both stupid and overly complex, but DBMail is newer and slightly less stupid.

SQLMail uses an installed MAPI provider on your system to send mail, which means you need such a thing, possibly Outlook. DBMail apparently ignores your MAPI provider entirely. So if you find an article that says you need to install Outlook first, just ignore it; it's not true.

First, enable dbmail:

sp_configure 'Database Mail XPs', 1

RECONFIGURE

Then, create a dbmail account and profile:

EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'TestAcct',
    @description = 'Mail account for use by all database users.',
    @email_address = 'test@example.com',
    @display_name = 'Test Server',
    @mailserver_name = 'smtp.example.com'

EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'TestProf',
    @description = 'Profile used for administrative mail.'

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'TestProf',
    @account_name = 'TestAcct',
    @sequence_number = 1

Next, you can set that dbmail profile as the "default profile" ("global profile") for all users (ie. the "public" group):

EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @principal_name = 'public',
    @profile_name = 'TestProf',
    @is_default = 1

And finally, try sending a test message:

EXECUTE msdb.dbo.sp_send_dbmail
    @recipients='test@example.com',
    @subject='test',
    @body='test'

And may I never have to look this up again.

Syndicated 2009-02-04 20:15:30 from apenwarr - Business is Programming

Some brief responses to my critics

As you've probably noticed by now, my journal doesn't allow comments. This is for two reasons: first, because the software it uses is kinda basic (I like it that way!) and simply doesn't support them; and second, because it takes a lot of time to delete spam and stupid comments, and this increases the threshold of entry into the discussion.

Now, that doesn't mean I don't "enjoy" the comments occasionally. Sometimes my stuff gets picked up on syndicated sites and collects some discussion. Just so nobody gets the idea that I don't care, I'll summarize a few of my responses.

...

ext2resize sucks, but apparently so do I, and also a lot of other things too
(original) (reddit discussion)

Oh, right!! Geez, I'm such an idiot. I obviously forgot to read the man page, which is how I missed the "--dont-destroy-all-my-data" option.

...

A little secret that will make the world fall apart
(original) (ycombinator discussion)

Dear "bring back the gold standard" people: Gold is also intrinsically worthless. I know this is true because I don't own any and I don't care, and if I had some, it would affect my life precisely not at all. The gold standard worked because people believed it would work, just like any other monetary system.

Also, being rare does not make you valuable. OMG! Smallpox is rare!! Sign me up!

...

Tracking an entire Windows system inside Git
(original) (ycombinator discussion) (reddit discussion)

It seems nobody was particularly able to criticize this article because they were stunned by my insanity. I aspire to this level of achievement in all my writing.

(But to the guy on reddit who thinks NTFS is faster than ext3: I use them both on a daily basis. I didn't do any pro-style fancy-pants double-blind study benchmarks, but... try copying 1000 small files sometime. The one that does it more than twice as slowly loses. I'm just saying.)

Thanks to everyone who writes to me or on the web about my articles, as usual. You're all great. No, that doesn't mean I'll be enabling comments.

Syndicated 2009-01-29 18:27:04 from apenwarr - Business is Programming

21 Jan 2009 (updated 11 Feb 2009 at 18:03 UTC) »

Tracking an entire Windows system inside Git

I am often accused, sometimes by myself, of being a complete nutcase. This is one of those times.

"But wait!" I say to myself. "Sure, I might look crazy, and I might be crazy, but don't you at least agree that there might be a point to all this?"

I look back at myself suspiciously. "And if there is?"

"Well, that would mean..."

"Never! Don't even say it! It's all nonsense! You're just trying to make me go along with another one of your insane schemes!"

...

Ahem.

Anyway, yes, I did it. I put all of Windows under git version control.

You see, I installed Windows 98 inside Win4lin Terminal Server (the old Win4lin, before the useless qemu-based "Win4lin Pro" came out). To do it, I had to downgrade my Linux kernel to 2.6.12.4, the last one that Win4lin ever made a patch for. But that's no big deal; it's an old machine anyway. It works fine with the old kernel, even in Debian Etch.

Now, Win4lin (the old one, the only one that matters) has the little-known but extremely useful property that it shares its filesystem directly with the Linux host system. That is, unlike VMware and other "pure" virtualization systems that use "disk images," the files in your Win4lin system map exactly to files in a subdirectory on your Linux system, usually ~/win. So there are files like ~/win/autoexec.bat, ~/win/windows/explorer.exe, and so on.

In the olden days, this was nice primarily because it meant the virtual Windows system didn't need to have its own disk cache. Also because Linux's disk cache and filesystem are fantastically more efficient than anything in Windows, by a very large margin. (Trust me, I've done comparisons. I'm sure other people have too. Maybe they just don't publish the results because they don't look believable enough.) Oh, and of course, you can access files on your Linux system without using Samba, which means things go way faster.

So those are all reason enough to use Win4lin. Or were, in the olden days. Nowadays, Windows 98 is looking a bit old, and the old Win4lin doesn't support Windows NT-based systems (like 2000, XP, and Vista). So to tolerate the limitations of Windows 98, you need a pretty good reason.

This week I found that reason: git!

I've been working on a project that requires me to develop plugins that are backwards compatible with old versions of MS Office, perhaps as far back as Office 97. I also need to test with all the newer versions: 2000, XP, 2003, and 2007. So here's the thing: all those versions, except 2007, work just fine on Windows 98, and Microsoft is really good at backward compatibility. So if I make a plugin for Office 97 on Windows 98, it should run with (close to) no problems on newer platforms. I should be able to just do a cursory check every now and then to make sure.

So, I thought, win4lin should be a good system to check all the old versions on. Then if I throw in a VMware with XP and one with Vista, I should be all set.

After setting it all up (which was admittedly a bit painful), I realized just how efficient Windows 98 is... at least compared to later versions. Did you know a base install of Win98 is less than 100 megs? Why, I have bigger source trees than that lying around nowadays.

...source trees... hmmm...

I had to try it, of course. I went into ~/win, typed "git init", and "git add .", and "git commit". Ta da, a working git repository with my fresh Win98 install.

Then I created separate branches, one for each version of Office, and installed them one by one. And now I can easily test new versions of my plugin: "git checkout office2000; win" or "git checkout office97sr2; win".

Now, the final trick will be to get this whole system running inside VMware. If that works, then the major limitation on this setup - the old kernel that will surely be missing a necessary driver eventually - goes away. I'll be able to use this setup forever to test Office plugins up to Office 2003.

Unfortunately, I can't advise you to try to duplicate my setup. I happened to have a valid Windows 98 license and a valid Win4lin license, neither of which you can buy anymore, and a collection of valid MS Office licenses acquired over time (including the most recent ones via MSDN).

In fact, the rarer Windows 98 licenses become, the more distinctive my amazing setup will make me. Bow down to my power, lowly normal people!

...

And all this makes me think of something I should add to my thresholds list: the day when a Windows XP install is "small enough" to put under version control.

In other news, ReactOS is looking surprisingly promising lately.

If I get a virus, I can 'git revert' it.

Syndicated 2009-01-19 23:16:57 (Updated 2009-02-11 18:03:19) from apenwarr - Business is Programming

19 Jan 2009 (updated 19 Jan 2009 at 23:03 UTC) »

Smartcards, PINs, cryptography, and open standards

dcoombs asks a question about how PINs are used in the fancy new smartcard-enabled Visas vs. Mastercards.

Specifically, he notes that you can change your Visa PIN over the phone, which suggests that the PIN is stored on your bank's servers, not on the card itself. (He also notes that you don't have to store it on the card either; you can encrypt the signing key on the card, so the PIN is never stored at all, anywhere.)

As it happens, I've had some occasion to look into credit card payments in the past. (I do work at a banking software company, after all.) So while I didn't know the answer to the question, I knew where to look.

Where to look is EMVCo, the Europay Mastercard Visa Company, which publishes the EMV Payments Specification. Conveniently for our purposes, you can actually download that very specification from that very link, and learn more than you ever wanted to know about the communication protocol used in payment cards.

Now, the spec is long and boring, so I used the magic of full-text search to find what I was looking for. I alert you to section 5.2.6 of Common Payment Application Specification v1 Dec 2005.pdf (oh yes!), which discusses the various "Cardholder Verification Methods (CVMs)" that are used to... verify cardholders.

From this section, you discover the terms "offline PIN" and "online PIN," which turn out to be what you might expect. Each card identifies its preferred CVMs. The former one means that the card checks the PIN by itself; the latter means that the PIN gets checked by the bank. It appears that your card could require multiple CVMs, although I was too lazy to read in enough detail to be sure of that.

So anyway, the "insecure" method dcoombs describes as being used by his Visa can definitely exist. But I guess we already knew that because it exists.

More interesting is the "more secure" method (offline PIN) presumably used by his Mastercard. The real question is: are they really using offline PINs, or do they just not let you change your PIN over the phone? I don't think we can tell, unless we construct a terminal according to the specs and ask our terminal to read the CVM list from the card :) So we don't really know if Mastercard is "more secure" than Visa; they just don't make it obvious. On the other hand, the spec says they could be "more secure" if they wanted; that feature exists too.

Now, I've been "quoting" the terms "more secure" and "insecure" above. The reason is that I suspect both methods are perfectly fine, and (as we'd hope!) vastly better for security than the old magstripe systems.

The key feature of a smart card is not actually that it keeps your PIN secure. Banks, I suspect, have rightly observed that keeping your PIN super-secret is not really going to happen. There are just too many ways to steal it.

For example, a common form of credit card fraud nowadays is to have fake card readers where they swipe the card and you enter the PIN, and it records the PIN and card number before forwarding it on to the "real" reader device that does the transaction. There is no way to prevent such a system from stealing your PIN; the only option would be to carry around your own keypad for entering your PIN, because you know that keypad isn't hacked... but nobody wants to do that, so forget it.

The other common way to steal your PIN is to watch you type it into a bank machine. Trust me, you're not as secret as you think you are. Or even if you are, the next guy won't be.

So let's accept that your PIN is really not that secure. What can we do?

Well, we can make it really hard to steal your credit card number. This is what smartcards do. As far as I know, the only way to steal the encryption key directly out of such a card is to do some awfully weird stuff to the card (X-rays, super-slow low voltage analysis, etc). Nobody in a corner store or restaurant is going to get away with doing that stuff to your card without you noticing, so you're pretty darn safe. When your card authorizes a transaction, it generates an authorization key for only that one transaction; it never reveals the card number itself, so a card reader machine can't steal it.

You could reverse-engineer your own card, but it wouldn't accomplish anything; if you really need to copy your own card, just ask your bank for another copy. (This problem is why the original "as good as cash" smart card idea wasn't so great. They carry around money and do transactions without the help of a bank - which means that if you can hack your own card, you have a license to print e-money. You don't want to give people incentives like that.)

So the reality is that as long as you don't tell your PIN to everyone, then the probability that someone both knows your PIN and steals your physical card (since they can't copy it) is extremely low.

The remaining question is whether it's secure to let people change their PIN over the phone. Well, nothing on the phone is very secure. But interestingly, even that isn't a big deal; they still need your physical card to make a transaction. They can steal your physical card and go change the PIN over the phone; in that case, they'll need to confirm some personal information. That seems like the most likely attack vector, but it only works if they manage to steal your physical card, which you'll probably notice pretty fast.

Also note that if all this analysis turns out to be wrong, they can just issue a new card that demands offline PIN and disables online PIN. Or vice versa, if it turns out there's something wrong with the offline PIN implementation(1) but online PIN is secure after all.

All in all, I think they did a pretty good job of it.(2)

Footnotes

(1) I can think of one way that offline PIN would turn out to be less secure than online: remember, a PIN is typically only four digits. Four digit passwords are stunningly insecure, protected only by the fact that these systems will shut down if you guess wrong more than n times, where n is a small number like five. But if you steal and hack someone's card, you can read out the key directly, and simply try decrypting it with every possible PIN (all 10000 of them); there's no lockout feature. Even if your PIN isn't "stored on the card," it's still as good as there. You're potentially better off having the card in one physical location and the PIN in another.
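
(To put 10000 in perspective: exhausting a four-digit PIN space is a trivial loop. A sketch, where decrypt_with_pin and key_blob are hypothetical stand-ins for however the card material is actually stored:)

var
   pin: integer;
   guess: string;
begin
   for pin := 0 to 9999 do begin
      guess := Format('%.4d', [pin]);            { '0000' .. '9999' }
      if decrypt_with_pin(key_blob, guess) then  { hypothetical check }
      begin
         writeln('PIN recovered: ', guess);
         break;
      end;
   end;
end;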

(2) On the other hand, did you know that EMV (smartcard) support is optional in the fancy new contactless cards? Basically, EMV support is independent of contactless support. You can have either, neither, or both. Contactless payments are a great idea, but without EMV too, people could actually copy your credit card by passing a reader near your wallet. Crazy. I don't know for sure if this was ever deployed, but if the standard exists, I guess it was; if you have a contactless card (like Mastercard Paypass) without a smartcard reader on it, it's probably this insecure kind. Disclaimer: I am not an expert on this, I just skimmed some standards. Anybody who can confirm/deny, please send me an email.

Update (2009/01/18): Adrian wrote to say that he's tried PC Financial Mastercard and Washington Mutual Mastercard. Both have Mastercard PayPass (the contactless payment system) but no smart card. So that's a lovely security update.

Update (2009/01/18): ppatters wrote to note that various methods (X-rays, low voltage, cold, etc) that used to work will nowadays trigger self-shutdown sequences as an anti-reverse-engineering measure. The question then is: what's more likely, that someone will find a new method that still works on smart cards, or that someone will break through your bank's firewall and steal a list of PINs? Beats me.

Syndicated 2009-01-19 17:59:42 (Updated 2009-01-19 23:03:46) from apenwarr - Business is Programming

16 Jan 2009 (updated 11 Feb 2009 at 18:03 UTC) »

Avery's strongest ever book recommendation

I mean it.

You really should read The Collapse of Globalism and the Reinvention of the World, by John Ralston Saul.

I've been a fan of John Ralston Saul for about 10 years, since I saw him speak at the University of Waterloo. I picked up this particular book a few weeks ago at Chapters on clearance discount for $10. It seems to be very unpopular; it only has one review on Amazon, and that one looks kind of fake.

And that's too bad. The book's only fault is that it sticks to the facts, backs them up really well with more facts, lots of references, and a good bibliography, and isn't sensationalist in the least. It puts things in an excellent historical perspective, pointing out interesting truths like:

  • This is not the time in history where the borders have been the most open (the colonialist days had lots of "free trade" inside the huge empires);

  • Global economic growth rates have been much less in the last 30 years (with globalization) than the preceding 25 (with lots of border controls);

  • Political nationalism is on the rise despite the huge drop in trade tariffs;

  • China and India have been doing pretty well and pulling up the global average growth rate; entire continents (ie. Africa) have had negative growth in the last 30 years compared to positive growth before that;

  • Governments still have the power to control corporations, but they've given up that power willingly;

  • Countries (eg. Malaysia) that closed their borders and pegged their currencies have been more successful than countries that didn't;

  • Hedge funds are out of control and making a terrible, very risky mess by undermining the financial system (this was back in 2006!);

  • Globalism is already dead and declining rapidly, but with little fanfare (the WTO protests reached their height years ago, and for good reason);

  • Terrorism and guerilla warfare have to exist because conflict still exists but normal warfare is obsolete.

And best of all, he makes all of the above points way more clearly than I ever could.

I admit it, the book is pretty dense and hard to read. I read another business book right after this one, and it had about the same number of pages and the same font size, but took about 20% as long to slog through.

But if you actually want to understand the economy and what's been going on in the world, it's worth it.

Syndicated 2009-01-14 21:43:45 (Updated 2009-02-11 18:03:19) from apenwarr - Business is Programming
