Older blog entries for mx (starting at number 80)

6 Oct 2004 (updated 6 Oct 2004 at 20:08 UTC) »
Heavy requirements suck - I figured out last month that software development processes are a tool for managing the business of building software. Processes should be orthogonal to the act of envisioning and crafting software; when they are coupled to the act of building it, they trip up the focus on the end goal -- building quality software. Not that processes themselves are bad, but they become an unbearable burden when tightly coupled or overweight. In fact, a heavy set of processes attached to a software project will result in bad software, and likely project failure.

The problem is that the business of developing software is difficult. Processes are applied to mitigate the various risks, with the goal of optimizing the business and controlling various software qualities. Making software a business, though, makes it harder to make the software itself (the business gets in the way, obscuring the goal). This requires a careful balance between being a business and making software.

The concept of agile development is to decouple the process as much as possible: use the good bits, but allow the crafting of the software to be done with minimal impediments. Agile development and orthogonal processes require that the business trust those crafting the software.

I think this makes the prospect of outsourcing development more difficult, as trust is harder to find. Interfacing two companies usually requires a great deal of business process, in addition to any mechanisms required to interface disparate software teams and product thinkers. All that process tends to get tangled into the craft of building things, to the point that software products become the output of the process.

It is a terrible thing for software when it becomes the bastard child of business concerns. Applying techniques that are focused on business ideals, like symmetrical decomposition (which makes traceability easier), makes it difficult to see how particular features are meaningful to the end vision. In fact, the vision is usually lost in the numbing depths of detail.

Software development is complicated, and records of domain (and design) decomposition are useful -- but these things need to reflect the vision and passion of the application concept. Decomposing a problem using a process like the Rational Unified Process results in thousands of artifacts that all look the same. Any concept of importance is lost, passion is whitewashed away, and vision disappears.

There needs to be some balance towards making a good product, rather than meeting the business need of completing every point of decomposition (to prove the completeness of a project phase). The whole point of decomposition is that you can recompose the ideas into something that resembles the initial concept -- so the further the decomposition takes you from the idea, the more work you need to do to get back to it.

This is one of the reasons the Open Source approach is so productive: business concerns are removed from development. Many OSS projects apply various processes to their efforts, but they're generally agile, used to improve particular aspects of what they do, and rarely heavy.

Why I Hate US Politics

  1. A country is *much* more than its economy and collective greed
  2. A country is more than its liberations, war crimes, and power-mongering
  3. A country is more than its rich, corrupt, greedy elite
  4. A country is more than its sometimes-deserved scars
  5. One country is no more important than another (or all others combined)

I wish I wasn't such an optimist.

I don't write often enough, either here or at my always-in-development weblog. Maybe if I finish the software behind it, I'll write more often ;-)

Outsourced Hardware - More failures at work, this time with our main UPS. It has a dead cell, which will likely cost us over $1k CAD (arg). At least we know now what was causing the server reboots. It took a week of weird failures for us to catch it, as the UPS wasn't logging the problems.

So we're looking seriously at outsourcing our hardware. Not our IT, of course, but we're looking to rent rackspace and hardware somewhere (like at Dreamhost or Rackspace), as our fleet is aging and we're more interested in writing great software.

Hardware/hosting has become a commodity, and the companies that are good at it are *really* good. I've worked with one host for 5+ years now; they've got their stuff together, and they understand how it works. Our own IT guys will take years to get where most good vhost admins already are, in terms of planning hardware, network, and configuration.

It helps to work at the larger scale that the vhosts manage ... we only have ~30 servers here (each running only a few services), which could really be replaced by a fraction of that. Most vhost farms have dozens of servers running thousands of users each ... which requires actually knowing what you're doing, or learning it. I've seen the difference between vhost farms that know their stuff and those that don't (and it ain't pretty).

BenderBlog - I'm getting closer with the current version of bender ... it's running my test site, and I'm hoping to switch it live soon. It won't be complete, of course, but it will do enough on the front-end to allow people to view the weblog, projects, linklog and articles. The back-end will still require some twiddling, but that's ok for now.

The docutils-style parser (magicmarkup) is coming along well too, and is part of the bender prototype. I ended up with a data-driven recursive-descent parser, despite aiming for something a lot simpler. The parser needed the context and depth (in the DOM) to properly represent various types of documents.

The data-driven part was fun too, and a future version will be able to load parts of its parse tree on demand (at startup, or based on special document parts). This will allow the parser to be trained and customized on the fly, something a lot of the plain-text -> formatted-output parsers seem to miss. I'm hoping to support html, xml (for rss), and ps (or latex) for the first version. So far it's limited to html, though.
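To make the data-driven recursive-descent idea concrete, here's a minimal sketch in Python. The rule table, node shape, and markup syntax are my own illustration, not bender's actual code; the point is that the grammar lives in data, and the recursion carries context (depth) so each node lands in the right place in the tree.

```python
# Toy data-driven recursive-descent parser: the grammar is a table,
# and the parser recurses on indentation depth, so every node knows
# its context in the resulting tree.

# Rule table: line prefix -> node type (illustrative, not bender's rules).
RULES = {
    "* ": "bullet",
    "> ": "quote",
}

def parse(lines, depth=0):
    """Parse indented plain text into a list of {type, text, children} nodes."""
    nodes = []
    i = 0
    while i < len(lines):
        raw = lines[i]
        indent = (len(raw) - len(raw.lstrip(" "))) // 2
        if indent < depth:
            break  # line belongs to an enclosing node; stop this level
        text = raw.strip()
        ntype = "para"
        for prefix, t in RULES.items():
            if text.startswith(prefix):
                ntype, text = t, text[len(prefix):]
                break
        # Recurse: deeper-indented lines become this node's children.
        children, consumed = parse(lines[i + 1:], depth + 1)
        nodes.append({"type": ntype, "text": text, "children": children})
        i += 1 + consumed
    return nodes, i

tree, _ = parse([
    "* item one",
    "  > a nested quote",
    "* item two",
])
```

Extending the markup is then just adding rows to `RULES`, which is roughly what on-demand loading of the parse tables would buy: new document parts could register their own rules without touching the parser core.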

The html renderer for magic-markup is hacked together, but it is almost complete enough to use ... except for the special blocks (like metadata). I'll have to add a layer to the dom to handle metadata and document sections better (comments, notes, etc.). Maybe on the weekend.

Hack, slash, refactor - Prototyping complete, working systems based on first-order design is my current favorite approach. Documenting a pass of requirements, decomposing the architecture/design (with a focus on capacity), and prototyping the initial design gives you something concrete to prove the system with. It validates requirements, design, capacity, and architecture, and it makes it easier to complete the remaining requirements/design. And it results in something you can use.

27 Aug 2004 (updated 27 Aug 2004 at 04:42 UTC) »
Failures Galore - Our hardware is starting to show its age at work. That, and something fishy is going on.

We've had several system failures in the last few months, some of them obviously related to our ancient systems (6+ years old now), and some just plain weird. With so many failures, the net effect is much worse than we'd like.

For example, our backup system has been intermittently failing (a loose wire in a drive caddy), which has thinned our 2-week backup set (a few backups are missing). One of our primary webservers has had some NIC instability, which also reduces the hit-rate of the backup system, and we're running low on off-site drives and have missed a few off-sites recently. It's a recipe for disaster ... one that came to a head this week.

So we have a flaky/shallow backup, some failing NICs (we still don't know why), and then we start to see some strange drive failures. Two systems this week showed some flaky behaviour, so we scanned the systems and swapped out the suspect hardware. On reboot the systems were dead: the raid arrays have no MBR or partition table, and the rest of the data on the drives is severely borked.

The logs and clam scans for the systems are clean, and the custom tripwire scripts didn't detect an intrusion, but it's still the likely cause. The only clue hinting otherwise is that the systems are so old, leaving so many possible points of failure ... except that it happened to two systems in two weeks. I couldn't find enough data on the drives to prove anything either way, and none of our scanners caught anything obvious. But we're looking for rootkits on the rest of the servers, and we've cycled passwords/locks/keycodes for the entire building.

The kicker, of course, is that we didn't have recent backups for either system, as the backup system has its own problems. We had the primary partitions imaged (dd + netcat rule), but a lot of data has to be rebuilt from crumbs around the network. Nothing critical was lost, but it's still a pain.
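For anyone who hasn't used the dd + netcat trick, here's the general shape. The hostname, port, and device names are placeholders, not our real setup:

```shell
# Imaging a partition over the network with dd + netcat.
#
# On the backup host, listen and write the incoming stream to a file:
#   nc -l -p 9000 > sda1.img
#
# On the ailing server, stream the raw partition across the wire:
#   dd if=/dev/sda1 bs=64k | nc backup-host 9000
#
# The same dd mechanics work locally; here dd images a small file
# instead of a device so the idea can be demonstrated safely:
printf 'precious data' > source.img
dd if=source.img of=rescued.img bs=64k 2>/dev/null
cmp -s source.img rescued.img && echo "images match"
```

Because dd copies the raw block device, the image preserves the MBR, partition table, and filesystem exactly, which is what makes it useful for post-mortem forensics as well as restores.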

What are we going to change?

1. Failed backups will result in someone making the backup by hand. Ignoring a broken backup will result in failure, eventually.

2. We're moving our main production sites to a managed farm. We write software, we don't manage servers. I've been pulling this IT group out of a hole that it may never get out of. Too few people, too little experience, too little time to fix it all.

3. Buying better hardware does not guarantee success. The previous incarnations of this IT group believed that spending more money meant better uptime. It just isn't true anymore ... these servers are dual-CPU, multi-NIC, multi-raid-array machines ($10k CDN), and they fail every 1-3 years. And the failures are often hard, despite the raid arrays. I'd kill for cheaper hardware, where I could swap in a new machine at will. Instead we're stuck troubleshooting old, expensive hardware, and replacing drives costs more than makes sense, because of the lingering obligation to keep using high-end SCSI gear that no longer justifies its price.

One reality I'm learning is that legacy is always a problem. The principle I take from that is that decisions need to be as orthogonal as possible, to make future changes easier. Smaller, simpler, fewer, cheaper.

bender Lives - My pet blogging tool is shaping up. I've written a lot of requirements, and worked on some design, and am part way through converting my site data (my site is the prototype). I've written most of the components now, at least in basic form, and am testing various pieces of functionality.

The next release will likely only contain a few of the UI-CGIs, so I can get my new site up. I'm really tired of the current backend, which uses Textpattern; it looked good at first, but doesn't look like it's intended to remain free forever. I used to use blosxom, which was good -- but I found it had too few built-ins (absolutely everything was a plugin). Bender will be a lot like blosxom, but will contain the essentials as part of the core: configuration/meta, a text backend, some auto-markup stuff, and a basic webmin interface (as well as command-line tools).

And this is one of the reasons why I think diversity in software is good. A great project like blosxom gets people thinking, dreaming of how it could be better. The result is better software, as it's the stuff of our dreams.

zeenix - It was a weird epiphany for me that day, based on a whole bunch of reading that came to a head. It's one of those things that should be obvious, but I've been dense and naive. The "So" was a cheap trick, or my inability with the language. But thanks for the kudos.

So capitalism is a sort of organized greed, where freedom is found in financial independence. When the number of principals in a financial undertaking is limited, and there are real people in control, the system works well. As the number of principals grows, and the endeavour is an amoral quasi-citizen (a corporation), the system optimises for greed efficiently. Corporations do not value society, except as a means of production and consumption.

Corporations grow by nature, and only have a conscience when it fuels growth -- at least on average. There are exceptions to the norm, as in nature, but the good companies stand little chance against well-optimised money-making machines in the long run.

Business is applied greed, though we're generally fine with it as long as we gain from it. As long as the majority gains minimally, the whole system works. Alternatively, as long as the majority is enslaved (even if under marginal freedom), the whole system works. Our world is generally a balance of these two approaches. And we're generally fine with it if we have our computers, big-screen TVs, and other middle-class wealth.

Just like me writing this on my stupid laptop, thinking about how globalisation is killing the little software company I'm at, and how we should really be fucking happy to be eating ... rather than worried that we won't be able to buy our yuppie shit.

prozac : I agree about the basic anti-office approach for documentation. I've been using the text/style/processor approach for several years now, publishing to web, postscript, or pdf using several different tools. None, of course, are easy enough for normal people to use, but it's the right general approach. One of my projects is related to this, though work seems to trump its progress regularly. Good commentary though, thanks.

I started watching The Corporation last night. It's a fairly balanced Canadian documentary about how corporations came to be the odd entities they are. The history alone is worth it, to see how the current mess of off-shore slavery traces back to the emancipation of local slaves a bit more than a century ago. Well, it wasn't just the 14th amendment; there may have been some optimisation on the greed axis too.

And it's officially summer here: it hit 36C (>100F) last week, and us igloo-bearing canucks aren't quite used to it. Few people here have air-conditioning, as things are generally reasonable around here. Damned heat. I walk to work daily, about 3.5km, so I'm nice and warm after both trips.

My pet project is stalled due to the heat and post-double-time burnout. I'm letting my brain rest a bit before I push too hard on the next release. What was once a link logger is becoming more of a Blosxom replacement (somewhere between Blosxom and WordPress). I want something with a simple backend, a GPL or Artistic License, and some way to deal with very different types of pages (to fit how I want my sites). It's a good diversion from my usual device-driver/protocol development.

26 Jul 2004 (updated 26 Jul 2004 at 21:16 UTC) »

Things I know today:

I should not feel guilty for taking time to think at work. (The worker-bee ethic is a pile of rat droppings)

perldoc is really cool.

Gnome/Gnu/Linux are a full replacement for Win32/Pain.

Working for other people can be difficult -- especially when those people are working for other people. These days I've been doing a lot of contract work, some of it for a company that does contract work. I enjoy most of what I do, but sometimes clients can be difficult, especially when they're the livelihood of a large group of co-workers.

We should really turn down one of our current contracts. It's going to be close to impossible to succeed on it, as the customer has fallen into a pattern of changing direction (and forcing their contractors to comply). The problem is that we *can't* say no, or a bunch of people don't work. Unfortunately, I think I'm going to have to say no myself, as I don't believe in working under conditions where failure is likely, and in being part of playing the bitch. Not my cup of tea.

So it's back to the drawing board, again.

I missed the daily read of the Advo-diaries, which are always worth the time.

Work is interesting: writing requirements for one contract, and developing serial drivers for the other, taking me from half time to double time. Gotta take the work when it comes.

Non-work stuff is fun too. I've been neglecting the weblog a bit, but have a new layout and backend ready to go. Now I just need the time.

Part of the new site back-end is a set of tools that tie javascript bookmarklets and cgis together. You can do some really neat stuff, and it helps to have survived on half-baked tools for many years. I've been stewing on usability points for a long time, and am now finding really simple ways of making web apps easier to use. I'll post more about how that stuff works soon.
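The bookmarklet-to-CGI pattern is simple enough to sketch. The `/cgi-bin/linklog` path and its parameter names below are invented for illustration, not my actual back-end:

```javascript
// Build the CGI request a bookmarklet would fire for the current page:
// the bookmarklet grabs the page's URL and title, escapes them, and
// hands them to a link-logging CGI as query parameters.
function linklogUrl(pageUrl, pageTitle) {
  return "/cgi-bin/linklog?url=" + encodeURIComponent(pageUrl) +
         "&title=" + encodeURIComponent(pageTitle);
}

// The one-line form kept in a browser bookmark (a javascript: URL):
//   javascript:location.href='/cgi-bin/linklog?url='
//     +encodeURIComponent(location.href)
//     +'&title='+encodeURIComponent(document.title)
```

The usability win is that logging a link becomes one click from any page, with no copy-pasting of URLs into a form.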
