I spent most of the week rebuilding a test rig to simulate a customer's network problem. It has a strange topology, so scaling it down to a reasonable testing size was a challenge. The resulting mountain of embedded PCs, simulation PCs, and networking equipment dissipates a few hundred watts of heat, in one of the hottest weeks of the year no less. Then there's the noise, from dozens of tiny fans doing their best to save the many CPUs from meltdown. Hardware scales so poorly.
I think this is the reason I ended up loving software so much -- the total disconnect from the complexities and limitations of the real world. Not that software is simple, but interconnecting 10 items in software is only marginally simpler than interconnecting 10,000. Now imagine physically connecting 10,000 network devices. Hardware complexity scales in some inverse proportion to software complexity.
The setup I've been working on simulates a handful of discrete networks, each with a few levels of hubs, switches, and routers. I have no idea why hubs play into this mix, but they do. It's been a long time since I've even seen a hub, since switch prices have been so reasonable in recent years. Factoring the limitations of hubs into the bandwidth capacity is quite a job as well; it had been a while since I thought about what a limited networking device they are. A hub repeats every frame to every port, so all attached devices share a single half-duplex collision domain. The difference in effective bandwidth can be incredible -- a few orders of magnitude beyond what would seem obvious, depending on where the hub sits in the topology.
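The hub arithmetic can be sketched in a few lines. This is a back-of-envelope illustration with assumed numbers (a 16-device segment at 100 Mbps), not measurements from the actual rig:

```python
# Rough comparison of per-device bandwidth through a shared hub versus
# a switch. Numbers are illustrative assumptions, not measurements.

def hub_per_device_mbps(link_mbps: float, devices: int) -> float:
    """A hub repeats every frame out every port, so all devices share one
    half-duplex collision domain: the link speed is divided among them
    (optimistically ignoring collision overhead, which makes it worse)."""
    return link_mbps / devices

def switch_per_device_mbps(link_mbps: float, devices: int) -> float:
    """A switch forwards frames only to the destination port, so each
    device can use the full link speed concurrently, in both directions."""
    return link_mbps * 2  # full duplex: send + receive simultaneously

# 16 devices on a 100 Mbps segment:
print(hub_per_device_mbps(100, 16))     # 6.25 Mbps each, at best
print(switch_per_device_mbps(100, 16))  # 200 Mbps each (100 each way)
```

Even before collisions pile up, that's a 32x gap; add real contention and a hub buried at the wrong spot in the topology, and the orders-of-magnitude difference shows up quickly.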
I was also reminded how broad the field of networking is. I ran into several technical aspects of networking that I had never considered before, and I've set up several networks in the past. Not that I'm a network guy, but I know a few things. Luckily we have a good admin in our other office, so my dumb questions were answered quickly. Experts are important to an organization, as is knowing who they are.
It was also interesting to go back in time to Windows NT4 and 98. My potpourri of simulation PCs includes a few 98-laden laptops and some NT4-ridden 'embedded' units. All of the actual test hardware is 2k or better, but the simulation stuff is old. Another testament to my very short memory: network configuration has really improved in newer versions of Windows. Microsoft really does improve things. This all would have been easier to set up, of course, with BSD or some Linux variant. But that's probably because I prefer that approach to network configuration, not because it is simpler. Nothing is hidden; it is all there in plain sight, in plain-old-text, and in well-documented form. I really do hate it when things are obscured.
The network setup ended up taking much longer than it should have. We have piles of hardware and cables, organized in our store rooms and the many test rigs around the office. But a recent 100mph project derailed most of our meticulous organization, and mixed up the piles of RJ45 ethernet cabling with the piles of RJ45 serial-line cabling (not to mention the various types of cross-overs). I know you're not supposed to use RJ45 for serial cabling, but it turns out everyone in this industry does -- and there is no standard pinout amongst the vendors we buy serial equipment from. So much of my week was spent finding cables that were actually useful for ethernet networks. A small example of why standards are good, and of how the users of something are sometimes more aware of what is useful than the standards bodies are. RJ45/CAT5 makes for damned convenient serial cables. They look a lot like ethernet cables too.
This sort of hardware setup isn't actually complicated, but it is certainly tedious if done properly. Each step needs to be validated, at the granularity of individual PCs and connections. Add the OS and software setup on top, and a week disappears into the ether.
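As a taste of that tedium, here's a minimal sketch of the per-connection validation step: ping each box and flag the ones that don't answer. The host names and addresses are hypothetical, and the flags assume a Unix-style ping (`-c` count, `-W` timeout in seconds):

```python
# Ping every simulation PC and flag the ones that don't answer.
# Hosts and addresses below are hypothetical, not from the real rig.
import subprocess

HOSTS = {
    "sim-pc-01": "192.168.10.11",
    "sim-pc-02": "192.168.10.12",
    "router-a":  "192.168.10.1",
}

def reachable(addr: str) -> bool:
    """One ping with a short timeout; True if the host answered."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", addr],
            capture_output=True,
            timeout=5,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

for name, addr in HOSTS.items():
    status = "ok" if reachable(addr) else "NO RESPONSE"
    print(f"{name:10s} {addr:15s} {status}")
```

Scripting this much is easy; the slow part is walking to each dead port, reseating the cable, and discovering it was one of the serial ones all along.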