I've decoupled the IO stream from the server to the point that I now only have to supply some vague simulacrum of the Engine's interface and the event subsystem to test all the IO subsystems in standalone test harnesses.
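A minimal sketch of what that looks like in practice, with all names hypothetical: the IO code only needs a small stand-in for the Engine's dispatch interface, so a test harness can exercise a filter without booting the real server.

```python
class FakeEngine:
    """Simulacrum of the Engine: records dispatched events
    instead of running real game logic."""
    def __init__(self):
        self.dispatched = []

    def dispatch(self, event):
        self.dispatched.append(event)


class EchoFilter:
    """Trivial stand-in for an IO filter under test."""
    def __init__(self, engine):
        self.engine = engine

    def feed(self, data):
        # A real filter would parse protocol data here; for the
        # harness we just forward the raw bytes as an event.
        self.engine.dispatch(("input", data))


engine = FakeEngine()
filt = EchoFilter(engine)
filt.feed(b"look\n")
print(engine.dispatched)  # [('input', b'look\n')]
```

The point is that the filter only ever talks to `dispatch()`, so the fake and the real Engine are interchangeable from its perspective.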
I am removing all of the player/stream login code from the LoginSession class and the Engine's LoginSessionAuthEvent handler and consolidating it in the Engine's PlayerLoginEvent handler. This means the IOEventStream will have to be frozen briefly when it is handed back to the Engine via the PlayerLoginEvent, to prevent IOEvents from being lost in a race to re-assign the socket from the stream in the event of a reconnect. I've added pause() and unpause() to the stream class to facilitate this.
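The pause()/unpause() idea can be sketched like this (hypothetical names, not the actual stream class): while paused, the stream queues incoming IOEvents instead of delivering them, so nothing is lost while the socket is re-assigned, and unpause() flushes the backlog in order.

```python
from collections import deque

class IOEventStream:
    """Sketch of a pausable event stream."""
    def __init__(self, sink):
        self.sink = sink          # callable that receives delivered events
        self.paused = False
        self.backlog = deque()

    def pause(self):
        self.paused = True

    def unpause(self):
        self.paused = False
        # Flush events queued while the stream was frozen, in order.
        while self.backlog:
            self.sink(self.backlog.popleft())

    def push(self, event):
        if self.paused:
            self.backlog.append(event)
        else:
            self.sink(event)


delivered = []
s = IOEventStream(delivered.append)
s.push("a")
s.pause()            # e.g. while the Engine re-assigns the socket
s.push("b")
s.push("c")
s.unpause()
print(delivered)     # ['a', 'b', 'c']
```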
The IOEventStream is still a CPU hog. It's a hog because it has no way of knowing whether its filters have pending events, and so must poll them frequently. If the polling interval is set too high, shell response becomes sluggish. This can be fixed by giving filters some kind of signalling mechanism so that they can wake the stream when they have events pending; the stream can then run the events through its loop and sleep until there's more to be done. Perhaps mutexen, condition variables, or a semaphore could be used for this. More experiments are needed.
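Here's a sketch of the proposed signalling fix (this is the idea being experimented with, not the current code, and every name is hypothetical): filters notify a condition variable when they have an event pending, so the stream's loop blocks instead of polling on a timer.

```python
import threading
from collections import deque

class SignalledStream:
    """Event loop that sleeps on a condition variable instead of polling."""
    def __init__(self):
        self.cond = threading.Condition()
        self.pending = deque()
        self.handled = []

    def filter_ready(self, event):
        # Called by a filter when it has an event pending.
        with self.cond:
            self.pending.append(event)
            self.cond.notify()        # wake the stream's loop

    def run_once(self, timeout=None):
        # One pass of the stream's loop: sleep until a filter
        # signals (or the timeout expires), then drain the queue.
        with self.cond:
            while not self.pending:
                if not self.cond.wait(timeout):
                    return False      # timed out with nothing to do
            while self.pending:
                self.handled.append(self.pending.popleft())
        return True


s = SignalledStream()
t = threading.Thread(target=lambda: s.filter_ready("hello"))
t.start()
s.run_once(timeout=1.0)
t.join()
print(s.handled)   # ['hello']
```

With this shape the loop burns no CPU while idle, and latency is bounded by the notify rather than a polling interval.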