jdybnis' lost page
Fortunately, I kept a copy of it.
This commentary on fundamental OS research is pretty amusing. The author motivates his discussion with some silly statistics like: the time to read the entire hard disk has gone from <1 minute in 1990 to about an hour in 2003. Going from there to demanding more CS research is like demanding better transportation technology because it took your grandfather 10 minutes to walk to school, and you have to sit through a 40 minute bus ride.
Then there is his list of areas where people should do more research. Leave it to a kernel hacker to think a page replacement algorithm is a fundamental area of research. Let me tell you, operating systems is one of the least fundamental areas of computer science research, and the "making your computer faster (because the ratio of memory to cpu has changed once again)" side of os research is some of the most transitory of that.
This piece did make me think of something I wrote while taking an intro class on Operating Systems. Here is my solution to the swapping problem. It could be titled "We Don't Need Another Page Replacement Algorithm."
Disk i/o is such an expensive operation these days that it can render interactive applications unusable, and for batch processes it can be the sole determining factor of throughput. This implies that we want to avoid disk i/o as much as possible. And when disk i/o is absolutely necessary, we want to give applications complete control over how it happens, so that they can be tuned to minimize it.
I propose that it would be better to enforce hard limits on the physical memory usage of each process, rather than the current abstraction in which each process thinks it has the entire virtual address space. It would work as follows. When a process requests memory from the system, it is always granted physical memory. If the process has surpassed its hard limit, the memory request will fail and the process has three options: it can cease to function, it can make do without the additional memory, or it can explicitly request that some of its pages be swapped out in exchange for the new memory. If the process tries to access data that has been swapped out of physical memory, it will be forced to deal with the page fault itself. Again the process has three options: it can cease to function, it can cancel the request, or it can instruct the operating system to swap out some other data to make room.

The benefit of this would be that each process could be guaranteed that it is always resident in memory. With the current abundance of RAM it is reasonable to assume that ALL the processes running on a machine can fit entirely in memory at once. The exception, which I will address later, is when an unusually large number of processes are running at once.

The downside of this system is the increased work for the application programmer. But I argue that this complexity is essential to the applications, and will be gladly embraced by the programmers. In cases where an application's working set can be larger than the available physical memory, the performance of the application will depend primarily on the careful management of disk i/o. Many of the applications that face this problem, such as large databases and high-resolution image/video manipulation, already subvert the operating system's normal memory management services.
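To make the contract concrete, here is a minimal user-space model of the idea. All the names here (`Process`, `alloc_page`, `touch`, `OverLimit`) are invented for illustration; this simulates the proposed behavior rather than implementing a real kernel interface:

```python
# Hypothetical sketch: a process gets a hard budget of physical pages.
# Allocations beyond the budget fail unless the process explicitly
# trades a resident page for a swapped-out one, and faults on swapped
# pages are handed back to the process instead of being resolved
# transparently by the kernel.

class OverLimit(Exception):
    """Raised when an allocation or access would exceed the hard limit."""

class Process:
    def __init__(self, hard_limit_pages):
        self.limit = hard_limit_pages
        self.resident = set()   # page ids backed by physical memory
        self.swapped = set()    # page ids currently on disk
        self.next_page = 0

    def alloc_page(self):
        """Request one page of physical memory; never silently swaps."""
        if len(self.resident) >= self.limit:
            raise OverLimit     # the process must decide what to do
        page = self.next_page
        self.next_page += 1
        self.resident.add(page)
        return page

    def alloc_page_swapping(self, victim):
        """Explicitly trade a resident page for a new one."""
        assert victim in self.resident
        self.resident.remove(victim)
        self.swapped.add(victim)
        return self.alloc_page()

    def touch(self, page, victim=None):
        """Access a page; the process, not the kernel, resolves faults."""
        if page in self.resident:
            return
        if victim is None:
            raise OverLimit     # the process chose to cancel the access
        # The process instructed: swap out `victim` to make room.
        self.resident.remove(victim)
        self.swapped.add(victim)
        self.swapped.remove(page)
        self.resident.add(page)

p = Process(hard_limit_pages=2)
a = p.alloc_page()
b = p.alloc_page()
try:
    p.alloc_page()                        # over the limit: fails outright
except OverLimit:
    c = p.alloc_page_swapping(victim=a)   # explicit trade: a goes to disk
p.touch(a, victim=b)                      # fault on a; process picks b as victim
```

The point of the sketch is that every swap is visible in the application's own code: nothing leaves physical memory unless the process names the victim itself.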
I have been intentionally vague on how the system decides which pages get swapped out when a process requests more memory than it has been allotted. There is a trade-off between simplicity and degree of control for the application programmer. One option is to use a traditional page replacement algorithm (LRU, MRU, etc.), but on a per-process basis. This can be completely transparent to the application, but the application could also select which algorithm to use, or even provide its own. The next level of programmer control comes from allowing the process to allocate memory in pools. The memory in each pool is grouped together on the same pages. The process can then select which pool gets swapped out. The two approaches can even be used together, with the application specifying a page replacement algorithm for each pool.
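The combined scheme can be sketched the same way. Again, every name here (`Pool`, `PooledProcess`, `alloc`, `evict_from`) is hypothetical; the sketch shows the division of labor where the process picks the victim pool and a per-pool policy (LRU here) picks the victim page:

```python
# Hypothetical sketch of the pool idea: allocations are tagged with a
# pool, each pool keeps its pages grouped together, and each pool runs
# its own replacement policy. The process chooses which pool to evict
# from; the pool's LRU order chooses which page within it.

from collections import OrderedDict

class Pool:
    def __init__(self, name):
        self.name = name
        self.pages = OrderedDict()        # page id -> None, ordered by recency

    def add(self, page):
        self.pages[page] = None

    def touch(self, page):
        self.pages.move_to_end(page)      # mark page as most recently used

    def evict_lru(self):
        page, _ = self.pages.popitem(last=False)  # drop least recently used
        return page

class PooledProcess:
    def __init__(self, hard_limit_pages):
        self.limit = hard_limit_pages
        self.pools = {}
        self.resident_count = 0
        self.next_page = 0

    def alloc(self, pool_name, evict_from=None):
        """Allocate a page in `pool_name`; if over the hard limit,
        evict the LRU page of the pool the caller names."""
        pool = self.pools.setdefault(pool_name, Pool(pool_name))
        evicted = None
        if self.resident_count >= self.limit:
            if evict_from is None:
                return None, None         # caller must choose a victim pool
            evicted = self.pools[evict_from].evict_lru()
            self.resident_count -= 1
        page = self.next_page
        self.next_page += 1
        pool.add(page)
        self.resident_count += 1
        return page, evicted

p = PooledProcess(hard_limit_pages=3)
idx0, _ = p.alloc("index")
idx1, _ = p.alloc("index")
cache0, _ = p.alloc("cache")
p.pools["index"].touch(idx0)              # idx1 becomes the index pool's LRU
new, victim = p.alloc("cache", evict_from="index")
```

A database could put its index in one pool and its scan buffers in another, so a sequential scan never evicts index pages: the scan pool pays for its own misses.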
In the case of an unusually large working set, when the system is faced with too many processes to keep in memory, most current systems fail spectacularly. Not only does the Nth process cease to function, but all processes grind to a halt when the system starts swapping. I have seen this behavior on systems ranging from desktop machines to high-availability servers. Usually the solution is for a user to intercede and manually kill off the "least essential" processes, or the "pig". Clearly it would be better if the system avoided getting into such a state in the first place. The system I propose would simply refuse to start a process if it does not have the physical memory available to support it.