Where's GNU Rope today?

Posted 15 May 2003 at 20:07 UTC by cbbrowne

One of the "hottest things" of 1998, this system that would "grovel through" object code, reordering it to improve locality of reference, has never seen the light of day.

This was a binary rewriter by Nat Friedman. His grope paper was one of the more interesting things presented at the 1998 ALS conference.

The basic idea: you run a profiler against a running application, collecting statistics on which functions get executed. You then reorder the components of the object file on the theory that frequently-executed code should be packed together into "hot spots" that fit nicely into caches, while infrequently-executed code falls to the edges.
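A minimal sketch of the data-collection half, using GCC's -finstrument-functions hooks (the hooks are a real GCC facility; the counting table here is illustrative, not necessarily how grope gathered its profiles):

    /* Build:  gcc -finstrument-functions grope_sketch.c -o grope_sketch */
    #include <stdio.h>

    #define MAX_FUNCS 4096

    static struct { void *fn; unsigned long hits; } counts[MAX_FUNCS];
    static int nfuncs;

    /* GCC calls this at every instrumented function entry. */
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *this_fn, void *call_site)
    {
        int i;
        (void)call_site;
        for (i = 0; i < nfuncs; i++)
            if (counts[i].fn == this_fn) { counts[i].hits++; return; }
        if (nfuncs < MAX_FUNCS) {
            counts[nfuncs].fn = this_fn;
            counts[nfuncs].hits = 1;
            nfuncs++;
        }
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *this_fn, void *call_site)
    {
        (void)this_fn; (void)call_site;   /* only entries are counted */
    }

    /* Dump address/count pairs on exit; addresses map back to names
     * via nm or addr2line, and the ranking could then drive a
     * link-time reordering of the hot functions. */
    __attribute__((no_instrument_function)) __attribute__((destructor))
    static void dump_counts(void)
    {
        int i;
        for (i = 0; i < nfuncs; i++)
            fprintf(stderr, "%p %lu\n", counts[i].fn, counts[i].hits);
    }

    static int busy(int x) { return x + 1; }   /* will dominate the counts */
    static int rare(int x) { return x - 1; }

    int main(void)
    {
        int i, v = 0;
        for (i = 0; i < 100000; i++)
            v = busy(v);
        v = rare(v);
        printf("%d\n", v);
        return 0;
    }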

Notable references:

There are similar tools for SGI and Sun that provide some degree of application speedup. Anything that makes instruction caches more effective can be helpful, particularly on small systems (handhelds?).

It would be nice to see some sort of phoenix emerge from the ashes. Even if the result involves recreating it from scratch, the lessons learned (particularly those in the second reference) would allow a second attempt to proceed more easily than the first.


Does GCC 3.3 have this feature?, posted 15 May 2003 at 22:25 UTC by gord » (Master)

Is this related (from the GCC 3.3 features page)?

* Jan Hubicka, SuSE Labs, has contributed a new superblock formation pass enabled using -ftracer. This pass simplifies the control flow of functions, allowing other optimizations to do a better job.

He also contributed the function reordering pass (-freorder-functions) to optimize function placement using profile feedback.
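For reference, the profile-feedback cycle those options belong to looks roughly like this; the option names are from the GCC 3.3 documentation, but the toy program and build sequence are just an illustrative sketch:

    /* Illustrative build cycle (options per the GCC 3.3 docs):
     *
     *   gcc -O2 -fprofile-arcs prog.c -o prog      # instrumented build
     *   ./prog                                     # run on training input
     *   gcc -O2 -fbranch-probabilities -freorder-functions prog.c -o prog
     *
     * With profile data available, -freorder-functions can place
     * cold_path() in the .text.unlikely subsection, away from the
     * hot loop's code.
     */
    #include <stdio.h>

    static long hot_path(long x)          /* executed ~10^6 times */
    {
        return x * 2 + 1;
    }

    static long cold_path(long x)         /* executed once */
    {
        fprintf(stderr, "rare case\n");
        return -x;
    }

    int main(void)
    {
        long i, sum = 0;
        for (i = 0; i < 1000000; i++)
            sum += (i == 999999) ? cold_path(i) : hot_path(i);
        printf("%ld\n", sum);
        return 0;
    }

The superblock pass (-ftracer) works at a finer grain, duplicating and straightening frequently-executed paths within a function, rather than moving whole functions around.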

good idea, not revolutionary, posted 15 May 2003 at 22:42 UTC by splork » (Master)

This is a good idea, but it was hardly revolutionary in 1998. I recall seeing DOS utilities that did this years before, and I occasionally hear of Windows tools that do it now. A quick web search turns up a link to IBM's FDPR project, which does such things today on several platforms, though it doesn't appear they've productized it for anything other than AIX.

Reordering code layout in executables based on run-time profiling can lead to two types of speedup, by examining:

  1. the order in which code and data are paged in from disk [leading to faster startup times due to reduced disk seeks]
  2. locality of reference (as rope presumably did), to optimize cache and memory bandwidth use.

Operating systems can actually be made to do (1) behind the scenes by purposefully fragmenting the on-disk file layout based on profiling during startup. (I believe WinXP may do something along those lines; I'm not the right person to ask.)

How hard is this?, posted 22 May 2003 at 16:43 UTC by Nelson » (Journeyer)

I emailed Nat to ask how this was going, or how it went. No response. How difficult is this to do? It doesn't sound too difficult. I've used objcopy to rebuild shared libraries to remove unneeded components. Just thinking about it and sketching some ideas on a napkin, this sounds like it could be really easy to do with properly compiled applications (see the sketch below). Is there a difficult problem I'm not seeing?
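The napkin version, assuming the application was built with one function per section (GCC's -ffunction-sections, or explicit section attributes as below), might look like this; the section names and the linker-script fragment are illustrative, not a tested pipeline:

    /* With   gcc -c -ffunction-sections app.c   every function lands in
     * its own .text.name section; the explicit attributes below fake the
     * same effect by hand.  A linker script fragment can then pull the
     * profiled-hot sections together ahead of everything else:
     *
     *   .text : { *(.text.hot) *(.text) *(.text.*) }
     */
    #include <stdio.h>

    /* Suppose profiling said these two run constantly: pack them together. */
    __attribute__((section(".text.hot"))) static int parse_token(int c)
    {
        return c + 1;
    }

    __attribute__((section(".text.hot"))) static int next_state(int s)
    {
        return s ^ 1;
    }

    /* Rarely executed; stays in the default .text, out at the edges. */
    static void report_error(const char *msg)
    {
        fprintf(stderr, "%s\n", msg);
    }

    int main(void)
    {
        int i, s = 0;
        for (i = 0; i < 100000; i++)
            s = next_state(parse_token(s));
        if (s < 0)
            report_error("impossible state");
        printf("%d\n", s);
        return 0;
    }

The genuinely hard part, and presumably what grope tackled, is doing the same thing to an already-linked binary, where every relative call and jump has to be found and fixed up after its target moves.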
