19 Nov 2003 monkeyiq   » (Journeyer)

It appears that what I'd heard has now been confirmed by my own digging: most inference engines are written to be used in big OWL-style systems, running as servers that expose a WSDLish interface to their clients. I suspect this is why inference logic usually ends up hacked into custom C/C++ code when it's needed in other apps (like libferris).

There is inference stuff buried in libferris at the moment (a rose by any other name), some of which would be nicer to have as explicit rules, but I don't see why the VFS should depend on an inference engine server running someplace. The communication would bog it down a great deal anyway: having to expose entire chunks of the libferris data model as an OWL file, choof it off to an inference server, and merge the little fragment that comes back into the main data model again seems very expensive.
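For what it's worth, here's a rough sketch of what an explicit in-process rule could look like: one rule run directly over a file's extended attributes, with no serialization and no round trip. Every name below (the attribute names, the rule) is invented for illustration and isn't libferris API:

    // Hypothetical sketch: an "explicit rule" evaluated in-process over a
    // file's extended attributes, instead of round-tripping to a server.
    #include <iostream>
    #include <map>
    #include <string>

    using EAMap = std::map<std::string, std::string>;  // attribute -> value

    // Rule: mime-major == "image"  =>  is-viewable = "true".
    void applyViewableRule(EAMap& ea)
    {
        auto it = ea.find("mime-major");
        if (it != ea.end() && it->second == "image")
            ea["is-viewable"] = "true";
    }

    int main()
    {
        EAMap ea = { { "name", "a.png" }, { "mime-major", "image" } };
        applyViewableRule(ea);   // no OWL export, no server, no merge step
        std::cout << "is-viewable = " << ea["is-viewable"] << "\n";
    }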

This is a shame, as a formal inference engine could also handle constraint violations, especially in the case where the constraint is only violated as a result of nested inference.
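To make that case concrete, here is a minimal sketch (again, every name is made up; this is not libferris code) of a constraint that is only violated once two rules have fired in sequence, i.e. the violation depends on nested inference:

    // Naive forward chaining over (subject, predicate, object) facts.
    // The constraint at the end is only violated via a derived fact.
    #include <iostream>
    #include <set>
    #include <string>
    #include <tuple>

    using Fact = std::tuple<std::string, std::string, std::string>;

    int main()
    {
        // Asserted facts: where the file lives, what the medium is like,
        // and a pending write on that file.
        std::set<Fact> kb = {
            { "file:///tmp/x", "stored-on",     "cdrom0"    },
            { "cdrom0",        "medium",        "read-only" },
            { "file:///tmp/x", "pending-write", "true"      },
        };

        // Two rules, chained to a fixpoint:
        //   R1: ?f stored-on ?m, ?m medium read-only => ?f is-read-only true
        //   R2: ?f is-read-only true                 => ?f writable false
        // R2 only ever fires on R1's output, so the inference is nested.
        bool changed = true;
        while (changed)
        {
            changed = false;
            std::set<Fact> add;
            for (const Fact& a : kb)
                for (const Fact& b : kb)
                    if (std::get<1>(a) == "stored-on" &&
                        std::get<0>(b) == std::get<2>(a) &&
                        std::get<1>(b) == "medium" &&
                        std::get<2>(b) == "read-only")
                        add.insert({ std::get<0>(a), "is-read-only", "true" });
            for (const Fact& a : kb)
                if (std::get<1>(a) == "is-read-only" && std::get<2>(a) == "true")
                    add.insert({ std::get<0>(a), "writable", "false" });
            for (const Fact& f : add)
                if (kb.insert(f).second)
                    changed = true;
        }

        // Constraint: nothing may have both pending-write == true and
        // writable == false. The second fact only exists after R1 and R2.
        if (kb.count({ "file:///tmp/x", "pending-write", "true" }) &&
            kb.count({ "file:///tmp/x", "writable",      "false" }))
            std::cout << "constraint violated: pending write to a file "
                         "on a read-only medium\n";
    }

A checker that only looked at the asserted facts would never see the writable == false fact at all, which is exactly the case where a real engine would earn its keep.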

The hunt goes on. (Any links to something that does what I've outlined above would be nice; I'm in #rdfig on freenode.)
