Introspector Project refocused on DotGNU (.NET) and RDF (Semantic Web)
Posted 8 Aug 2003 at 07:57 UTC by mdupont
After a seemingly endless set of goals, the Introspector project has realigned and refocused itself on a new set of goals that are achievable. After more than a year of research into the Semantic Web and the DotGNU system, I have concluded that the main goals of the Introspector can be reached much more quickly via much simpler intermediate goals.
The original goal of the Introspector was the extraction of metadata from GCC. The DotNet system presents you with more metadata than you could ever want.
The DotGNU/pnet system is GPLed and has tools to disassemble, assemble, and run CIL binaries from DotNet.
RDF is the cornerstone of the Semantic Web, and Redland is a great library for processing RDF.
The new Introspector module will allow you to convert your IL code into RDF for semantic markup, and also to assemble RDF back into DotNet binaries.
Later versions will also allow tracing the execution of your programs in RDF.
These features will unite the Semantic Web and the DotNet world. Programs and executions can be treated as data; data can be treated as logical statements and fed into proof engines.
You will also be able to transform your RDF and XML files into the Introspector RDF for translation into binaries.
The end result will be the ability to semantically mark up IL code and convert it into a new language, opening up a new world of semantic programming to DotNet.
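As a rough sketch of what "IL as RDF" could look like, here is a toy converter from disassembled IL instructions to N3 triples. This is not the actual Introspector code; the namespace URI and the il:opcode / il:operand property names are invented for illustration.

```python
# Hypothetical sketch: render disassembled IL instructions as N3 triples.
# The namespace and property names are assumptions, not the real ontology.

IL_NS = "http://example.org/il#"  # placeholder namespace

def il_to_n3(instructions):
    """Turn (offset, opcode, operand) tuples into N3 triples.

    Each instruction becomes a blank node named after its byte offset."""
    lines = ["@prefix il: <%s>." % IL_NS]
    for offset, opcode, operand in instructions:
        subject = "_:insn%04x" % offset
        lines.append('%s il:opcode "%s" .' % (subject, opcode))
        if operand is not None:
            lines.append('%s il:operand "%s" .' % (subject, operand))
    return "\n".join(lines)

print(il_to_n3([
    (0, "ldstr", "hello"),
    (5, "call", "System.Console::WriteLine"),
    (10, "ret", None),
]))
```

Going the other direction (assembling RDF back into a binary) would then amount to reading these triples back out of a store such as Redland and feeding them to the assembler.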
See the simple plan here.
See the original kick-off here.
See the Introspector wiki here.
Ashley Winters and I have produced the first prototype of a
GCC ontology. It describes the facts extracted from GCC.
Please review and comment.
The PNET/C will follow.
The DotGNU project has, in an unfair and discriminatory act, banned me from the project.
This will only raise the priority of the Introspector fork.
The first step will be a dumper for treecc that emits the treecc data as an RDF ontology.
The second step will be a dumper for treecc objects that dumps any object into RDF using that ontology.
Then I will use this on the various treecc tools in pnet.
There should also be a way to support a direct Redland interface into treecc: taking a given RDF ontology and creating a treecc description of it, and being able to transform any treecc object into a set of Redland statements, or to load Redland statements back into that treecc object.
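The second step, dumping an arbitrary object into RDF statements, can be sketched like this. The dict-based node and the tc: prefix are stand-ins invented for illustration; this is not the real treecc or Redland API.

```python
# Hypothetical sketch of the treecc-object-to-RDF dumper: flatten a
# tree node into (subject, predicate, object) statements, the raw form
# that would be handed to a store such as Redland.

def dump_node(node, node_id, ns="tc:"):
    """node is a dict stand-in for a treecc node:
    {"kind": ..., "fields": {field_name: child_id, ...}}."""
    triples = [(node_id, ns + "kind", node["kind"])]
    for field, value in node.get("fields", {}).items():
        triples.append((node_id, ns + field, value))
    return triples

expr = {"kind": "binary_plus", "fields": {"left": "node2", "right": "node3"}}
for triple in dump_node(expr, "node1"):
    print(triple)
```

Loading would be the inverse: group the statements by subject, read tc:kind to pick the node type, and fill the fields back in.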
See treecc:
The rule-based code generation that Rhys speaks of will be made available through the Introspector interface into treecc:
"""The system is not necessarily complete. We'd like to experiment with rule-based code generation techniques. At present, optimizers and code generators must be written by hand, as operations on node types.
A rule-based system would make it easier to build clever optimizers as a set of pattern matching directives. Operations are already a special class of pattern matcher, but they don't have any back-tracking and retry capabilities."""
Eulersharp and cwm are just two rule engines that could be used to implement this on the Introspector framework.
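To make the rule-based idea concrete, here is a toy pattern-matching rewriter over expression trees: a constant-folding optimization expressed as a pattern/rewrite pair rather than a hand-written node operation. This is only an illustration of the technique, not treecc's design (and it has none of the back-tracking Rhys mentions).

```python
# Toy rule-based rewriter: rules are (pattern, action) pairs.
# Pattern variables start with "?" and bind to whatever subtree they match.

def match(pattern, tree, bindings):
    """Structurally match pattern against tree, filling bindings."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        bindings[pattern] = tree
        return True
    if isinstance(pattern, tuple) and isinstance(tree, tuple) \
            and len(pattern) == len(tree):
        return all(match(p, t, bindings) for p, t in zip(pattern, tree))
    return pattern == tree

def rewrite(tree, rules):
    """Apply the first rule whose pattern matches; else leave tree alone."""
    for pattern, action in rules:
        bindings = {}
        if match(pattern, tree, bindings):
            return action(bindings)
    return tree

# One rule: fold ("add", const, const) into a single constant.
rules = [(("add", ("const", "?x"), ("const", "?y")),
          lambda b: ("const", b["?x"] + b["?y"]))]

print(rewrite(("add", ("const", 2), ("const", 3)), rules))  # -> ('const', 5)
```

A real optimizer would apply such rules bottom-up over the whole tree and retry until a fixed point; engines like cwm express the same shape as N3 rules over the RDF form of the tree.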
And it's something I've kind of wanted for a long time. It would be nice to have code editors for Linux that did as good a job of showing you the C++ class hierarchy and things as code editors for Windows do.
I am on a roll; thanks to deltab for the corrections.
Here are starts on the Perl internals and bashdb.
TreeCC's parse.c in N3 format, converted by the GCC Introspector:
669k of N3
26646 nodes of data
Progress made, posted 25 Aug 2003 at 10:22 UTC by mdupont
I have made good progress on the treecc ontology over the weekend.
The next step will be to transform the strings into RDF names for the ontology.
The cwm builtins are helping out!
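One plausible way to do that string-to-RDF-name step is to percent-style escape anything that is not legal in an N3 local name. The encoding scheme below is my assumption, not the project's actual convention.

```python
# Hypothetical sketch: map arbitrary C identifiers/strings to safe RDF
# local names by escaping every character outside [A-Za-z0-9_] as _XX hex.
import re

def to_rdf_name(s):
    """Return a string usable as an N3 local name component."""
    return re.sub(r"[^A-Za-z0-9_]",
                  lambda m: "_%02x" % ord(m.group(0)), s)

print(to_rdf_name("parse.c"))       # -> parse_2ec
print(to_rdf_name("foo->bar[0]"))
```

The escaping is reversible (each _XX pair decodes back to one character), which matters if the names ever need to be turned back into source identifiers.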
The DotGNU(tm) team has asked me to make sure that users of the Introspector patches to the DotGNU(tm) system are not confused into thinking they are supported by DotGNU(tm).
They do not want me to post bug reports about problems that any user might have; I think this is very strange and unfair to the other users of DotGNU(tm).
They insisted that I must not use any of the terms "DotGNU",
"Portable.NET", or "pnet" in ways that would give users the
impression that they can get any kind of support from the DotGNU(tm) project for my competing derivative work.
Even if the bugs are the same, they don't want to support the usage of my patch.
OK, well, I have been thinking about a name to make sure users don't
confuse the two. I have settled on the GNU tradition of a recursive acronym:
INDG = INDG's Not DotGNU
INPN = INPN's Not Portable.NET
The "I" could also stand for "Introspector".
Of course we have the obvious but ugly:
IWNDGWS = Introspector Webservice's Not DotGNU WebService.
Not pretty, but very effective: no one will ever confuse
these really ugly acronyms with the DotGNU WebService service marks or the DotGNU trademarks.
From: Rhys Weatherley email@example.com
James Michael DuPont wrote:
> I don't know how expressive CIL is yet; I am just
> poking in the dark. The Mono project seems to have a
> disassembler going.....
As does pnet - "ildasm".
> I have read your essay on aspect-oriented
> programming and the implementation of the parser. You
> presented some very interesting ideas.
> It should be possible to create more aspects of the
> trees, such as reflection, serialisation,
> visualisation, persistence and transformation.
That would be great. The trick is finding the right
"syntactic/semantic style" to use to make it work
well with the rest of treecc. Aspect-orientation is
tricky: it's very easy to fall back to OO thinking.
I'm definitely open to suggestions.
> > There are also difficult issues: how to support
> > reflection, for example.
> Now we are talking!
> That is what I have been thinking about a lot.
> GCC supports reflection in gcj, and
> Mono has an interesting module that encodes the
> meta information into a data structure for the
> run-time. All written in C.
Since both pnet and gcc are part of the GNU Project,
and Mono is not, it would be advisable to do this using
pnet components where possible. Metadata support
is core to pnet, and has been present since day 1.
> The encoding of this meta-data back into the target
> program will have to be as small as possible. In one
> way it is like accessing a code browser database, but
> the data is linked into the executable.
Pnet's metadata system represents everything using
an interlinked series of data structures like ILClass,
ILField, ILMethod, etc. Normally these structures are
created at runtime, but it wouldn't be difficult to write
a C program which flattens them into a browser database
to be linked with the executable.
Another possibility is to use CIL itself for the metadata,
with native functions for the method bodies. i.e. create
an extracted version of the metadata section, which is
dynamically loaded by pnet's metadata library when the
application starts up. This metadata is embedded into
the application as a const data blob. This will be more
compact than a set of flattened data structures.
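The flattening idea from the mail above can be sketched abstractly: walk the linked structures, give each record an index in a flat table, and turn cross-references into indices, so the table could then be emitted as const data. Names like ILClass are only echoed here; this dict-based version is an illustration, not pnet code.

```python
# Toy sketch of flattening interlinked metadata records (class -> methods)
# into a flat, index-addressed table suitable for emission as a const blob.

def flatten(classes):
    """classes: list of {"name": str, "methods": [str, ...]}.
    Returns a flat table where all cross-references are table indices."""
    table = []
    for cls in classes:
        cls_index = len(table)
        table.append(("class", cls["name"], []))   # method indices filled below
        for method_name in cls["methods"]:
            table[cls_index][2].append(len(table)) # forward reference by index
            table.append(("method", method_name, cls_index))
    return table

table = flatten([{"name": "Console", "methods": ["WriteLine", "ReadLine"]}])
for i, record in enumerate(table):
    print(i, record)
```

Because every pointer has become a small integer, the table can be serialized verbatim and relocated anywhere in the executable image.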
I have simplified my plan.
treecc is not needed for the ILDASM tool. I will first create the ontology for all the needed data structures in the DotGNU(tm) pnet(tm) header files.
Of course, the gcc::introspector ontology extractor tool is reusable for this purpose, so the time was not wasted.
TreeCC will come back to be more important for the CSCC interface.
This will speed up the project by reducing the goals!
Good progress is being made.
I have now started extracting the C code into N3.
In fact, I have found that N3 will be a good programming language.
This will allow automatic pattern recognition and transformation of code.
Here is a detailed mail with examples.
My current research is into transforming all the printf calls in ildasm that emit IL code into this form. That will allow replacing all the printf statements in DotGNU with RDF emission statements.
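As a rough sketch of that transformation, a regex can spot printf call sites in the C source so they can be rewritten to an RDF-emitting call. The replacement name il_emit_rdf is invented here for illustration; a real pass would also have to rework the format string and arguments, not just the function name.

```python
# Hypothetical sketch: rewrite printf( call sites to a (made-up)
# RDF-emitting function. The word boundary keeps fprintf/sprintf intact.
import re

PRINTF_CALL = re.compile(r'\bprintf\s*\(')

def rewrite_printfs(c_source):
    return PRINTF_CALL.sub('il_emit_rdf(', c_source)

src = '\tprintf("%s ", name);\n\tfprintf(stderr, "oops");\n'
print(rewrite_printfs(src))
```

A more faithful version of this pass would work on the parse tree (via the N3 form of the code) rather than on raw text, which is exactly what the pattern-matching machinery above is for.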
More to come.