Free software: second generation
Posted 29 Dec 2000 at 13:53 UTC by pliant
I see a contradiction between the free software philosophy, which holds
that software development is shared among its users (and is marvelously
expressed in the GNU General Public License), on one side, and the very
conservative design of the leading free software, which prevents
customizing it in a reasonably short amount of time, on the other.
In other words, is cloning standard industrial applications still
compatible with the free software philosophy?
My personal answer is: not any more. The reason is that things have
changed since the early days of the GNU project.
At that time, the closed behaviour of the software industry had started
to slow down software improvement, because good external ideas no longer
got in and bug tracking was no longer possible. Then came the brilliant
GPL license, and that problem was solved.
Nowadays, a second huge constraint has appeared: software has become so
huge that nobody except a few full-time hackers can dig into it, so an
advanced end user loses the ability to customize it, not because of the
closed status of the software, but because it is too fat.
So, the next question is: why does modern software get so fat?
My answer: because it uses too conservative a design and relies on too
conservative development tools.
Because the FSF was so effective in its early days that nobody could do
without the impressive set of Unix clone tools it produced
... and now that free software hackers are tied to the software
industry, nobody wants to, or can, start from scratch any more.
Well, it's just what I did.
I've built an entire system that runs with no external software ... or
nearly so (it requires a Linux kernel, because driving the thousands of
different pieces of hardware that you can find in a PC cannot be solved
by an elegant design: you need the raw power of thousands of hackers and
testers; but it requires nothing else).
I hope you will find the design of this new system impressive:
It is an HTTP, FTP, SMTP and POP3 server; it uses an internal database
for the configuration files and the public patching mechanism, and the
whole fits in 1 MB, with most servers being in the very reasonable
one-thousand-lines-of-code range, because the whole thing is written in
a brand new language: Pliant, which provides raw efficiency (it's as
fast as C), expressive power (it can generate code on the fly) and
flexibility (it's a dynamic compiler) all at once.
You can have a look at the first computer running FullPliant at
though it's still a system under test.
Eh?, posted 29 Dec 2000 at 20:17 UTC by dan »
What does designing for customizability have to do with starting from
scratch? Emacs is not hard to customize, and it runs on top of Unix
(or GNU, or whatever else).
What do you mean by "typed lisp", anyway? Do you require that all types
be declared? Sounds kind of icky if so ... what advantage would you
get from it?
Re: Eh ?, posted 29 Dec 2000 at 22:25 UTC by pliant »
There were two very special things in Lisp.
First, there is the execution model, with lists as the only way to build
objects, which made it possible to provide a simple garbage collector.
In this area, Pliant rather uses a C-like execution model (for low-level
reasons that you may or may not agree with, depending on the application
field you are targeting).
Second, Lisp programs are encoded as lists, so it's possible to perform
computations on programs and to rewrite them, which makes Lisp a nice
code generator. Pliant does this also, but instead of a list, a program
is encoded as an expression, which is basically the same thing ... but
typed. It means that while rewriting, you can query the type of the
result that each subexpression computes. This enables type checking at
the code generation step.
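The typed-expression idea can be illustrated outside Pliant. Here is a minimal Python sketch (the `Expr` class and the `rewrite_add` step are invented for illustration, not Pliant's actual structures) of a rewriter that queries the result type of each subexpression while transforming the tree:

```python
# A program fragment is a tree; every node can report the type of the
# value it computes, so a rewriting pass can type-check as it goes.

class Expr:
    """An operator applied to subexpressions, optionally carrying a type."""
    def __init__(self, op, args=(), typ=None):
        self.op, self.args, self.typ = op, list(args), typ

    def result_type(self):
        # Leaves carry their type; operators derive theirs from the operands.
        if self.typ is not None:
            return self.typ
        types = {a.result_type() for a in self.args}
        if len(types) != 1:
            raise TypeError(f"mixed operand types {types} for '{self.op}'")
        return types.pop()

def rewrite_add(expr):
    """A rewriting step that type-checks while it transforms:
    integer '+' is lowered to 'int_add', float '+' to 'float_add'."""
    expr.args = [rewrite_add(a) for a in expr.args]
    if expr.op == "+":
        expr.op = "int_add" if expr.result_type() is int else "float_add"
    return expr

tree = Expr("+", [Expr("const", typ=int), Expr("const", typ=int)])
assert rewrite_add(tree).op == "int_add"
```

With a bare list encoding, the rewriter would have to guess the operand types or defer the check to a later pass; carrying the type on each node lets it reject a mixed-type `+` at generation time.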
My personal assertion is that if you study the programs that can be
written so much better in Lisp than in C, you discover that most of the
time it is because they use code generation, which is possible because
a Lisp program is a list.
Well, if you start from scratch and take some time to think before
coding, then you can make things much simpler.
The 'emacs' example is probably a bad one, because building a
customizable text editor is very easy. It's because of the nature of a
text editor: it works on a very simple structure (a set of lines) that
you will probably never need to change to accept extensions, and
hooking is trivial through assigning keystrokes.
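As a toy illustration of why keystroke hooking makes an editor trivially customizable, here is a Python sketch (all names invented) where the editor core is just a buffer plus a keymap, and an extension is one more keymap entry:

```python
# The editor's state: a set of lines, and a table mapping keys to actions.
buffer = ["hello", "world"]
keymap = {}

def bind(key):
    """Register a function as the action for a keystroke."""
    def register(fn):
        keymap[key] = fn
        return fn
    return register

@bind("C-u")                      # "extension": one more keymap entry
def upcase_first_line():
    buffer[0] = buffer[0].upper()

def press(key):
    keymap[key]()                 # dispatch is the whole hooking mechanism

press("C-u")
assert buffer[0] == "HELLO"
```

The data structure never changes shape when extensions are added, which is the point being made about text editors.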
Rather, look at the Pliant HTTP server or database engine, and think
about building one of these using a classical development environment.
The problem to be solved is that you need both low-level efficiency and
on-the-fly code generation.
The dynamic compiler issue is also very important: Pliant can do
computations at compile time, which is also execution time, and this can
reduce the size of many programs. If you look at the Pliant source tree
compared to classical free software, you will find that the quantity of
glue code has been drastically reduced.
That seems to be an interesting idea. Can you elaborate more on that?
Are you saying that somehow you have managed to find a way of making
Pliant a combination of compiler and interpreter? A hybrid translator?
And the code that Pliant is emitting: is it machine code? VM byte code?
Or interpreted directly from source?
Dynamic compiler, posted 29 Dec 2000 at 23:09 UTC by pliant »
Absolutely, Pliant is a true compiler. It generates native processor
code, but since in Pliant compile time is also execution time (it's a
dynamic compiler), it can do everything an interpreter can do.
When a new function is compiled, it is installed directly in memory as a
Pliant object of type 'Function', so it can be executed, since it
contains a set of native processor instructions, but it can also be
freed at a later point.
This feature is used heavily by the Pliant HTTP server to provide
dynamic pages that execute at raw speed but can be changed on the fly
without restarting the server.
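The mechanism can be sketched in Python, with `compile()`/`exec()` standing in for Pliant's native code generation (the `install` helper and the handler table are illustrative assumptions, not the actual server):

```python
# Page source is compiled to a function object at run time and installed
# in a table; replacing the entry swaps the page without restarting the
# server loop.

pages = {}  # URL path -> compiled handler function

def install(path, source):
    env = {}
    exec(compile(source, path, "exec"), env)   # "compile time" happens now
    pages[path] = env["handler"]               # old handler becomes garbage

install("/hello", "def handler(): return 'v1'")
assert pages["/hello"]() == "v1"

# Change the page on the fly: no restart, just recompile and reinstall.
install("/hello", "def handler(): return 'v2'")
assert pages["/hello"]() == "v2"
```

The difference claimed for Pliant is that the installed object is native processor code rather than bytecode, so the swapped-in page runs at compiled speed.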
You can think about it the other way round: a Pliant program executes
within the compiler, so it shares data with the compiler, can submit
code to it in the middle of execution, and get back pointers to the
newly generated functions.
It goes even further: a Pliant type is also a Pliant object, so a
program can scan it to discover what fields it contains, check whether
it has such and such a method, assign properties to it, and so on. This
is what the database engine does, and you probably start to see how
powerful this is, and how I managed to build a whole self-contained
system within a single megabyte.
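Python's introspection gives a rough analogue of types being ordinary objects; the `Person` class and the schema derivation below are invented to show how an engine might scan a type for its fields and methods:

```python
# A type is an object the program can query at run time.

class Person:
    name: str
    age: int
    def greet(self):
        return f"hi, {self.name}"

fields = Person.__annotations__            # declared fields with their types
assert fields == {"name": str, "age": int}

# Does the type carry such and such a method?
assert callable(getattr(Person, "greet", None))

# From the field list an engine could derive, say, a storage schema.
schema = ", ".join(f"{n} {t.__name__}" for n, t in fields.items())
assert schema == "name str, age int"
```

A database engine built this way needs no per-type boilerplate: it derives storage and access code from the type object itself.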
Dynamic Compiler, posted 30 Dec 2000 at 00:01 UTC by nymia »
I think I'm still confused about what you meant by this statement:
"It generates native processor code, but since in Pliant, compile time
is also execution time (it's a dynamic compiler), it can do everything
an interpreter can do."
The issue with that is that you're going to have problems with the
linker. Since you're emitting machine code on the fly while interpreting
the source stream, external references will have to be sacrificed,
unless Pliant has its own dynamic symbol table for them.
Since Pliant is a true compiler, how does it handle the linking and
loading of objects? Is the format proprietary or standard?
When you mentioned this statement:
"When a new function is compiled, it is installed directly in memory
as a Pliant object of type 'Function', so it can be executed, since it
contains a set of native processor instructions, but it can also be
freed at a later point."
It gave me the idea that Pliant is an environment (should I say, similar
to Java?) where it can jump straight into a memory location containing
the starting point of a stream of machine code. Later, when execution
finishes, it returns to the Pliant environment. I'm guessing that
functions are translated into machine code, but can then only be loaded
and executed inside Pliant. Is that correct? That could probably explain
why compiled objects are small: they are handled inside the Pliant
environment.
Anyway, Pliant is an interesting implementation and I'm sure I'll look
into how it was implemented. Thanks.
Linker, posted 30 Dec 2000 at 01:09 UTC by pliant »
Pliant uses no linker. It builds code in memory.
So, from Pliant, you can load an external DLL and execute some of its
functions, but you cannot load a Pliant function from another
application. All you could do is load Pliant, then ask it to compile
some code, and ask it to provide you with the entry point of some
function.
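The "load an external DLL and call its entry points" direction can be sketched with Python's `ctypes`. This assumes a Unix-like system where the C math library can be located; note that the signature of each entry point has to be declared by hand, since a raw DLL carries no type information:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (platform-dependent; assumed present).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the entry point's signature by hand: no linker, no header files.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

assert libm.sqrt(9.0) == 3.0
```

The reverse direction is the asymmetry described above: the DLL's entry points live in a standard symbol table, while dynamically built functions exist only inside the running environment that compiled them.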
You seem to have been programming for too long, so you have started to
think that the classical development tools (a blind compiler followed by
a linker) are natural. They are not. They made sense when computers had
very little memory and were very slow, but they don't any more, because
nowadays we need flexibility, and we can get it at no execution-speed
cost using a dynamic compiler.
Also, there is a way in Pliant to avoid recompiling everything each time
a program starts, but it's rather like a core dump, and it's always
optional. Pliant puts all the process pages in a file, and will be able
to reload the whole thing very fast next time (in fact, it's even much
faster than loading a set of shared libraries).
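A loose Python analogue of this optional dump-and-reload trick is `marshal`, which can serialize an already-compiled code object so a later run skips parsing and compilation (the cache-file layout here is invented; Pliant dumps raw process pages rather than bytecode):

```python
import marshal
import os
import tempfile

source = "def fib(n):\n    return n if n < 2 else fib(n-1) + fib(n-2)\n"
code = compile(source, "<cached>", "exec")

# The "dump": write the compiled object out once.
path = os.path.join(tempfile.mkdtemp(), "fib.bin")
with open(path, "wb") as f:
    f.write(marshal.dumps(code))

# The fast reload: no parsing or compiling, just deserialize and run.
with open(path, "rb") as f:
    reloaded = marshal.loads(f.read())
env = {}
exec(reloaded, env)
assert env["fib"](10) == 55
```

This is the same trade-off described above: the dump is an opaque snapshot tied to one implementation, which is why it stays optional.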
I hate to bust your bubble, but this sounds like FORTH to me. 8-)
It's an old idea from the '70s, and it's a great idea. In fact, I still
use FORTH for lots of things.
If I gather correctly from what you are saying, your "function" becomes
a "word" and your library becomes the equivalent of FORTH's vocabulary.
Again, this is nothing groundbreaking, but I am glad to see it put into
practice.
Same as Forth, posted 30 Dec 2000 at 17:13 UTC by pliant »
If you don't go deep into the details, then many new systems look just
like the old ones. But when you start using them for true production,
you can tell the difference.
I think you know where I'm coming from.
Anyway, about the linker: I think you may have some ideas about static
and dynamic linking with respect to intermediate languages. Have you
ever thought of developing something that would allow language A to
call a function written in language B? The reason I bring this up is
that language development seems to focus mostly on syntax, semantics and
runtime, and there seems to be little interest in work that would let
languages using non-standard symbols participate. For example,
programming languages using non-English syntax would be able to call
English and other symbolic languages as well. One candidate for it could
be quadruples or triples when generating assembly instructions.
You may say COM-like solutions would be the way to go, but in my opinion
they're too bulky and bloated.
Another would be the creation of an object format containing a list of
entry points and names written in ASCII. That would make linking,
loading and probably relocation a lot simpler. If it ever becomes real,
it would look like a file containing readable and non-readable parts.
Though I'm not sure whether anything mentioned here is new, does it
sound reasonable?
Overall, I think your on-the-fly construction of machine code in memory
is a good implementation, because it may provide a solution to what I'm
trying to figure out.