Older blog entries for nuncanada (starting at number 12)

Congratulations to John Cox (niceguyeddie)
Xaraya's leader got a well-deserved interview in php|architect, commenting on the (in?)famous Nukes' past and future.

About me:
I dropped out of my math undergraduate program with five classes left to finish; I couldn't stand the lack of purposefulness, and the boring differential equations classes were the finishing incentive I needed.
I really like puzzles, but I don't see much point in math as a job; it would be like being a professional chess player... It's interesting, but what's the point?
Worse, computers are soon going to be the best chess players around, and not much later they will also be the best mathematicians...

Code doesn't want only to be open; it wants to be modular.

The Open Source movement is missing better tools to enable code reuse. We have mostly been copying what's out there in closed source. In the closed-source environment, code reuse is a secondary problem, while it's crucial when you have open source.

Right now languages seem to be heading towards problem-area specialization (Perl, PHP, Prolog...). That's one way to advance quickly in a given area, but still, wouldn't it be better if all area-specific languages were just local dialects of a mother language?

Some interoperation between languages is already going on between PHP and Java, and has long existed between C and almost anything. This seems to be a trend of dividing work between languages, so you can use the best tool for each particular job.

I think this is the future of programming. The usual saying 'use the best tool for each job' will mean more than it did before, as we will have problem segments being solved by different languages.

Now comes the part about code being modular: if a particular problem's local language is 'simple' enough that its programs can be trivially evaluated into equivalence classes, then the 'I think my way is best' attitude will be overthrown. The problem is constructing such languages, or a mother-tongue compiler...

I kept reading papers about open source. 'The Political Economy of Open Source Software' explained to me what I was thinking (or think I was) about OSS being more than a gift economy, besides a lot of other subjects concerning OSS motivations, organization, etc. It's an awesome paper, the best I have read so far on general OSS economics.

Reading it makes me think that all 'infrastructure' software will sooner or later be OSS; the possibility of commercial profit will remain only in superstructure (specialized or niche) software...

Researching open source, I found something that seems important to understanding it but is left in the locker room by the Linux zealots:


Innovation? New? No, it's just another copy of the same old stuff.

OLD stuff.

Compare program development on Linux with Microsoft Visual Studio or one of the IBM Java/web toolkits. Linux's success may indeed be the single strongest argument for my thesis: The excitement generated by a clone of a decades-old operating system demonstrates the void that the systems software research community has failed to fill.

Besides, Linux's cleverness is not in the software, but in the development model, hardly a triumph of academic CS (especially software engineering) by any measure.

from 'Systems Software Research is Irrelevant' by Rob Pike

20 Oct 2003 (updated 20 Oct 2003 at 23:13 UTC) »

Success in Open Source: Technical or Social Merits?

The open source community regards itself as a 'meritocracy', a system where status is based on merit. But what kind of merit? Technical? Between two competing open-source projects, which one is expected to 'survive': the one with the most features, or the one with the better design?

PHP-Nuke's still-deep roots among the user base seem to imply that better architecture is not that important. Its competitors, PostNuke and XOOPS, are better designed (and the soon-to-appear Xaraya even more so, IMO) but still don't seem to have reached its number of Google-indexed pages (how do you compare user bases in open source software? Can we trust download statistics?). One possible reason could be that the user base consists mainly of non-programmers who can't evaluate the code base.

But another example seems to imply that is not the case: open-source Java object/relational persistence libraries, OJB vs. Hibernate.

In many comments ( 1, 2), the dispute is portrayed as OJB being the design-guided library (which might make it less practical) and Hibernate the functionality-driven, popular one.

One of the outstanding features of OJB is its Criteria API: it provides a clean object-based abstraction for building SQL-like queries, in contrast to Hibernate's SQL lookalike, HQL...

If developers were really interested in (and informed about) the technical matter of which is the best way to abstract SQL, they would choose OJB's approach instead of Hibernate's. Still, Hibernate is more popular, and recently implemented a Criteria API too.
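To make the contrast concrete, here is a toy sketch of the two query styles. The `Criteria` class below is NOT the real OJB API (nor is the string the real HQL grammar); it is only a minimal, self-contained illustration of why a composable object API is easier to build up and inspect programmatically than an opaque query string.

```java
import java.util.ArrayList;
import java.util.List;

public class QueryStyles {

    // Criteria style: conditions are objects, so they can be composed,
    // inspected, and validated before any query text exists.
    static class Criteria {
        private final List<String> conditions = new ArrayList<>();

        Criteria addEqualTo(String field, String value) {
            conditions.add(field + " = '" + value + "'");
            return this; // chainable, as criteria APIs usually are
        }

        Criteria addGreaterThan(String field, int value) {
            conditions.add(field + " > " + value);
            return this;
        }

        String toSql(String table) {
            return "SELECT * FROM " + table
                 + (conditions.isEmpty() ? "" : " WHERE " + String.join(" AND ", conditions));
        }
    }

    public static void main(String[] args) {
        // Criteria style: the query is data, assembled piece by piece.
        Criteria c = new Criteria()
            .addEqualTo("status", "active")
            .addGreaterThan("age", 30);
        System.out.println(c.toSql("Person"));

        // HQL style: the query is one opaque string; typos and invalid
        // field names only surface when the string is parsed at runtime.
        String hql = "from Person p where p.status = 'active' and p.age > 30";
        System.out.println(hql);
    }
}
```

The design trade-off is visible even in this toy: the string form is quicker to write (Hibernate's "useful functionality in a timely manner"), while the object form is easier to generate, compose, and check mechanically (OJB's design-first bent).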

Although I am talking about one particular issue among the many in object/relational persistence libraries (the one I know most about), I believe it more or less reflects how the rest works, given the very different philosophies behind the projects:

Hibernate's "what matters is delivering useful functionality in a timely manner" versus OJB's dedication to design principles.

Regarding my own question, I do believe Hibernate has the better approach towards a successful open source project (social merits)!

The problem with this approach is that, later on, adding certain functionality usually becomes impossible or too resource-intensive to be feasible, forcing architectural changes (*Nukes). Once down that road, there are two options:

Keep adding only the functionality you are still able to, or make the architectural changes, which might reduce the functionality you offer until the old code complies with the new system. The latter makes it counter-productive to add new features, as these may have to be redone, and it steals time away from adding features towards remaking the architecture (which might take a long time). And in the case of libraries/frameworks, it might alienate the user base if the supported API is going to change.

A successful project will need to deal with this phase very well once it arrives.

10 Oct 2003 (updated 17 Oct 2003 at 12:20 UTC) »

I gotta say I finally got deeply convinced by John Lim's post on his weblog, 'Is Java more scalable than PHP?'

What it got me wondering is: how, then, did Java become the de facto standard for big corporations' web applications?

The idea behind developing Java was 'write once, run anywhere'. But for those with lots of money (big CMSes, for example), it's hardly a problem if a solution works only on very specific servers; probably a lot more money will be spent on development anyway... So the major motto behind Java doesn't help much in this case...

Besides, Java solutions are built around shared in-memory state, which makes them really hard to scale, adding more complexity on top of that.

PHP seems to take the right attack on this problem, letting the sharing happen in databases or third-party persistence servers... So what is done in PHP can scale linearly...

I think Java is much better suited to its original intention: to run everywhere. For example, a nice editor on the web, where you would need to keep big text files in memory (autosave, automatic grammar checks, etc.); PHP solutions there would probably be a bunch of hacks, or impossible...

I wonder if the reason for Java in corporate web services isn't captured by a quote from Dijkstra: "Complexity sells better and the market pulls in this direction".


Declarative languages seem to me to be the future of programming languages. Some things in Prolog, for example, are as simple as defining what you want. Optimizing is hell, but I would like to see a language which separated logic from control. Control (types/a better algorithm) would then be a second phase in constructing the program, as it doesn't depend on the underlying logic anyway...
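The logic/control separation can be illustrated even in an imperative language (sketched in Java here for concreteness, not Prolog): the logical specification, "is x in the sorted array a?", stays fixed, while the control strategy that computes the answer is swappable, exactly the kind of second-phase choice described above.

```java
public class LogicVsControl {

    // Control strategy 1: naive linear scan. Correct for any array.
    static boolean containsLinear(int[] a, int x) {
        for (int v : a) {
            if (v == x) return true;
        }
        return false;
    }

    // Control strategy 2: binary search. Same logic, different control,
    // valid only under the extra assumption that the array is sorted.
    static boolean containsBinary(int[] a, int x) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1; // unsigned shift avoids overflow
            if (a[mid] == x) return true;
            if (a[mid] < x) lo = mid + 1; else hi = mid - 1;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 7, 11, 13};
        // Both strategies agree on every query: the logic is unchanged,
        // only the control (and its cost) differs.
        for (int x = 0; x < 15; x++) {
            if (containsLinear(sorted, x) != containsBinary(sorted, x)) {
                throw new AssertionError("strategies disagree at " + x);
            }
        }
        System.out.println("both control strategies implement the same logic");
    }
}
```

In a language that truly separated the two phases, only the first method's meaning would be written by the programmer; picking the second (or something better) would be the compiler's, or a later pass's, job.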

PHP deeply needs some kind of persistent server

For bigger applications this becomes a necessity: the resources (memory and cycles) spent just on loading the code are already a barrier. Splitting up functions and methods (exactly that: decoupling methods from their respective classes) so they are loaded only when necessary is the best way out without a persistent server.

Still, the best solution would be a persistent server, for things like holding the database connection, keeping translations loaded in memory for efficiency, and handling most of what is called the 'core' in the PHP application servers out there...

There is Vulcan Logic's SRM, but the problem is that we need something supported by the main PHP extensions, to ensure that every configuration out there supports it; otherwise we would have to develop for two different platforms, which probably wouldn't work...
20 May 2003 (updated 20 May 2003 at 14:22 UTC) »
Quoting Dijkstra:

"Complexity sells better and the market pulls in this direction" ... "It is time to unmask the computing community as a Secret Society for the Creation and Preservation of Artificial Complexity.

And then we have the software engineers, who only mention formal methods in order to throw suspicion on them. In short, we should not expect too much support from the computing community at large.

And from the mathematical community I have learned not to expect too much support either, as informality is the hallmark of the Mathematical Guild, whose members -- like poor programmers -- derive their intellectual excitement from not quite knowing what they are doing and prefer to be thrilled by the marvel of the human mind (in particular their own). For them, the Dream of Leibniz is a Nightmare.

In summary, we are on our own."

17 May 2003 (updated 17 May 2003 at 20:52 UTC) »
When Gödel proved his incompleteness theorems about all the main logicist theories of the time, he showed that the traditional theories believed to be good foundations for mathematics could not even fully handle arithmetic.
This led most to think, as quoted from this paper, that "Godel's incompleteness result showed the impossibility of Hilbert's program of reducing mathematics to logic and by implication the impossibility of reducing computing to logic."
This statement is wrong, and the paper has some other visible errors too; still, that's how mathematicians felt and still feel today.
There are still some researchers arguing that Gödel's results do not prove Hilbert's program impossible.
But I think the only way now to convince others is not to argue whether it is possible or not; it is to find a theory which doesn't fall prey to Gödel's results...

