A modest proposal: integrating "desktop" technologies into the base GNU system

Posted 29 Nov 2000 at 20:47 UTC by RyanMuldoon

A number of useful libraries have been developed for GNOME. However, some of them would become even more powerful if integrated into the base GNU system. I wrote this article to briefly examine what could be possible if this were done. It is mainly meant to spark some discussion as to whether this would be a worthwhile endeavour.

The GNU system represents an enormous accomplishment: combined with the Linux kernel, it provides a complete Free computing environment. It encompasses basic file utilities, compilers, shells, server daemons, and desktop components. The GNU system has been developed over nearly 20 years. In that time, a great deal has happened in terms of general computing technologies. Arguably, the most interesting advancements have been made in the desktop component of the GNU system - GNOME.

One of GNOME's primary goals is to create a development framework that makes the programmer's life easier. This is done in two ways: by providing libraries for conveniently adding various features and, more recently, by creating a set of components that are easily interchanged in larger applications. This should be a goal of the GNU platform in general. For Free platforms to really succeed, there needs to be a compelling technical reason for both users and developers to choose them over proprietary solutions. The key is integration: both at the application level and the services level.

The GNU platform should consider several of the base GNOME libraries for GNU-wide usage. The most obvious are Glib, gconf, gnome-vfs, and bonobo/OAF. None of these libraries require GNOME as a whole to be installed, and any X requirements are planned to be eliminated.

Glib 2.0 will provide an object system, portable data types, generally useful data structures, and other functionality. No doubt much of this is duplicated numerous times in various applications of the GNU system. Consolidating it into a single library will benefit all of them: not only will it lower memory usage, but it provides a single location for code improvement that benefits a great range of applications and utilities. GConf provides a simple API for application configuration storage, abstracting the back-end storage method. Employing it in GNU programs would allow configuration methods to be consolidated. Not only would this make it possible to greatly simplify the /etc configuration mess and the array of dotfiles in home directories, as Apple's OS X was able to do, but it would also allow generic configuration tools like Linuxconf to be much more successful.
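
To give a rough idea of the sort of API involved, here is a minimal sketch using the gconf_client_* calls (the key name is made up, and error handling is omitted):

#include <glib.h>
#include <gconf/gconf-client.h>

/* Hypothetical key; a real application would pick its own namespace. */
#define WRAP_KEY "/apps/myeditor/wrap_lines"

void save_and_restore_pref(void)
{
    GConfClient *client = gconf_client_get_default();
    gboolean wrap;

    /* Store a preference; the back-end (XML files, a database, ...) is hidden. */
    gconf_client_set_bool(client, WRAP_KEY, TRUE, NULL);

    /* Read it back later, possibly from another process. */
    wrap = gconf_client_get_bool(client, WRAP_KEY, NULL);
    g_print("wrap lines: %s\n", wrap ? "yes" : "no");
}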

Even just the integration of Glib and Gnome-VFS would be a huge win for the GNU platform: allowing programs to take a URI as a parameter rather than just a filename would immediately make them "Internet-aware." Even small things, like being able to run "wc" on a web page or grep it for a regular expression, would suddenly become one-liners. Scripting would become much more powerful, making simple things simple for beginning programmers. Coupled with the integration of search capabilities, the GNU platform would be able to cleanly integrate remote documents with local ones. For example, it would become possible to search for a file on Napster or Gnutella as if it were already a local file.
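
For illustration, a minimal sketch of a URI-aware read through gnome-vfs (assuming the current gnome_vfs_* calls; error handling trimmed):

#include <libgnomevfs/gnome-vfs.h>

/* Count the bytes in a document named by URI - local file, http, ftp, ... */
GnomeVFSFileSize count_bytes(const char *uri)
{
    GnomeVFSHandle *handle;
    GnomeVFSFileSize total = 0, bytes_read;
    char buffer[4096];

    gnome_vfs_init();
    if (gnome_vfs_open(&handle, uri, GNOME_VFS_OPEN_READ) != GNOME_VFS_OK)
        return 0;
    while (gnome_vfs_read(handle, buffer, sizeof(buffer), &bytes_read) == GNOME_VFS_OK)
        total += bytes_read;
    gnome_vfs_close(handle);
    return total;
}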

Adding support for the aforementioned technologies would create similar benefits. The integration of GConf at the system level would allow for a much cleaner configuration system for the whole GNU operating environment. Bonobo and OAF are likely to be the most complicated of the GNOME technologies, but they have the potential for huge advantages as well. A convenient component technology is useful for almost any nontrivial program - not just GUI applications. With the component architecture of bonobo and OAF, it would be possible to create more powerful server daemons. A bonobo version of Apache could load and unload components to handle PHP, perl, or anything else as needed, without having to recompile. Upgrading software could, in some instances, be reduced to updating some of the components of the application rather than the whole thing.

The power of these new technologies is enormous. What is important to consider is at which level of the GNU platform they should reside. The lower down in the hierarchy, the greater the general benefit to the system. While these technologies were developed for the desktop, they have been designed with robustness in mind. It seems like a waste to simply use them at the desktop level when more can be done to improve the GNU system as a whole.


Reservations., posted 29 Nov 2000 at 23:21 UTC by egnor » (Journeyer)

Counting the number of words in a Web page is already a one-liner!

wget -O- http://advogato.org/ | wc -w

Now, I do have similar concerns about the disconnection between "the GNOME operating system" and "the (GNU) UNIX operating system". Specifically, I would very much like to avoid the situation present in Windows, which has several different "filesystem" layers, and the user is unable to predict whether any given "file" will be accessible to a particular application. (Some of them handle .lnk shortcuts, some don't; many handle URLs, most don't; some see the Desktop as the root of the hierarchy, others see drives like C: at the root of the hierarchy... it's a confusing mess.)

However, I'd like to see people think clearly before blithely crufting features onto the Unix model. Specifically, I'd like to make GUIs more like Unix, rather than making Unix more like a GUI.

Unix is built on the concept of simple tools that can be flexibly combined; I'm not at all sure that adding GNOME-VFS to glibc (or whatever other form of integration you might propose) advances that mission. It certainly makes the core of Unix more *complex*, and that's almost never a good thing. (Imagine auditing some program for security when any call to fopen() could cause an HTTP connection to any remote host, depending on the "filename" string!)

Now, there are some things which lie outside the ability of Unix' model to gracefully handle. (A GUI cannot be readily modeled with pipes, for example; there are too many back-and-forth interactions between too many objects.) I would like to see some thoughtful exploration of a model that maintains the flexibility and modularity that makes Unix great while expanding its scope of coverage. Plan 9 is very interesting in this regard, and I'm told the HURD also makes some effort in this direction.

I fear, however, that throwing a bunch of "magic features" and "object models" into the core operating system will simply muddy the conceptual waters even more than they are already, and that we will have traded the long-term viability of our operating system in exchange for a few whizzy features.

The basic Unix model remains intact (even if people mock it regularly) even though, 25 years ago, its designers had no concept of the kinds of programming tasks we'd be facing now. Will the basic GNOME model (if indeed there is one) survive the next 25 years and endure an equally large number of unthinkable revolutions in computing? Perhaps. If not, do we really want to make it a core part of "the GNU operating system" and encourage everyone to use it instead?

reply to egnor, posted 30 Nov 2000 at 00:14 UTC by RyanMuldoon » (Journeyer)

On the whole, I completely agree with your comments. (I was also waiting for someone to bring up a wget one-liner.....except that I was thinking more in general terms....it would work for more than just web pages.) The UNIX model has definitely shown itself to be well-architected. However, my primary concern is that if the base is left untouched, then in order to keep innovating new features, protocols, and so on, we will be forced to add more and more layers of complexity. This leaves the programmer with many different programming environments to cope with, and it also harms the user, in that behavior that can be expected at the GUI level does not exist at the shell level. One thing that Apple did with OS X was create a new environment on top of BSD. This nicely abstracts the UNIX core away from the programmer, offering him/her a convenient programming environment. It also gave Apple the opportunity to clean up the configuration system so that it is all in XML files. This helps config programs standardize on one format, while still allowing users to tweak the files by hand. These seem like smart moves. I very much want to see Free systems succeed in the long run. To do that, the UNIX model needs to be evaluated for what works and what may need changing. I don't want systems like GNOME or KDE to end up being kludges hacked onto UNIX. If this ends up being the case, performance will suffer, and the OS will become more convoluted, not less so.

parallel thoughts, posted 30 Nov 2000 at 01:53 UTC by apgarcia » (Journeyer)

this article has much the same gist as a speech by miguel de icaza: http://www.helixcode.com/~miguel/bongo-bong.html

Irrational reaction, posted 30 Nov 2000 at 05:38 UTC by egnor » (Journeyer)

I apologize in advance because this doesn't make very much sense.

Unix is full of nasty evil ugly warts, and yet I like it an awful lot. Windows is incredibly more capable in many respects, offering vastly more sophisticated tools to the programmer and supporting a smorgasbord of wonderful applications for the end user, and yet I find it absolutely despicable. (Hold the flames, please.) This has nothing to do with open source; I felt this way even when I was working on commercial Unix systems.

I don't think I'm the only one that feels this way, either (or I wouldn't bother saying anything). Why? I think the simple answers ("Windows crashes more") are wrong, but I don't know what the right answers are.

When I read articles like Miguel de Icaza's (cited above -- thanks), I feel like he wants to make Unix more like Windows in all sorts of ways I can't describe but I know I don't like. When he spells out his vision of a wonderfully integrated future full of code-sharing and CORBA and Bonobo and GUIs and XML, I feel an indescribable sense of dread. At the same time, I can't disagree with anything he's saying; those old Unix "edit the text file and HUP the daemon" interfaces *are* terrible.

I think it has to do with complexity more than anything else. In a "classic" Unix system, you know what the "rules" are. Programs are processes; you can start them and you can kill them; they can read files, which exist in the filesystem. You can read and write those files yourself, you can tar them up and move them out or delete them. If you've killed a process, the associated subsystem is dead. If you've moved all the configuration files from one machine to another, then you expect the processes on the other machine to behave the same way. Processes communicate in a limited number of ways, and it's possible to understand all of them.

The GNOME dream world is a nightmare of complexity by comparison (as is a modern Windows system). "Subsystems" are often implemented with shared libraries loaded by half the processes on the system. Processes use mysterious CORBA interfaces to talk to each other in ways you can't even begin to understand. If I kill a process, what ramifications does that have? Who knows. Will my entire desktop become unstable? Quite possible. Where exactly is configuration information stored? In some mystery database, perhaps.

Now, I can hear the screaming already. "No, it's not a mystery database, it's a bunch of perfectly readable XML files." "No, it's not just Windows! Windows isn't nearly this unified." "This all runs on *top* of the existing system, we're not taking away anything from you."

But that's not the point. The point is that I can no longer separate the components that make up a working system. My terrible fear is that I'll end up in a Windows-like state where things are always subtly broken in some unique way that I can't quite understand and can't fix without reinstalling the world, where the front end that I see on my screen is almost totally unrelated to the actual programs I see running when I type "ps".

(By the way, Apple's use of XML "configuration files" pays only lip service to XML; they're actually just name-value pairs that happen to use an XML schema for storage. It's really quite sad.)

But hey, I'm probably just an old fogie, too set in my CLI-ridden ways to change. I imagine most of you won't even understand what I'm saying, let alone agree...

Well, sure, posted 30 Nov 2000 at 06:19 UTC by hp » (Master)

GLib, GConf, Bonobo, etc. are already designed to be used outside GNOME. GLib predates GNOME substantially. In the case of GLib it's not widely used simply because people don't know about it or they are hard-core old C hackers and don't want to learn. GLib 2.0 introduces some stuff that's much harder to replicate on your own (e.g. g_spawn_*, unicode handling, object system), and removes some common complaints such as out-of-memory handling, so while avoiding GLib 1.2 was questionable, avoiding 2.0 will be flat-out boneheaded unless your app has very, very special requirements. Almost all newly-written C code should use GLib or equivalent. Though I don't know of an equivalent that's as good.
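
To give a taste of the g_spawn_* convenience (a sketch only, since GLib 2.0 isn't released yet):

#include <glib.h>

/* Run a command and capture its output, without hand-rolling fork/exec/pipe. */
void show_uptime(void)
{
    gchar *output = NULL;
    GError *error = NULL;

    if (g_spawn_command_line_sync("uptime", &output, NULL, NULL, &error)) {
        g_print("%s", output);
        g_free(output);
    } else {
        g_print("spawn failed: %s\n", error->message);
        g_error_free(error);
    }
}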

For GConf, Bonobo, etc. the lack of usage is mostly caused by the alpha nature of the libraries; they are simply not stable yet.

another reply....., posted 30 Nov 2000 at 06:39 UTC by RyanMuldoon » (Journeyer)

First, a quick comment to hp: I'm glad that you agree, but I was pretty sure that you and other GNOME folks would. The people I was trying to address are those that either just use GNOME idly, or those that actually maintain some of the core GNU programs. Gnome-vfs, bonobo, GConf, et al. are in early stages, but could probably benefit from input from a wider range of potential developers. It seems smarter to build in needed functionality now than to tack it on later.

Egnor: I still mostly agree with you. ;-) I like UNIX and all its quirks. I love the simplicity. However, I don't know how much longer the basic concepts can continue. A lot of what the GNOME folks are developing now is really just UNIX taken to the next logical step. Rather than everything being a file, everything is a URI. Network transparency could be more complete. Rather than just having small programs that do one thing well that you script together using pipes, we could have components with well-defined interfaces. This brings (more powerful) piping into the GUI world as well as the CLI world.

As for configuration, I would love to see some more consistency and simplicity. While I am no big fan of mystery databases, I am not a fan of sendmail.cf either. I also hate the fact that my configuration is scattered across many directories, all over the place, each file with its own config language. It is just annoying. While OS X's schema may not be ideal, that doesn't mean they aren't headed in the right direction.

Basically, my feeling is that while I like UNIX and all of its flaws, I love the idea of Free software taking computing to the next level. I choose to use Free software because I like the ethical framework behind it, and I find that it suits my needs well. I do give up a little to be able to do that though - Windows has Free software beaten in several areas. I want to see Free software end up being the technological leader. I want it to drive easy, powerful desktops and servers. I don't think that such a thing can happen if we stick to the traditional UNIX model.

Gnome libraries are bad, mmm-kay, posted 30 Nov 2000 at 06:45 UTC by aaronl » (Master)

Let's face the fact that Gnome is simply a poor clone of Microsoft Windows. That's fine. Let people who really want such an abomination use Gnome. One good thing about the Gnome libraries is that you don't have to use them. I have no Gnome libraries installed on my system. With integration into the GNU system, this would have to change (or more likely, I would switch to FreeBSD). This would take away flexibility from users who want a compatible and customizable Unix system.

glib: First off, glib is NOT a gnome library. It is simply a reimplementation of basic ANSI libc functions and on top of that is an extra dependency.

gnome-vfs: And speaking of Unix, doing this would completely violate all Unix philosophy. As a Unix user I want the system to take my commands literally, not start interpreting things as URLs. If I want to operate on a file from an HTTP server, I will use a pipe from wget. Not only would this be very confusing and illogical for advanced Unix users, but it would probably make the system POSIX-incompatible. I think this is a Bad Thing. Right Thing: use EFS under Emacs.

gconf: I don't really understand the need for this. I hear it's an implementation of a registry. What's wrong with traditional Unix configuration files? Well, a lot. But parsing a configuration file is trivial; why do we need a library to do this? And above all, why should such a library be standard? If I could choose a standard library for configuration files, it would be a scheme interpreter, to provide for extensibility. But AFAIK GConf's file format is much less powerful than a programming language.

CORBA: See egnor's second post. Unix is based on simplicity. Many "subsystems" communicating in unknown ways is not as simple as a pipe with line-delimited text data!

Bonobo: See http://www.advogato.org/person/aaronl/diary.html?start=57

I was very surprised by Miguel de Icaza's fundamental misunderstanding of shared libraries in the recent essay presented. He criticized Apache and Samba for not using any non-standard libraries. So, they must be duplicating code, right? Wrong! Samba serves SMB. Apache serves HTTP. They contain code to do this, and do it using the standard socket functions in libc. What code should they share? Miguel seems to think that most code should go in shared libraries. I disagree and think that applications should be simple and not overlap in functionality (at least on Unix). So, what an application actually does should be in its code. If all applications do essentially different things, why should they be sharing large amounts of code?

Hello? McFly??, posted 30 Nov 2000 at 07:32 UTC by bratsche » (Master)

aaronl continues in his usual clueless style, I see. While I really don't agree with RyanMuldoon's article, I can't help but point out why aaronl's points are completely stupid. This is not the first time I have seen him demonstrate that he really doesn't understand something he's bashing. He has no idea what he's talking about, but he's hell-bent on bad-mouthing GNOME with his foolishness. If he's going to disagree with Ryan, that's fine with me, but he should at least be correct in what he is saying.

Glib: The so-called "re-implementation of ANSI libc" is actually a feature for allowing us, the application developers, to write applications portably across Unix machines. Who wants to deal with portability crap in their apps? However, Glib is not limited to that, and hp has already explained above how Glib is really cool and stuff.

gnome-vfs: I honestly have no idea what aaronl's problem is with gnome-vfs. He isn't very clear on what is so bad about it. How is EFS really so much better? And why does he think that advanced UNIX users would be confused by this?

gconf: And here he admits that he doesn't understand what it's used for. If he would bother to read docs, he would quickly understand how GConf differs from a registry. GConf can tell all running applications when a certain value changes somewhere. This service works across the network, affecting all login sessions for a single user. Does aaronl's traditional Unix config file system do this? He refers to GConf's "file format", so he obviously has no idea about how the backend is totally replaceable.
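
Roughly, the notification side looks like this (a sketch, assuming the gconf_client_notify_add() interface; the key names are made up):

#include <glib.h>
#include <gconf/gconf-client.h>

/* Runs in every application watching this key, whenever any process
   (or a configuration tool) changes the value. */
static void wrap_changed(GConfClient *client, guint cnxn_id,
                         GConfEntry *entry, gpointer user_data)
{
    g_print("preference changed, rereading configuration\n");
}

void watch_prefs(void)
{
    GConfClient *client = gconf_client_get_default();

    gconf_client_add_dir(client, "/apps/myeditor",
                         GCONF_CLIENT_PRELOAD_NONE, NULL);
    gconf_client_notify_add(client, "/apps/myeditor/wrap_lines",
                            wrap_changed, NULL, NULL, NULL);
}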

Bonobo: I think I've responded to this message once, long ago, in one of my diary entries.

aaronl, please support your opinions with more than your own misconceptions of software you've just decided not to like for one reason or another. You parade around and proudly tell everyone how you don't have any GNOME libraries installed. You seem like the kind of person who might go, "Hey, cool.. this is a great XML library. Oh, shit, GNOME uses it?? I need to get rid of it!!" You demonstrate with each post that you haven't the foggiest notion of what any of GNOME's libraries do, and I know this is not the first time someone has told you this. Therefore, I think what you're doing is rather unethical; I think you should either research what you want to say, or you should just stop spreading slander and false information about GNOME.

What is integrated?, posted 30 Nov 2000 at 08:53 UTC by bagder » (Master)

    even more powerful if integrated into the base GNU system

AFAIK, most Linux distributions today come with the option to install GNOME. I figure in most cases that means you get all the GNOME libs and hundreds of utilities installed. Isn't that "integrated in the GNU system"?

In exactly what way should they be "more powerfully" integrated? Are you saying they should be in the kernel?

We write portable unix programs these days; no matter where you put your gnome libs, "integrated into the base GNU system" or not, we have to write configure scripts and the like to detect them. I want my programs to run on non-GNU systems as well.

I really don't see how any user will gain much by getting gnome more "integrated" than it is today.

Please enlighten me, why can't we just use the libraries as we do today?

Feel free to bash unix and the unix way of doing things, but if you change that way too much it isn't unix anymore. We must not forget the unix philosophy of keeping things simple and that each tool does one thing. Turning unix into non-unix will not benefit unix. I'm not saying we can't improve it, I'm just saying we need to watch out.

GNOME is not just a GUI, posted 30 Nov 2000 at 11:01 UTC by dirtyrat » (Journeyer)

I too don't agree fully with RyanMuldoon's article. I can't comment on Bonobo since I've never played with it, but I can't see what problems anyone would have with glib and gconf.

glib has some fantastic stuff in there: date arithmetic, linked lists, data types of guaranteed size - the list is long and full. Have a look in /usr/include/glib.h if you don't believe me!
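
A small taste of how little code these take (a sketch, nothing exotic):

#include <glib.h>

void glib_goodies(void)
{
    GList *names = NULL;
    GDate *date;
    gchar buf[64];

    /* Linked lists without writing yet another list implementation. */
    names = g_list_append(names, "alice");
    names = g_list_append(names, "bob");
    g_print("%u names\n", g_list_length(names));
    g_list_free(names);

    /* Date arithmetic: 90 days on from a given date. */
    date = g_date_new_dmy(30, G_DATE_NOVEMBER, 2000);
    g_date_add_days(date, 90);
    g_date_strftime(buf, sizeof(buf), "%d %B %Y", date);
    g_print("%s\n", buf);
    g_date_free(date);
}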

gconf too is a well thought out and powerful tool; I'm writing a program of which a large contributor to its power is its parser - the last thing I want to do is write another parser to read its config files! I'm not saying that gconf is right for every app, but it is second to none for the majority of programs that simply need to store the user's preferences. I don't see the problem with its nature as a registry either: just because gconf implements a registry and Windows has a registry which is seriously flawed does not imply that gconf has inherited Windows' problems.

I do have reservations about hooking gnome-vfs into libc, however: it sounds like a recipe for all kinds of `undefined behaviour' =). Not that I'm saying it couldn't be done, but I suspect that it would be much less of a trivial task than glib and gconf.

All in all, I'm not sure if it matters whether glib and gconf are implemented into libc. It is more a question of educating programmers to the fact that GNOME is more than 'just a GUI', that it encompasses all these APIs that give functionality at minimal cost to the programmer.

Why reinvent the wheel?

GLib, please, God damn it, use it ! (and the rest as appropriate also), posted 30 Nov 2000 at 11:35 UTC by hadess » (Master)

While I don't agree with RyanMuldoon on all the aspects of his article, I must say that I'm deeply annoyed that people aren't using GLib more. Just a couple of days ago, I had to patch OMTA, a nice single-user mail server, to make it work on PPC. And even more recently, this week-end in fact, I had to fix the Metatheme from GNOME to work on PPC as well. What the...! If even GNOME applications aren't using GLib, where is the world going?
On the nice side, the Rio500 utilities are using GLib for gluing the rio500 driver and the user-land code together. The result is that I didn't have to use any special tricks or #ifdef's for my walk500 program to work equally well on PPC and x86. I like that a lot, and not having to reinvent the wheel for every person wanting linked lists, doubly linked lists, and other CS class fanciness really closes the discussion.

Thanks bratsche for making aaronl shut his mouth. It seems he's got the knowledge of a Slashdot reader, no offense to them. The part that's even more worrying is that he is a contributor to (from what I can see on his personal page) many GTK+/GNOME related projects, and doesn't even know about the technologies he is using, or will be using. I don't claim to be a good programmer, but at least I know what I'm talking about... more than you, Aaron.

Going back to the biggest misunderstanding of Aaron (him again), GConf is just a library for accessing configuration. It lets the configuration medium be separated from the actual configuration. I just used the gnome-conf library for the first time this week-end (part of libgnome, it is the ancestor of GConf), and it is easy to use: "Read the boolean from that key, OK, now write this string in that key, sync, done". And the event capabilities are handy - no more polling. KDE has a similar configuration mechanism, except that it is bound to KDE or the Qt libs, AFAIK. So GConf is a "Good Thing" (tm).
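
To show what I mean, that call sequence looks roughly like this with the old gnome-config calls (a sketch, the paths are made up):

#include <libgnome/gnome-config.h>

void tweak_prefs(void)
{
    /* Read the boolean from that key (defaulting to true)... */
    gboolean beep = gnome_config_get_bool("/myapp/Sound/beep=true");

    /* ...now write this string in that key, sync, done. */
    gnome_config_set_string("/myapp/User/name", "hadess");
    gnome_config_sync();

    g_print("beep is %s\n", beep ? "on" : "off");
}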

Ryan, you forgot popt ;P

Bonobo for Dummies (tm), posted 30 Nov 2000 at 12:46 UTC by Iain » (Master)

Iain's Law #1: Anyone who says "Bonobo is against the unix way" is to be shot.

Yes, that way is small programs which communicate with each other through the magic '|' operator. Like so:

eog myporn.jpg | gnumeric

Oh wait...hmmm, how does '|' work in a graphical environment? "Badly" is the answer, I feel. How should eog present the data to gnumeric? As raw rgb data? As jpg data? As an image? Enter Bonobo and CORBA.

Bonobo and CORBA are the '|' operators of the GUI world.

Why should gnumeric have the code to understand jpgs? Why should this code then be duplicated in eog, nautilus, evolution and any other program that wishes to load jpgs? Hold on...is this not against "The unix way"?

And finally: after all aaronl's ranting about GNOME being bloated, he is now advocating that all programs should contain their own code. Well, I suppose things have finally come full circle since March 9th, 2000.

Yes do it!, posted 30 Nov 2000 at 14:32 UTC by proclus » (Master)

We are trying to port the Gnome libraries and applications to Darwin X11. Some of the problems that we face would not be problems if Gnome were integrated into the GNU base. You can see what we are doing at this link.

http://gnu-darwin.sourceforge.net

proclus

pipes vs. CORBA, posted 30 Nov 2000 at 14:55 UTC by cmm » (Journeyer)

geez, people.

of course pipes are the Unix Way. they fit together very well with the philosophy that gave us the C language.

here are the parallels: when C compiles a program, it throws out all the type information. the compiler knows lots of things, but chooses to not pass any of them on. this all is done, of course, in the name of Optimization, regardless whether you ask for it or not.

and the Unix people regard this insane eagerness to lose as much information as possible as a Good Thing.

same with pipes: you turn some data that may have some internal structure into a stupid stream of bytes, so that the other program has to interpret it. which is kinda painful and a pointless loss of cycles if the data is not just text. naturally the Unix people think it's a good thing. they are used to the pain. it makes them cool and manly and stuff.

glib and apr, posted 30 Nov 2000 at 15:23 UTC by lkcl » (Master)

there is a thread on lists.samba-tng.org in which the use of glib was mentioned. i tried to use glib for virgule's socket methods, and ripped it out within 20 minutes once i discovered that the means to do a select, in order to detect whether there was data outstanding on the socket (with a timeout of zero) did not exist.

glib i think is a neat and convenient object-orientated system that unfortunately hides too much from the developer.

yes, we need the unicode libraries, but Unicode16 not Unicode64.

for anyone who is considering doing portable hard-core network services, i recommend that you investigate the apr (apache portable runtime) instead. it is not as unix-portable-complete as the samba codebase, however funnily enough it's pretty close, _and_ it does beos and win32 (which the samba codebase does not)

apache and samba, posted 30 Nov 2000 at 15:34 UTC by lkcl » (Master)

aaron,

i have been working with the 320,000 line samba codebase since it was only 60,000 lines, six years ago. it has grown into a monster of epic proportions.

i recently investigated the apache 2.0 and 1.3 codebase because i was working with virgule.

i was ... somewhat surprised to see that yes, they too have an ap_psprintf when samba has an slprintf. they too have socket-wrapping methods, signal-wrapping methods and, get this: they too have an initgroups() wrapper for broken posix systems that don't have initgroups()!

so yes, there is a large amount of duplication, and both the samba _and_ the apache codebase could benefit from codesharing: the apr is cleaner but the samba codebase goes further [wraps all system calls with a prefix sys_.. and uses autoconf to return either dummy info or a mapping to an alternative call, e.g. sys_wait calls wait4 for NEXTSTEP os].

etc.
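
for the curious, the kind of wrapper i mean looks something like this (illustrative only, not the actual samba code; HAVE_INITGROUPS comes from an autoconf test):

#include "config.h"
#include <grp.h>
#include <unistd.h>

int sys_initgroups(const char *user, gid_t group)
{
#ifdef HAVE_INITGROUPS
    return initgroups(user, group);
#else
    /* broken posix system without initgroups(): return dummy success,
       as the real wrappers do. */
    return 0;
#endif
}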

p.s. please ignore that other guy's negative, personal remarks. hopefully next time he will criticise only what you say instead of being embarrassing.

The problem is not UNIX, it's how libraries get added in the standard library., posted 30 Nov 2000 at 16:20 UTC by nymia » (Master)

It seems that the fault is Unix again. While I do agree Unix has a share of it (actually, it's only 1%), I still see the problem of integrating recently created libraries into the core as a problem of computing itself. Every platform - whether it's Unix, Linux, Windows, Mac, BeOS, AS/400, AIX or mainframe - is burdened by this low-tech, mechanical and manual way of handling libraries. To me, the problem lies in how translators, compilers, linkers and loaders were built. They were built at a time when hardware was very simple, in an environment where desktops were just a figment of the imagination. What they did was create a faster machine for these tools and leave everything else as it was.

A well-documented example of this low-tech approach to handling libraries comes from Bjarne Stroustrup's own account of C++'s first release; to quote:

"To my mind, there really is only one contender for the title of the Worst Mistake: Release 1.0 and my first edition [Stroustrup, 1986] should have been delayed until a larger library including some fundamental classes such as singly and doubly linked lists, an associative array class, a range-checked array class, and a simple string class could have been included. The absence of those led to everybody reinventing the wheel and to an unnecessary diversity in the most fundamental classes. It also led to serious diversion of effort. In an attempt to build such fundamental classes themselves, far too many new programmers started dabbling with the 'advanced' features necessary to construct good foundation classes before they had mastered the basics of C++. Also, much effort went into the techniques and tools to deal with libraries inherently flawed by the lack of template support"

Thanks for the lively discussion.

GNOME: Not just for Unix. XP-compatibility considered beneficial, posted 30 Nov 2000 at 16:21 UTC by ztf » (Apprentice)

proclus, it would be interesting to know specifically how more "integration" would help out with your Darwin-GNOME project. (That sounds fun, although I can guarantee I won't have a chance to help. But maybe more details would spark some interest from people who can and will help.)

Cross-platform (XP) toolkits that "wrap" the underlying system calls are incredibly useful. This is shown by the fact that everyone who writes cross-platform software ends up using or writing their own wrapper library. See lkcl's comments about apr and samba. Glib also does this, as does ACE (although as a C++ class library). Netscape did this too. And back when I worked on proprietary network protocol stacks, we ended up rolling our own "wrapped" environment as well.

One of the productive threads, IMNSHO, to come out of the recent flamage on the gnome-office list about how/whether AbiWord is a "real" GNOME application was the discussion of which GNU and GNOME libraries did and did not provide sufficient cross-platform usability for the Abi developer's XP goals. [Upshot -- gettext and libxml could use some love on Win32, Glib/GTK+ will not be considered usable on Win32 until GTK+ looks native there, maybe some other points I've forgotten.]

Personally, I'd love to see GNU/GNOME become not just the Linux/Unix desktop and development environment of choice, but the cross-platform application development environment of choice. There's a long way to go before that can happen, however.

I'm also unclear on what "integration" is recommended for these libraries into a base GNU system, beyond educating developers on the wonderful goodies that glib, libxml, GConf, etc. offer and encouraging coders to reach for g_strdup() instead of strdup().

Enough rambling from me ...

reply, posted 30 Nov 2000 at 16:44 UTC by aaronl » (Master)

GLib: The one thing GLib provides that might actually be useful is linked lists. However, these are trivial to implement on your own, and doing so will only add a few lines of code. My real reservation about GLib is its monopolistic tendencies that make it anti-portability. Most people would say that an int should be an int, but GLib defines its own whole set of base types! Many of them are simply typedeffed to the standard types. I want libraries that coexist with the language standard, not ones that try to replace it. Things like g_malloc, g_free, and most of the string functions seem extremely unnecessary; they are just reimplementations of standard functions that make your code uglier (g_ g_ g_ g_ g_) and make your program less portable by tying it to GLib.

gnome-vfs: I'm not saying that gnome-vfs is bad (although it probably is), I am just saying that it does not make sense to make standard, simple calls like fopen() use it by default. Doing so would cause compatibility nightmares and create massively non-standard operations in the base system.

Scriptability and error conditions, posted 30 Nov 2000 at 17:05 UTC by dan » (Master)

A few points, mostly independent of each other

1) Iain's CORBA-is-pipes-with-types example "eog foo.jpg | gnumeric" is good. I ask: how can I achieve that using CORBA (or whatever Appropriate GNOME Technology) in as simple and succinct a manner as the pipeline shown? I don't want to have to cut and paste 200 lines of Python to connect these applications together. Pipes are very simple. So am I. I like pipes.

2) gnome-vfs, for example, introduces many new and exciting ways for filesystem calls to fail (your tar file was not actually a tar file. your nameserver has disappeared. There is a routing loop between you and the site at the other end. The ftp server at the other end is full and won't let you in. Your dialup ISP actually has nothing wrong with it but is being incredibly slow). How does an application cope if it's not expecting these things to go wrong? Note that some of these are actually policy issues: how long do you wait before timing out? do you retry? how do you report the error (stderr is not going to help you if it's going directly to ~/.xsession-errors).

Frankly, most programs are already bad enough at coping with full disks, removed removable media and stuck NFS servers that I don't hold out much hope for this being done properly.

3) Bootstrapping: give it some thought. You don't want your startup scripts depending on these neato technologies before the necessary support services for same are in place. But you probably don't want to cater for this by silently falling back to the old ways of doing things either, or it'll be massively painful to diagnose what happened if your nameserver falls over during normal operation. Especially if you have no error reporting - see point 2.

Think, McFly, what did I tell you??, posted 30 Nov 2000 at 17:09 UTC by bratsche » (Master)

GLib is much more useful than merely linked lists. It does multi-threading very well and fairly portably, and it does trees and hash tables. 2.0 provides string handling functions for Unicode/UTF-8, it provides a new object system similar to GTK's that includes the notion of 'interfaces' like those in Java, and various newer, smaller additions.
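
For instance, the hash tables take almost no code to use (a minimal sketch):

#include <glib.h>

void demo_hash(void)
{
    /* A string-keyed hash table, with no hand-rolled hashing code. */
    GHashTable *capitals = g_hash_table_new(g_str_hash, g_str_equal);

    g_hash_table_insert(capitals, "France", "Paris");
    g_hash_table_insert(capitals, "Finland", "Helsinki");

    g_print("%s\n", (char *) g_hash_table_lookup(capitals, "Finland"));
    g_hash_table_destroy(capitals);
}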

I mostly agree with what you have to say about not including gnome-vfs in system calls. I'm just disagreeing now with the way in which you present yourself and your arguments. You take the smallest little opportunities to try to jab things you don't know, like in your first sentence under gnome-vfs. This is exactly what I think is unethical about you.

glib is more than the sum of its parts, posted 30 Nov 2000 at 17:11 UTC by jlbec » (Master)

lkcl:

I'm interested in your comment that glib needed to be ripped out for lack of a select() method. First, did you not find the GSource stuff to your liking? AFAICT, that happily does poll() on any descriptor, with controllable timeouts.
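
A sketch of what I mean, going through GIOChannel and the main loop (GLib 2.0-style names for the loop calls, details hedged from memory):

#include <glib.h>

static gboolean data_ready(GIOChannel *source, GIOCondition cond, gpointer data)
{
    /* Called from the main loop whenever the descriptor becomes readable. */
    g_print("socket has data waiting\n");
    return TRUE;   /* keep watching */
}

void watch_socket(int fd)
{
    GIOChannel *channel = g_io_channel_unix_new(fd);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    g_io_add_watch(channel, G_IO_IN, data_ready, NULL);
    g_main_loop_run(loop);
}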

However, that's not the part I find important. I myself haven't used GMain/GSource yet, continuing to hand-code my main loops. Call it habit. That doesn't remove its extreme usefulness.

I wasn't all that big on glib myself either. One more dependency. Then one day I glanced at the code for some of the string functions, just to see if there was anything I could glean from it. Their g_strconcat() and my d_strconcat() were pretty much identical, modulo auto variable names. I've used glib ever since.

It's not that I'm particularly brilliant at strconcat() code. It's just that there is a well established way to do many of these convenience tasks. So why recode it? If you are a pretty decent programmer, you're just going to write the same code.

To the comment of 'glib as libc', aaronl would do well to remember all the autoconf tests that exist to see which POSIX/ANSI/GNU/XOPEN/etc functions are there. There's usually a few missing. With glib, all the functions are always there. And I haven't yet really encountered a platform where glib doesn't build. I used it on AIX. Tor is using it on win32. I think that BeOS is about to be fully merged, and someone is hacking at VMS.

The other stuff, yes 'n no. I can buy the argument that a URL-aware fopen() adds real security complexity. I can see how Bonobo/CORBA can be complex and world-hiding (while I also see how mind-bogglingly useful they can be). I've been part of the Gnome world for a while now, and I still have to ask which process I can kill, and which is really just 'start over' time.

But there is really no excuse for missing glib. For those of you who say "Yeah, I can build it in XYZ, but it doesn't ship by default," what do you think gets it shipping? Your app, which everyone uses. Then they call the maker of XYZ, and say "Ship glib, dammit!"

g_free, portability, etc, posted 30 Nov 2000 at 17:19 UTC by nullity » (Master)

aaronl,

The reason people advocate glib is primarily portability and maintainability. Whatever other benefits it provides only save work for the person writing the initial code. In a codebase of any significant size, maintainability becomes a critical feature. One common "maintenance" aspect is portability. Another is resilience (e.g. changing parts of the codebase should cause minimum breakage in other segments, and only in expected ways).

One simple example of glib improving portability is wrapped types. Some types are arguably a waste (gchar comes to mind), but others differ significantly from platform to platform. Glib allows me to maintain consistency. C doesn't specify what the size of int is...which can be good because you want to be able to leverage your particular hardware platform. But sometimes you really do want a 32-bit integer...not 64, exactly 32. Things like this are *natural* with glib, which means that programmers don't have to think about it. Even if they are cautious most people will naturally produce some un-portable code. Glib removes many, if not all, of these instances in standard programming. By way of example the Nautilus, Medusa and a large portion of the gnome-vfs codebase were ported to Solaris running on UltraSparc hardware by one little me in less than a week of off and on work. Getting them to compile without glib probably wouldn't have been too hard, but they probably would have contained many irritating, hard to track down runtime problems.

g_free, g_printf, etc. are not meaningless re-implementations. If you browse the glib documentation you'll find it explicitly states their differences from their libc equivalents. Some are more significant than others, but the general gist is that they improve resilience - for example, by preventing segfaults in certain conditions where their libc kindred would turn belly up.

I'd also like to throw in a kind word for assertions. g_assert makes it trivial to add...well...assertions, but without degrading the speed of your final distributed program or littering your users' screens with debug messages (and you can leave all the debug code in place). They are wonderful. Yes, I could implement them without glib, but here's the deal: I think you'll find a strong correlation between programs that use assertions and programs written with glib. People are just plain lazy, even programmers (particularly programmers). There's no point in re-inventing the wheel every day.
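
A tiny sketch of what that looks like in practice (G_DISABLE_CHECKS and G_DISABLE_ASSERT compile the checks away for release builds):

#include <string.h>
#include <glib.h>

void set_title(const char *title)
{
    /* Compiled out entirely when G_DISABLE_CHECKS is defined, so a
       shipping build pays nothing for the sanity check. */
    g_return_if_fail(title != NULL);

    /* Likewise for g_assert() when G_DISABLE_ASSERT is defined. */
    g_assert(strlen(title) < 256);

    /* ... actually set the title ... */
}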

g_malloc() is a good thing, posted 30 Nov 2000 at 17:29 UTC by jlbec » (Master)

aaronl claims that glib is a reinvention. Yes 'n no. But a lot of the g_?* stuff is actually a good thing.

How many programs have you seen where the configure script checks whether to use "u_int32", "uint32", or "u_int32_t" for a 32-bit type? I've seen a lot. Here, it is "guint32". Even on win32, it is "guint32". Many folks have argued that while "guint32" is useful, some of the other g* types are not (gchar, maybe). Not an invalid point. The current glib mentality is that since you have "gint32", why not "gint" or "gchar"? Consistency. The fact is that you can happily use "int" and "gint32" together to get what you want. Glib won't care. You still gain the benefit of "gint32" being universal with glib.

And how many programs have a my_malloc() that is:

/* the wrapper everyone writes: allocate or bail out */
void *my_malloc(size_t size)
{
    void *foo = malloc(size);

    if (foo == NULL)
        handle_breakage();
    return foo;
}
A lot, I'd bet. So g_malloc() is nice because it happily provides a handle_breakage() for you. And in glib 2.0, it even allows you to specify what to do upon handle_breakage().

And there is g_new(), which macros out the casting/counting you normally code. Which is cleaner to read?

ptr = g_new(gchar, 2048);
--
ptr = (char *)malloc(2048 * sizeof(char));
now, a C programmer understands both of them immediately. But one is easier to type, easier to read, and just plain less messy. The work still gets done.

But wait, there's more! How about this?

ptr = g_new0(gchar, 2048);
--
ptr = (char *)malloc(2048 * sizeof(char));
if (ptr == NULL)
    handle_breakage();
memset(ptr, 0, 2048);
Now it is a lot less typing. And we all know what both code snips mean.

These are not the only examples. Convenience and portability are good things, especially for low level tasks. How about the date routines? We all love ctime(), right? Expanding strings and arrays? File slurping? It's there. You've done it once, why write it again? And if you do have a better implementation, submit it. I guarantee they'll take it.
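
For instance, expanding strings (another sketch; GString is one of those convenience pieces that's already there):

#include <glib.h>

void build_greeting(const gchar *name)
{
    /* An expanding string: no manual realloc bookkeeping. */
    GString *s = g_string_new("Hello, ");

    g_string_append(s, name);
    g_string_append_c(s, '!');
    g_print("%s\n", s->str);

    g_string_free(s, TRUE);
}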

Some questions, a little more senseless rambling, problems with Bonobo (CORBA in general?)., posted 30 Nov 2000 at 17:30 UTC by egnor » (Journeyer)

1. Others have asked as well; what do you mean by "integration"? Most (GNU/)Linux distributions include some version of the GNOME libraries, and developers are free to use (or ignore) those libraries, just as they are free to use (or ignore) any other library that comes with the system. As far as I can tell, you're suggesting a concerted effort to convince all application authors to use the GNOME libraries. One exception may be something like GNOME-VFS, which I find actually harmful if it is not seamlessly ubiquitous (but I'm also scared of making it seamlessly ubiquitous in its current state).

2. I'm sure this is a FAQ somewhere, but why is an O-O framework built on the C language better than just using C++? Why is g_list better than STL's list<>? Shouldn't we just be encouraging people to make the leap to C++ (now that it's well-supported) rather than, um, reinventing the wheel and failing to share code? (Oh boy, I can see the flamethrowers igniting from here.)

3. Is there good documentation for GLib somewhere? The "reference manual" at developer.gnome.org doesn't count; I'm talking about something equivalent to the man pages which exist for standard C library functions.

...

I think part of my objections stem from the notion that, at some level, the OS ought to provide a minimal basis set which allows applications to coexist and interoperate while leaving as much implementation freedom as possible. For example, X lets me run GNOME applications, KDE applications, Motif applications and even SDL applications side-by-side. You might laugh at the use of "X" and "minimal" in the same paragraph, but compared to (e.g.) the Windows GDI, X really is an elegant, minimal protocol for accessing and sharing video display hardware.

GNOME, on the other hand, has a tendency to mix together the interfaces needed for interoperability with lots of implementation support. The mixing is often not optional; in many ways, if I want to write a GNOME application, I have to do things the GNOME way. Put another way, the GNOME libraries have a tendency to dictate policy (mostly in implicit ways, not explicit ways).

Miguel's paper claims this is a good thing, and I think it is in fact a fine thing for a desktop suite of applications. I don't think it's a good thing for an operating system.

...

Lots of people claim that Bonobo is "Unix for objects", which is a perfectly reasonable point of view (aaronl aside). However, I see two very important ways in which Bonobo (and CORBA in general) is inferior to the classic Unix composition model:

A. Bonobo components are not protected from each other and cannot be identified by OS-level tools. They frequently run in the same address space (necessary for performance?), have very "high-bandwidth" and complex interfaces that are hard to audit for robustness in the face of misbehavior, and generally tend to intertwine applications with each other.

Example: If I'm reading e-mail in 'mutt' and I pipe an attachment to some external command for processing, and that command crashes, the mailer will simply say "oh, looks like your external command terminated abnormally" and continue with life. If I'm reading e-mail in the GNOME mailer (what is the GNOME mailer, these days?) and the embedded component which is displaying my attachment crashes, the entire mail system (and possibly even the whole desktop!) is now dead or unstable.

(Crashing is not the only problem, though it's the most severe. Processes can misbehave in other ways; they can hang, they can use lots of CPU or memory, they can simply generate bogus output. These things can be diagnosed easily when they're standalone executables and Unix processes, but not when they're shared library "plugins".)

B. As a user, it's very hard to interact directly with a component to see how it works, figure out what it can do and generally "play with it". This is easy with a command-line program; I can run it with no arguments to see if it prints a usage message, I can try various combinations of parameters, I can type input on the keyboard, I can read the output (which is usually text) on the screen (using 'less' if there's lots of it).

I think people tend to underestimate just how important this is. This is what lets me feel "connected" to my system; I've run each of the little bits myself. I have a feel for whether they're fast or slow, how reliable they are, whether the interface is well-designed, what the input and output formats are, and so forth. Without sitting down and writing code (and the basic code needed to even get started in a GNOME/CORBA/Bonobo world is pretty nontrivial), I can't get anything like that sense from a component. Even after I've written code, I still feel like I'm "one level removed" from the software I'm "using". Even if I do write code, I have to pull up some kind of documentation somewhere; I can't just say "hmm, I wonder what this is" and run it, the way I can some neat-looking utility.

For novice users, not having to interact directly with the low-level components that make up their system is good. For experienced users, it's death. Ryan Muldoon talks about the ability to create one-liners that access Web pages; in a pure Bonobo system, there are no one-liners at all!

If both of these problems can be solved in a component system that's minimal and sensible (Do we really need objects, or just closures?), then I can embrace it wholeheartedly as the Successor of '|'. Until then, I'll have to watch in fear.

OO in C, posted 30 Nov 2000 at 18:12 UTC by jlbec » (Master)

egnor:

Some well thought comments about CORBA/Bonobo issues. I'll skip those.

OO in C is no different than OO in C++ or Java or ObjC or Perl or anything. It's just more verbose, and you have to handle more low level stuff. It is still beneficial where OO makes sense, and dumb where it doesn't.

I don't want to start language wars. I don't think C++ is inherently evil. But it is not "well supported." If you mean "well supported on Linux," Linux is not the entirety of my world, nor is it for many others. There is a (very) minimal subset of C++ that you can use in an almost portable way. C is widely understood as far as portability goes (not that it is easy).

Also, a lot of folks (myself among them) find that the inheritance stuff in Glib/GTK+ OO is much easier to understand/hunt through than in C++. This may just be a familiarity issue, though the guy behind me has spent years with C++ and still feels that way.

Then there is the mangling issue. Often, minor revisions of the same compiler on the same platform cannot link objects together. Complete recompile time. This is to say nothing of the 3 or 4 C++ mangling strategies in widespread use across different platforms. A C library is the easiest to wrap for use by multiple bindings. Witness the GTK+ bindings for Perl/Python/C++/guile/you name it.

The upshot is that the initial developers felt the advantages of C were good for them. They also felt that OO design worked well for GUI work. And here we are. This isn't to say that C++ is bad. Heck, they could have done it in Smalltalk. The LISPers would happily tell you the advantages of the functional world. But it was done, and done well, in C.

Side note, posted 30 Nov 2000 at 19:45 UTC by lilo » (Master)

This article and the replies illustrate a nice aspect of the Advogato community. I've seen this before. "Apprentice" posts article, gets very thoughtful replies. It seems as if certain folks go out of their way to glean what's useful and productive from an article and make useful comments. When the author is a new contributor, this is a very good thing.

We have our usual percentage of flaming but, on balance, response to these articles says something very positive about Advogato. Good show.

Interesting discussion...., posted 30 Nov 2000 at 19:56 UTC by RyanMuldoon » (Journeyer)

I'd really like to thank everyone for their comments on the article. I have been thinking about this idea for a while now, and it is great to get feedback.

To respond to some of the comments, I'll do a point by point:

"Integration": What I meant by integration was by no means inserting things into any kernel, or even libc. Those are pretty basic, and are probably best being left so. It is arguable where something like gnome-vfs should go to see most benefits, but I would definitely like more programs to be able to use it (or something like it) for file handling. Why? Because when I actually deal with files, more than half the time they aren't normal files sitting on my hard drive. It is kind of dumb that I have to think about all the different programs I need to use to get the functionality I want. I just want the functionality.

bonobo vs. pipes: I completely agree with what Iain said. Bonobo is really just a more modern implementation of pipes. Text streams can't really represent a chunk of a spreadsheet, or an image, or a movie that nicely. You have to code more functionality into all of your programs to be able to handle more data types, and whatever else. Components, I think, allow us to return to the real UNIX mindset of "one app, one function." It just replaces "app" with "component." Components have the added benefit of a clear and well-defined interface (via IDL), so you know what they can do. That way, I can concentrate on functionality, not on which app I need to give me that. There was a comment earlier saying that it is nice to know what everything does on your computer: I agree. It is nice. But I also don't want to have to hunt around to find out which program can let me count words. For the benefit of aaronl, I'll use the example of Napster: when I want to find a song, this is how I should be able to use my computer - use a general "search" facility, tell it I am looking for a song, and give it a name. I shouldn't have to know whether or not it is on my computer, and if it isn't, use something like Napster. Why should I have to teach myself to do things the computer's way? I should make my computer work my way. So, I say implement Napster as a component, so I can use its functionality in whatever way I see fit.

GConf: It seems to me that most people agree that UNIX handles config files really grossly. They are everywhere, and each one does it differently. That is just annoying. To me, GConf solves this problem nicely. It has a simple API, and allows for multiple storage backends. It can also notify applications of configuration changes. This seems like a great thing to use generally.

The UNIX Way: I'll say it again. I really like UNIX. It makes sense to me. It gives a nice simple model to work with. However, I think that this simple model needs to be extended in a clean, simple way in order for it to continue to succeed. We deal with more than just text these days. We also have a bunch of transfer protocols that shouldn't just be used by one application. It seems like we should make these generally available. It seems to me that utilities like "wc" and "grep" are right in line with components. They just do things with only text. Components can handle more in a similar fashion.

more on bonobo vs. pipes: I think that pipes are great. They have a ton of power. But, as Iain said, pipes can't handle images, movies, etc. Similarly, I can't just have one random component interact with another random component, because they may not make sense together. That is why you have well-defined interfaces for the components. Also, as for the "there are no one-liners with bonobo" comment - that's true. Writing an actual bonobo component will be more than one line. But I was talking about scripting, and using existing components. That would give us some nice one-liners that can use a ton of components at a time. This to me is smart. I'm a pretty lazy guy. I want to be able to get a lot done with minimal effort. Really, I am just being selfish here - I want this component architecture, and other nice technologies, to be used across the board so I can do my work more efficiently. I want to be able to leverage other people's work to my advantage when I program. It makes my life easier. It seems to me like this is a good goal to advance towards.

Re: Components, Interfaces, Pipes, posted 30 Nov 2000 at 21:24 UTC by nymia » (Master)

It is true that components and interfaces work well - pretty well, in fact - as they provide a lot of Good Things (tm) to the designer and the programmers as well. However, I still have some doubts about whether they will survive in the coming years. One thing I don't like about them is the presence of the interface itself, as it somewhat sets a standard protocol on how two components communicate. Plus, it forces the object to "remember" what method to call, resulting in confusion on the part of the designers and implementers. That is probably the reason why UML and CRC stuff are widely used for controlling and managing complexity. Overall, interfaces are good provided N is small. But when N gets big it becomes unmanageable and grossly huge in terms of Lines of Code (LOC).

From what I see, the model that will come next is about interface-less objects. Each object will have its own handler and will receive only one message object in which all information is stored. In this model, any of the N components can send messages to any of the others. To give an example, here at work we have just designed a framework where every object handles its own messages regardless of where they came from. It has its own handler for figuring out what to do. Its only job is to process the message; it doesn't care where it came from. Period. In that sense, it behaves like a Unix pipe.

In summary, I think interfaces are good, but in my book they are only a "stop-gap", a quick-and-dirty solution for applications where the number of objects N is small.

Question for those who know Bonobo, posted 30 Nov 2000 at 23:19 UTC by sab39 » (Master)

How hard would it be to create a (suite of?) shell command(s) that provided easy interaction with CORBA objects and bonobo components?

I imagine being able to write a shell script that looks like:

#!/bin/bash

PID=`bonobo_new Gnumeric::Spreadsheet`
bonobo_invoke $PID set_cell_value 1 1 13
bonobo_invoke $PID save_as thirteen.gnumeric
bonobo_delete $PID

Or how about a bonobo_ps command that would list existing objects along with their "pid" (really "component ids" but "cid" sounds wrong... bid perhaps?) so that then you could invoke methods on them with bonobo_invoke?

Or, how about bonobo_apropos Gnumeric::Spreadsheet which would list all available methods, ideally along with a textual description if the IDL files contain such a thing.

I guess one important question here is whether there even exists a human-readable name for components (such as my example of Gnumeric::Spreadsheet), and whether there does exist a unique identifier for a component instance that can be rendered in text (and accessed from separate processes). If so, this shouldn't be hard to do.

If done, this would perhaps alleviate some of the concerns of people regarding the "immediacy" of bonobo - it's exactly as immediate as other shell commands, once you know how to use bonobo_new and bonobo_invoke.

There seem to be misconceptions about components... (and some other random replies that didn't fit under that title), posted 30 Nov 2000 at 23:55 UTC by Iain » (Master)

The first shall be last and the last first :)

sab39: That is planned. Or at least a program (say one called bonobo-client) is planned that basically does the same.

nymia:"One thing I don't like with it is the presence of the interface itself, as it somewhat sets a standard protocol on how two components communicate. Plus, it forces the object to "remember" what method to call, resulting to confusion on the part of the designers and implementers.".
And this is different from shared library interfaces/C++ class interfaces...

lilo: Yes, indeed.

egnor: "Bonobo components are not protected from each other and cannot be identified by OS-level tools.".
What does that mean? I can use ps and discover what components are running (if they're running as out-of-process components). I'd imagine there's some way to find out what shared libraries a program has dlopened, if the component is in-process.

" Example: If I'm reading e-mail in 'mutt' and I pipe an attachment to some external command for processing, and that command crashes, the mailer will simply say "oh, looks like your external command terminated abnormally" and continue with life. If I'm reading e-mail in the GNOME mailer (what is the GNOME mailer, these days?) and the embedded component which is displaying my attachment crashes, the entire mail system (and possibly even the whole desktop!) is now dead or unstable."
Now, I guess you're talking about Evolution here, because I don't know of any other gnome mailer that uses Bonobo. But currently, if a component crashes, it doesn't really matter too much; only the bits of evolution that depend on that component stop working (obviously). My entire mail system is not unusable if the addressbook crashes - I just can't add an address to an outgoing email, but I can still read my mail. The rest of the program is fine, and the rest of the desktop is not touched.

" As a user, it's very hard to interact directly with a component to see how it works, figure out what it can do and generally "play with it"."
Now this seems to be your major misconception. A component is not a full program. It is not intended to be run standalone; it is intended to be used by a larger program somehow. If, as a programmer, you want to know how it works, you read the IDLs of the interfaces the component supports. If, as a user, you want to find out what it does, then you run a program that uses it, up pops the GUI (if it is a GUI component) and you get to push all the pretty buttons and see what they do. If it is not a GUI component, then you can treat it like a more powerful shared library, and you don't really need to worry about it at all.

dan: An eog component showing foo.jpg would be graphical, so it would be "piped" into a graphical program. Your simple and succinct manner in that case would be "Click insert object, select EOG image viewer component, select image you want to view". As you brought up python, I think you might mean code-wise, so here's the C code to embed an EOG image component.

/* Ask Bonobo/OAF to activate the EOG image viewer control
   and wrap it up as an ordinary GtkWidget. */
GtkWidget *image = bonobo_widget_new_control
        ("OAFIID:eog_image_viewer:a30dc90b-a68f-4ef8-a257-d2f8ab7e6c9f",
         CORBA_OBJECT_NIL);
gtk_container_add (GTK_CONTAINER (mycontainerwidget), image);
gtk_widget_show_all (mycontainerwidget);

I hope you find that succinct enough. Getting an image into that component would be about 5 lines more. No 200 lines of cut-and-paste code.

aaronl: And then we're back to more drivel from mr l. Hoohum.

clean up gnome, then we'll talk! :), posted 1 Dec 2000 at 00:22 UTC by xtifr » (Journeyer)

Well, mostly I think I agree with egnor (although, unlike him, I do understand and appreciate why C++ is not The Answer -- since I rarely use C *or* C++ directly any more, I'm very much aware of the fact that C++ does not have a public object model, by design).

I also don't use Gnome (although unlike aaronl, I neither hate it nor refuse to install any of its components). It seems to me that it has a long way to go before it matches the cleanliness, elegance, simplicity, and most of all, the modularity of Unix.

I think there's too much all-integrated-into-one mentality among the core gnome folks. Rather than "integrate gnome into gnu", I would rather see "gnome refactored, and useful core functionality moved to a lower level." Then I and egnor (and even aaronl) could benefit from new functionality without having to install useless junk like X. :-)

Gnome is, IMO, long overdue for refactoring. For example, there seem to be nice widgets which could easily be used with an otherwise pure gtk (not gnome) app, but which are (according to a post I saw from havoc pennington) not separated out in the hopes of forcing people to use gnome. This is, IMO, appalling and offensive -- so much so that if I wanted to use an "integrated desktop environment", I'd be seriously tempted to use KDE instead.

I also like being able to set up fairly minimal systems, esp. for firewalls and DMZ machines. The more "integration" of all this fancy stuff, the less secure and straightforward my systems become, and I don't like that. I don't mind adding layers to fancy servers or workstations, but I still want my minimal systems, dammit!

Some specific points: "The integration of GConf at the system level would allow for a much cleaner configuration system for the whole GNU operating environment." Ugh! I find a small collection of easy-to-edit text files to be the easiest and simplest way to configure things. It's true that there are monsters like sendmail.cf, but I solve that issue by using a decent MTA, rather than the catastrophe that is sendmail.

"A bonobo version of Apache could load and unload components to handle PHP, perl, or anything else as needed, without having to recompile." -- wow, you mean it could do what it already does, except with a lot more overhead? What a, er, stunning notion! :-)

"Upgrading software could, in some instances, be reduced to updating some of the components of the application rather than the whole thing." -- well, yes, and we don't need bonobo or anything like that. Just make main() call a library function to do all the work, then you can just upgrade the library, not the tiny wrapper. :-) We already have shared libraries, and I doubt if using other technologies in place of shared libraries will provide any serious benefits here (except in certain cases, in which case, fine, go for it). The problem with this sort of thing (whether using shared libraries or not) is version mismatch.

"The power of these new technologies is enormous." Well, perhaps, but they're new and experimental still, and we haven't necessarily shaken out all the bugs or other issues. (Egon's example of a component crashing and taking down a whole set of apps is quite pertinent here.)

I think the real problem I have with both Gnome and KDE is that they're rushing forward to try to offer all this new junk right away, and they lack a lot of the modularity that has made Unix so successful over the years. A modular design allows for more evolutionary progress, rather than planned-by-committee, this-is-the-one-true-way style of artificial progress. This takes longer, but I'd rather have a great system than one that's designed to impress the clueless.

I guess basically what it comes down to is that I would be more sanguine about this proposal if it were a series of proposals for specific enhancements not tied to any specific project (like gnome), rather than an all-encompassing "everyone start doing stuff *my* way, dammit" proposal from folks whose designs haven't particularly impressed me so far.

(As for glib, I've been writing portable software for twenty years and haven't found it to be a burden, I know of dozens of libraries that provide similar functionality, and I don't see any reason to use one tied to gnome unless I'm creating gnome software -- and don't hold your breath on that.:-)

erm, posted 1 Dec 2000 at 00:42 UTC by Iain » (Master)

I think there's too much all-integrated-into-one mentality among the core gnome folks. Rather than "integrate gnome into gnu", I would rather see "gnome refactored, and useful core functionality moved to a lower level." Then I and egnor (and even aaronl) could benefit from new functionality without having to install useless junk like X. :-)

Wasn't that what was being suggested? A sort of "What is the core functionality" discussion.

wisdom i impart unto thee, posted 1 Dec 2000 at 01:00 UTC by apgarcia » (Journeyer)

is there such a thing as 'the standard gnu system'? is there such a thing as 'the standard emacs environment'? ok, so you mean the lowest common denominator. even so, the spirit of the gnu system is such that nothing is sacred, miguel's opinions on policies notwithstanding. change whatever you want to change. remove whatever you want to remove. add whatever you want to add. such is the freedom, nay, the right, to write software. which leads to my other point: the best way to find out is to try it.

Shared objects, Interfaces, Etc, posted 1 Dec 2000 at 02:59 UTC by nymia » (Master)

My reply is purely for discussion purposes only. I don't intend to start anything that will lead to nothing.

About Iain's comment, I agree that shared objects (DLLs) have their place and can definitely be part of a solution. DLLs are good and useful, but they have limitations too.

What I'm describing is not how objects are activated, whether in-process or out-of-process, but rather how objects send, receive and handle messages. In our design, we made all object methods private. Only one method is public, and that's where all messages come in. These messages are then validated by a handler that is known only to the object. Also, messages are packaged in an object, ensuring all messages depart and arrive in a uniform way. However, we relaxed some of the rules and made a few methods available to other objects.

In this model, what we were trying to establish is a system where objects communicate using messages. An object is not aware of what other objects can do; it just sends a message telling the object to do something. If an object receives a message it cannot handle, it just returns without doing anything. On the other hand, if it recognizes a message, it tries to do what it was asked to do and then returns something to the sender. In short, this model behaves much like a Unix pipe: it accepts data in a uniform way and sends its output in a uniform way too.
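To give a rough idea of the shape, here is a toy C sketch (for illustration only - it is not our actual framework, and every name is made up):

#include <string.h>

/* All information travels inside one uniform message object. */
typedef struct {
        const char *name;     /* what the sender wants done */
        void       *payload;  /* whatever data goes with it */
} Message;

typedef struct Object Object;
struct Object {
        /* The single public method: hand the object a message. */
        void (*receive) (Object *self, const Message *msg);
};

/* One concrete object; its handling logic stays private to it. */
typedef struct {
        Object base;
        int    words_counted;
} Counter;

static void
counter_receive (Object *self, const Message *msg)
{
        Counter *c = (Counter *) self;

        if (strcmp (msg->name, "count-words") == 0)
                c->words_counted++;          /* pretend to do the work */
        /* any message it doesn't recognize is silently ignored */
}

static void
counter_init (Counter *c)
{
        c->base.receive = counter_receive;
        c->words_counted = 0;
}

/* A sender only ever does this; it knows nothing about the receiver. */
static void
object_send (Object *obj, const char *name, void *payload)
{
        Message m = { name, payload };
        obj->receive (obj, &m);
}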

The model is very simple and untested. We do not know what will come of it. All we know is that the previous model our client was using burned a lot of money, wasted a lot of resources and did not scale well. (Which only means the model is perfect for making money.) With the new model, we are expecting our applications to be modular, scalable and cost-effective.

In summary again, interfaces are good only when the number of objects N is small. But when N gets big, that is the time to switch to another model capable of handling a large N. Our experience with the interface-based model only made applications complex and unmanageable, requiring processes and tools that cost lots of money. I'm not sure if integrating gnome into GNU is a Good Thing(tm), but the experience we just went through should give an idea of where Gnome will be heading.

Impressive feedback, posted 1 Dec 2000 at 03:33 UTC by tja » (Journeyer)

Impressive amount of discussion. Not quite as much as Jonathan Swift got from his similarly-titled article, but impressive nonetheless ;-).

Another round of comments, posted 1 Dec 2000 at 05:26 UTC by RyanMuldoon » (Journeyer)

Wow. I really didn't expect this many comments. And for the most part, they have been great to read and think about. So, here goes my next round of comments....

xtifr: I am far from being a "core" gnome hacker. I'm just an aspiring gnome hacker, really. So what I wrote in the article is only what I have been thinking about while trying to learn the GNOME 1.4/2.0 development platform for a program I am working on. I just found so much of it so well thought-out that I thought a lot of it would be useful elsewhere. Also, as has been pointed out, things like glib predate GNOME by a good bit. And none of the libraries I mention require GNOME or X - they are just things that GNOME uses.

I really don't agree with you on the "gnome isn't modular" point - GNOME is really very modular. Everything is separated out into different libraries/programs as is logical. And they want to make it even more so by using bonobo components, to isolate small reusable chunks of functionality. That *is* the UNIX way, to me at least. But I think that we agree - this should be (and was meant to be) a discussion on what technologies should be considered "core." I made some suggestions, and tried to give some examples of why they would be useful outside of just GNOME.

I also really don't buy into the whole "overhead" argument that people bring up. People are fine with not writing in assembly, even though it has less overhead than a higher-level language. Why? Convenience. People will even use GTK+ rather than rolling their own toolkit. But why does the step to using some generally-useful GNOME stuff become bloat? Do you really want to write your own imaging model? Your own printing system? Your own plugin system? Your own configuration system? Why not use a nice library that does all of that for you? It saves you time, and reduces bug potential in your code. And as improvements are made to GNOME, your app gets them for free. That seems like a win-win situation to me. The real overhead that I see is all of the duplication of functionality in the UNIX world. If someone has done it before, why do it again? It just wastes everyone's time, and uses more computer resources.

apgarcia: I think that there is a reasonable working idea of what the GNU development platform is. You're basically right - it is what is common to all "GNU" systems. My point is that if we want to make a platform that is nice to program with, we need some consistency. And users like consistency within their computing environment. I certainly am not trying to force anything down people's throats (unlike the claims from many a poster on linuxtoday.com...) - I wrote the article because it seemed like a worthwhile issue to discuss. If you look at a project like Inti, which Havoc Pennington is working on - a C++ framework that provides a sane, consistent environment for someone wanting to write a Linux application - you have to be drawn to it. Why? Because it is easy to learn, and you can get stuff done quickly. That is why I (and many others) really like Java. Not only is it nice and clean, but it gives me a consistent set of APIs to use for everything I need to write a powerful application. Most of the time I can guess what the API is going to be, since it is so consistent. If we can do something like this with the GNU system, you get all of these advantages, but you can still write in whatever language you want. That is what I have really been wanting to see happen. So I wrote the article to see what others thought.

nymia: I wish I had something good to say to your comments. What you propose sounds pretty interesting, but I don't really understand how it works. How does an object know what kind of data it can handle? How do the objects know which other objects they can interact with? With only one public access point, it seems like it would either be trivial to program for, or really hard (depending on the answers to my questions above). I think that my math/cs/philosophy concentration has me focused on the fundamentals of logic and language semantics. That is probably why component models appeal to me: IDL defines exactly what I can do with each component. It is also why I like the general UNIX model - it flows nicely, like a language. If you could describe a little more about your object model, I'd be interested in reading about it.

GLib and gnome-vfs, posted 1 Dec 2000 at 13:16 UTC by jmason » (Master)

GLib is fantastic -- a great general-purpose library. I agree with hp that almost all newly-written C code should use GLib or an equivalent. It's about time there was a well-defined, well-supported free library on UNIX that provides all the things we usually have to reinvent ourselves.
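For example, here's the kind of thing we all keep rewriting that GLib already provides (a rough sketch, plain GLib calls only, nothing GNOME- or X-specific):

#include <glib.h>

/* A string-keyed hash table, a growable list and printf-style
 * string allocation: the things everyone reimplements badly. */
int
main (void)
{
        GHashTable *users = g_hash_table_new (g_str_hash, g_str_equal);
        GList      *log   = NULL;
        gchar      *line;

        g_hash_table_insert (users, "ryan", "RyanMuldoon");

        line = g_strdup_printf ("%s logged in at %d",
                                (gchar *) g_hash_table_lookup (users, "ryan"),
                                42);
        log = g_list_append (log, line);

        g_print ("%s\n", (gchar *) log->data);

        g_free (line);
        g_list_free (log);
        g_hash_table_destroy (users);
        return 0;
}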

gnome-vfs however is just the wrong way to do it IMHO. As egnor pointed out, if 50% of your apps can open http:// URLs, and the other 50% cannot, we're in Windows-land -- and this is a very bad thing.

I would love to see gnome-vfs' functionality rolled into a filesystem, so all apps can access URIs as files... it's been done before to varying degrees, with WWFS and others.

I've actually been considering hacking AVFS, specifically avfscoda, to use gnome-vfs or kio, or some cadaver code (for DAV), to access URIs -- it's still a concept though, I haven't even looked at the code. ;)

The only problems I can foresee are some tricky code to handle the long waits that an internet filesystem will impose, which are an order of magnitude longer than yer average local filesystem, and the possibility of causing security problems. The latter will take some thinking about to make sure it's not an issue.

glib select, posted 1 Dec 2000 at 13:22 UTC by lkcl » (Master)

glbec,

it's not that select() needs to be ripped out of glib, it's that i needed to read data and stop very quickly if no data was available from a tcp socket: i added the glib code as soon as i found the g_net stuff, and removed it as soon as i found that there was no non-blocking read possible with the TCP socket class.

[takes a look at glib.h, finding some GIOChannel functions and stuff, hmm... interesting. think that was what i was missing.] rethink time! thx!

p.s. i love advogato :)
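for the record, the GIOChannel route looks roughly like this (just a sketch: i'm assuming g_io_add_watch plus reading straight from the unix fd, and exact details may differ between glib versions):

#include <glib.h>
#include <unistd.h>

/* Called by the main loop whenever the socket becomes readable,
 * so we never block waiting for data that isn't there. */
static gboolean
socket_readable (GIOChannel *channel, GIOCondition cond, gpointer data)
{
        char buf[1024];
        int  fd = GPOINTER_TO_INT (data);
        int  n  = read (fd, buf, sizeof (buf));

        if (n <= 0)
                return FALSE;   /* error or EOF: remove the watch */

        /* ... handle the n bytes just read ... */
        return TRUE;            /* keep watching */
}

/* somewhere after connecting the socket: */
void
watch_socket (int fd)
{
        GIOChannel *channel = g_io_channel_unix_new (fd);

        g_io_add_watch (channel, G_IO_IN | G_IO_HUP,
                        socket_readable, GINT_TO_POINTER (fd));
}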

Glib and portability, posted 1 Dec 2000 at 14:17 UTC by mlsm » (Journeyer)

From what I've seen of it (I haven't really used it for anything significant) glib is a really nice, well thought out library. It gives you, the programmer, a nice, platform-independent interface to a lot of common functionality that standard C doesn't.

So, why don't I use glib? It seems like it should solve many real problems, but the reality is that, right now, it doesn't - at least not for me.

There are two main reasons I don't use glib. The first is size. Glib isn't huge, but it's certainly not small (and I suspect it'll be much bigger with version 2). The one significant bit of free software I've written ends up with a binary size of somewhere around 250-300kB (2/3 of which is a bunch of big static tables; moving these to a more compact external representation would save most of that).

Adding glib to that increases the size by 50% - and the reality is that you do have to include it with current systems (apart from on linux and some other unixes - and even there, it's an extra dependency). Adding that much extra just doesn't seem worth it when I'd maybe use 5-10% of what the library provides. I'm not sure how to solve this problem - the only way would be to split glib into several separate parts. That's not a good solution, because most people will want many or all of those parts.

The second problem is bigger: portability. Glib is nice, but the reality is that right now, it doesn't exist for all that many platforms. Porting to a new platform is possible - but because glib implements so much more than what I need, it's actually easier to implement my own portability layer. My program currently runs on linux, solaris, freebsd, win32, macosX, and OS/2 (it probably works on other unixes too, but I haven't used it on them or heard from people who have - probably even DOS, with a few trivial additions). When I last looked, glib was doing pretty well here - but I'm not sure about OS/2 or macosX. Sure, not a lot of users there, but why ignore them?

What could we do about this? A simpler portability layer would be a good start. Things like platform-independent types (gint, etc.) are easy. Threading, process creation, dynamic loading - these are things that actually require significant effort to port. Perhaps this stuff should be separated out, so that a minimal and highly portable glib could be used?

Suggestions on what solutions we could use here are most welcome. I'd love to use glib, but right now, it doesn't seem too practical for the sort of thing that really needs portability. It's great for things that merely need portability across unixes.

Workarounds for unix, posted 2 Dec 2000 at 14:02 UTC by listen » (Journeyer)

Wow, great discussion!

Anyway... it seems to me that a lot of what is done by various bits of gnome are "workarounds" for limitations in unix/linux, that should eventually be fixed. The two most obvious are gnome-vfs & gconf.

gnome-vfs should really be doable with userspace filesystems. Currently it is all a bit mashed up, and people do funny things like mounting userspace nfs servers or coda servers, etc.; another one is podfuk. This is done reasonably well in Plan 9 & Hurd. As linux is moving towards per-process namespaces (which will almost certainly end up as per-user namespaces due to the login program), it will become much easier for users to mount arbitrary stuff. Still, there is all the automounting to worry about, and whether to use uris or allow something like /http/org/advogato/www/ as well. This has got to violate the spirit of either unix or the w3c in some way! The security issues are pretty tough. But effectively, even now, any call to fopen with a filename from an untrusted source can get any information you care to name, due to the dual unix evils of the global filesystem & anyone being able to use ports over 1024. Fixing this kind of borkedness is going to take a long time...

gconf is a more interesting one. The limitations in the file system it attempts to solve seem to be: notification, very small files, and documentation. (Users having multiple backend sources is solved by union mount and userspace filesystems).

Reiserfs attempts to solve the small-files thing. Also, I think long term they want a new api more convenient than open/read/write/close for very small files (think single bytes or strings). Fam/Imon is a sicko solution for notification, though I think that a nice one is possible. E.g. the event interface recently discussed on linux-kernel would solve this nicely. But that's not going to be mainstream for another two years (linux 2.6/3.0).

Documentation, I dunno. This seems like it should be a convention rather than a hard and fast rule... and conventions are easy enough to come by. Making them stick is the hard thing.

The real problem would be getting all this stuff into every platform that gnome wants to run on. It's really not going to happen. So we need the fully userspace / not-in-libc solution anyway, and can just use these apis as a pass-through when the real functionality is there. I.e. sorta like GLib wrapping the ANSI C stuff.

Bonobo I think is great, and I think the GObject idl compiler (mango) will make it even more compelling. (No CORBA_Objects). One thing I think that really should happen in gnome is that more apis should be presented solely as corba interfaces, rather than corba interfaces wrapped up in a C library. This would allow much easier bindings to any random language. In fact, it would allow most things to be used with no extra bindings whatsoever. This is one of the best things about the Microsoft COM way of doing things. I think the main reason that this doesn't happen is the pain of using CORBA in C, hopefully this will be eased by mango.

An immodest proposal..., posted 3 Dec 2000 at 03:20 UTC by adubey » (Journeyer)

Hi,

I've been tempted to write a post, but until now I've held out. But now it seems like every other user is adding to the discussion, so I might do so as well ;)

There is one implicit assumption going around here... let me illustrate with a story that might belong on soc.history.what-if...

In the late 60's, the US government thought giving IBM competition would be _good_ and didn't decree that AT&T couldn't make money in computing. Multics didn't ship as late, and Ken Thompson didn't make his own OS (and even if he did, he couldn't release the source...). Dennis Ritchie made a successor to BCPL, but since it was not the "systems" language of a popular operating system, it ended up only being used in some embedded real-time systems (such as the telephone switches of his employer). A grad student at Berkeley made a super-efficient Algol-68 compiler for the PDP-11. Because of this, Algol-68 won out over PL/1 in the so-called "language wars"... then in the early 80's, the US DoD made "Algol-80" the standard defence contracting language once it "borrowed" some features from its chief competitor in the bidding process - Ada.

Then, in a temporal anomaly in 2000, a book called "The C Programming Language" appeared on every programmer's desk. Programmers were aghast. There were some nice things about "C" - it used "{" and "}" as shortcuts for "begin" and "end" - a possible timesaver - but typing "begin" and "end" bothered so many people that the designers of Algol-80 had made their compiler check for whitespace, so those wordy control structures were only optional. This language "C" seemed like a major step backwards. It had:

  • Word-sized ints: surely the designers knew this would lead to portability problems! Why should the language change from machine to machine?
  • Weak typing and lack of parameterized types: It was long since known there were three ways to make a "new" operator typesafe - parameterized types, overloading, or throwing typing information away. This "C" language used the last (the least favoured one).
  • Weak support for arrays and strings: Arrays did exist, but couldn't be resized like in Algol. Some people felt this might lead people to pick a big array size and assume it would be enough to store any data that was fed to it -- and this might have led to many security exploits if the "C" language were ever used to, say, make an operating system - but no one could be sure...
  • No ADT support: you could do it, but the compiler didn't enforce it. Not safe, not safe
  • No garbage collection: when microcomputers were starting to get popular, surely they would have been too slow for the GC technology available then, and a language without GC may have been useful in that case, but thankfully MULTICS terminals won out over microcomputers (MULTICSnet is simply too useful ;)
  • Lack of first-class functions: glib did support "callbacks" in numerous places, but they just don't have the elegance of true first-class functions
  • No datastructures library: One of the nicest things about Algol-80 was the inclusion of a standard set of ADTs. This reduced the amount of re-work each programmer had to do, but this "C" language seemed to be missing such a beast.

While computer scientists tried to make sense of why anyone would use this "C" language, physicists were desperately trying to find out what caused the temporal anomaly.

Within a few years, a new field had been born: "temporal anthropology". It surfaced that eventually a group of people called "GNOME hackers" (whatever that was) also saw the problems with C. But rather than try to fix the language, they made a library called "glib" which hacked over the problems.

Now, some of the problems - like the lack of a datastructure library - were adequately solved with this "glib" library (even if it was awkward without polymorphic types). But what of the rest of it (i.e. weak typing, etc.)? Why not just make a better C? Temporal anthropologists also learned that, in the year 2000, there was a movement to include "glib" as part of an operating system.

How foolish! While it would be wise for "C" programmers to use these utilities, why would the users of other languages be punished? While the library may exist on every machine (perhaps a good idea considering how cheap RAM and HD space is, even in their timeline), if it became part of the "OS", that would mean that low-level OS functions could and would use these libraries.

That means that higher-level languages would be forced into being compatible with a set of crude routines in a backwards language rather than using more elegant solutions.

To their relief, they found that while glib did become quite popular with C programmers, inertia kept glib out of the OS interfaces. Whew! Even better, in the year **** a language called ***** started becoming quite popular - not only did it improve on many of C's failings, but it was (as expected) faster (this is important to note, because, you see, while _we_ know that high-level languages are faster because the compiler can make more optimizations, these people put _so_much_effort_ into fine-tuning their 'C' compilers that high-level languages actually *were* slower in their timeline due to lack of manpower... and so they foolishly thought that 'C' would always be faster than higher-level languages... sheesh!).

In the end, once it became clear that **** would become a standard language, operating systems started using **** as their "systems language" rather than 'C'. Humanity (in all its timelines) still has hope!

(My apologies for making this into an anti-C rant... I really didn't mean to do that - it just kind of came out that way! Sorry! Also, sorry for the narrator-shift... it took too long to write anyway, so I don't feel like fixing it - thanks for the interesting discussion so far ;)

Replies, posted 3 Dec 2000 at 05:06 UTC by hp » (Master)

xtifr: I think the comments about refactoring don't really apply to GLib and GConf, both of which have quite sensible dependencies. Pango is also nicely separated from GTK+ and reusable in any app that needs to do i18n.

mlsm: For size, the solution is to statically link, if you're shipping with GLib anyway. Then you get exactly the code size you're actually using. Granted it doesn't work on all platforms. Also I would point out that these size savings for your program in the short term come at the expense of size savings in general in the long term, because apps share less stuff. And in general I would put developer time ahead of on-disk code size (in-memory size is a different matter).

For portability, GLib 1.2 should work on Windows and Unices. If you're worried about Mac or BeOS and so on, no it probably doesn't work. 1.2 does provide a portable dynamic loader interface, something you mentioned. 2.0 provides a portable threads layer among other things (in general 2.0 is much more compellingly useful, especially if you use the Unicode stuff). But it does still lack some useful features it could have.

An often-overlooked benefit of GLib is the main loop, which is very difficult code to duplicate well and useful in any event-driven type of program, such as daemons. The GConf daemon uses it to nice effect.
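To illustrate the main-loop point, a minimal sketch (these are the GLib 2.x names; 1.2 spells them g_main_new and g_main_run):

#include <glib.h>

/* A periodic task in a daemon: runs every 5 seconds for as long
 * as it returns TRUE, all driven by the shared main loop. */
static gboolean
flush_cache (gpointer data)
{
        g_print ("flushing...\n");
        return TRUE;
}

int
main (void)
{
        GMainLoop *loop = g_main_loop_new (NULL, FALSE);

        g_timeout_add (5000, flush_cache, NULL);
        /* File descriptor watches, idle handlers and so on can all
         * be added to the same loop; no hand-rolled select() loop. */
        g_main_loop_run (loop);
        return 0;
}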

aaronl - creating a new slashdot? :-), posted 4 Dec 2000 at 14:44 UTC by gbowland » (Journeyer)

aaronl seems to be a particularly bad example of the slashdot mentality. Enamoured of command-line interfaces (not that there is anything wrong with them) for no real reason, he tries to cling to a 'leet' group of Unix hackers. Or something. He is adept at criticising projects with little knowledge of their design, goals or implementation.

The truth is - glib is one of the most useful libraries I've seen. Who wants to write their own trivial string handling routines? Who wants to implement g_strdup_printf and other utility functions when they have been done for you, and work? If everyone writes their own tiny little implementations of everything, you get errors, bloated binaries and wasted time.

Adding gnome-vfs support to selected programs is a great idea. You could then stream MP3s over FTP, NFS, HTTP, from a tar file, whatever. And if something is missing, it's not hard to write your own method.

Ever noticed how many programs contain their own HTTP handling code? xmms, etc do. Libraries such as gnome-vfs avoid this and are therefore a Good thing.
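For what it's worth, all that per-application HTTP code then collapses to something like this (a sketch, assuming the gnome-vfs handle API - gnome_vfs_open/read/close - with error handling mostly left out; exact names may vary between versions):

#include <stdio.h>
#include <libgnomevfs/gnome-vfs.h>

/* Read any URI gnome-vfs has a method for (http, ftp, file, tar, ...)
 * through one code path, instead of hand-rolled HTTP in every app. */
static void
dump_uri (const char *uri)
{
        GnomeVFSHandle   *handle;
        GnomeVFSFileSize  bytes_read;
        char              buf[4096];

        if (gnome_vfs_open (&handle, uri, GNOME_VFS_OPEN_READ) != GNOME_VFS_OK)
                return;

        while (gnome_vfs_read (handle, buf, sizeof (buf), &bytes_read)
               == GNOME_VFS_OK)
                fwrite (buf, 1, bytes_read, stdout);

        gnome_vfs_close (handle);
}

int
main (int argc, char **argv)
{
        gnome_vfs_init ();
        if (argc > 1)
                dump_uri (argv[1]);
        return 0;
}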

(Offtopic!), posted 4 Dec 2000 at 16:51 UTC by egnor » (Journeyer)

Once upon a time, when we didn't like someone, we called them a Nazi. Then that got old, so when we didn't like someone, we called them an AOL user. Apparently that, too, is old; now, when we don't like someone, we compare them to Slashdot users.

Until you see aaronl talking about hot grits, drawing penis birds, or calling RMS a goat fucker, I don't think the comparison is entirely fair...

(ObOnTopic: You chose a bad example, since MP3 players can easily fit into the pipe paradigm and simply accept data on standard input. If you want a program to accept an HTTP URL, I want the same program to accept a command to run, perhaps just by typing "wget -O- http://foo/ |" at the filename prompt. That way I can use "mywget" or a shell script wrapper to wget or "scp" or a complex pipeline of commands, rather than having to invent a wholly new syntax for invoking, controlling, scripting, and authoring little submodules. It's all about avoiding special-case mechanisms; if anything, I'd be happier with the ability to invoke a generic CORBA object that produces IInputStream or whatever [but see my objections to CORBA/Bonobo above] than this special-purpose pseudo-filesystem business.)

That said, I disagree with most of what aaronl has to say.

Change Notify (notification), posted 5 Dec 2000 at 13:42 UTC by lkcl » (Master)

gconf is a more interesting one. The limitations in the file system it attempts to solve seem to be: notification, very small files, and documentation. (Users having multiple backend sources is solved by union mount and userspace filesystems).
YES! notification _has_ been implemented in linux, a patch was submitted by the linuxcare oz research group for the 2.4 kernel.

let's now hope that the rest of the unixes follow suit.

p.s. it's based on the concepts of NT's "change notify" mechanism, which is extremely good, and works very well. the impl. for linux was needed for samba to have something decent to hook into, because the IRIX "change notify daemon" is... well... it's a userspace single process, single connection, which is utterly, utterly horrible.

framework approach sucks, posted 7 Dec 2000 at 12:04 UTC by Netdancer » (Journeyer)

The troublesome issue with GLib is that you're always expected to use the whole package at once. You can't just pick the threading support and ignore the rest, because library-specific data types are used throughout. Therefore, using GLib is a very invasive procedure, and one that can't easily be taken back.

In addition to that, as much as some people seem to ignore this issue, GLib is not an international standard; it's not part of 'C', and therefore using it is not portable to environments where it's lacking (and there are quite a lot of those at the moment). In contrast, a program written against glibc and compiled using gcc is mostly portable to other 'C' compilers and other libcs.

This is not GLib bashing; just think for a minute about who invented GLib, who is using it, and who you are now trying to make use it. There's a large number of 'C' library developers who didn't have any input into the design of GLib. This is not the way open standards are created.

Re: Documentation, posted 8 Dec 2000 at 16:43 UTC by nymia » (Master)

About Ryan's inquiry: it seems that there are documents describing the model, but they're owned by the architect and business analysts. That means I don't have the authority to release them, since they are now owned. What I can describe is the model we used to build the framework, which was based on public-domain material. Basically, the model is very simple: all classes communicate over one medium, and all messages are wrapped by a message class. All messages can be launched inside handlers and land on a standard, named receive member function. Finally, most classes have only one public method, and that is for receiving messages. That's all there is to it; it's very simple, yet it works. The architects, systems analysts and business analysts here couldn't agree more, as all business requirements are now in the process of being converted to use the new model. However, this model is not the answer to all architectural problems; it just happened to be the solution for them. It worked for them, it might not work for others.
