I Am Appeased
Stuart Langridge is my hero. That is like a million times faster than gedit, and doesn’t seem to lock up on large pages.
I’ve been trying to get to the “0 messages in your Inbox” status for a few days now. Before I started, I had some 5000 messages in my Inbox, starting with messages from early 2003. I’m now down to a little over 1200.
I’m not sure that my organization strategy is really the best, unfortunately. I have a folder for Work with sub-folders for various clients and sub-contract clients. I have a folder for Projects with sub-folders for all of the projects I work on or with. Then I have the Personal folder which has sub-folders for Friends, Family, Notes, and some hobbyist organizations I belong to.
At this point in my Inbox, I’m out of low hanging fruit to pick. I have messages that include things like personal replies to personal blog posts and stuff like that (e.g., commentary with Miguel de Icaza over the DS and Wii). Where do I put this stuff? I don’t want to mass delete what’s left because I’m quite sure there are some important things hidden in there that my searches missed, and I don’t want to individually delete messages that I know are unimportant because I don’t feel like manually sorting through 1200 messages.
I have an Archive folder for all my messages prior to 2003, so I suppose I could just move these messages there. If I realize I’m missing something I need, I can search for it in there. Perhaps I should split the Archive folder up by year while I’m at it, too.
How exactly does one have 20 years’ experience in Java or PHP? Neither language has even existed that long.
I’m hoping the intermediary just got the quote wrong, because the alternative would be that our new hire is from the future, and I’m just not prepared to accept such a possibility.
Ubuntu LTS Release Changes
Burgundavia: yeah, adding a new package management front-end would be a nightmare change in an LTS release. That would be like completely changing the installer in an LTS release, and surely Ubuntu wouldn’t ever… oh, wait.
KDE is Hard to Configure
So my roommate, who is setting up a Linux computer for a family member (their kids keep screwing up the Windows install by downloading tons of adware/spyware/virus crap), is using Ubuntu. After finally getting it installed, I commented to him that KDE is more configurable and a little closer to Windows. We converted the install to Kubuntu with a quick apt-get install kubuntu-desktop and a few other startup changes, and away he went.
So he comments to me this morning, “I think I’m going to install GNOME back as the default desktop. It’s easier to configure.”
I had a bit of a time trying to figure out what the heck he meant. GNOME’s lack of configurability is, of course, the design decision KDE fans most love to flame.
“What do you mean?” I ask him.
“I can’t figure out how the hell to set things up,” he replies with an ashamed look. “I’m new to Linux.”
I nod. “KDE is vastly more configurable than GNOME, though. You can get it to look and work however you think is best for your family there.”
A quick laugh, followed by, “sure, if I could figure out where the hell the right settings are. There’s too much crap in KDE to sort through! I know the setting is there, somewhere, but I don’t have a clue where! GNOME is easier to configure.”
Even being a GNOME user, I found that rather surprising. Of all the reasons I’d expect someone to prefer GNOME over KDE, I never expected that one. Goes to show that the “less is more” approach is as valid as the GNOME developers believe it to be.
Ubuntu Installer Rant
Another rant. I must be extra cranky lately or something.
This time, Ubuntu’s installer is under fire. Several releases back, Ubuntu switched to using a LiveCD as both their demo platform and their primary installation platform.
What a fucking disaster.
LiveCDs in general take a while to boot. They’ve got a huge OS to load, a ton of applications which aren’t even that useful on a LiveCD (the amount of configuration a heavyweight mail client like Evolution needs is just ridiculous in a LiveCD environment), CD drives aren’t particularly fast, and RAM is often a lot tighter than most people would prefer even when the filesystem isn’t copied into it. Even on a nice dual-core machine with 2GB of RAM, the LiveCD can take a while to boot. A painfully long while. On an old Pentium II with 256MB of RAM? You’re going to be waiting a good long while. I’ve downloaded and installed FreeBSD on a weaker machine in less time than it takes Ubuntu’s LiveCD to finish booting on some not-too-old machines, and booting is just the first step of many to get a usable system actually installed.
If you’re lucky enough for the damn thing to ever finish booting at all.
It has gotten to the point where the LiveCD installer is useless to me. I can’t get Ubuntu to install on the vast majority of machines I have access to using the LiveCD. It takes forever to boot, it locks during booting, it locks after booting, it locks during installation, it takes forever for installation to finish, or some other problem or issue pops up that didn’t exist in the old installer or the current “alternative installer.” The alternative installer has its own set of issues, though, which may be due to the less extensive testing it receives during development periods. I’m staring at a lockup on one machine right now from the alternative installer, and I’ve seen similar lockups on completely different machines installed from completely different CDs burned from independently downloaded ISOs from at least the last two releases.
Ubuntu is a nice OS, once you get it going. The major fuckup of switching to a slow, flaky, and mostly pointless LiveCD installer is killing any love I had for the distribution very quickly, and the pathetic state of the actual usable (when it doesn’t lock) alternative installer is making it difficult for me to even TRY to love Ubuntu, since I can’t get the damned thing installed on half the machines I’ve tried to put it on.
What is the point of the LiveCD installer? It’s a neat toy, and that’s about it. I haven’t once had a need to use the LiveCD to actually do anything useful (a text-mode rescue CD, on the other hand, is an invaluable tool that nobody should be without… those usually have more of the actual tools you need than the Ubuntu LiveCD, and they don’t take 3 minutes to boot on several-year-old hardware), and I’ve never found a need to wow and impress friends or family with a running Linux desktop on a LiveCD. If anything, I’d be embarrassed to show them a LiveCD, since it’s just going to make them think Linux is slow and buggy.
I see LiveCDs as a fad, and not one of the more useful kinds. They are not useful to most people, they certainly aren’t ideal installer platforms, and yet they’re everywhere these days.
Let the madness end! Please ship the next version of Ubuntu with an installer that actually bloody works… please!
/me goes to reboot a machine that locked up 85% into the Ubuntu install process.
GNOME Bloat Rant
Frustration Alert Level: Orange
Why do the super simple programs in GNOME all have a bazillion features and plugin support these days? Gedit is a monstrosity of complexity. It crashes viewing medium-sized HTML docs (Epiphany loads gedit to “View Source” on a page, unlike Firefox which uses its own HTML viewer) and I have no idea if it’s gedit itself, gtksourceview (because a text editor just has to have syntax highlighting), or some other plugin.
Why can’t GNOME just ship a bare-bones text editor a la Notepad instead of a giant monstrosity that nobody in his right mind is actually going to USE for anything? How many developers - the Gedit developers themselves included - actually use Gedit to work on code instead of Vim, Emacs, or some IDE? It doesn’t need syntax highlighting. It doesn’t need plugins (are there even any third-party Gedit plugins?).
Now Eye of GNOME has also gained the bloat. I’m sure it comes accompanied by an abysmal startup time, just like Gedit. Notepad, in a VirtualBox VM, loads instantly on a clean Windows XP box. Gedit takes 2 seconds on the same machine running native Linux, and that’s a warm startup at that.
Nautilus has long been a beast, although I think it at least has an excuse, what with its direction having changed several times after Eazel disappeared. I’m not saying it needs to be rewritten, but it probably has some cruft that could get the axe. Evolution has been a nightmare from day one, especially for the 95% of us who don’t use notes, todo, calendar, or various other groupware features. Most of us just check email, and use a contact database to make the checking and sending of email a little easier.
I use GNOME instead of KDE because I don’t want my desktop bogged down by tons of useless crap some developer wanted to add so he could wank off to how many features his program has. Gedit is full of features that nobody needs, and all those features destabilize it to the point where I can’t even trust it to do the one or two things I actually need it for.
I have a feature request for GNOME 2.22:
Cut out the pointless bored-programmers-induced bloat that nobody needs from Gedit, EOG, Nautilus, Evolution, gnome-panel, and so on. I’m sick of GNOME taking longer to load a working desktop (starting from when I finish logging in at GDM) than it takes for a virtualized Windows XP to load a working desktop (starting from when I click the “start machine” button).
Linux Packaging Sucks, Part II
This is a bit rambling. It’s late and this post isn’t as cooked as most of my posts are, so please forgive the wandering focus and half-complete ideas.
Would significantly expanding the LSB help, or is it also not the right solution?
I don’t think that standards in terms of the base platform are really a problem. That is solved, in my experience. I can take a binary and get it to run on a large variety of systems, especially if it’s compiled with a tool like apgcc that works around the incompatibilities glibc causes with each release (apps compiled against newer versions of glibc won’t run against older versions of the library, though the reverse works fine).
The vast majority of compatibility problems for your average applications are 100% artificial and are caused by the package system, or by build systems that don’t take into account the incompatibilities that glibc and certain other libraries cause.
For example, if you compile a GTK app, it will be locked into a certain version of the library even if you only ever use any public API from the 2.0 release. This is because macros are used to silently convert function calls to newer functions. The old function remains for backwards compatibility, but your application now depends on a version of the library with the new version of the function even though you never asked for the new version. If you know that you only need 2.0 features, compile against 2.0 or you’ll be artificially locked into a newer version.
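To make the mechanism concrete, here is a rough sketch of the kind of silent rebinding described above, using a made-up libfoo header rather than actual GTK code:

/* foo.h -- hypothetical library header, not actual GTK code */
void foo_do_thing (void);            /* present since libfoo 1.0 */
void foo_do_thing_full (int flags);  /* added in libfoo 1.2 */

/* The header quietly redirects old calls to the newer symbol... */
#define foo_do_thing() foo_do_thing_full (0)

/* ...so an app that only ever wrote foo_do_thing() now links against
   foo_do_thing_full and refuses to start on a system with libfoo 1.0. */

The app’s source never mentions the 1.2 function, yet the resulting binary depends on it; that is the artificial lock-in I’m complaining about.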
Perhaps LSB could help, but I’m not sure how. I’d rather that developers of libraries (and there are only a few guilty parties here) just took some care for those of us who don’t want to waste our lives compiling binaries other people have already compiled for us and fixed these silent ABI breaks. It would be far clearer, in my opinion, for a developer to explicitly state which version of a library his application is expecting, and to just not use any of the macro tricks.
For example, GTK could just include headers for all past versions (at least in cases when the ABI of a function is updated), so if an application does something like:
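(a rough sketch; the versioned gtk-2.4 include directory is hypothetical, just to show what a version-denoting include path could look like)

#include <gtk-2.4/gtk/gtk.h>   /* hypothetical versioned header path */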
The app will compile against the 2.4 ABI, will not be able to use any of the 2.6+ features (even if GTK 2.10 is installed), and no ABI changing tricks will be employed. The app will work on any machine with GTK 2.4+, even though it was technically compiled against GTK 2.10. Yeah, that means you need a lot of header file symlinks, versioned files, and #if checks for defining new functions… I think it’s worth it. The actual additions will not be that ugly or complex… certainly not much worse than the macro hacks in place now. :/
Instead of using header include paths to denote version, you could also just require a developer to state the version they want to use with a macro. Something like:
#define USE_GTK_VERSION 206 /* Version 2.6 */
If the macro is not defined, then implicitly define it as 200 (2.0). To use a newer set of functions, the developer must explicitly state which version of the API he wants.
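A rough sketch of how the header side of that could look, again using a made-up function name rather than real GTK API:

/* gtk.h -- hypothetical version gating, not actual GTK code */
#ifndef USE_GTK_VERSION
#define USE_GTK_VERSION 200           /* default to the 2.0 API */
#endif

#if USE_GTK_VERSION >= 206
/* declarations new in 2.6 only become visible when explicitly requested */
void gtk_frobnicate_fancy (void);     /* made-up 2.6-era function */
#endif

An app that never defines USE_GTK_VERSION simply cannot call the 2.6 additions, so it cannot accidentally pick up a dependency on them.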
This is really quite useful even if you live in a world where apps are only distributed as source. It makes development easier for projects like Mozilla, which want to ensure that their binaries can run on a variety of Linux distros, including those that ship (by FOSS terms) ancient libraries. It ensures that when a developer is working on the code and accidentally uses a function or other feature from a newer version of the library that compilation fails for him up front, instead of having to wait for it to get processed by a build daemon (which only has the oldest supported version of the library installed) or for a bug to be reported. It makes sure that the developers get what they asked for and nothing else, helping them to keep control over what their code is doing.
Make no mistake, both glibc and GTK are excellent examples of projects that “do things right” in terms of maintaining ABI and API compatibility with older versions. Old binaries run against newer versions of the libraries and old projects compile against newer versions of the libraries. Not too long ago in the FOSS world even that was hard to get. These libraries are huge, complex, and they’ve had to pull some tricks to be able to both fix bugs and maintain that compatibility. I just think that they could go one step further and solve the last remaining problem that the platform in general has with version portability.
At that point, it’s all up to the distributions and their myriad of incompatible and frankly archaic package systems to fix their user interface (mis)designs and try to either standardize their systems or include a new vendor-neutral second layer.
The second layer idea is one that’s been tried a lot by people fed up with the packaging solution. AutoPackage is a “second layer,” a system which allows users to install software outside of the core package framework. I don’t feel that it’s ideal, but there’s no reason this couldn’t work. I don’t expect it to be able to handle upgrades to large frameworks or core OS tools, but that’s not really a problem I think - even something like the core version of GNOME installed is, to most non-geeks, a part of the OS itself, and not just a set of add-on packages. (This is part of the reason why I disagree with people who feel that X isn’t part of the OS - maybe it’s not on your server, and maybe it’s not on our desktops, but if X disappeared from my mom’s machine, I guarantee you that the system would no longer be anywhere close to operating to her needs, as it could no longer run even a single one of her applications.)
I would have no problem at all with a distro that used two totally different package systems: one for its core libraries, tools, and required apps, and another for all the stuff that is optional. Just so long as the UI was sane, which includes a single GUI app to handle updates; there is no need for a GUI app to install core packages, since for most users they will always all be installed.
Some add-on package systems, like AutoPackage, try to use fancy autodetection code to pick up the various libraries they need. This I rather strongly disagree with as an optimal way of handling things. Preferably, and this is something the LSB perhaps could help with, every library should be easily detectable without any kind of custom logic. It should be as easy as checking if libfoo.so.X.Y exists. Some libraries don’t hold to this, unfortunately, as they keep the same .so but change data files or just don’t update the ABI version appropriately. In these cases, the OS needs to offer integration with its native package system. AutoPackage (or another add-on system) should be able to ask the core system if “libfoo.1.2” is installed, and the core system can then look in a database that maps these logical names to package names and can tell the add-on system if the library is available. It can also then offer hooks to ask for automatic download and installation of the library using the OS’s native software installation system.
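A minimal sketch of the “just check for the versioned soname” idea, assuming dlopen is available and using a made-up library name and version:

/* checklib.c -- is libfoo.so.1.2 installed? (hypothetical soname) */
/* build with: gcc checklib.c -o checklib -ldl                     */
#include <dlfcn.h>
#include <stdio.h>

int main (void)
{
    void *handle = dlopen ("libfoo.so.1.2", RTLD_LAZY);
    if (handle) {
        printf ("libfoo 1.2 is available\n");
        dlclose (handle);
        return 0;
    }
    printf ("libfoo 1.2 not found; fall back to asking the native package system\n");
    return 1;
}

Anything smarter than this (version ranges, renamed data files, libraries that never bump their soname) is exactly where the native package system’s database has to be consulted instead.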
Ideally, though, all software should follow the existing rules and practices and make checking for libraries easy. Existing package systems should be able to take a library name and find and install the appropriate package. Standardized utilities and scripts (e.g., /usr/sbin/sendmail) should be enough to check for instead of needing goofy hacks like DPKG’s and RPM’s meta-depends. It should be stupidly simple to write a wrapper for YUM, APT, and so on that just takes a library or binary name and installs a package that includes the appropriate library or binary that is guaranteed to be compatible as one would expect. Things like Perl modules and such also need support, of course.
Even if the LSB had a tool that used those principles to make it easy to install packages using vendor-neutral names, it still wouldn’t come close to solving the real problem. Packages still need a way to call those tools, requiring some standard package format. (The existing “standard” LSB uses is pretty much a joke in terms of actually being useful.) The distros’ tools still need a huge amount of polish. The standard package format needs the meta-data additions to make logical (from the point of view of users) package grouping possible. Updates have to be handled sanely. Upstream developers have to be able to rely on the ability to make a single package that works on every distro (well, one package per architecture, obviously).
Hacked Credit Database
Someone out there has a hacked credit card database. I know this because there is a new $280 purchase from Old Navy in Goodyear, AZ on my bank statement. I, of course, live nowhere near Goodyear, AZ, and I have never shopped at Old Navy.
Tomorrow I get to go bitch, get the charge removed, and get my third new debit card in a year. The first two were proactive replacements from the bank after other sites got broken into and banks replaced all the cards in those vendors’ databases.
Seriously, people, it is NOT hard to secure credit card databases. Starting with the fact that in the vast majority of cases, you don’t even need to store the credit card number if you’re doing immediate authorization. I want to blame shitty programmers. It probably was shitty programmers. It could also just be dumb management. In my experience, most managers and store owners (the people who pay programmers or contract to companies who pay the programmers) don’t understand a damn thing about security and literally demand (on threat of unemployment) that the programmers do stupid-ass things like store not only credit card numbers but also CCV codes because the billing department thinks that they need that info to do their offline credit card processing.
Then you combine that with either shitty programmers or restrictive budgets and deadlines and you get unencrypted or weakly encrypted databases storing this info. Then the site eventually gets hacked into and aside from all the other usual treasures in the database, the hacker also has a huge list of complete credit card numbers with expiration dates and CCV numbers that the vendor didn’t even need to have because CCV numbers aren’t required to charge a card.
I’ve seen vendors demand that they take the CCV number for offline processing because they think that customers won’t use a store that doesn’t take a CCV number. The problem here is the gross misconception that CCV numbers are there to protect the consumer. They’re not. The card is already insured. I will be getting my $280 back for sure. The CCV is there to protect the _vendor_, to ensure that any cards they accept are less likely to be fraudulent. The CCV is there because the credit card companies don’t want to lose money to thieves, so they enforce PCI compliance when they can to make sure that vendors make it difficult to use stolen cards. Unfortunately, most vendors don’t comply with PCI standards, so they not only fail to use the CCV code properly, they also fail to maintain all of the security standards that PCI requires to protect stores that use CCV codes, thereby making the whole thing moot.
I looked through my statement for the last three months (about half the time I’ve had this particular card number), and I can’t find a single shoddy/no-name store in the list. All of my payments have been to large well-known chains (Krogers, Outback, etc.), upscale local businesses (Yotsuba, Zingermans), local convenience stores (Marathons and another whose name I don’t recall), big-name online retailers and payment services (Amazon, Paypal, etc.), and the US government (taxes and stuff). I haven’t entered my credit card number in any other sites, lent the card to anyone, or otherwise done anything stupid or reckless with that data. So some fairly big well-known company is at fault here. I haven’t seen any recent announcements of hacked databases, so someone has access to one without anyone knowing (yet), or the company is breaking laws and keeping it a secret out of fear of the legal and public backlash that announcing cracked insecure credit card databases always brings.
In any event… I’m pissed and irritated.
And to whoever has nice new clothes from Old Navy - you’re a douchebag. (I’m not sure if that applies to everyone who has recently shopped there or just the thief - I mean, it’s Old Navy for goodness’ sake, what the hell is wrong with you people?)
Novell is not anti-GPL
salimma: “once” anti-GPL?!
Are you trying to imply that Novell is anti-GPL? What in ****’s sake gives people stupid ideas like that?
Novell is one of the largest producers of GPL code in existence at the moment, employing people who work on a huge range of projects including the kernel, KDE, GNOME, various core GNU tools and libraries, RPM, GCC, Apache, Samba, and whole tons of other stuff.
Some of their products being closed-source does not mean they’re anti-GPL. It doesn’t even mean that they’re not pro-GPL. It just means that they want to keep making money in order to keep employing engineers and tons of other people so that they can eat. Not everybody gets donated tens of thousands of dollars to preach ideological bullshit like Stallman does. It’s easy for him to say, “If you can’t work on Free Software for a living, go into a different field instead of making proprietary software.” He’s never had to actually make that choice.
minor note: yes I realize that not all of the projects I listed above are under the GPL. My point was merely that Novell works on a huge range of Free Software and Open Source projects without trying to subvert them.