Recent blog entries for yeupou

March 31st, Karen Sandler: “Financially the (GNOME) Foundation is in good shape”

I wanted to post this as a side note. But that’s a bit too much.

I dropped GNOME years ago, back in the days when they dropped tons of cash on people creating shitty confusing companies like Eazel and HelixCode. I said Nautilus would never amount to anything and it never did. I said Miguel de Icaza was taking a very questionable path and he ended up writing proprietary software. If it weren’t so sad, it would be kind of funny to see that nothing has changed since then. Their Foundation is going more or less bankrupt while their financial reports show that, for instance in 2012, they spent 1/4 of their resources on the pet project of their “executive director” Karen Sandler, some sexist bullshit called “Women’s Outreach” (I’m waiting for the “Black’s Outreach”, etc).

You don’t know who Karen Sandler is? Typical GNOME character: someone who never achieved anything related to computing but has been selected to be some sort of speaker nonetheless. I’m not saying that only people who produced something that actually serves or served a purpose are entitled to speak. But to put people in a position of “director”/whatever, at some point, there should be some knowledge, abilities, even just ideas, that make the person stand out enough to be entitled to represent or lead the others.

So what could she speak of? About bad management?

More like, on GNOME.org “Announcing her departure, Karen said: “Working as the GNOME Foundation Executive Director has been one of the highlights of my career.” She also spoke of the achievements during her time as Executive Director: “I’ve helped to recruit two new advisory board members… and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time.”” 

Typical GNOME bullshit? Indeed: pompous titles, bragging, claiming. “Successful funding campaigns”? Seriously? “Amazing work”. “Mind-blowing”. It’s sad for the few GNOME developers that are worth it, because the whole thing is a fucking joke. It’s just empty words, with no damn facts that matter and that are even slightly true.

Not convinced? Too harsh maybe? Keep on reading. On her blog you’ll get her statement. The one quoted on GNOME.org.

“I think I have made some important contributions to the project while I have been Executive Director. I’ve helped to recruit two new advisory board members, and we recently received a one time donation of considerable size (the donor did not want to be identified). Financially the Foundation is in good shape, and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy and accessibility. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time.
I hope that I have helped us to get in touch with our values during my time as ED, and I think that GNOME is more aware of its guiding mission than ever before.”

Yes, you can skip the fact that she considers recruiting advisory board members an achievement (!!!). It seems that she thinks a Foundation should focus on itself and not on the project it derives from; she does not even for a second mention anything the software project GNOME would benefit from directly.

GNOME.org quoted her, putting three dots and skipping “Financially the Foundation is in good shape” – and this just one week before we’re told they are definitely not.

She’s right on one thing though: now GNOME is definitely “more aware of its guiding mission than ever before”, since they are forced to cut all unnecessary expenses like the ones she promoted.

I’m not sure I understand why someone as smart as Bradley Kuhn recruited her at the Software Freedom Conservancy.


Syndicated 2014-04-14 15:15:24 from # cd /scratch

Synchronizing your (Roundcube) webmail and (KDE) desktop with a (Android) phone

So I finally got an Android-based phone. I had thought of waiting for the Ubuntu/Firefox stuff to be released, but my current one (Bada-based: never ever) died.

First, I learned that you actually need to lock your phone to a Google account for life. It just confirmed that the sane, proper first step is to remove anything linked to Google.

The first place to go is F-Droid. From there, instead of getting tons of shitty freeware from Google Play/Apps/whatever, you get Free Software – as in freedom, even though I also like free beer.

Using ownCloud? From F-Droid, get DavDroid. Yes, it works perfectly and is easy to set up, unlike the DAV-related crap on Google Apps. The only thing you have to take care of, if your SSL certificate (trendy topic these days) is self-signed, is to make a certificate the specific way Android accepts them. For now, they recommend doing it like this:

#http://vimeo.com/89205175

KEY=fqdn.servername.net

# create a self-signed certificate (valid ~10 years)
openssl req -new -x509 -days 3550 -nodes -out $KEY.pem -keyout $KEY.key
# convert it to the DER format Android accepts
openssl x509 -in $KEY.pem -outform der -out $KEY.crt
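Before importing the DER file on the phone, it can be worth sanity-checking it. A self-contained sketch (throwaway hostname, with -subj added only to skip the interactive prompts):

```shell
# same generation as above, non-interactively (hostname is illustrative)
KEY=test.example.net
openssl req -new -x509 -days 3550 -nodes -subj "/CN=$KEY" \
  -out $KEY.pem -keyout $KEY.key 2>/dev/null
openssl x509 -in $KEY.pem -outform der -out $KEY.crt
# inspect the DER file Android will import: subject and expiry should look sane
openssl x509 -in $KEY.crt -inform der -noout -subject -enddate
```

If the last command errors out, the file is not the DER Android expects.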

Apart from that, everything is straightforward. You just add your IMAPS, CalDAV and CardDAV info like you did with KDE and Roundcube. And you can obviously also use Mozilla Sync through your ownCloud.


Syndicated 2014-04-14 14:21:54 from # cd /scratch

Replicating IMAPS (dovecot) mail folders and sharing (through ownCloud) contacts (kmail, roundcube, etc)

dual IMAPs servers:

Having your own server handling your mail is enabling: you can implement anti-spam policies harsh enough to be incredibly effective, set up temporary catch-all addresses, etc. It does not even require much maintenance these days; it just takes a little time to set up.

One drawback, though, is that if your host is down, or simply its link, then you are virtually unreachable. So you want a backup server. The straightforward solution is a backup that simply forwards everything to the main server as soon as possible. But having a backup server that is a replica of the main server allows you to use one or the other indifferently, and to always have one up at hand.

In my case, I run exim along with dovecot. So once the exim setup is replicated, it’s only a matter of making sure both hosts have a proper dovecot setup – in my case, in /etc/dovecot/conf.d/10-mail.conf:

mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX
mail_privileged_group = mail

along with ssl = required in /etc/dovecot/conf.d/10-ssl.conf. You obviously need to create a certificate for IMAPS, named as described in said 10-ssl.conf, but that’s not the topic here – you can use plain IMAP if you wish.

Then, for each user account (assuming we’re talking about a low number), it’s as simple as making sure passphrase-less SSH access works from one of the hosts to the other, and adding a cronjob like:

*/2 * * * *     user   dsync mirror secondary.domain.net 2> /dev/null

The first run may be a bit slow but it goes very fast afterwards (I do have a strict expire policy though, which probably helps). This is done the primitive way; recent versions of dovecot (i.e. not yet in Debian stable) provide plugins to do it.
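The passphrase-less SSH access mentioned above can be prepared roughly like this (a sketch; the dedicated key name and the remote host are illustrative):

```shell
# generate a dedicated key with an empty passphrase for the replication job
mkdir -p ~/.ssh
ssh-keygen -t rsa -N "" -f ~/.ssh/id_dsync -q
# then push the public key to the other host (asks for the password once):
#   ssh-copy-id -i ~/.ssh/id_dsync.pub user@secondary.domain.net
```

Using a dedicated key rather than your main one makes it easy to restrict or revoke later without breaking interactive logins.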

You may as well install unison on both servers and synchronize things like ~/.procmailrc, /etc/aliases or whatever, for instance:

8 */2 * * *	user	unison -batch -auto -silent -log=false ~/.procmailrc ssh://secondary.domain.net//home/user/.procmailrc 2> /dev/null

Once you have checked that you can properly log in on both IMAPS servers, it’s just a matter of configuring your mail clients.

and many mail clients:

I use the roundcube webmail whenever I have no access to a decent system with a proper mail client (kmail, gnus, etc) configured. With two IMAPS servers, there’s no reason not to have the same webmail setup on both.

The only annoying thing is not having a common address book. It’s possible to replicate the roundcube database, but it’s even better to have a cloud to share the address book with any client, instead of doing some roundcube-specific crap. So I went for the option of installing ownCloud on one of the hosts (so far I haven’t decided whether there is a point in also replicating the cloud – it seems a bit overkill to replicate data that is already some sort of backup or replica), which was pretty straightforward since I already have nginx and php-fcgi running. And then it was just a matter of plugging roundcube into ownCloud through CardDAV.

Once done, you may just want to also plug your ownCloud calendar/addressbook into KDE etc, so all your mail clients will share the same address book (yeah!). Completely unrelated: adding mozilla_sync to your ownCloud is worth it too.

The only thing missing so far is the replication of your own identities – I haven’t found anything clear about that, but haven’t looked into it seriously either. I guess it’s possible to put ~/.kde/share/config/emailidentities on the cloud, or to use it to extract identity vCards, but I’m not sure a dirty hack is worth it. It’s a pity that identities are not part of the address book.

(The alternative I was contemplating before was to use kolab; I needed ownCloud for other matters so I went for this option, but I keep kolab in mind nonetheless.)


Syndicated 2014-02-10 15:19:44 from # cd /scratch

Release: SeeYouLater 1.2

Hi there! I’ve just released SeeYouLater 1.2 (it fetches a list of IPs of known spammers and bans them by putting them in /etc/hosts.deny). It now includes seeyoulater-httpsharer, which enables sharing the ban list over HTTP instead of authenticated MySQL. It’s useful for distant hosts with unreliable links to each other, or to avoid having MySQL listening on public ports.

You can obtain it on the Gna! project page using SVN or Debian packages.


Syndicated 2014-02-07 19:55:46 from # cd /scratch

Caching debian/etc (apt) repositories on your local server with nginx and dsniff

It’s quite easy to set up a Debian mirror. But having a mirror on a local server is rather overkill in a scenario where you simply have, say, 3 boxes regularly running Debian testing amd64, 1 box running the same on i686 and 2 other boxes on Ubuntu. It’s more caching than mirroring that you’ll want, as transparently (with no client-side setup) as possible.

And that’s overly easy to do with nginx, similarly to Steam depot caching. No, really, just do the same!

So, assuming nginx and dnsspoof are already up and running (if not, really follow the link about the Steam cache), you want to:

- create the apt folders…

mkdir -p /srv/www/apt/debian /srv/www/apt/debian-security /srv/www/apt/ubuntu
chown www-data:www-data -R /srv/www/apt
cd /srv/www
ln -s /srv/www/apt/debian .
ln -s /srv/www/apt/debian-security .
ln -s /srv/www/apt/ubuntu .

- update nginx by adding a /etc/nginx/sites-available/apt (and a symlink in /etc/nginx/sites-enabled/) with:

# apt spoof/proxy
server  {
  listen 80;
  server_name ftp.fr.debian.org security.debian.org fr.archive.ubuntu.com security.ubuntu.com;

  access_log /var/log/nginx/apt.access.log;
  error_log /var/log/nginx/apt.error.log;

  root /srv/www/;
  resolver 127.0.0.1;

  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;

  location /debian/ {
    try_files $uri @mirror;
  }

  location /debian-security/ {
    try_files $uri @mirror;
  }

  location /ubuntu/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/apt.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
   }
}

- add the new domains to be spoofed in /etc/dnsspoof.conf:

10.0.0.1	ftp.fr.debian.org
10.0.0.1     security.debian.org
10.0.0.1	fr.archive.ubuntu.com
10.0.0.1     security.ubuntu.com

Then you have to restart both nginx and dnsspoof. Obviously, the domains have to match the sources you have configured in /etc/apt/sources.list[.d] – should be the nearest hosts to your location.
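Once restarted, a quick way to check the whole chain is to request a known index file through the cache by hand (a sketch; the IP, mirror host and path are illustrative and must match your sources.list, and this obviously needs the live setup):

```shell
# ask the caching server directly, pretending to talk to the mirror;
# a 200 means nginx proxied (and stored) the file
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Host: ftp.fr.debian.org' \
  http://10.0.0.1/debian/dists/testing/Release
# the fetched file should now appear under the cache root
ls -l /srv/www/apt/debian/dists/testing/
```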

And since you do not want to keep a complete archive, you need to add a cronjob to remove outdated files, like this /etc/cron.weekly/apt-cache:

#!/bin/sh
# cleanup apt mirrors:

# remove any file that has not been accessed in the last 30 days 
find /srv/www/apt -type f -atime +30 -print0 | xargs -0 --no-run-if-empty rm

# remove any empty dir (except the main ones)
find /srv/www/apt -mindepth 2 -type d -empty -print0 | xargs -0  --no-run-if-empty rm -r

Done.


Syndicated 2014-01-28 16:08:06 from # cd /scratch

Running Debian GNU with kFreeBSD

As you could have guessed considering my latest update to my iPXE setup, I’m currently giving Debian GNU with the FreeBSD kernel – Debian GNU/kFreeBSD – a try.

The hardware I’m trying this with is neither simple nor complicated: it’s old, but it’s also a laptop: a Dell Latitude C640 with a P4 mobile CPU and 1GB RAM.

The install was made over the network. There’s nothing overly complicated, but to avoid wasting time, it’s always good to properly RTFM. For instance, I learned too late that kFreeBSD does not handle a / partition set on a logical one. I did not understand exactly why, but I had to put my / partition on UFS (ext2 for /home was ok though). I did not even get into ZFS, as it looks like it’s not recommended with a simple i686 CPU. It took me a while to find that there is no way to get my NFSv4 partitions mounted as usual from /etc/fstab, or even with mount; I had to add a dirty call to /sbin/mount_nfs -o nfsv4 gate:/all /path in /etc/rc.local. And when it came to Xorg, I found the mouse sometimes working, sometimes not, with plenty of overly complicated and confusing info on the web, to finally come up with a working /etc/X11/xorg.conf containing only:

Section "ServerFlags"
  Option "AutoAddDevices" "False"
EndSection

These are little inconveniences that you would not expect from a recent GNU/Linux system install, and that the debian-installer does not in any way prevent you from hitting/creating. I’m not even sure that I found the best fixes for them. It feels a bit like installing RedHat 5.2 :-) which is more than what I actually expected.

So far I have not encountered any issue getting things working, but suspend/sleep and general energy management look much less reliable (with xfce4). On a side note, the fact that only OSS is available with kFreeBSD pushed me to update my wakey.pl script; I expect it to run on any BSD now.


Syndicated 2014-01-10 18:54:37 from # cd /scratch

Expiring old mails on (dovecot IMAPS) server side

Years ago, I was using gnus to read my mail: among other things, I liked the fact that, by default, as expected from a newsreader, it was only showing unread messages and properly expiring old messages after some time. Then, using KDE, at some point, I switched to Kmail because of its nice integration within the desktop environment. Obviously I had to configure it to remove old mails (expire) in a similar fashion.

Then Kmail2 arrived. I’m not able to use this thing. It either does not even start, or starts overly slowly and uses up 100% of CPU time for minutes, whatever computer I’m using, whether it’s an old regular P4 or an Athlon II X4, whether I have 1GB RAM or 8. I gather it’s related to akonadi/nepomuk/whatever, stuff supposed to improve your user experience with fast search and so on. Fact is, it’s unusable on any of my computers. So these days I end up using the Roundcube webmail, which is not that bad, but it makes me wonder whether it’s worth waiting for Kmail2 to be fixed and, worse, leaves me with IMAPS folders full of thousands of expired messages that should be removed.

So this led me to consider doing the expires on the server side instead of the client side, with my user crontab on the server. Logged in on the server, I just ran crontab -e and added the following:

# dovecot expires (SINCE means: received more recently than)
# not flagged and already read, 1 week old min
05 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w
# not flagged nor read, 8 weeks old min
09 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' not SEEN not FLAGGED not SINCE 8w
# read junk, 2 hours old min
15 */5 * * * 	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' SEEN not SINCE 2h
# unread junk, 2 days old min
25 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' not SEEN not SINCE 2d

(Obviously you want to replace user by your local user account and Trash/Junk by your relevant junk IMAP folder.) This setup could probably be enhanced by using flags like DRAFT and such – however, on my local server, no actual draft got properly flagged as such, so it’s better to rely on the basic FLAGGED mark.
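Before wiring such rules into cron, it may be worth previewing what a given query would expunge: doveadm has a search command that takes the same query syntax (this obviously requires a live dovecot and a real user, both assumed here):

```shell
# list the messages matching the first rule above, without deleting anything
# ('user' is a placeholder, as in the crontab)
doveadm search -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w
```

It prints one mailbox GUID and UID per matching message; an unexpectedly long list is a good hint to tighten the rule first.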


Syndicated 2013-12-08 12:21:10 from # cd /scratch

Booting over the network to install the system (improved, with iPXE, installing Debian GNU/kFreeBSD)

I improved my already improved (iPXE instead of PXE) setup to boot over the network to install the system, so it now also works with Debian GNU/kFreeBSD. It simply uses the grub2pxe file provided by debian-installer. Check my PXE git directory.


Syndicated 2013-12-07 12:01:31 from # cd /scratch

Caching steam depots on your local server with nginx and dsniff

While I usually don’t advertise non-libre software, for obvious reasons (it’s a stupid way to think about computing), I admit that the Steam platform goes toward what I’ve wanted to see for many years. A proprietary software platform indeed – but the business is not made out of selling overly expensive DVD-ROMs once in a while, but cheap soft (downloadable) copies of games (often) maintained over years. They also seem about to base a future gaming console on some sort of GNU/Linux flavor; that’s not philanthropy, that’s just the only clever way to do a cool gaming-based business without getting totally dependent on another software supplier that also brands its own gaming console. The latest South Park was about the fight between the latest Xbox and PlayStation. This issue only exists when you decide to make consoles incompatible with usual workstations, a shortcut with so many shortcomings. Making a GNU/Linux-based console, because it is good business, is obviously going in the right direction.

So I’ll allow myself a little reminder here on how not to waste your bandwidth on a local network where several computers have copies of the same Steam game. It’s merely a simplified version of the well-thought-out Caching Steam Downloads @ LAN’s article. Obviously, to do this, you need to have your own home server. For instance, it should work out of the box with a setup like this (referred to as “the setup mentioned before” from now on in this article).

A) HTTP setup

We first create a directory to store Steam depots. It will be served over HTTP, so you need to create something like this (working with the setup mentioned before):

mkdir /srv/www/depot
chown www-data:www-data /srv/www/depot

Next, you want to set up nginx to be able to serve as a Steam content provider. Everything is based on HTTP – no proprietary non-standard crap – so it can only go smoothly.

If you have the setup mentioned before, then /etc/nginx/sites-available/default contains a server { } statement for the general intranet. Add a new file called /etc/nginx/sites-available/steam with the following (watch out for the listen and allow statements, change them depending on your server intranet IP!):

# steam spoof/proxy
server  {
  # you want this line to be set to your server intranet IP
  listen 10.0.0.1;
  listen 127.0.0.1;
  server_name *.steampowered.com;

  access_log /var/log/nginx/steam.access.log;
  error_log /var/log/nginx/steam.error.log;

  root /srv/www/;
  resolver 8.8.8.8;

  # restrict to local wired network
  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;
  location /depot/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/steam.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }
}

Make it live:

cd /etc/nginx/sites-enabled && ln -s ../sites-available/steam .
invoke-rc.d nginx restart

Now nginx is able to fetch and serve steam depot files.

B) DNS setup

Now, you need your server to actually handle requests to the Steam content servers, spoofing these servers’ IPs. It could be done by messing with the DNS cache server already up in the setup mentioned before, but I actually find it much more convenient to use dnsspoof from the dsniff package, with a two-line configuration, than to waste time creating, say, unnecessarily complex bind9 db files.

So we first install dnsspoof:

apt-get install dsniff

Here comes the two-line configuration, set in /etc/dnsspoof.conf. Obviously, here too you have to set the IP to be your server’s intranet one.

10.0.0.1     *.cs.steampowered.com
10.0.0.1     content*.steampowered.com

Then you want an init.d script. You can create an ugly /etc/init.d/dnsspoof with the following (obviously, you want your ethernet ethX device to be properly set!):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          dnsspoof
# Required-Start:    bind9
# Required-Stop:     
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start dnsspoof over bind9
# Description:       Start dnsspoof over bind9
### END INIT INFO

# shitty version for testing purpose
/usr/sbin/dnsspoof -i eth1 -f /etc/dnsspoof.conf 2> /dev/null > /dev/null &

# EOF

Once ready, just start the spoofer:

chmod 755 /etc/init.d/dnsspoof
invoke-rc.d dnsspoof start

Now you can restart Steam on your client computers. It should work properly. You can check whether new directories appear in /srv/www/depot and monitor the /var/log/nginx/steam* logs.

I’ll soon add a small script to get more meaningful info about the depots available on your server, so you can know which is what in a jiffy and remove the no-longer-useful ones willy-nilly.


Syndicated 2013-11-30 12:02:51 from # cd /scratch

Add alphanumeric prefixes to files inside a directory that serves as queue

In case, as in the post-image-to-tumblr.pl example from my previous article, you use a directory as a queue, you may want an easy way to rename files.

For instance, if you have files like shot001.png, shot003.png, shot012.png, whenever you want to insert a file at a specific position in the queue, you are forced to rename it to something like shotXXX.png; you may even have to rename other files.

So this qrename.pl script adds a prefix like CCC5---$file in front, until it reaches WWW…, using 7 different characters only, so it’s really easy to insert files anywhere. If it reaches WWW, then it’ll use the form WWWNNN5---, NNN being a three-digit counter. You can set how many digits you want with the option --max-queue-digits, so you can virtually manage a queue with as many files as you want (however unpractical that could actually be). It works on the current directory, only on regular files, and does not actually do anything unless you set the option --please-do, in order to avoid any accidental mess.


Syndicated 2013-09-11 15:35:35 from # cd /scratch
