Older blog entries for yeupou (starting at number 175)

Replicating IMAPs (dovecot) mails folders and sharing (through ownCloud) contacts (kmail, roundcube, etc)

dual IMAPs servers:

Having your own server handling your mails is enabling: you can implement anti-spam policies harsh enough to be incredibly effective, set up temporary catch-all addresses, etc. It does not even require much maintenance these days; it just takes a little time to set up.

One drawback, though, is that if your host is down, or simply its link, then you are virtually unreachable. So you want a backup server. The straightforward solution is a backup that simply forwards everything to the main server as soon as possible. But a backup that is a replica of the main server allows you to use one or the other indifferently, and to always have one up at hand.

In my case, I run exim along with dovecot. So once the exim setup is replicated, it's only a matter of making sure to have a matching dovecot setup. In my case that means, in /etc/dovecot/conf.d/10-mail.conf:

mail_location = maildir:~/.Maildir:LAYOUT=fs:INBOX=~/.Maildir/INBOX
mail_privileged_group = mail

along with, in /etc/dovecot/conf.d/10-ssl.conf:

ssl = required

(You obviously need to create a certificate for IMAPs, named as described in said 10-ssl.conf, but that's not the topic here; you can use plain IMAP if you wish.)

Then, for each user account (assuming we're talking about a low number), it's as simple as making sure SSH access without a passphrase works from one host to the other, and adding a cronjob like:

*/2 * * * *     user   dsync mirror secondary.domain.net 2> /dev/null
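If the passphrase-less SSH access is not in place yet, a minimal sketch (assuming the same account name on both hosts):

ssh-keygen -t rsa -N ''                   # key with an empty passphrase
ssh-copy-id user@secondary.domain.net     # authorize it on the other host
ssh user@secondary.domain.net true        # must succeed without prompting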

The first run may be a bit slow but it goes very fast afterward (I do have a strict expire policy, though, it probably helps). This is done the primitive way; recent versions of dovecot (ie: not yet in Debian stable) provide plugins to do it.

You may as well install unison on both servers and synchronize things like ~/.procmailrc, /etc/aliases or whatever, for instance:

8 */2 * * *	user	unison -batch -auto -silent -log=false ~/.procmailrc ssh://secondary.domain.net//home/user/.procmailrc 2> /dev/null

Once you have checked that you can properly log in on both IMAPs servers, it's just a matter of configuring your mail clients.

and many mail clients:

I use the roundcube webmail whenever I have no access to a decent system with a proper mail client (kmail, gnus, etc) configured. With two IMAPs servers, there is no reason not to have the same webmail setup on both.

The only annoying thing is not having a common address book. It's possible to replicate the roundcube database, but it's even better to have a cloud to share the address book with any client, instead of doing some roundcube-specific crap. So I went for the option of installing ownCloud on one of the hosts (so far I have not decided whether there is a point in replicating the cloud too; it seems a bit overkill to replicate data that is already some sort of backup or replica), which was pretty straightforward since I already have nginx and php-fcgi running. And then it was just a matter of plugging roundcube into ownCloud through CardDAV.
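For reference, the CardDAV address book URL to point clients at should follow this pattern with ownCloud releases of that era (host and user name are placeholders):

https://cloud.example.org/remote.php/carddav/addressbooks/user/contacts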

Once done, you may want to also plug your ownCloud calendar/addressbook into KDE etc, so all your mail clients will share the same address book (yeah!). Completely unrelated: adding mozilla_sync to your ownCloud is worth it too.

The only thing missing so far is the replication of your own identities. I haven't found anything clear about that, but I haven't looked into it seriously either. I guess it's possible to put ~/.kde/share/config/emailidentities on the cloud, or to use it to extract identities as vcards, but I'm not sure a dirty hack is worth it. It's a pity that identities are not part of the address book.

(The alternative I was contemplating before was to use kolab; I needed ownCloud for other matters so I went for this option, but I keep kolab in mind nonetheless.)


Syndicated 2014-02-10 15:19:44 from # cd /scratch

Release: SeeYouLater 1.2

Hi there! I've just released SeeYouLater 1.2 (it fetches lists of IPs of known spammers and bans them by putting them in /etc/hosts.deny). It now includes seeyoulater-httpsharer, which enables sharing the ban list over HTTP instead of authenticated MySQL. It's useful for distant hosts with an unreliable link to each other, and to avoid having MySQL listening on public ports.

You can obtain it on the Gna! project page using SVN or Debian packages.
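A checkout should look like this, assuming the usual Gna! SVN layout (the exact URL is listed on the project page):

svn co svn://svn.gna.org/svn/seeyoulater/trunk seeyoulater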


Syndicated 2014-02-07 19:55:46 from # cd /scratch

Caching debian/etc (apt) repositories on your local server with nginx and dsniff

It's quite easy to set up a Debian mirror. But having a full mirror on a local server is rather overkill in a scenario where you simply have, say, 3 boxes regularly running Debian testing amd64, 1 box running the same on the i686 arch and 2 other boxes on Ubuntu. It's more caching than mirroring that you'll want, as transparently (with no client-side setup) as possible.

And that's really easy to do with nginx, similarly to the Steam depot caching. No, really, just do the same!

So, assuming nginx and dnsspoof are already up and running (if not, really do follow the link about the Steam cache), you want to:

- create the apt folders…

mkdir -p /srv/www/apt/debian /srv/www/apt/debian-security /srv/www/apt/ubuntu
chown www-data:www-data -R /srv/www/apt
cd /srv/www
ln -s /srv/www/apt/debian .
ln -s /srv/www/apt/debian-security .
ln -s /srv/www/apt/ubuntu .

- update nginx by adding a /etc/nginx/sites-available/apt (and a symlink in /etc/nginx/sites-enabled/) with:

# apt spoof/proxy
server  {
  listen 80;
  server_name ftp.fr.debian.org security.debian.org fr.archive.ubuntu.com security.ubuntu.com;

  access_log /var/log/nginx/apt.access.log;
  error_log /var/log/nginx/apt.error.log;

  root /srv/www/;
  resolver 127.0.0.1;

  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;

  location /debian/ {
    try_files $uri @mirror;
  }

  location /debian-security/ {
    try_files $uri @mirror;
  }

  location /ubuntu/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/apt.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
   }
}

- add the new domains to be spoofed in /etc/dnsspoof.conf:

10.0.0.1	ftp.fr.debian.org
10.0.0.1	security.debian.org
10.0.0.1	fr.archive.ubuntu.com
10.0.0.1	security.ubuntu.com

Then you have to restart both nginx and dnsspoof. Obviously, the domains have to match the sources you have configured in /etc/apt/sources.list[.d]; they should be the hosts nearest to your location.
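With the dnsspoof init script described in the Steam cache article, that amounts to:

invoke-rc.d nginx restart
invoke-rc.d dnsspoof restart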

And since you do not want to keep a complete archive, you need to add a cronjob to remove outdated files, like this /etc/cron.weekly/apt-cache:

#!/bin/sh
# cleanup apt mirrors:

# remove any file that has not been accessed in the last 30 days 
find /srv/www/apt -type f -atime +30 -print0 | xargs -0 --no-run-if-empty rm

# remove any empty dir (except the main ones)
find /srv/www/apt -mindepth 2 -type d -empty -print0 | xargs -0  --no-run-if-empty rm -r
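To check that files actually get stored, a quick test from the server itself (the Release file is just a convenient example):

curl -s -H "Host: ftp.fr.debian.org" http://10.0.0.1/debian/dists/testing/Release > /dev/null
ls -l /srv/www/apt/debian/dists/testing/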

Done.


Syndicated 2014-01-28 16:08:06 from # cd /scratch

Running Debian GNU with kFreeBSD

As you could have guessed considering the latest update to my iPXE setup, I'm currently giving a try to Debian GNU along with the FreeBSD kernel: Debian GNU/kFreeBSD.

The hardware I'm giving this try with is neither simple nor complicated: it's old, but it's also a laptop; a Dell Latitude C640 with a P4 mobile CPU and 1GB RAM.

The install was made over the network. There's nothing overly complicated, but to avoid wasting time, it's always good to properly RTFM. For instance, I learned too late that kFreeBSD does not handle a / partition set on a logical one. I did not understand exactly why, but I had to put my / partition on UFS (ext2 for /home was ok though). I did not even get into ZFS, as it looks like it's not recommended with a simple i686 CPU. It took me a while to realize there was no way to get my NFS4 partitions mounted as usual from /etc/fstab, or even with mount; I had to add a dirty call to /sbin/mount_nfs -o nfsv4 gate:/all /path in /etc/rc.local. And when it came to Xorg, I found the mouse to be sometimes working, sometimes not, with plenty of overly complicated and confusing info on the web, to finally come up with a working /etc/X11/xorg.conf containing only a three-line ServerFlags section.
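That is:

Section "ServerFlags"
	Option "AutoAddDevices" "False"
EndSection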

These are little inconveniences that you would not expect from a recent GNU/Linux system install, and that the debian-installer does not in any way prevent you from hitting or creating. I'm not even sure that I found the best fixes for them. It feels a bit like installing RedHat 5.2 :-) which is more than what I actually expected.

So far I have not encountered any real blocker getting things to work, but suspend/sleep and general energy management look much less reliable (with xfce4). On a side note, the fact that only OSS is available with kFreeBSD pushed me to update my wakey.pl script; I expect it to run on any BSD now.


Syndicated 2014-01-10 18:54:37 from # cd /scratch

Expiring old mails on (dovecot IMAPS) server side

Years ago, I was using gnus to read my mails: among other things, I liked the fact that, as expected from a newsreader, it was by default only showing unread messages and properly expiring old messages after some time. Then, using KDE, at some point I switched to Kmail because of its nice integration within the desktop environment. Obviously I had to configure it to remove (expire) old mails in a similar fashion.

Then Kmail2 arrived. I'm not able to use this thing. It either does not start at all, or starts overly slowly and uses up 100% of CPU time for minutes, whatever computer I'm using, whether it's an old bold regular P4 or an Athlon II X4, whether I have 1GB RAM or 8. I gather it's related to akonadi/nepomuk/whatever, stuff supposed to improve your user experience with fast search and so on. Fact is, it's unusable on any of my computers. So these days I end up using the Roundcube webmail, which is not that bad but makes me wonder whether it's worth waiting for Kmail2 to be fixed and, worse, leaves me with IMAPs folders full of thousands of expired messages that should be removed.

So this led me to consider doing the expires on the server side instead of the client side, with my user crontab on the server. Logged in on the server, I just ran crontab -e and added the following:

# dovecot expires (SINCE means: received more recently than)
# not flagged and already read, 1 week old min
05 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w
# not flagged nor read, 8 weeks old min
09 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' not SEEN not FLAGGED not SINCE 8w
# read junk, 2 hours old min
15 */5 * * * 	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' SEEN not SINCE 2h
# unread junk, 2 days old min
25 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' not SEEN not SINCE 2d

(Obviously you want to replace user by your local user account and Trash/Junk by your relevant junk IMAP folder.) This setup could probably be enhanced by using flags like DRAFT and such; however, on my local server, no actual draft got properly flagged as such, so it's better to rely on the basic FLAGGED mark.
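To preview what a rule would expunge before adding it to the crontab, doveadm search takes the same query and only lists the matching messages, for instance for the first rule:

/usr/bin/doveadm search -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w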


Syndicated 2013-12-08 12:21:10 from # cd /scratch

Booting over the network to install the system (improved, with iPXE, installing Debian GNU/kFreeBSD)

I improved my improved setup (with iPXE instead of plain PXE) to boot over the network and install the system, so it now also works with Debian GNU/kFreeBSD. It simply uses the grub2pxe file provided by debian-installer. Check my PXE git directory.


Syndicated 2013-12-07 12:01:31 from # cd /scratch

Caching steam depots on your local server with nginx and dsniff

While I usually don't advertise non-libre software for obvious reasons (it's a stupid way to think about computing), I admit that the Steam platform goes toward what I have wanted to see for many years. A proprietary software platform indeed, but the business is not made out of selling overly expensive DVD-ROMs once in a while; it's made of cheap soft (downloadable) copies of games (often) maintained over years. They also seem about to base a future gaming console on some sort of GNU/Linux flavor. That's not philanthropy, that's just the only clever way to do a cool gaming-based business without becoming totally dependent on another software supplier that also brands its own gaming console. The latest South Park was about the fight between the latest Xbox and PlayStation. This issue only exists when you decide to make consoles incompatible with the usual workstation, a shortcut with so many shortcomings. Making a GNU/Linux-based console, because it is good business, is obviously going in the right direction.

So I'll allow myself a little reminder here on how not to waste your bandwidth on a local network where several computers hold copies of the same Steam game. It's merely a simplified version of the well-thought-out Caching Steam Downloads @ LAN's article. Obviously, to do this, you need to have your own home server. For instance, it should work out of the box with a setup like this one (called the setup mentioned before from now on in this article).

A) HTTP setup

We first create a directory to store Steam depots. It will be served over HTTP, so you need to create something like (working with the setup mentioned before):

mkdir /srv/www/depot
chown www-data:www-data /srv/www/depot

Next, you want to set up nginx to be able to serve as a Steam content provider. Everything is based on HTTP (no proprietary non-standard crap) so it can only go smoothly.

If you have the setup mentioned before, then /etc/nginx/sites-available/default contains a server { } statement for the general intranet. Add a new file called /etc/nginx/sites-available/steam with the following (watch out for the listen and allow statements, change them depending on your server intranet IP!):

# steam spoof/proxy
server  {
  # you want this line to be set to your server intranet IP
  listen 10.0.0.1;
  listen 127.0.0.1;
  server_name *.steampowered.com;

  access_log /var/log/nginx/steam.access.log;
  error_log /var/log/nginx/steam.error.log;

  # must match the directory created for the depots above
  root /srv/www/;
  resolver 8.8.8.8;

  # restrict to local wired network
  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;
  location /depot/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/steam.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }
}

Make it live:

cd /etc/nginx/sites-enabled && ln -s ../sites-available/steam .
invoke-rc.d nginx restart

Now nginx is able to fetch and serve steam depot files.

B) DNS setup

Now, you need your server to actually handle requests to the Steam content servers, spoofing these servers' IPs. It could be done by messing with the DNS cache server already up in the setup mentioned before, but I actually find it much more convenient to use dnsspoof from the dsniff package, with a two-line configuration, than to waste time creating unnecessarily complex bind9 db files.

So we first install dnsspoof:

apt-get install dsniff

Here comes the two-line configuration, set in /etc/dnsspoof.conf. Obviously, here too you have to set the IP to be your server's intranet one.

10.0.0.1     *.cs.steampowered.com
10.0.0.1     content*.steampowered.com

Then you want an init.d script. You can create an ugly /etc/init.d/dnsspoof with the following (obviously, you want your ethernet ethX device to be properly set!):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          dnsspoof
# Required-Start:    bind9
# Required-Stop:     
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start dnsspoof over bind9
# Description:       Start dnsspoof over bind9
### END INIT INFO

# shitty version for testing purpose
/usr/sbin/dnsspoof -i eth1 -f /etc/dnsspoof.conf 2> /dev/null > /dev/null &

# EOF

Once ready, just start the spoofer:

chmod 755 /etc/init.d/dnsspoof
invoke-rc.d dnsspoof start

Now you can restart Steam on your client computers. It should work properly. You can check whether new directories appear in /srv/www/depot and monitor the /var/log/nginx/steam* logs.
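For instance:

watch -n 5 ls -lh /srv/www/depot
tail -f /var/log/nginx/steam.remote.log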

I'll soon add a small script to get more meaningful info about the depots available on your server, so you can know which is which in a jiffy and remove the no-longer-useful ones willy-nilly.


Syndicated 2013-11-30 12:02:51 from # cd /scratch

Add alphanumeric prefixes to files inside a directory that serves as queue

In case, as in my previous article with the post-image-to-tumblr.pl example, you use a directory as a queue, you may want an easy way to rename files.

For instance, if you have files like shot001.png, shot003.png, shot012.png, whenever you want to insert a file at a specific position in the queue, you are forced to rename it to something like shotXXX.png; you may even have to rename other files.

So this qrename.pl script adds a prefix like CCC5--- in front of $file, going up to WWW, using only 7 different characters, so it's really easy to insert files anywhere. If it reaches WWW, it will then use the form WWWNNN5---, NNN being a three-digit counter. You can set how many digits you want with the option --max-queue-digits, so you can virtually manage a queue with as many files as you want (however unpractical that could actually be). It works on the current directory, only on regular files, and actually does not do anything unless you set the option --please-do, in order to avoid any accidental mess.
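Typical use, assuming the script sits somewhere in your PATH:

cd /path/to/queue
qrename.pl              # without --please-do, no file is touched
qrename.pl --please-do  # actually performs the renaming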


Syndicated 2013-09-11 15:35:35 from # cd /scratch

Managing a tumblr posts-queue locally with #Tags

A year ago, I posted a quite confused/confusing article about a script useful to post one picture per day on a Tumblr. That article went outdated fast enough, since Tumblr changed their (poorly documented) API.

David Moreno “Damog”, author of WWW::Tumblr, updated his trunk so it properly supports the new API. So here's the updated version of my post-image-to-tumblr script.

To use it, you first need to do some black magic with this init-auth script. And when that's done, it's still not ready, because so far I'm not able to use the data[0] field as described in the API doc (hints welcome). So you need to add some config snippet to be able to first upload the relevant image to a third-party server with scp, and then use the source field (URI) instead of the fakakte data field.

The configuration ~/.tumblrrc should look as:

base_url = blogname.tumblr.com

consumer_key = alongstringalongstringalongstringalongstring
consumer_secret = alongstringalongstringalongstringalongstring
token = alongstringalongstringalongstringalongstring
token_secret = alongstringalongstringalongstringalongstring

workaround_login = user@server
workaround_dir = /var/www/tempdir
workaround_url = http://server/tempdir

The script expects you to have a git repository ~/tmp/tumblr containing two subdirectories, queue and over, queue containing your image queue. You can override this path with the parameter content=

If an image has metadata with a Description field (XMP) containing a comma-separated list like #Tags, #Another Tag, it will be posted with Tumblr tags (strings not starting with # are ignored).
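One way to set such a field, assuming you have exiftool at hand:

exiftool -XMP-dc:Description='#Landscape, #Some Tag, this part is ignored' shot001.png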

In my case, as a shortcut, I added a Makefile in ~/tmp/tumblr that contains:

SHELL := /bin/bash 

all:
	git pull
	git add over queue
	-git commit -am "Updated by a bunch of anonymous unanimous hamsters"
	git push
	make count

count:
	@echo
	@in_queue=`ls -1 queue/ | wc -l` && echo "$$in_queue file(s) in the queue total,"
	@in_queue=`ls -1 queue/ | wc -l` && in_queue=`echo $$in_queue - 5 | bc` && echo "so $$in_queue file(s) that can actually be posted,"
	@in_queue=`ls -1 queue/ | wc -l` && in_queue=`echo $$in_queue - 5 | bc` && if [ $$in_queue -gt 0 ]; then echo "enough until: "`date --date "$$in_queue days"`; else echo "not enough at all."; fi
	@cd queue && limit="9M" && if ((`find  . -size +$$limit | wc -l` > 0)); then echo "" && echo `find  . -size +$$limit | wc -l`" file(s) of size > $$limit :" && find  . -size +$$limit ; fi 

gifcheck:
	@cd queue && limit="1M" && echo `find  . -size +$$limit -name "*.gif"| wc -l`" file(s) with size > $$limit :" && find  . -size +$$limit  -name "*.gif" && find  . -size +$$limit -name "*.gif" -print0 | xargs -0 -I file convert file file.miff
	@cd queue && for i in *.miff; do newsize="100000" && for run in `seq 1 25`; do if [ ! -e "$$i.gif" ] || [ `stat -c %s "$$i.gif"` -gt 1000000 ]; then echo "$$i.gif too big (`stat -c %s $$i.gif`, run $$run, newsize $$newsize)" && convert -colors 128 +dither -layers optimize  -resize $$newsize@ $$i $$i.gif && newsize=`expr $$newsize - 4500` ;  fi ; done ;  done
	@cd queue && for i in *.miff; do newsize="100000" && if [ ! -e "$$i.gif" ] || [ `stat -c %s "$$i.gif"` -le 1000000 ]; then echo "$$i.gif is fine, removing the miff" && rm -f $$i; fi; done
random:
	cd queue && for i in *; do let cut_start=$${#i}-19 cut_end=$${#i}-4 && if (($$cut_start < 1)); then cut_start=1; fi && mv $$i `mktemp --dry-run --tmpdir=. -t XXXXXXX`-`basename $$i $${i%%.*} | cut -c $$cut_start-$$cut_end`.`echo $${i##*.} | tr A-Z a-z`; done 

young:
	cd queue && count=0 && for i in *; do count=`expr $$count + 1` && case $$count in [0-5]) prefix=A;; [6-9]) prefix=C;; 1[0-5]) prefix=E;; 1[5-9]) prefix=G;; 2[0-9]) prefix=I;; 3[0-9]) prefix=K;; 4[0-9]) prefix=M;; 5[0-9]) prefix=O;; *) prefix=Q;; esac && mv $$i $$prefix`echo $$i | tr A-Z a-z`; done

log:
	git log --stat -n100 --pretty=format:"%s of %ad" > ChangeLog

For the record, last year I posted an RFP; if you are a DD, I could probably offer you a beer to work on it. :-)


Syndicated 2013-08-08 15:26:48 from # cd /scratch

Building dpkg package from a .dsc

I wanted to give a try to a recent version of rekonq (there is none in testing/unstable, the provided ones are more than one year old and buggy as hell), so I ended up on a rekonq package page on mentors.debian.net. It provides a .dsc. To build the package, I did as follows:

dget -x http://mentors.debian.net/debian/pool/main/r/rekonq/rekonq_2.3.1-1.dsc
cd rekonq-2.3.1
dpkg-checkbuilddeps
dpkg-buildpackage -rfakeroot -us -uc

(obviously, we assume that you have installed the build dependencies as printed out by dpkg-checkbuilddeps)
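If you would rather not chase the dependencies by hand, an alternative (not what I did above) is mk-build-deps from the devscripts package, run from inside the unpacked source directory:

apt-get install devscripts equivs
mk-build-deps -i -r debian/control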

Then to install it, it’s easy enough:

cd ..
su
dpkg -i rekonq_*.deb

On a related note, I have a problem with this version of rekonq: it does not save my default search engine.


Syndicated 2013-07-09 10:33:57 from # cd /scratch
