Older blog entries for yeupou (starting at number 173)

Caching debian/etc (apt) repositories on your local server with nginx and dsniff

It’s quite easy to set up a Debian mirror. But having a full mirror on a local server is rather overkill in a scenario where you simply have, say, 3 boxes running Debian testing amd64, 1 box running the same on i686 and 2 other boxes on Ubuntu. Well, it’s more caching than mirroring that you’ll want, as transparently (with no client-side setup) as possible.

And that’s really easy to do with nginx, similarly to Steam depot caching. No, really, just do the same!

So, assuming nginx and dnsspoof are already up and running (if not, really follow the link about the Steam cache), you want to:

- create the apt folders…

mkdir -p /srv/www/apt/debian /srv/www/apt/debian-security /srv/www/apt/ubuntu
chown www-data:www-data -R /srv/www/apt
cd /srv/www
ln -s /srv/www/apt/debian .
ln -s /srv/www/apt/debian-security .
ln -s /srv/www/apt/ubuntu .

- update nginx by adding a /etc/nginx/sites-available/apt (and a symlink in /etc/nginx/sites-enabled/) with:

# apt spoof/proxy
server  {
  listen 80;
  server_name ftp.fr.debian.org security.debian.org fr.archive.ubuntu.com security.ubuntu.com;

  access_log /var/log/nginx/apt.access.log;
  error_log /var/log/nginx/apt.error.log;

  root /srv/www/;
  resolver 127.0.0.1;

  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;

  location /debian/ {
    try_files $uri @mirror;
  }

  location /debian-security/ {
    try_files $uri @mirror;
  }

  location /ubuntu/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/apt.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }
}

- add the new domains to be spoofed in /etc/dnsspoof.conf:

10.0.0.1	ftp.fr.debian.org
10.0.0.1	security.debian.org
10.0.0.1	fr.archive.ubuntu.com
10.0.0.1	security.ubuntu.com

Then you have to restart both nginx and dnsspoof. Obviously, the domains have to match the sources you have configured in /etc/apt/sources.list[.d]; they should be the hosts nearest to your location.
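
For the record, with the setup described here, that boils down to something like:

cd /etc/nginx/sites-enabled && ln -s ../sites-available/apt .
invoke-rc.d nginx restart
invoke-rc.d dnsspoof restart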

And since you do not want to keep a complete archive, you need to add a cronjob to remove outdated files, like this /etc/cron.weekly/apt-cache:

#!/bin/sh
# cleanup apt mirrors:

# remove any file that has not been accessed in the last 30 days
# (relies on atime updates, so a noatime mount would defeat this)
find /srv/www/apt -type f -atime +30 -print0 | xargs -0 --no-run-if-empty rm

# remove any empty dir (except the main ones)
find /srv/www/apt -mindepth 2 -type d -empty -print0 | xargs -0  --no-run-if-empty rm -r

Done.


Syndicated 2014-01-28 16:08:06 from # cd /scratch

Running Debian GNU with kFreeBSD

As you could have guessed from my latest update to my iPXE setup, I’m currently giving a try to Debian GNU along with the FreeBSD kernel – Debian GNU/kFreeBSD.

The hardware I’m giving this try with is neither simple nor complicated: it’s old, but it’s also a laptop; a Dell Latitude C640 with a P4 mobile CPU and 1GB RAM.

The install was made over the network. There’s nothing overly complicated, but to avoid wasting time, it’s always good to properly RTFM. For instance, I learned too late that kFreeBSD does not handle a / partition set on a logical one. I did not understand exactly why, but I also had to put my / partition on UFS (ext2 for /home was ok, though). I did not even get into ZFS, as it looks like it’s not recommended with a simple i686 CPU. It took me a while to conclude that there is no way to get my NFS4 partitions mounted as usual from /etc/fstab, or even with mount; I had to add a dirty call to /sbin/mount_nfs -o nfsv4 gate:/all /path in /etc/rc.local. And when it came to Xorg, I found the mouse to be sometimes working, sometimes not, with plenty of overly complicated and confusing info on the web, until I finally came up with a working /etc/X11/xorg.conf containing only:

Section "ServerFlags"
  Option "AutoAddDevices" "False"
EndSection

These are little inconveniences that you would not expect with a recent GNU/Linux system install, and that the debian-installer does not prevent you in any way from hitting/creating. I’m not even sure that I found the best fixes for them. It feels a bit like installing RedHat 5.2 :-) which is more than what I actually expected.

So far I have not encountered any issue getting things to work, except that suspend/sleep and general energy management look much less reliable (with xfce4). On a side note, the fact that only OSS is available with kFreeBSD pushed me to update my wakey.pl script; I expect it to run on any BSD now.


Syndicated 2014-01-10 18:54:37 from # cd /scratch

Expiring old mails on (dovecot IMAPS) server side

Years ago, I was using gnus to read my mail: among other things, I liked the fact that, by default, as expected from a newsreader, it only showed unread messages and properly expired old messages after some time. Then, using KDE, at some point I switched to Kmail because of its nice integration within the desktop environment. Obviously I had to configure it to remove (expire) old mails in a similar fashion.

Then Kmail2 arrived. I’m not able to use this thing. It either does not start at all or starts overly slowly and uses up 100% of CPU time for minutes, whatever computer I’m using, whether it’s an old regular P4 or an Athlon II X4, whether I have 1GB RAM or 8. I gather it’s related to akonadi/nepomuk/whatever, stuff supposed to improve the user experience with fast search and so on. Fact is, it’s unusable on any of my computers. So these days I end up using the Roundcube webmail, which is not that bad, but it makes me wonder whether it’s worth waiting for Kmail2 to be fixed and, worse, it leaves me with IMAPS folders full of thousands of expired messages that should be removed.

So this led me to consider doing the expiry on the server side instead of the client side, with my user crontab on the server. Logged in on the server, I just ran crontab -e and added the following:

# dovecot expires (SINCE means: received more recently than)
# not flagged and already read, 1 week old min
05 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w
# not flagged nor read, 8 weeks old min
09 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox '*' not SEEN not FLAGGED not SINCE 8w
# read junk, 2 hours old min
15 */5 * * * 	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' SEEN not SINCE 2h
# unread junk, 2 days old min
25 */5 * * *	/usr/bin/doveadm expunge -u 'user' mailbox 'Trash/Junk' not SEEN not SINCE 2d

(Obviously you want to replace user with your local user account and Trash/Junk with your relevant junk IMAP folder.) This setup could probably be enhanced by using flags like DRAFT and such; however, on my local server, no actual draft got properly flagged as such, so it’s better to rely on the basic FLAGGED mark.
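
Before trusting the cron jobs, you can preview what a given query would remove by swapping expunge for doveadm search, which takes the same query syntax and merely lists the matching messages:

/usr/bin/doveadm search -u 'user' mailbox '*' SEEN not FLAGGED not SINCE 1w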


Syndicated 2013-12-08 12:21:10 from # cd /scratch

Booting over the network to install the system (improved, with iPXE, installing Debian GNU/kFreeBSD)

I improved my already improved (with iPXE instead of PXE) setup to boot over the network to install the system, so it now also works with Debian GNU/kFreeBSD. It simply uses the grub2pxe file provided by debian-installer. Check my PXE git directory.


Syndicated 2013-12-07 12:01:31 from # cd /scratch

Caching steam depots on your local server with nginx and dsniff

While I usually don’t advertise non-libre software for obvious reasons (it’s a stupid way to think about computing), I admit, though, that the Steam platform goes toward what I have wanted to see for many years. A proprietary software platform indeed – but the business is not made out of selling overly expensive DVD-ROMs once in a while; it sells cheap soft (downloadable) copies of games (often) maintained over years. They also seem about to base a future gaming console on some sort of GNU/Linux flavor. That’s not philanthropy, that’s just the only clever way to do a cool gaming-based business without getting totally dependent on another software supplier that also brands its own gaming console. The latest South Park was about the fight between the latest Xbox and Playstation. This issue only exists when you decide to make consoles incompatible with usual workstations, a shortcut with so many shortcomings. Making a GNU/Linux-based console, because it is good business, is obviously going in the right direction.

So I’ll allow myself a little reminder here on how not to waste your bandwidth on a local network where several computers have copies of the same Steam game. It’s merely a simplified version of the well-thought-out Caching Steam Downloads @ LAN’s article. Obviously, to do this, you need to have your own home server, for instance a setup like this one (referred to as “the setup mentioned before” from now on in this article).

A) HTTP setup

We first create a directory to store Steam depots. It will be served over HTTP, so you need to create something like (working with the setup mentioned before):

mkdir /srv/www/depot
chown www-data:www-data /srv/www/depot

Next, you want to set up nginx to be able to serve as a Steam content provider. Everything is based on HTTP (no proprietary non-standard crap) so it can only go smoothly.

If you have the setup mentioned before, then /etc/nginx/sites-available/default contains a server { } statement for the general intranet. Add a new file called /etc/nginx/sites-available/steam with the following (watch out for the listen and allow statements, change them depending on your server intranet IP!):

# steam spoof/proxy
server  {
  # you want this line to be set to your server intranet IP
  listen 10.0.0.1;
  listen 127.0.0.1;
  server_name *.steampowered.com;

  access_log /var/log/nginx/steam.access.log;
  error_log /var/log/nginx/steam.error.log;

  root /srv/www/;
  resolver 8.8.8.8;

  # restrict to local wired network
  allow 10.0.0.0/24;
  allow 127.0.0.1;
  deny all;
  location /depot/ {
    try_files $uri @mirror;
  }

  location / {
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }

  location @mirror {
    access_log /var/log/nginx/steam.remote.log;
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_next_upstream error timeout http_404;
    proxy_pass http://$host$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Mirror-Upstream-Status $upstream_status;
    add_header X-Mirror-Upstream-Response-Time $upstream_response_time;
    add_header X-Mirror-Status $upstream_cache_status;
  }
}

Make it live:

cd /etc/nginx/sites-enabled && ln -s ../sites-available/steam .
invoke-rc.d nginx restart

Now nginx is able to fetch and serve steam depot files.

B) DNS setup

Now, you need your server to actually handle requests to the Steam content servers, spoofing these servers’ IPs. It could be done by messing with the DNS cache server already up in the setup mentioned before, but I actually find it much more convenient to use dnsspoof, from the dsniff package, with a two-line configuration than to waste time creating, say, unnecessarily complex bind9 db files.

So we first install dnsspoof:

apt-get install dsniff

Here comes the two-line configuration, set in /etc/dnsspoof.conf. Obviously, here too, you have to set the IP to be your server’s intranet one.

10.0.0.1     *.cs.steampowered.com
10.0.0.1     content*.steampowered.com

Then you want an init.d script. You can create an ugly /etc/init.d/dnsspoof with the following (obviously, you want your ethernet ethX device to be properly set!):

#! /bin/sh
### BEGIN INIT INFO
# Provides:          dnsspoof
# Required-Start:    bind9
# Required-Stop:     
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start dnsspoof over bind9
# Description:       Start dnsspoof over bind9
### END INIT INFO

# shitty version for testing purpose
/usr/sbin/dnsspoof -i eth1 -f /etc/dnsspoof.conf 2> /dev/null > /dev/null &

# EOF

Once ready, just start the spoofer:

chmod 755 /etc/init.d/dnsspoof
invoke-rc.d dnsspoof start

Now you can restart Steam on your client computers. It should work properly. You can check whether new directories appear in /srv/www/depot and monitor the /var/log/nginx/steam* logs.
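
For a quick look at what is being cached and how much space it takes, something like this does the trick:

tail -f /var/log/nginx/steam.remote.log
du -sh /srv/www/depot/*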

I’ll soon add a small script to get more meaningful info about the depots available on your server, so you can know which is what in a jiffy and remove the no-longer-useful ones willy-nilly.


Syndicated 2013-11-30 12:02:51 from # cd /scratch

Add alphanumeric prefixes to files inside a directory that serves as queue

In case you use a directory as a queue, as in my previous article about the post-image-to-tumblr.pl example, you may want an easy way to rename files.

For instance, if you have files like shot001.png, shot003.png, shot012.png, whenever you want to insert a file at a specific position in the queue, you are forced to rename it to something like shotXXX.png; you may even have to rename other files.

So this qrename.pl script adds a prefix like CCC5---$file in front, until it reaches WWW, using only 7 different characters, so it’s really easy to insert files anywhere. If it reaches WWW, then it’ll use the form WWWNNN5---, NNN being a three-digit counter. You can set how many digits you want with the option --max-queue-digits, so you can virtually manage a queue with as many files as you want (however unpractical that could actually be). It works on the current directory, only on regular files, and actually does not do anything unless you set the option --please-do, in order to avoid any accidental mess.


Syndicated 2013-09-11 15:35:35 from # cd /scratch

Managing a tumblr posts-queue locally with #Tags

A year ago, I posted a quite confused/confusing article regarding a script useful to post a picture per day on a Tumblr. This article went outdated fast enough, since Tumblr changed their (poorly documented) API.

David Moreno “Damog”, author of WWW::Tumblr, updated his trunk so it properly supports the new API. So here’s my updated version of my post-image-to-tumblr script.

To use it, you first need to do some black magic with this init-auth script. And when it’s done, it’s still not ready, because so far I’m not able to use the data[0] field as described in the API doc (hints welcome). So there is a need for some config snippet to first upload the relevant image to a third-party server with scp and then use the source field (URI) instead of the fakakte data field.

The configuration ~/.tumblrrc should look like:

base_url = blogname.tumblr.com

consumer_key = alongstringalongstringalongstringalongstring
consumer_secret = alongstringalongstringalongstringalongstring
token = alongstringalongstringalongstringalongstring
token_secret = alongstringalongstringalongstringalongstring

workaround_login = user@server
workaround_dir = /var/www/tempdir
workaround_url = http://server/tempdir

The script expects you to have a git repository ~/tmp/tumblr containing two subdirectories, queue and over, queue containing your image queue. You can override this path with the parameter content=. A sketch of that layout follows.
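
A minimal way to create that layout, assuming you start the repository from scratch rather than cloning an existing one:

mkdir -p ~/tmp/tumblr/queue ~/tmp/tumblr/over
cd ~/tmp/tumblr && git init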

If an image has metadata with a Description field (XMP) containing a comma-separated list like #Tags, #Another Tag, it will be posted with Tumblr tags (ignoring strings not starting with #).
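
For instance, assuming you use exiftool to write the XMP Description (the script does not require exiftool, it’s just one way to set the field), tagging an image could look like:

exiftool -XMP:Description="#Cats, #Black and White" queue/shot001.png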

In my case, as a shortcut, I added a Makefile in ~/tmp/tumblr that contains:

SHELL := /bin/bash 

all:
	git pull
	git add over queue
	-git commit -am "Updated by a bunch of anonymous unanimous hamsters"
	git push
	make count

count:
	@echo
	@in_queue=`ls -1 queue/ | wc -l` && echo "$$in_queue file(s) in the queue total,"
	@in_queue=`ls -1 queue/ | wc -l` && in_queue=`echo $$in_queue - 5 | bc` && echo "so $$in_queue file(s) that can actually be posted,"
	@in_queue=`ls -1 queue/ | wc -l` && in_queue=`echo $$in_queue - 5 | bc` && if [ $$in_queue -gt 0 ]; then echo "enough until: "`date --date "$$in_queue days"`; else echo "not enough at all."; fi
	@cd queue && limit="9M" && if ((`find  . -size +$$limit | wc -l` > 0)); then echo "" && echo `find  . -size +$$limit | wc -l`" file(s) of size > $$limit :" && find  . -size +$$limit ; fi 

gifcheck:
	@cd queue && limit="1M" && echo `find  . -size +$$limit -name "*.gif"| wc -l`" file(s) with size > $$limit :" && find  . -size +$$limit  -name "*.gif" && find  . -size +$$limit -name "*.gif" -print0 | xargs -0 -I file convert file file.miff
	@cd queue && for i in *.miff; do newsize="100000" && for run in `seq 1 25`; do if [ ! -e "$$i.gif" ] || [ `stat -c %s "$$i.gif"` -gt 1000000 ]; then echo "$$i.gif too big (`stat -c %s $$i.gif`, run $$run, newsize $$newsize)" && convert -colors 128 +dither -layers optimize  -resize $$newsize@ $$i $$i.gif && newsize=`expr $$newsize - 4500` ;  fi ; done ;  done
	@cd queue && for i in *.miff; do newsize="100000" && if [ ! -e "$$i.gif" ] || [ `stat -c %s "$$i.gif"` -le 1000000 ]; then echo "$$i.gif is fine, removing the miff" && rm -f $$i; fi; done
random:
	cd queue && for i in *; do let cut_start=$${#i}-19 cut_end=$${#i}-4 && if (($$cut_start < 1)); then cut_start=1; fi && mv $$i `mktemp --dry-run --tmpdir=. -t XXXXXXX`-`basename $$i $${i%%.*} | cut -c $$cut_start-$$cut_end`.`echo $${i##*.} | tr A-Z a-z`; done 

young:
	cd queue && count=0 && for i in *; do count=`expr $$count + 1` && case $$count in [0-5]) prefix=A;; [6-9]) prefix=C;; 1[0-5]) prefix=E;; 1[5-9]) prefix=G;; 2[0-9]) prefix=I;; 3[0-9]) prefix=K;; 4[0-9]) prefix=M;; 5[0-9]) prefix=O;; *) prefix=Q;; esac && mv $$i $$prefix`echo $$i | tr A-Z a-z`; done

log:
	git log --stat -n100 --pretty=format:"%s of %ad" > ChangeLog

For the record, last year I posted an RFP; if you are a DD, I could probably offer you a beer to work on it. :-)


Syndicated 2013-08-08 15:26:48 from # cd /scratch

Building dpkg package from a .dsc

I wanted to give a try to a recent version of rekonq (none in testing/unstable, the provided ones are more than one year old and buggy as hell), so I ended up on a rekonq package page on mentors.debian.net. It provides a .dsc. To build the package, I did as follows:

dget -x http://mentors.debian.net/debian/pool/main/r/rekonq/rekonq_2.3.1-1.dsc
cd rekonq-2.3.1
dpkg-checkbuilddeps
dpkg-buildpackage -rfakeroot -us -uc

(obviously we assume that you have installed the build dependencies as printed out by dpkg-checkbuilddeps)
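
dget itself ships with devscripts; starting from a bare system, something like this should cover the tooling (the actual build dependencies remain package-specific):

apt-get install devscripts fakeroot build-essential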

Then to install it, it’s easy enough:

cd ..
su
dpkg -i rekonq_*.deb

On a related note, I have a problem with this version of rekonq: it does not save my default search engine.


Syndicated 2013-07-09 10:33:57 from # cd /scratch

Getting correct delays for subtitles

In 2010, I posted a couple of articles regarding videos and subtitles. Today I submitted a feature request to smplayer, asking them to include in the player the ability to save, in the subtitle file, the custom delay we may have selected.

Basically, it would imply that smplayer simply does something like the following with libsubtitles-perl’s subs:

subs -i -b 00:00:16.5 Boardwalk.Empire.S01E10.480p.HDTV_en.srt

I sure hope they’ll implement that soon. It should not be too complicated, while at the same time very convenient as, most of the time, issues with downloaded subtitles are delay-related.


Syndicated 2013-02-08 18:56:12 from # cd /scratch

Setting up a silent/low energy consumption home server (DHCP/DNS/SMB/UPnP)

Most users are probably fine with their ISP modem/box, which may even provide a hard disk. But having one’s own home server gives full control over the process, and it’s not something utterly frivolous: no real storage space limit (except budget), a finely tuned firewall, etc. In the past, it came at the expense of silence, energy consumption and space, but no longer, as described here.

Hardware setup:

The hardware is the following:
- board (APU) Intel DN2800MT
- RAM: 2 x 2 GB PC8500 DDR3 SODIMM
- Hard drive: Western Digital WD Green 3.5″, SATA III 6 Gb/s, 2 TB (Caviar)
- Secondary ethernet: StarTech.com ST1000SMPEX (Mini PCI-E)
- Wifi: TP-Link TL-WDN4800 (PCI-E)
+ a laptop adapter (16V, 4A)
+ a small case

The APU itself has a thermal design power (TDP) below 10W. The hard drive is of the “Green” type (RPM lower than usual, etc). It’s important to note that the RAM is of the SO-DIMM type (usually for laptops), PC8500 (the max frequency supported by this board/CPU), and that a laptop power charger/adapter is necessary instead of a regular power supply unit. Any case designed for the mini-ITX form factor will do. Low energy consumption, silent and small.

I was actually looking toward Sapphire Mini xxxx hardware at first, but it’s quite a pain to get it shipped. So I went instead for this Intel board, despite its obvious drawbacks: SATA II support instead of III, the 4 GB SO-DIMM RAM max, and being known to be poorly supported by the target system, which is Debian GNU/Linux. I actually don’t care much about GPU support, 4 GB is more than enough for a home server, and SATA II is acceptable enough, so it should be fine anyway.

(Obviously, you should plug a hub into the secondary ethernet, otherwise you’ll only be able to connect one box over ethernet.)

Software setup:

Picking softwares:

Most obvious: we’ll run Debian stable on it, so to say Wheezy, the currently-frozen and about-to-be-released one. The stable model in itself makes this distro the best choice for a server: it is stable and kept secure over time.

It’s supposed to work with a heterogeneous network: GNU/Linux, MS Windows, over ethernet or wireless. So we’ll want:
- OpenSSH as secure shell, for the administrator
- any dhcpd server to provide IPs on the fly
- Samba for networked filesystems (and only that, as we want each box to keep its original setup and not get anything specific)
- Bind to act as DNS cache and manage the domain
- Nginx as http server to provide basic sysinfo (phpsysinfo) and basic sysadmin (mostly: reset Samba passwords and connected wireless devices surveillance)
- transmission-daemon plus my torrent-watch.pl script to provide a networked BitTorrent client
- minidlna to make files available to non-computer networked devices

Start with Debian netinst base install:

Obviously we’ll want some swap space; 2 GB should be more than enough. Then we’ll want three ext4 filesystems: one for user data, one for the system, one for a system copy, as fallback. If we had two different disks, the system copy would obviously go on the second one.

We’ll start the basic Debian installation with that in mind: we’ll anyway just install the Debian base stuff plus OpenSSH.

/etc/default/rcS:

FSCKFIX=yes

/etc/default/grub:

GRUB_FALLBACK=2

Setting up basic functionalities/networking after reboot:

First, we’ll install some useful utilities:

apt-get install lm-sensors hddtemp cpufrequtils debfoster etckeeper localepurge ethtool emacs23-nox ntp wget

Regarding sensors, you should configure hddtemp to run as a daemon listening on 127.0.0.1, and run:

sensors-detect

At this point, network devices should be known to the system. We have quite usual hardware, so the correct modules should already be loaded. lspci should return:

01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
02:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

Edit the NAME strings in /etc/udev/rules.d/70-persistent-net.rules so that eth0 is the internet device and eth1 and wlan1 the intranet ones, for clarity’s sake. You may unload and reload the modules of these devices in order for them to get their definitive names.
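
For reference, a line in that file looks like the following; only the NAME= part needs editing (the MAC address here is a placeholder):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"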

We’ll use hostapd to provide Wifi access.

apt-get install hostapd

/etc/default/hostapd:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

/etc/hostapd/hostapd.conf:

## base
interface=wlan1
ssid=whatever
channel=3

## wifi mode
hw_mode=g
ieee80211n=1

## access with WPA PSK
wpa=2
wpa_passphrase=WHATEVERYOUWANTSOFAR
wpa_key_mgmt=WPA-PSK
#wpa_pairwise=TKIP
rsn_pairwise=CCMP
auth_algs=1

# hw address filter (relaxed, as it is not real security)
macaddr_acl=0
deny_mac_file=/etc/hostapd/hostapd.deny

# EOF

Create the (for now empty) deny file:

touch /etc/hostapd/hostapd.deny

(this enables WPA2 access; if you also want WPA1, you must set wpa=3 and uncomment wpa_pairwise)

Then we’ll configure the network, defining a different subnet for wired and wireless connectivity. Some tutorials on the web propose to bridge the wireless to the wired. We won’t do that: we actually want to be able to easily distinguish the source of any request. Regarding security, the safe bet is to assume that wireless is always on the verge of getting cracked, so it must be kept confined.

Edit /etc/network/interfaces:

# internet
allow-hotplug eth0
iface eth0 inet dhcp

# intranet (wired)
auto eth1
iface eth1 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    broadcast 10.0.0.255
    network 10.0.0.0

# intranet (wireless)
#iface eth2 inet manual
auto wlan1
iface wlan1 inet static
    address 10.0.1.1
    netmask 255.255.255.0
    broadcast 10.0.1.255
    network 10.0.1.0

# EOF

We need a working dhcp daemon, able to dynamically register new boxes:

apt-get install isc-dhcp-server

In /etc/default/isc-dhcp-server:

INTERFACES="eth1 wlan1"

In /etc/dhcp/dhcpd.conf:

option domain-name "mynetworkname.ici";
option domain-name-servers 10.0.0.1;
option routers 10.0.0.1;

log-facility local7;
authoritative;

# wired
subnet 10.0.0.0 netmask 255.255.255.0 {
range 10.0.0.25 10.0.0.125;
}

# wireless
subnet 10.0.1.0 netmask 255.255.255.0 {
range 10.0.1.125 10.0.1.225;
option routers 10.0.1.1;
}

(it’s best to also add, as fallback, the default DNS servers provided by your ISP to the domain-name-servers option, as shown in /etc/resolv.conf)

The dhcp client must be tuned a bit, /etc/dhcp/dhclient.conf:

prepend domain-name-servers 10.0.0.1;
supersede domain-name "mynetworkname.ici";

We obviously need IP forwarding; edit /etc/sysctl.conf:

net.ipv4.ip_forward=1

and also immediately do a:

echo 1 > /proc/sys/net/ipv4/ip_forward

We also need iptables:

apt-get install iptables-persistent
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/etc/init.d/iptables-persistent save

(I actually reused a perl script that also does some nice firewalling instead of simply doing this)

ifup eth1
ifup wlan1
invoke-rc.d hostapd restart
invoke-rc.d isc-dhcp-server restart

At this point, you should be able to log in via SSH from a distant box.

Provide DNS cache:

apt-get install bind9

Set up forwarders with your ISP’s DNS servers (as in /etc/resolv.conf) in /etc/bind/named.conf.options:

forwarders {
ISP_DNS_IP1;
ISP_DNS_IP2;
};

You need to create the zones as you wish in /etc/bind/named.conf.local:

zone "mynetworkname.ici" {
type master;
notify no;
file "/etc/bind/db.mynetworkname.ici";
allow-update { key dhcpupdate; };
};

zone "0.10.in-addr.arpa" {
type master;
notify no;
file "/etc/bind/db.10.0.0";
allow-update { key dhcpupdate; };
};

Create the forward zone file from its template:

cd /etc/bind && cp db.local db.mynetworkname.ici

db.mynetworkname.ici:

$TTL    64800
@       IN      SOA     nano.mynetworkname.ici. root.mynetworkname.ici. (
                        2               ; Serial
                        604800          ; Refresh
                        86400           ; Retry
                        2419200         ; Expire
                        604800 )        ; Negative Cache TTL

        IN      NS      nano.mynetworkname.ici.
mynetworkname.ici.      IN      A       10.0.0.1
mynetworkname.ici.      IN      MX      10      10.0.0.1
nano    IN      A       10.0.0.1
gate    IN      CNAME   nano

Likewise, create the reverse zone file from its template:

cp db.255 db.10.0

db.10.0:

;
; BIND reverse data file
;
@       IN      SOA     nano.mynetworkname.ici. root.mynetworkname.ici. (
                        1               ; Serial
                        604800          ; Refresh
                        8600            ; Retry
                        2419200         ; Expire
                        604800 )        ; Negative Cache TTL

0.10.in-addr.arpa.      NS      nano.mynetworkname.ici.
1.0                     PTR     nano.mynetworkname.ici.

Now we add support for dynamic updates:

cd /etc/dhcp
dnssec-keygen -a hmac-md5 -b 256 -n USER dhcpupdate

/etc/bind/named.conf.local:

key dhcpupdate {
algorithm hmac-md5;
secret "YOURKEYGOESHERE";
};

(the secret being the last string of the .key file we’ve just generated)
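
To pull that string out directly (the key material is the last field of the .key file):

awk '{print $NF}' /etc/dhcp/Kdhcpupdate.*.key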

/etc/dhcp/dhcpd.conf:

ddns-domainname "mynetworkname.ici";
ddns-rev-domainname "in-addr.arpa.";
ddns-update-style interim;
ignore client-updates;
update-static-leases on;

key dhcpupdate {
algorithm hmac-md5;
secret "YOURKEYGOESHERE";
}
zone mynetworkname.ici. {
primary 127.0.0.1;
key dhcpupdate;
}
zone 0.10.in-addr.arpa. {
primary 127.0.0.1;
key dhcpupdate;
}

Restrict read access to the files containing the secret key, remove the generated key files, and restart everything:

chmod o-r /etc/bind/named.conf.local
chmod o-r /etc/dhcp/dhcpd.conf
rm /etc/dhcp/Kdhcpupdate.*.key /etc/dhcp/Kdhcpupdate.*.private

invoke-rc.d isc-dhcp-server restart
invoke-rc.d bind9 restart

Put user data in place:

User data will go in /srv. So we’ll add a few symlinks, after mounting the partition.

mkdir /srv/home /srv/common
rm -r /home && ln -s /srv/home /home

We then add default dirs:

mkdir /srv/common/torrents /srv/common/download /srv/common/musique /srv/common/films /srv/common/temp
cd /srv/common && chmod a+w * -R

We’ll also make sure any new user gets a ~/samba directory.

mkdir /etc/skel/samba

Make it accessible over Samba:

Users will access files with Samba: anonymous read+write in common, per-user access only in their ~/samba (we don’t allow direct access to ~/ so as to block any tampering with directories like ~/.ssh).

apt-get install samba libpam-smbpass

/etc/samba/smb.conf:

workgroup = MYNETWORKNAME.ICI
interfaces = eth1 wlan1
bind interfaces only = yes
security = user
invalid users = root
unix password sync = yes
pam password change = yes
map to guest = bad user

[homes]
comment = Protected data
path = /srv/home/%S/samba
writable = yes

[commun]
comment = Common
path = /srv/common
browseable = yes
public = yes
force group = users
force user = nobody
guest ok = yes
writable = yes

[media]
comment = USB keys, etc.
path = /media
browseable = yes
public = yes
force group = users
force user = nobody
guest ok = yes
writable = yes

We also want to use Unix passwords for Samba instead of having two password databases.

/etc/pam.d/samba:

@include common-password

Make it accessible with UPnP-AV/DLNA:

apt-get install minidlna

/etc/minidlna.conf:

media_dir=/srv/common
# serve on the intranet interface
network_interface=eth1
friendly_name=nano
inotify=yes

Then reset its cache and restart it:

rm -f /var/lib/minidlna/files.db
invoke-rc.d minidlna restart

Provide torrent client:

apt-get install transmission-daemon libtimedate-perl
invoke-rc.d transmission-daemon stop

mkdir /home/torrent
ln -s /srv/common/torrents /home/torrent/watch
usermod -d /home/torrent debian-transmission

cd /usr/local/bin && wget https://github.com/yeupou/stalag13/raw/master/usr/local/bin/torrent-watch.pl && chmod +x torrent-watch.pl
cd /etc/cron.d && wget https://github.com/yeupou/stalag13/raw/master/etc/cron.d/torrent
cd /etc/cron.weekly && wget https://github.com/yeupou/stalag13/raw/master/etc/cron.weekly/torrent

Edit /etc/transmission-daemon/settings.json:

"alt-speed-down": 120,
"alt-speed-enabled": false,
"alt-speed-up": 1,
"blocklist-enabled": true,
"download-dir": "/srv/common/download",
"message-level": 0,
"peer-port-random-on-start": true,
"port-forwarding-enabled": true,
"rpc-authentication-required": false,

Then start it again:

invoke-rc.d transmission-daemon start
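
Since rpc-authentication-required is off, transmission-remote (from the transmission-cli package, if you install it) can check that the daemon answers by listing the active torrents:

transmission-remote -l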

And log rotation /etc/logrotate.d/torrent:

/srv/common/torrents/log {
weekly
missingok
rotate 2
su debian-transmission users
nocompress
notifempty
}

Provide basic info and management:

The following will provide reminders of upgrades to be performed.

apt-get install libapt-pkg-perl
cd /etc/cron.daily && wget https://github.com/yeupou/stalag13/raw/master/etc/cron.daily/apt-warn && chmod +x apt-warn

phpsysinfo : basic system infos

We’ll use phpsysinfo to provide an overview of the system and a homemade script to allow distant administration.

apt-get install nginx phpsysinfo php5-cgi spawn-fcgi libfcgi-perl mysql-server libemail-sender-perl
cd /etc/init.d && wget https://github.com/yeupou/stalag13/raw/master/etc/init.d/php-fcgi && chmod +x php-fcgi && update-rc.d php-fcgi defaults
wget http://nginxlibrary.com/downloads/perl-fcgi/fastcgi-wrapper -O /usr/bin/fastcgi-wrapper.pl && wget http://nginxlibrary.com/downloads/perl-fcgi/perl-fcgi -O /etc/init.d/perl-fcgi && chmod +x /usr/bin/fastcgi-wrapper.pl /etc/init.d/perl-fcgi && update-rc.d perl-fcgi defaults

mkdir /srv/www
ln -s /usr/share/phpsysinfo/ /srv/www/sysinfo

/etc/nginx/sites-available/default:

listen 10.0.0.1;
listen 127.0.0.1;

root /srv/www;
index index.html index.htm index.php index.pl;
autoindex on;
server_name localhost nano nano.mynetworkname.ici;

# restrict to local wired network
allow 10.0.0.0/24;
allow 127.0.0.1;
deny all;

# pass the  scripts to FastCGI server listening on 127.0.0.1
location ~ ^/sysinfo/(.*)\.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#       # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
}
location /sysadmin/index.pl {
fastcgi_pass  127.0.0.1:8999;
fastcgi_index index.pl;
fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
include fastcgi_params;
}

/etc/php5/cgi/php.ini:

cgi.fix_pathinfo = 0;

/etc/phpsysinfo/config.php:

define('PSI_ADD_PATHS', '/bin,/usr/bin,/sbin,/usr/sbin');
define('PSI_BYTE_FORMAT', 'auto_binary');
define('PSI_SENSOR_PROGRAM', 'LMSensors');
define('PSI_HDD_TEMP', 'tcp');
define('PSI_SHOW_MOUNT_OPTION', false);
define('PSI_HIDE_FS_TYPES', 'tmpfs,usbfs,devtmpfs');
define('PSI_HIDE_DISKS', '/dev/disk/by-uuid/8f7f616e-9140-4876-890a-cd6abfde837f');
define('PSI_HIDE_NETWORK_INTERFACE', 'lo,mon.wlan0');
define('PSI_SHOW_NETWORK_INFOS', true);

sysadmin : admin unix/samba passwords and watch wifi connections

Here follows the specific sysadmin web interface:

apt-get install passwdqc liburi-encode-perl libdata-password-perl libdbd-mysql-perl libemail-send-perl
cd /srv/www
mkdir sysadmin

cd /srv/www/sysadmin && wget https://raw.github.com/yeupou/calaboose.sysadmin/master/index.pl
cd /usr/local/bin && wget https://raw.github.com/yeupou/calaboose.sysadmin/master/sysadmin-update.pl
chgrp www-data /srv/www/sysadmin/index.pl
chmod +x /srv/www/sysadmin/index.pl /usr/local/bin/sysadmin-update.pl
chmod o-rwx /srv/www/sysadmin/index.pl /usr/local/bin/sysadmin-update.pl
mysql -e "CREATE DATABASE sysadmin"
mysql -e "CREATE TABLE sambaclients (ip_address varchar(32) NOT NULL default '0', user_name text NOT NULL, PRIMARY KEY (ip_address))" sysadmin
mysql -e "CREATE TABLE wificlients (hw_address varchar(32) NOT NULL default '0', status varchar(32) NOT NULL default 'S', PRIMARY KEY (hw_address), ip_address varchar(32), hostname varchar(128))" sysadmin
mysql -e "CREATE USER 'www-data'@'localhost'"
mysql -e "SET PASSWORD FOR 'www-data'@'localhost' = PASSWORD('kdkadkda')"
mysql -e "GRANT ALL ON sysadmin.* TO 'www-data'@'localhost'"

/srv/www/sysadmin/index.pl:

my $db_password = "kdkadkda";

/usr/local/bin/sysadmin-update.pl:

my $db_password = "kdkadkda";

It requires a cronjob to be set up in /etc/cron.d/sysadmin:

* * * * * root /usr/local/bin/sysadmin-update.pl

Finally, restart all the web-related services:

invoke-rc.d nginx restart
invoke-rc.d php-fcgi restart
invoke-rc.d perl-fcgi restart

Both http://nano/sysinfo and http://nano/sysadmin should work. The sysadmin script allows changing UNIX passwords on the fly. It means that anyone within the local wired network can do so.

(note: the sysadmin interface is in French, but the strings can easily be translated into English. Adding gettext support would have been overkill here)

Create backup system:

With only one disk, having a redundant system is not optimal. But it’s still an okay failsafe.

The following assumes you gave a label to your root partition, something like wd2Tdebian64 here. Create a filesystem on the backup partition:

mkfs.ext4 -L wd2Tdebian64backup /dev/sda7
mkdir /mnt/sysclone

Add /etc/cron.weekly/backup-system (based on https://github.com/yeupou/stalag13/blob/master/etc/cron.weekly/stalag13-backups):

#!/bin/sh
if [ `hostname` != "nano" ]; then exit; fi

## system cloning
sys=wd2Tdebian64
bak=wd2Tdebian64backup
mount=/mnt/sysclone
ignore="dev lost+found media proc run sys tmp"

# determine which partition is currently / by reading /etc/fstab
orig=`cat /etc/fstab | grep $sys | cut -f 1 | cut -f 2 -d = | sed 's/ //g'`
case $orig in
    $sys)
        dest=$bak
        ;;
    $bak)
        dest=$sys
        ;;
    *)
        echo "Unable to determine whether we are currently using $sys or $bak, we found $orig. Exiting!"
        exit
        ;;
esac

# then proceed

# easy reminder of the last cloning run
date > /etc/.lastclone
echo "$orig > $dest" >> /etc/.lastclone
etckeeper commit "cloning system from $orig to $dest" >/dev/null 2>/dev/null

# mount the clone system
if [ ! -d $mount ]; then exit; fi
mount -L $dest $mount

# set up the ignore list
for dir in $ignore; do
    touch /$dir.ignore
done

# do the copy
for dir in /*; do
    if [ -d $dir ]; then
        if [ ! -e $dir.ignore ]; then
            # update if not set to be ignored
            /usr/bin/rsync --archive --one-file-system --delete $dir $mount/
        else
            # otherwise just make sure the directory actually exists
            if [ ! -e $mount/$dir ]; then mkdir $mount/$dir; fi
            rm $dir.ignore
        fi
    fi
done

# update filesystem data
sed -i s/^LABEL\=$orig/LABEL\=$dest/g $mount/etc/fstab

# make the clone bootable (use --force: gpt partition table)
/usr/sbin/grub-mkdevicemap 2>/dev/null
/usr/sbin/update-grub 2>/dev/null
/usr/sbin/grub-install --force `blkid -L $orig | tr -d [:digit:]` >/dev/null 2>/dev/null

# (sleep to avoid a weird timeout after rsync)
sleep 10s

# then cleanup
umount $mount
fsck -a LABEL=$dest > /dev/null

## EOF
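
You can trigger a first cloning run by hand and watch what it does:

chmod +x /etc/cron.weekly/backup-system
sh -x /etc/cron.weekly/backup-system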

Set up mail and restrict SSH access:

We activate exim4 for direct SMTP (and make sure the ISP does not block the relevant traffic) with the command:

dpkg-reconfigure exim4-config

Then we want a specific SSH access model. We already set up the sysadmin interface to change users’ passwords, both Samba and Unix. But we actually have only one admin here. His own account will be the only one given SSH access; no direct root access. And he’ll be able to connect with a password only from the wired intranet (eth1). Otherwise, internet (eth0) or wireless intranet (wlan1) will require a pair of SSH keys. To achieve this, we’ll actually restrict SSH to members of the staff Unix group (just in case, at some point, we want to add a second admin).

To achieve this easily, will plug OpenSSH into xinetd.

We have a few terminals open on the server. We shut SSH down (open sessions won’t be affected) and forbid the init script from starting it anymore:

invoke-rc.d ssh stop
touch /etc/ssh/sshd_not_to_be_run

We change a bit the default configuration in /etc/ssh/sshd_config:

PermitRootLogin no
X11Forwarding no
AllowGroups staff
PasswordAuthentication no

We add the relevant user to the group:

adduser thisguy staff

Then we set up xinetd to run it:

apt-get install xinetd

Edit /etc/xinetd.d/ssh (replace IP OF ETH0 as provided by ifconfig):

# To work, sshd must not run by itself, so /etc/ssh/sshd_not_to_be_run
# should exist

# only from local wired network
service ssh
{
socket_type     = stream
protocol        = tcp
wait            = no
user        = root
bind            = 10.0.0.1
only_from    = 10.0.0.0/24
server          = /usr/sbin/sshd
server_args     = -i -o PasswordAuthentication=yes
log_on_success  = HOST USERID
}

# from local wireless network
service ssh
{
socket_type     = stream
protocol        = tcp
wait            = no
user        = root
bind            = 10.0.1.1
only_from       = 10.0.1.0/24
server          = /usr/sbin/sshd
server_args     = -i
log_on_success  = HOST USERID
}

# from internet
service ssh
{
socket_type     = stream
protocol        = tcp
wait            = no
user        = root
bind            = IP OF ETH0
server          = /usr/sbin/sshd
server_args     = -i
cps             = 30 10
per_source    = 5
log_on_success  = HOST USERID
}

# EOF

Then you can make a few tests and see the results in /var/log/auth.log.

At this point, you should realize that this perfectly working setup has an obvious drawback: if you’re wirelessly connected (subnet 10.0.1.0), `ssh nano` will, thanks to the DNS, actually do a `ssh 10.0.0.1`. And per our xinetd rules, you’ll get kicked out, as we accept on this IP only clients from the same subnet (10.0.0.0). So you would have to manually type ssh 10.0.1.1 to be able to connect. We’ll add an iptables rule to fix this: whenever we try to connect to 10.0.0.1 over ssh from the wireless interface, we’ll redirect to 10.0.1.1, same port. So we’ll do:

iptables -t nat -A PREROUTING -p tcp -i wlan1 --destination 10.0.0.1 --dport 22 -j DNAT --to 10.0.1.1:22
/etc/init.d/iptables-persistent save
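
You can verify the rule is in place with:

iptables -t nat -L PREROUTING -n -v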

Reminder, things that need to be changed whenever the server is relocated:

(obviously you should not use any sample password provided on this page)

We avoided hardcoding IPs, but it was not always possible. In case of an ISP/main network change, which usually implies IP changes, don’t forget to update:

/etc/bind/named.conf.options: ISP DNS IPs.
/etc/xinetd.d/ssh : Internet IP (eth0)

Disclaimer: this whole setup has been made to be maintainable by people who do not have much experience in computer system administration, but enough to log in via SSH without being completely lost in limbo. As such, you’ll probably notice I made some tradeoffs between security and easiness, for instance by providing the Wifi passphrase in clear text on the web sysadmin page. Anyway, I think the most important pieces are rock solid and the secondary ones do not matter much (Wifi is insecure by design, by concept I would even dare to say; using it is in itself such an obvious tradeoff).

(this is still being tested, and I may update this page soon; it’s likely I forgot to mention a few apt-gets of Perl packages required by the scripts; please mail me if you find any flaws or obvious issues with what is proposed here)


Syndicated 2012-12-23 22:43:00 from # cd /scratch
