Older blog entries for StephanPeijnik (starting at number 31)

How to force a local DNS resolver to be used using resolvconf

I know it has been a while, but after reading a blog post by Anand Kumria over at planet.debian.org I decided to have a quick look at one of the problems he described.

Basically, Anand wants to force the local resolver to be used for each and every network connection, whether that connection is established manually or via NetworkManager. He wrote that fixing this configuration for every new connection by hand is tedious, and I fully agree. So here is a solution that does all of this automatically, using resolvconf:

Once the resolvconf package is installed, resolvconf takes care of every update of /etc/resolv.conf. The files in /etc/resolvconf control this process and let you modify the resulting file to fit one's own needs.

So first we would like the local resolver to be used for every connection. This works by simply adding the "nameserver 127.0.0.1" directive to the /etc/resolvconf/resolv.conf.d/head file. Simple as that. Every time /etc/resolv.conf gets generated, the contents of the head file are used as its header.
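For illustration, adding the directive is a one-liner (run as root; any comment lines the package already ships in the head file can stay in place):

echo "nameserver 127.0.0.1" >> /etc/resolvconf/resolv.conf.d/head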

Using this method the local resolver is used for every connection. But Anand wanted to use only the local resolver and discard any resolvers obtained via DHCP, for example. Guess what: this is also possible using resolvconf.

Adding TRUNCATE_NAMESERVER_LIST_AFTER_127="yes" to /etc/default/resolvconf does exactly that. Now every nameserver directive after the 127.0.0.1 one is ignored and will not make it into /etc/resolv.conf. You can of course add more nameservers to the head file above the 127.0.0.1 directive.
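The relevant line in /etc/default/resolvconf is just this one (create the file if it does not exist):

TRUNCATE_NAMESERVER_LIST_AFTER_127="yes"

With both changes in place, the nameserver list in the generated /etc/resolv.conf boils down to the single loopback entry, no matter what a DHCP server offered.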

Problem fixed, I guess.
Don't forget to re-connect to the network or manually force re-generation of /etc/resolv.conf (one-liner below) so the changes you made get applied. I really hope this is of use to some of you facing similar problems.
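Forcing the re-generation by hand is a single command; the -u flag tells resolvconf to re-run its update scripts:

sudo resolvconf -u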

Syndicated 2011-06-01 19:53:00 (Updated 2011-06-01 19:53:24) from sp

ISC dhcpd and IP assignments from a pool to specific hosts only

Assigning an IP address statically to a host with a given MAC address using ISC dhcpd is quite trivial: one host entry containing a hardware ethernet and a fixed-address statement and you are up and running.
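For reference, such a static assignment looks like this (host name and addresses made up):

host examplehost {
    hardware ethernet 11:22:33:44:55:66;
    fixed-address 10.0.0.5;
}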
But what if you want to assign IP addresses from a pool to only a few hosts with specific MAC addresses?

Before you ask yourself why someone might want to do that, have a look at my (very real) use-case.
I am currently working on setting up an installation server for my employer, ANEXIA Internetdienstleistungs GmbH. The server itself uses PXE, TFTP and FAI for installing systems. To be able to do PXE booting one has to set up a DHCP server to provide configuration details, like the TFTP server address and the boot filename.

Now what one should consider is that this system is designed to provide automatic installations for internet-facing hosts, namely ones in public IP networks. Running a DHCP server in such a network is not a good idea. We neither want to dish out configurations to each and every host that asks for them, nor do we want to do a PXE boot every time one of our systems is restarted. The combination of FAI and pxelinux does allow for a default configuration which forces local booting, but this still increases the (re-)boot time of those systems and potentially also the load on the TFTP server. Also, let's not even consider thinking about whether this setup is "clean" or not. I personally believe that dishing out IP addresses in a public IP network is a bad thing(tm) and I guess a lot of people will be nodding when reading these lines.

What I was asking myself is how to get something like that set up in a cleaner way, and guess what, I found a solution.
The basic idea behind this is only providing IP configuration via DHCP to a specific set of hosts (with a specific set of MAC addresses) and not providing any information to all other hosts. The specific set of hosts are those that we want to do an install run on. This is a no-brainer and I guess the right way to do it, but implementing this approach is not as straightforward as I initially thought.

Actually, the implementation of that idea caused me a bit of a headache and cost me a few work-hours to get right; that's why I'd like to share the configuration details with you.



Let's have a look at how to get such a setup using ISC dhcpd. We make use of the fact that ISC dhcpd allows you to not only configure a subnet, but also pools inside subnets, which can have allow and deny rules. Such rules can take the form allow members of "some-class" (or deny members of ...), where classes (and subclasses, keep on reading for details) can be defined inside the configuration file as well.

What we did first was create a subnet with a pool declaration, as follows:

subnet 10.0.0.0 netmask 255.255.255.0 {
    option routers 10.0.0.254;
    option broadcast-address 10.0.0.255;
    filename "fai/pxelinux.0";
    next-server 10.0.0.254;
    server-name "10.0.0.254";
    pool {
        allow members of "install";
        range 10.0.0.10 10.0.0.230;
    }
}
This configures the subnet 10.0.0.0/24, with 10.0.0.254 being the network gateway, 10.0.0.254 being the TFTP server and "fai/pxelinux.0" being the TFTP filename. Additionally, the pool declaration allows us to define a range of IP addresses we want to use, along with a line stating that only members of the "install" class should get a network configuration. If you do not have any other subnet defined in your config and a client that is not in this "install" class asks for an IP address, you will see something like this in your syslog: "dhcpd: DHCPDISCOVER from 11:22:33:44:55:66 via eth1: network 10.0.0/24: no free leases". dhcpd will not even answer these requests, and thus the client will not even know that there is a DHCP server running here. Exactly what we wanted.

I wrote about this giving me a headache, but so far things have been pretty straightforward. Getting this far did not take very long, believe me.

The next thing we did was define that "install" class, as follows:

class "install" { match hardware; }
Again, not very hard to do. This tells dhcpd to look for subclasses of "install" with a matching hardware address. So let's have a look at the subclass for, let's say, the host with MAC address "11:22:33:44:55:66":

subclass "install" 1:11:22:33:44:55:66;
Note the leading "1:" in that line. It means nothing more or less than "ethernet". Without that leading "1:" you won't get anywhere: matching will fail, simple as that. It took me a while to find information about this in "man 5 dhcp-eval". Quoting parts of the interesting section:

The hardware operator returns a data string whose first element is the type of network interface indicated in the packet being considered, and whose subsequent elements are the client's link-layer address. [...] Hardware types include ethernet (1), token-ring (6), and fddi (8).
Now, with the combination of the subnet, pool, class and subclass directives we get exactly the setup we wanted: a DHCP server providing IP configuration only to a specific set of hosts and ignoring all other DHCP requests.
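For completeness, the class-related part of dhcpd.conf ends up looking like this; clearing a further host for installation is then a matter of adding one more subclass line (MAC addresses made up):

class "install" { match hardware; }

subclass "install" 1:11:22:33:44:55:66;
subclass "install" 1:aa:bb:cc:dd:ee:ff;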

If you have any comments about this setup or ideas on how to get something similar set up using another approach, feel free to leave a comment.

Personal final note: accidentally typing 80 instead of 08 in a MAC address will cost you an additional two hours and will even have you re-compile ISC dhcpd with eval debugging turned on, believe me. :-)

Syndicated 2011-01-01 22:01:00 (Updated 2011-01-01 22:02:38) from sp

What's all the fuss about canonical-census?

I know I have not updated this blog in quite a long time now, but something caught my attention today: canonical-census.

As slashdot.org reports, Canonical has begun tracking their (OEM) installations. Now it's obvious that people are uncomfortable with a program running on their system which phones home to their OS vendor, so I had a quick look at what exactly canonical-census does.

First, however, I would like to point out that the report on slashdot.org is very clear about which information is being gathered, namely "the number of times this system previously sent to Canonical [...], the Ubuntu distributor channel, the product name as acquired by the system's DMI information, and which Ubuntu release is being used". And it's perfectly correct. After getting the canonical-census Debian source package (using dget -u https://launchpad.net/ubuntu/+archive/partner/+files/canonical-census_0.1.dsc), the source package reveals, besides the Debian packaging information, two scripts:

  • census (written in Python) and
  • send-census (a GNU bash script).
Now what do those scripts actually do?

send-census is installed in /etc/cron.daily, which means it will be executed once a day by the system's cron daemon. It's a mere 48 lines long, and its code is quite simple. So everyone with at least some shell scripting experience can easily check what it's doing. Now guess what, it sends exactly the information as reported on slashdot to Canonical. Nothing more and nothing less.

Technically it keeps a plain text file containing a single number as its call counter, residing in /var/lib/send-install-count/counter, and reads /var/lib/ubuntu_dist_channel (a file which does not exist on my Ubuntu Lucid system) for information about the distribution channel.
The above-mentioned "system's DMI information" is not the whole bunch of DMI information available, but only the contents of /sys/class/dmi/id/product_name, which strangely enough returns "System Product Name" on my machine. Last but not least it uses lsb-release to get the distribution release (i.e. 10.04 for my system).

Now those four pieces of information are sent to http://census.canonical.com/submit via a simple HTTP GET query, using wget. The full URL with all the parameters added is:
http://census.canonical.com/submit?count=count&dcd=dist_channel&product=dmi_product_name&release=ubuntu_release_version
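Stripped of its bookkeeping, the whole mechanism boils down to a handful of shell lines. The following is a sketch of mine, not the actual send-census code, but it shows how trivial the data collection is:

#!/bin/sh
# Sketch: collect the four values send-census reports and submit them.
counter_file=/var/lib/send-install-count/counter
count=$(cat "$counter_file" 2>/dev/null || echo 0)
dcd=$(cat /var/lib/ubuntu_dist_channel 2>/dev/null)
product=$(cat /sys/class/dmi/id/product_name 2>/dev/null)
release=$(lsb_release -rs 2>/dev/null)
# (the real script takes more care here, e.g. with URL encoding)
wget -q -O /dev/null \
  "http://census.canonical.com/submit?count=$count&dcd=$dcd&product=$product&release=$release"
# Remember how often we have phoned home so far.
echo $(($count + 1)) > "$counter_file"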

The second script, census, is the part that runs on Canonical's side. Basically census reads in their Apache access log file and creates an SQLite database from the contents of the log file. At 391 lines this script is a bit longer, but it does not end up in the Debian package at all.

Personally I do not see how Canonical or one of their partners could possibly do anything harmful with that information. Comparing this to Debian's popcon reveals that Debian is gathering a lot more information.

Now there are two more things one should consider: census is targeted at OEMs, which means it's unlikely to end up on each and every Ubuntu installation, and it can be uninstalled by removing the canonical-census package with your favorite package manager.

Finally, think about this for a second: It's a shell script you can always examine. There is no hidden magic and it's a plain HTTP request the script is sending. No evil things happening there.
And now compare that to what other (often proprietary) software vendors do and how much data they submit, possibly even in encrypted form so you do not know for sure what is being sent to them.

Personally I welcome the openness of Canonical in providing their users with the package's code this early and being straight about what information it submits. They could have silently added it to those installations after all...

Happy hacking!

Syndicated 2010-08-10 12:07:00 (Updated 2010-08-10 12:07:35) from sp

Rest in peace Flo: 13.11.1986–16.12.2009

Today is a sad day. Everything feels like I am having a bad nightmare. That's because today I learnt of the much too early death of my friend Florian Hufsky.

I am sitting here and do not really know what to write. I keep on thinking about the great times we spent together. The time we started programming when we were twelve. The time we spent learning BASIC. All the times you knew more than me and could teach me a thing or two. I remember our geek talks. How we would discuss latest games. How we lost contact and how we met again. I am thinking about how sorry I am for not having met you often enough. I keep on trying to understand what drove you that far. How you could just end it all. More and more memories come to my mind, like the moment when you showed me one of your projects, Super Mario War. The moments we had playing video games together. All those moments, all that time, I miss you my friend. You were a genius, always a step ahead, not only of me, but seemingly the whole world. I can't stop thinking about your brilliant ideas and how you always finished your projects. You were a real hacker, a real genius, a person trying to make the world a better place, a person who will be missed, not only by me.

You were a genius and I always respected you, not only as a hacker, but as a beloved friend. Why did we not spend more time together? Why did you have to go? Why do I have to write this now, sitting here in my chair with tears in my eyes? And all those memories come up again and again. There is so much more that comes to my mind, but I can't keep on writing, it just hurts too much.

The world is a sad place today. I am sad. I am mourning the too early death of my beloved friend, Florian. You will always have a special place in my heart.

Syndicated 2009-12-19 23:15:00 (Updated 2009-12-19 23:15:48) from sp

ujail: use cases, FAQs, part 1 & proof of concept, part 2

As I ran out of time whilst writing the "introducing ujail" post on Monday, I would like to further elaborate on the idea, giving you some examples of possible use cases and then having a look at FAQs regarding ujail. Additionally, I have created a second proof of concept that should be a lot faster; see below for more details.

Use cases of ujail

Monday's post was rather technical, so let's have a look at possible use cases today.

The main reason for both having the idea of ujail and starting to work on it is my web server. I am running quite a few (S)CGI scripts there and, even though I run them as different users on a per-vhost basis, I have the impression that the whole thing is a bit insecure.

Okay, PHP does provide its famous open_basedir feature, but I am also running some Python applications which I simply cannot restrict easily. My first ideas involved adding something similar to open_basedir to Python, followed by the idea of replacing some C library functions, like fopen and friends, at startup time.

Whilst adding open_basedir to Python would have involved changing a lot of Python's internals, I soon discarded the library patching idea, as it could be worked around by injected code directly invoking syscalls. It didn't take long for me to notice that I had to dig deeper. The idea of ujail was born, and after coming up with the proof of concept this seems to be a viable solution.

Now ujail is not only about protecting a web server from its web applications, but could do a lot more, for example:

  • Creating a sandbox for untrusted code (socket&file i/o emulation)
  • Implementing some sort of personal firewall (socket-call only emulation)
  • Testing applications that perform low-level system operations (read: package managers and friends, filesystem emulation)
I am sure you can come up with even more use cases. What should be noted is that emulating a system call does not mean that one necessarily needs to emulate the whole filesystem. What can be done, for example, is patching through access to common files (libraries, executables, etc.) whilst maintaining a virtual filesystem for data that will eventually be modified. A copy-on-write approach is possible too, for example. There are multiple methods with which the virtual filesystem could be implemented, the most common probably being a state directory.

FAQs

There have been some questions about ujail in comments to my first post which I would like to answer. Also, I have been thinking about things that are different about ujail compared to other virtualization techniques. Feel free to add additional questions either in a comment or drop me an email: debian at sp dot or dot at.

  1. Could you change the license of ujail to ... ?

    Not likely to happen. The proof of concept's license is GPLv3 and the actual code's license will be too. However, ujail is a userspace application that does not need any modifications to the kernel so there should be no problems with porting ujail from GNU/Linux to any other system.

  2. Does ujail work on operating systems other than GNU/Linux?

    Not yet. If it's technically possible to implement the technique on other operating systems I would be happy to accept patches.

  3. Do I need to patch my kernel for ujail to work?

    No, ujail is running in userspace. The only thing it needs is Linux with support for PTRACE_SYSEMU.

  4. How is this approach different from using LD_PRELOAD?

    With LD_PRELOAD one can replace library functions, but malicious code could still directly invoke syscalls, working around this protection completely. Also, statically linked binaries cannot be restricted with LD_PRELOAD.

  5. How is this approach different from user-mode-linux?

User-mode-linux (UML) works by emulating a full kernel in userspace and allows you to virtualize a whole Linux instance (including a new init process, etc). ujail is about providing a way of restricting a single process (and its children) inside a running system in terms of access to syscalls and the partial emulation of those.

  6. How is this approach different from linux-vserver?

    Linux-vserver is a kernel patch and runs in kernel space, as opposed to ujail, which works in userspace.
    Also, linux-vserver works similarly to user-mode-linux, providing a fully virtualized Linux instance.

  7. Does the account running ujail need any special privileges?

    No, the only restrictions that apply are those of ptrace.

  8. Where is the code?

    Right now ujail is in a planning phase, and only the proof of concept code has been written and published. The actual ujail code is yet to be written and the code will be hosted on launchpad.net.

Proof of concept, part 2

An anonymous person (who were you, stranger?) added a comment to my first post, suggesting: "Also, why patch the process rather than just modifying its state and trapping into the kernel?". I had looked at this approach earlier, but it didn't work out. However, I decided to give it yet another try and created a second proof of concept. That code does not require patching any code, but only modifies the instruction pointer (eip) and the first register (eax). This should be a lot faster than patching the code.

Technically the new main loop works by calling PTRACE_SYSEMU and waiting for a notification. It then saves the instruction pointer and switches to PTRACE_SYSCALL. As before it waits for the emulated syscall to exit and at this point sets eax from orig_eax and decreases the value of the instruction pointer by the size of the "int $0x80" instruction. Another call to PTRACE_SYSCALL resumes the process. The next event is the process actually entering the real syscall and yet another one leaving the syscall again. These are resumed by PTRACE_SYSCALL and PTRACE_SYSEMU respectively. So, comparing this with the first approach we are only modifying two registers now, instead of writing to the TEXT area of the running process.
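In code, one iteration of that loop looks roughly like this. This is a sketch against the i386 ptrace interface, a fragment of the tracer's main loop: child is the traced pid, status is the int used with the surrounding waitpid calls, and error handling plus the decision whether to emulate are left out:

/* child has just stopped at syscall entry under PTRACE_SYSEMU */
struct user_regs_struct regs;
ptrace(PTRACE_GETREGS, child, NULL, &regs);
long saved_eip = regs.eip;              /* points right behind "int $0x80" */

ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* exit of the emulated syscall */

ptrace(PTRACE_GETREGS, child, NULL, &regs);
regs.eax = regs.orig_eax;               /* restore the syscall number */
regs.eip = saved_eip - 2;               /* "int $0x80" is two bytes: re-execute it */
ptrace(PTRACE_SETREGS, child, NULL, &regs);

ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* entry of the real syscall */
ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* exit of the real syscall */
ptrace(PTRACE_SYSEMU, child, NULL, NULL); /* resume until the next syscall */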

Thanks should go to the anonymous commenter for making me give this approach another try.

Questions? Criticism? More ideas? Want to contribute?

Coming to an end, I would yet again like to let you know that I am open to questions, criticism, more ideas and contributions in general. So if you are interested in this topic, come join the discussion by either dropping me an email, writing a comment to this post or replying to this post on your own blog.

Syndicated 2009-12-09 10:16:00 (Updated 2009-12-09 10:16:46) from sp

Introducing ujail & proof of concept

Lately I have been thinking about methods to provide a stripped down, secured environment for running untrusted code on GNU/Linux. With this post I would like to present you with the first results of my research.

ujail - brief introduction

I have chosen ujail as the name for the technique I am proposing. ujail stands for micro jail in userspace and, in itself, describes the concept briefly. The main idea is to have a userspace process monitor the system calls of one of its children and emulate some calls, if needed. This is done using ptrace, namely both PTRACE_SYSEMU and PTRACE_SYSCALL.
The ujail process should not only be able to monitor syscalls, like strace does, but also to intercept and emulate them.

This sounds a lot like user-mode-linux (UML), but the method is different. Whilst UML comes with a complete kernel, emulates all system calls and this way provides a virtualized system, ujail is intended to only emulate some system calls, without emulating the kernel.

Revisiting PTRACE_SYSCALL & PTRACE_SYSEMU

To better explain how the ujail technique works I would like to have a quick look at PTRACE_SYSCALL and PTRACE_SYSEMU again.

PTRACE_SYSCALL allows a userspace process to be notified whenever a traced process enters or leaves a system call. This means that two notifications are normally sent: one on system call entry and one on exit. Even though one is able to change the parameters of system calls, this method does not allow system calls to be fully emulated (think virtual filesystem here).
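As a refresher, this is the classic strace-style loop built on PTRACE_SYSCALL, in the spirit of the Padala articles mentioned below. It is an i386 sketch (register names and offsets differ on other architectures) with error handling omitted:

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/reg.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        /* child: ask to be traced, then run some program */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/ls", "ls", (char *)NULL);
    } else {
        int status, entering = 1;
        waitpid(child, &status, 0);          /* first stop, after execl */
        while (1) {
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFEXITED(status))
                break;
            /* on i386 the syscall number lives in orig_eax */
            long nr = ptrace(PTRACE_PEEKUSER, child, 4 * ORIG_EAX, NULL);
            printf("syscall %ld %s\n", nr, entering ? "entry" : "exit");
            entering = !entering;
        }
    }
    return 0;
}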

PTRACE_SYSEMU on the other hand provides one notification on syscall entry and expects the receiver of the notification to emulate the syscall. This method alone sounds great, but this also means that memory allocation needs to be emulated too, which is quite complex in userspace.

A hybrid of PTRACE_SYSCALL & PTRACE_SYSEMU

Now on to the concept behind ujail. The method I am describing works by calling PTRACE_SYSEMU for a specific process, this way taking over emulation of all system calls. However, some system calls are complex to emulate in userspace, and so a hybrid of both PTRACE_SYSEMU and PTRACE_SYSCALL is needed. In short, this works by checking whether the syscall needs to be emulated when the PTRACE_SYSEMU event is received.
One way is emulating the syscall, filling the process's registers and resuming execution of the process. This is simple and straightforward.

The second way is forwarding the system call to the kernel. The problem here is that calling the syscall in the monitoring process will make the new resources available to that very process, and not the process to be jailed. This is where the hybrid method kicks in.

The proof of concept code creates a backup of the next instruction to be executed, along with a copy of the instruction pointer at this point, and patches it with the opcodes for "int $0x80", causing the syscall to be made again. After that it resumes execution with PTRACE_SYSCALL and waits again. The first event to be received now is the program leaving the emulated system call, which can be ignored. Resuming yet again will give us two PTRACE_SYSCALL events, one for syscall entry and one for syscall exit.

The first event is not really interesting, but at the second event the opcode backup is restored and eip is set from the saved value. Now the kernel has handled the syscall and the result is ready for the child process. A final call to PTRACE_SYSEMU resumes execution of the child and waits for the next syscall.
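As a rough sketch, the forwarding path just described could look like this in C. Again this is a fragment of the main loop on i386 (child and status come from the surrounding code), and register fix-ups such as restoring eax are glossed over here; compare the second proof of concept above:

/* child has just stopped at syscall entry under PTRACE_SYSEMU */
struct user_regs_struct regs;
ptrace(PTRACE_GETREGS, child, NULL, &regs);
long saved_eip = regs.eip;              /* address of the next instruction */
long backup = ptrace(PTRACE_PEEKTEXT, child, saved_eip, NULL);

/* patch the next instruction with "int $0x80" (opcodes 0xcd 0x80) */
ptrace(PTRACE_POKETEXT, child, saved_eip, (backup & ~0xffffL) | 0x80cd);

ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* leaving the emulated syscall: ignored */
ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* entering the replayed syscall */
ptrace(PTRACE_SYSCALL, child, NULL, NULL);
waitpid(child, &status, 0);             /* leaving the real syscall */

/* restore the original opcodes and the saved instruction pointer */
ptrace(PTRACE_POKETEXT, child, saved_eip, backup);
ptrace(PTRACE_GETREGS, child, NULL, &regs);
regs.eip = saved_eip;
ptrace(PTRACE_SETREGS, child, NULL, &regs);
ptrace(PTRACE_SYSEMU, child, NULL, NULL); /* wait for the next syscall */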

Proof of concept

The proof of concept code can be downloaded from its bazaar branch at launchpad.net. It is intended to be used on i386 systems only and works with simple programs, but it is known not to work with anything using fork or vfork, and it will most likely not work for binaries using threading.

Finally, I would like to thank Pradeep Padala for his "Playing with ptrace" articles [0][1], which were fun to read and served as a great introduction to ptrace for me.

Now there is only one thing left to say: if you are interested in this method, see loopholes or problems or want to contribute, please go ahead and contact me:

debian at sp dot or dot at

Syndicated 2009-12-07 16:49:00 (Updated 2009-12-09 10:17:25) from sp

How to copy partitions under GNU/Linux the easy way

After getting a new disk for my Popcorn Hour A-110 device I had to copy all partitions from the old disk onto the new one, so that I would not have to reinstall some applications and reconfigure everything.

After searching the web and trying to find a free alternative to Norton Ghost and Acronis True Image, preferably one not requiring a boot disk of its own (I did not want to back up my workstation after all, just do a simple partition-to-partition copy between two SATA disks), I gave up and decided to do the copying manually.

So I fired up gparted to do the partitioning, did a right click and... I noticed that gparted supports copy/paste. Being curious about what this could potentially do I gave it a try. I marked partition one on the old disk, did a copy, went to the new disk and clicked on paste - and guess what, gparted did what I was looking for.

To make a long story short: you can copy whole partitions using gparted's copy/paste mechanism and even resize them whilst doing so. I am somehow ashamed I did not notice this feature earlier, having been a gparted user for a few years now, and I can imagine I am not the only one who missed it.

Syndicated 2009-12-01 17:22:00 (Updated 2009-12-01 17:23:47) from sp

kvm, qemu and the magic of ubuntu-vm-builder

As I noted two days ago I was unable to build Android on Ubuntu 9.10 x86-64 and thus needed to set up a virtual machine.

At first I went for my preferred virtualization solution, VirtualBox, and had to notice that even though I assigned all 4 processor cores of my workstation (along with 2GiB of memory) to the virtual machine, building was painfully slow. I immediately ditched the idea of using VirtualBox and decided to give something new (to me) a try: the combination of kvm and qemu.

Having Intel VT-x support built into my workstation's processor I thought that this combination should give better performance, and I wasn't disappointed. To be honest, I am astonished at how fast the beast is now. Disk speed still seems to lag behind running things natively, but there has to be a downside somewhere. :-)

After a bit of googling I also found that ubuntu-vm-builder exists, which simplifies virtual machine creation tremendously.
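If I remember the invocation correctly, creating such a KVM guest boils down to a single command along these lines; the flags are from memory, so treat them as an assumption and double-check against the tool's help output (hardy/i386 matches what I need for the Android build):

sudo ubuntu-vm-builder kvm hardy --arch i386 --mem 2048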

My Android working tree is being synchronized right now, which means that I should be able to start building in a few minutes' time. I hope the virtual machine stays as fast as it is right now during the build and that everything goes well.

Syndicated 2009-11-11 08:25:00 (Updated 2009-11-11 08:28:55) from sp

An update on the proprietary Maemo SDK installer

Yesterday I wrote about my dissatisfaction with the current state of the Android 2.0 code tree and how a proprietary install script for Maemo scared me off.

As suggested in one of the comments to my post I filed a bug report against Maemo, bug 6087.

Besides getting quite a few replies to the bug report within a matter of hours, Carsten Munk pointed me at Maemo SDK+, which has less restrictive licensing.

Another comment, by Marius Gedminas (thanks!), pointed me at Mer:

a new operating system for small, mobile touch-screen devices.

It is Linux based and layers the best open-source elements of Nokia's Maemo platform over a modern Ubuntu base.

The goals of Mer include:

  • Integrate the best solutions for a wide variety of small form-factor devices
  • Encourage wider access to device capabilities through the Vendor Social Contract
  • Demonstrably provide an easy route to market for vendors
  • Dramatically reduce costs to vendors of supporting EOL hardware
  • Focus, harness and support community contributions to the platform
  • Encourage and ease migration of existing applications
  • Support experimentation, innovation and development

Syndicated 2009-11-09 21:00:00 (Updated 2009-11-09 21:09:58) from sp

My Android repositories

As I wrote in my last post I noticed a few problems with Android's roaming detection code and decided to try fixing it myself.

So, I am basing my work on CyanogenMod, which I am also using on my Android device. My repositories are hosted at github.com/speijnik and you can fetch (nearly) everything you need for building by using repo. See the README file in my android repository over at github for details.
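For the impatient, the usual repo workflow applies; the manifest URL itself is in the README, so the one below is just a placeholder:

repo init -u <manifest repository URL from the README>
repo sync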

For now only the simplification of the roaming detection code has made it into the repository, but be aware that even though I have published the code I still have neither built nor tried it, as I do not have a working build environment set up yet.

Oh, about the working build environment: there seem to be problems with either the webkit code in the Android repositories (unlikely) or with building that code on Ubuntu 9.10 x86-64 (more likely). Right now I am downloading Ubuntu 8.04 LTS i386 for use in a virtual machine. I will let you know whether that fixes my problems or not.

Syndicated 2009-11-09 20:58:00 (Updated 2009-11-09 20:59:29) from sp
