Older blog entries for henrique (starting at number 9)

SuspiciousOperation on Django with FileField?

Just a quick tip. I was wondering why this error happens; until I found the time to investigate it, I just commented out the column on the model and moved on. Now, after a five-minute search through Django's code, I found the reason and want to share it. The complete error message is:

Exception Type:	SuspiciousOperation
Exception Value:	
Attempted access to '/Users/henrique/tmp/uploads/half_logo.png' denied.

Suppose the project directory (where manage.py and its friends live) is /Users/henrique/dev/someproject/. The problem is quite simple: since the directory given to the upload_to parameter, /Users/henrique/tmp/uploads, has a base path different from the project's (/Users/henrique/dev/someproject/), Django forbids access to it. There is some reason for this: Django is saving you from future headaches (polluting directories outside your sandbox, for instance).

Of course, to fix it you just need to change the upload_to parameter to something inside the project's sandbox, like /Users/henrique/dev/someproject/uploads.
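The check Django performs boils down to a path-prefix test. Here is a rough, hypothetical sketch of the idea (the helper name and paths are illustrative, not Django's actual implementation):

```python
import os.path

def is_inside(base, path):
    # A path is allowed only if its absolute form stays under the base
    # directory -- otherwise access is considered "suspicious".
    base = os.path.join(os.path.abspath(base), "")
    return os.path.abspath(path).startswith(base)

project = "/Users/henrique/dev/someproject"
print(is_inside(project, "/Users/henrique/tmp/uploads/half_logo.png"))      # False
print(is_inside(project, "/Users/henrique/dev/someproject/uploads/a.png"))  # True
```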

12 Oct 2008 (updated 12 Oct 2008 at 01:48 UTC) »
Testing your django app

Today I found this nice blog post about testing in Django:

http://ericholscher.com/blog/2008/jul/26/testmaker-002-even-easier-automated-testing-django/

It is interesting to see how the testing process is carried out; it reminds me of a similar problem we had in stoqdrivers (part of the Stoq project).

In that project we needed to automatically test our code against a number of fiscal printers... theoretically we would need all the printers connected to the same computer to run the test suite, and even if that were possible, the tests would run very slowly and waste a lot of paper (not to mention the risk of putting a printer into an inconsistent state, which would make it unusable for at least a day :)

The solution was to log all the commands sent to the printer and the output returned. Once this log was saved, we could use it to feed the code through a virtual printer (or rather through several virtual printers, created on demand from the list of supported printers).
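The idea can be sketched as a fake serial port that replays a recorded session. This is only an illustrative sketch of the approach described above; the class name and API are hypothetical, not stoqdrivers' actual implementation:

```python
class ReplayPort:
    """A stand-in serial port that answers from a recorded transcript."""

    def __init__(self, log):
        # log: ordered list of (command_sent, reply_recorded) pairs,
        # captured from a session with a real printer
        self.log = list(log)
        self.reply = b""

    def write(self, data):
        # the driver must replay the session exactly as recorded
        expected, reply = self.log.pop(0)
        if data != expected:
            raise AssertionError("driver sent %r, log expected %r"
                                 % (data, expected))
        self.reply = reply

    def read(self):
        data, self.reply = self.reply, b""
        return data


# Replaying a made-up two-command session:
log = [(b"\x02STATUS\x03", b"\x02OK\x03"),
       (b"\x02PRINT hi\x03", b"\x02DONE\x03")]
port = ReplayPort(log)
port.write(b"\x02STATUS\x03")
print(port.read())  # b'\x02OK\x03'
```

The driver under test talks to the fake port exactly as it would to the hardware, so the whole suite runs with no printers, no paper and no risk.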

I think this way of testing has an established name, but it is still unknown to me... Anyway, it is nice to see this "pattern" being validated in other areas, like web and desktop application development.

By the way, you can check the stoqdrivers test implementation in the project's source web browser.

Pycon Brasil 2008

Last week I went to this amazing event in Rio de Janeiro. I had the opportunity to meet many people I had never met in person before, and to get to know other interesting folks as well (most of them Django users, from the #django-br community on freenode and the mailing list at Google Groups).

At the event I took part in a coding dojo session, since I had never participated in one before and wanted to find out what it was about. It was cool... nothing like a programming contest or a project sprint, as I had imagined. A dojo is more like a training session, where a problem is presented to a group of people and then solved with a baby-steps approach, combined with pair programming (from extreme programming) and TDD (test-driven development).

Once a pair starts presenting a solution and coding it, they have 7 minutes to move it forward. They can refactor (as long as all the tests still pass afterwards) or continue implementing the last solution proposed (test first, as TDD dictates). After 7 minutes, the pair is changed.

The feature of this dojo I liked most is that nobody (except the pair writing the code) may speak; everyone has to wait for the test implementation to finish and execute (passing or failing) before saying anything. This leads the audience to think carefully about the code and solution being written, which would be hard if everybody could speak at the same time, proposing new solutions or complaining about the current one (your solution may well be accepted, but you need to present its advantages or else go ahead and implement it yourself).

For more information: http://www.codingdojo.org/

13 Aug 2008 (updated 27 Sep 2008 at 21:35 UTC) »

(nothing here)

7 Jun 2008 (updated 13 Sep 2008 at 16:05 UTC) »

Let's say you work at a company that has an employee working from home because he is currently not in the company's city (and will not be for the next two months). That's all fine, but it got complicated when we asked the "sysadmin" to give him access to our servers: his response takes ages, and even then it would probably be a "no-no, sorry".

Well, at least some things were made available to our employee, so he can do his work.

Now it is raining and cold in the city, and another employee would like to work from home; asking the sysadmin for access is not an option. So this employee explains to the first one that there is a way out, if he agrees to give access to his machine through SSH: with some port forwarding over SSH, the servers can be reached freely.

The slack employee then runs the following command on his machine:

ssh -L 8888:XXX.XXX.XXX.XXX:YYY slackuser@first_employee_host -N

Here XXX.XXX.XXX.XXX is the IP address of the company's server you want to reach, YYY is the service's port, and 8888 is the port on your own machine through which you access the company's services: you connect to localhost:8888 and the traffic is forwarded to XXX.XXX.XXX.XXX:YYY. With that, you can use the company's services normally (well, it will probably be pretty slow, but it beats the rain and cold).

(For reference and pictures, please visit this site)

All this story was just to put the tip in context; the slack employee will be back at the office next Monday :)

So long!

It is currently hard to find a good job. About a month ago I joined a new company to work with smart people on interesting projects, but life is strange, and it did not take long before I started thinking about the future and whether I had really made the right choice...

There are some things I consider essential in a good job. First, and most important, is how interesting the project is... that matters even if the required tools are not the ones you use every day and "want to use forever". Nowadays there are tools for everything (or almost everything) you need to do, so you can just connect the dots and start enjoying the system you are building, i.e., your own design.

Second, and this may sound strange, is the language (or tool) used for development. If the project is not very good, there is still a chance you will enjoy the development, since you are improving your knowledge of a language you like to write (or getting in touch with new tools you didn't know before)...

If neither the project nor the tools are good, well... you can talk to colleagues, see what they think about the projects and whether there is any possibility of changing the way things are done; if that doesn't work, something is wrong, and it could be you!

...or it could be the company; in both cases it is worth trying to find a new job. That is exactly what I am going to do now, and I think this time things will be better: the project is good and the language is "perfect", but the people are not that good... (should I write about this combination later?)

29 Mar 2008 (updated 29 Mar 2008 at 01:01 UTC) »

Well, we have finally found a way to get the accounts for a contact on maemo. We need this to be able to identify the contact the user has called through "Internet Call".

As you may have read in the post about watching for calls through telepathy, the only contact information available there was a URI, something like "sip:1234@". However, that is only enough to search for contacts that have this URI as their account "specification".

So the question this week was: how does maemo store the accounts available for a contact? Basically, how can you get the list of accounts through which you can start a conversation with the contact?

We looked at the available APIs (ebook, abook, mission-control, ...) and none of them seemed to supply the list... so I went to the database file, and the first attempt was to open it and inspect the table structure or anything similar that could give me some information. No luck there: the db is a Berkeley DB, there was no Python module at hand to manage this kind of database (Python for maemo seems to have removed this standard module from the distribution), and I was not willing to try it in C (working with a "mysterious" DB in C was not the kind of fun I wanted at the moment; at least with Python it would have been a bit easier).

Then I had the idea of getting the VCARD info for a contact and... WELL, the information was there. One thing I forgot to say is that through the APIs I could get each piece of information (named "fields" there) for the contact (first name, last name, email and so forth), but the accounts were not available through "fields", since a contact can have a variable number of accounts and the function that gets a contact field expects a well-defined constant (note that "vcard" is such a well-defined constant).

Through the vcard, you get things like:
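For illustration, here is a minimal sketch of the idea. The vcard content and values below are hypothetical (the fields rtcomm actually emits may differ), and extracting the accounts is just a matter of reading the X-SIP lines; the vobject package offers a much richer API for this:

```python
# Hypothetical vcard content; the exact fields rtcomm emits may differ.
VCARD = """BEGIN:VCARD
VERSION:3.0
FN:Contact Y
EMAIL:y@example.com
X-SIP:1234@example.com
END:VCARD"""

def sip_accounts(vcard_text):
    # A contact may have any number of accounts, so collect every
    # X-SIP line instead of expecting a fixed field.
    accounts = []
    for line in vcard_text.splitlines():
        if line.startswith("X-SIP:"):
            accounts.append(line.split(":", 1)[1])
    return accounts

print(sip_accounts(VCARD))  # ['1234@example.com']
```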


The "X-SIP" value is the user account the contact account is mapped to. For example: I have two accounts on the device, gtalk and SIP. I can talk to contact X through gtalk, and to contact Y through both gtalk and SIP... so you need a way to tell which user account the contact account is bound to (in the example, for contact Y you could ask whether a given account of his is a "gtalk" or a "sip" one); it serves as an "account type" identifier.

Given this, you can now map the "sip uri" available through the "calls watcher" to a real contact on the system. Once you can do that, you can get all the other contact info, like the general fields available in most software (name, email, web address, phone and other ways to contact the same person :)

Also note that there is a Python package for vcard management: it is named vobject and is available on the project page and on the Python Package Index (PyPI).

I hope this information helps anyone who has to do this kind of interaction with maemo's rtcomm.

24 Mar 2008 (updated 24 Mar 2008 at 23:41 UTC) »

I want something good to die for
To make it beautiful to live
I want a new mistake
Lose is more than hesitate
You believe it in your head

I can go with the flow
Don't say it doesn't matter anymore
I can go with the flow
You believe it in your head?

- Go with the flow, Queens of the Stone Age

Telepathy rocks, and it rocks a lot! First, let me explain our problem: we needed a way to watch outgoing VoIP calls on maemo, and since maemo fortunately uses telepathy for its communication software subsystem (named rtcomm), we ended up using telepathy for the job.

In the beginning things were obscure and we didn't know which way to go, but the folks on the telepathy IRC channel (#telepathy @ irc.freenode.net) were very cool... and so we got the problem solved earlier than expected, which motivated me to write about the solution here.

The first step was to watch for new connections made to a connection manager. It is quite simple: we just need to connect a callback to the "NewConnection" signal. The initial block of code follows:

import dbus
bus = dbus.SessionBus()

cm = bus.get_object(
    "org.freedesktop.Telepathy.ConnectionManager.sofiasip",
    "/org/freedesktop/Telepathy/ConnectionManager/sofiasip")
iface = dbus.Interface(cm, "org.freedesktop.Telepathy.ConnectionManager")
iface.connect_to_signal("NewConnection", on_new_connection)

When a connection is created, we need to watch for new channels being created (which can be read as "a communication channel with one of the contacts was created"), this way:

def on_new_connection(bus_name, object_path, protocol):
    conn = bus.get_object(bus_name, object_path)
    iface = dbus.Interface(conn, "org.freedesktop.Telepathy.Connection")
    iface.connect_to_signal("NewChannel", on_new_channel)

When a channel is created, we get the channel type: for a text conversation, the channel_type parameter would be "org.freedesktop.Telepathy.Channel.Type.Text". In our case we need to filter streamed-media conversations, so channel_type must be "org.freedesktop.Telepathy.Channel.Type.StreamedMedia". If that matches, we attach a new callback to the "StreamAdded" signal -- it will be fired when a voice or video data transfer has begun.

def on_new_channel(object_path, channel_type, handle_type,
                   handle, suppress_handler):
    if channel_type.split(".")[-1] == "StreamedMedia":
        channel = bus.get_object(conn.bus_name, object_path)
        iface = dbus.Interface(channel, channel_type)
        iface.connect_to_signal("StreamAdded", on_stream_added)

Finally, in the on_stream_added callback, we keep only the voice streams and get the person the user is trying to speak with:

def on_stream_added(stream_id, contact_handle, stream_type):
    # discard video streams (stream type 1 is video)
    if stream_type == 1:
        return
    iface = dbus.Interface(conn, "org.freedesktop.Telepathy.Connection")
    contact_data = iface.InspectHandles(1, [contact_handle])
    uri = filter(lambda d: d.startswith("sip"), contact_data)[0]
    print "new call: %s" % uri

Shazam! It works! There are two things to note, though. If you just copy and paste the code, it will not work -- as you may have noticed, some callbacks need the connection object... you have two choices here: use lambdas to wrap the real handlers, passing the connection object along from handler to handler until on_stream_added, or encapsulate all these handlers in a class and keep the connection object as an attribute of it; it is up to you.
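A rough sketch of the class-based option (the class and method names are hypothetical; the dbus wiring is the same as in the snippets above, only the connection object now lives on the instance, and the bus is injected so the logic can be exercised without a running session bus):

```python
class CallWatcher:
    # Sketch only: each handler keeps its state on the instance instead
    # of threading the connection object through lambdas.
    STREAM_TYPE_AUDIO = 0
    STREAM_TYPE_VIDEO = 1

    def __init__(self, bus):
        self.bus = bus      # a dbus.SessionBus() in real use
        self.conn = None

    def on_new_connection(self, bus_name, object_path, protocol):
        # remember the connection so later handlers can use it
        self.conn = self.bus.get_object(bus_name, object_path)

    def is_streamed_media(self, channel_type):
        return channel_type.split(".")[-1] == "StreamedMedia"

    def on_stream_added(self, stream_id, contact_handle, stream_type):
        if stream_type == self.STREAM_TYPE_VIDEO:
            return None           # discard video streams
        return contact_handle     # audio: resolve via InspectHandles
```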

The other thing is that this may not seem as simple as a single watch_for_voip_connections() call, but note that telepathy doesn't provide anything specific for what we needed here; even so, the framework seems so well designed that it gives you the power to get your work done.

I have loved telepathy since the moment I started working with it, and I hope to contribute to the project as soon as possible. Hope you like it too :)

5 Mar 2008 (updated 20 Mar 2008 at 21:15 UTC) »

I was writing a distutils setup script when I ran into a problem defining permissions for the so-called data files. Since I hadn't found anything in the documentation about this, the remaining alternative was to look at the source code (the distutils.command.install_data and distutils.cmd modules) to see how these things were expected to work.

To my surprise, permissions seemed not to be supported by distutils at all, and all data files were always installed in a non-restricted mode, i.e., 0777. Fine, this liberal mode could have been my solution, although I don't like being so limited this way.

But a question remained: although mode 0777 was the default, the data files were still being written with mode 0755 (actually, it depends on the umask of the user running the installer)... so my next step was to dig into the remaining related modules, and I found out that distutils implements its own "makedirs" as distutils.dir_util.mkpath, and this function was completely ignoring its "mode" parameter (the one that defaults to 0777)! Finally I had found the culprit. I went ahead, created a patch and reported an issue on Python's bug tracker: http://bugs.python.org/issue2236.

Hmm, but that didn't fix the problem: even with mkpath honoring its mode parameter, files and directories were still being saved with modes other than 0777. The problem here is more complicated; it seems related to how Python uses the mkdir system call: depending on the compiler directives, it doesn't pass the mode parameter to mkdir... I don't know why it is this way, and I think we have gone too deep -- the Python core developers must have some good reason, and it is out of the scope of this post.

So I went for the (IMO) less attractive solution: extending distutils' install_data command. To do this we need to know a bit about how commands are structured and executed. The idea is quite simple: every distutils command is a Python module under the distutils.command package. There is a module for each command, and each module has a class with the same name, so we do:

from distutils.command.install_data import install_data

class MyInstallData(install_data): pass

Each command class must implement a run() method, which is the place to look to see how the command does its work. For the install_data command, the operations boil down to copying files and creating directories (through the copy_file() and mkpath() methods of the Command superclass). mkpath() was the problem, so that is what needs to be extended:

import os

from distutils.cmd import Command
from distutils.command.install_data import install_data

class MyInstallData(install_data):
    def mkpath(self, name, mode=0777):
        rv = Command.mkpath(self, name, mode)
        # force the requested permissions, which mkdir may have
        # reduced according to the umask
        if not self.dry_run:
            os.chmod(name, mode)
        return rv

When a path is created, I force a chmod to fix its permissions. Problem fixed. Not a pretty solution, but "it works" (tm). Ah! To use our custom install_data command with setup(), we just pass the cmdclass parameter:

setup(
    name="package name",
    cmdclass={"install_data": MyInstallData},
)

That's it.
