I'm surprised that no one else blogged about this yet. Hopefully someone else who was there will say something soon.
p.s. while I have your attention: test your software!
A Pronouncement at SciPy '06: the BDFL just pronounced Django to be The Python Web Framework.
He was quick to say that it wouldn't be included in the Python library (because the development lifecycle was so different) and that this was something he was unofficially promoting, but still...
I've been travelling, and now I'm under the gun for some books and my thesis, so who knows whether or not I'll be able to blog much anytime soon. Nonetheless, here's a brief update on various things.
If you're interested in trying out the code coverage tool I've been working on, you can download 'figleaf' at my projects page.
It works well, and it can output to HTML.
Hopefully I'll get a chance to work on it more next week...
scotch, the recording HTTP proxy tool I've been working on, is proving to be useful for debugging twill problems. Here is a recipe for figuring out what is different between Web requests executed by twill and Web requests executed by your browser, using scotch.
The future of astrophysics
For those of you with Science magazine subscriptions, there was an interesting article on Hans Bethe and the future of astrophysics in a recent issue. It mentions my dad.
Two weeks ago, I went to the Woods Hole Embryology course (which I attended last summer) and TAed a section on gene regulatory networks and sea urchin development. I gave a lecture on finding and analyzing cis-regulatory regions computationally; as part of the lecture, I wrote up a guide to computational cis-regulatory analysis. It's intended for biologists, but it might be useful to bioinformaticians, too.
A Java coder admits to liking Python: Simon sez. But... if the problem is complex, shouldn't the tools you use to solve the problem also be complex?
Here's a (recently rare) foray into politics for ya, just to keep you on your toes...
In response to a post to IP by Hiawatha Bray, I offered my reasons for disliking both the Bush Administration's approach to the War on Terror, and the naive "Golden Rule"-ism of some liberals. (Incidentally, I count myself as a liberal in these matters.)
Charlie Stross (a fantastic sci-fi writer) weighed in on the absurd "asymmetric warfare" statement made about the four Gitmo suicides.
Authoring for O'Reilly
The umbrella topic is 'Testing Stuff', and a brief list of individual topics we hope to cover includes: Intro to Web Testing (covering twill & intro Selenium); Advanced Web Testing (some twill, lots more Selenium); Unit Testing in Python; Continuous Integration with buildbot; Python Testing with FitNesse; and perhaps more. Each book will cover a discrete chunk of material, and we hope the series will pull the individual books together into a whole, as well.
The first two, together with a (free) introductory book on testing, should be available within 3 months.
(And remember, Gentlemen prefer PDFs!)
When last we met, I announced a rewrite of coverage.py. Since then, I realized that my tokenize-based method of extracting interesting lines of code ... didn't work. There were several situations where lines of code simply wouldn't be counted, because there wasn't enough context to determine whether or not they were actual expressions. Specifically, this kind of code broke the parser:
def f( a = (lambda x: x + 1)(1), b = (lambda y: y * 2) ): pass
After flailing a bit, I realized that you really needed the AST to properly determine what lines of code are worth counting. This realization was helped by the fact that the sys.settrace 'line' tracing function is only called on the 'lambda' and 'def' lines, above, and not on the 'a=' and 'b=' code. (Kudos to Ned Batchelder for including so many nasty evil tests with coverage.py -- I just stole his code. ;)
I delved into coverage.py and confirmed my suspicion that some nasty AST visitation was occurring, using code based on the 'compiler' package. Moreover, coverage.py used code way beyond me to determine what was actually an executable line of code... and then did even more clever things to count that code even when the sys.settrace function didn't hit it.
Now, my rule is, if it's too complicated for me to understand, it shouldn't be in software I write. So I set myself a new goal: make a really really simple coverage-measuring utility that (a) only counts lines that Python actually "executes" (as measured by sys.settrace); and (b) I can understand.
In the process of working on implementing this with the parser module, I discovered a few amusing details about Python. First: can you guess which lines of code are "executed" in the following?
def f():
    "a"
    5
    "b"
    6
Well, in a bit of a surprise to me, it turns out that only the numbers are counted:
>- def f():
       "a"
>-     5
       "b"
>-     6
(where '>-' marks executed lines). Yep, only the numbers, not the strings! There are two reasons for this, I think: one is that each number is actually a numerical expression, to be evaluated and replaced by its value, while strings are just literals; and the other is that this way docstrings aren't counted.
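Experiments like this are easy to reproduce. Here's a small sketch (the function and variable names are mine, not figleaf's) that runs a function under sys.settrace and records which line offsets fire 'line' events:

```python
import sys

def trace_line_offsets(fn):
    """Run fn() under sys.settrace; return the set of line offsets
    (relative to fn's 'def' line) that fire 'line' events."""
    code = fn.__code__
    executed = set()

    def tracer(frame, event, arg):
        # Only trace the frame belonging to fn itself.
        if frame.f_code is code:
            if event == 'line':
                executed.add(frame.f_lineno - code.co_firstlineno)
            return tracer
        return None

    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return executed

def example():
    a = 1            # offset 1
    "a bare string"  # offset 2
    b = 2            # offset 3
    return a + b     # offset 4

print(sorted(trace_line_offsets(example)))
```

(Exactly which bare constant statements fire line events varies across Python versions, since the compiler may optimize them away entirely; the assignments and the return will always show up.)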
I'm also a bit surprised by some aspects of the AST that is generated. For example, here's what my AST pretty-printer outputs for the number "5", all alone in a file:
file_input
  stmt
    simple_stmt
      small_stmt
        expr_stmt
          testlist
            test
              and_test
                not_test
                  comparison
                    expr
                      xor_expr
                        and_expr
                          shift_expr
                            arith_expr
                              term
                                factor
                                  power
                                    atom
                                      NUMBER ('5', 1)
      NEWLINE ('', 1)
  NEWLINE ('', 1)
  ENDMARKER ('', 1)
Is this really a necessary part of the AST?
I clearly need to read up on this more ;).
The last thing I did was build in an optimization: coverage.py uses a global trace function that is continually reassigned to the local trace function by calls into new code blocks. This means that all Python code is traced. Figuring that this would be kind of a speed drain, I separated out the logic into a global trace function that only set the local trace function on a call into interesting code, where "interesting" could be specified by the user. In other words, rather than tracing coverage on everything, only code executing in user-specified modules would be traced.
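The trick relies on a documented detail of sys.settrace: if the global trace function returns None for a frame's 'call' event, CPython never fires line events for that frame at all. Here's a minimal sketch of the idea (class and method names are my own invention, not figleaf's actual API):

```python
import sys

class CoverageCollector:
    """Sketch of 'only trace interesting code': the user supplies a
    filename -> bool predicate; frames whose code fails the predicate
    are never line-traced."""

    def __init__(self, should_trace):
        self.should_trace = should_trace
        self.lines = set()          # (filename, lineno) pairs

    def global_trace(self, frame, event, arg):
        # Called once per function call.  Returning None here tells
        # CPython to skip line events for this frame entirely.
        if event == 'call' and self.should_trace(frame.f_code.co_filename):
            return self.local_trace
        return None

    def local_trace(self, frame, event, arg):
        if event == 'line':
            self.lines.add((frame.f_code.co_filename, frame.f_lineno))
        return self.local_trace

    def run(self, fn):
        sys.settrace(self.global_trace)
        try:
            return fn()
        finally:
            sys.settrace(None)

def demo():
    total = 0
    for i in range(3):
        total += i
    return total
```

With `should_trace=lambda f: True` every line of demo() is recorded; with `lambda f: False`, nothing is, and the uninteresting code runs at nearly full speed.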
The early results are pretty positive. With the usual caveats about naive benchmarking -- which this certainly is -- I found the following times for running the twill tests in nose:
Naively, it looks like I get a ~20% speedup from switching to my naive AST implementation, and I get another ~25% speedup from only looking at local code. Neat, huh? (Sadly, you still lose a factor of 2 because of the code coverage!)
Here's my (hideously ugly and un-re-factored, yes) implementation of a class to turn code into "interesting line numbers":
import parser
import symbol
import token
import types
import sets          # the pre-2.4 'sets' module; this is Python 2 code

class _LineGrabber:
    def __init__(self, fp):
        self.lines = sets.Set()

        ast = parser.suite(fp.read())
        tree = parser.ast2tuple(ast, True)   # True => include line numbers

        self.find_terminal_nodes(tree)

    def find_terminal_nodes(self, tup):
        """
        Recursively eat an AST in tuple form, finding the first line
        number for "interesting" code.
        """
        (sym, rest) = tup[0], tup[1:]

        line_nos = []
        if type(rest[0]) == types.TupleType:        ### node
            for x in rest:
                min_line_no = self.find_terminal_nodes(x)
                if min_line_no is not None:
                    line_nos.append(min_line_no)

            if symbol.sym_name[sym] in ('stmt', 'suite', 'lambdef',
                                        'except_clause') and line_nos:
                # store the line number that this statement started at
                self.lines.add(min(line_nos))
        else:                                       ### leaf
            if sym not in (token.NEWLINE, token.STRING,
                           token.INDENT, token.DEDENT):
                return tup[2]   # the leaf's line number

        if line_nos:
            return min(line_nos)

## use like so: lines = _LineGrabber(open(filename)).lines
I will be eternally grateful to anyone who points out why this is a stupid way to do things, and/or can improve the logic. (I already know it's unmaintainable.)
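For comparison, the stdlib ast module (which didn't exist yet when this was written; it arrived in Python 2.6) makes the same "first line of each statement" idea almost trivial. This is a sketch of the equivalent logic, not figleaf's actual code:

```python
import ast

def interesting_lines(source):
    """Return the set of starting line numbers of every statement
    node in the parsed source."""
    lines = set()
    for node in ast.walk(ast.parse(source)):
        # Every statement node subclasses ast.stmt and carries lineno.
        if isinstance(node, ast.stmt):
            lines.add(node.lineno)
    return lines
```

Because ast gives you an abstract syntax tree rather than the parser module's concrete one, there's no eighteen-deep chain of grammar productions to wade through.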
I'll post the full figleaf module sometime soon; right now it's too dangerous to let loose on the Internet. If you're willing to handle such dangerous material, just drop me a line.
For some reason I hadn't heard this yet -- Robert Jordan has amyloidosis!
Max discovered pinocchio, my package of nose extensions, and then pointed out that I hadn't put any version info or contact info on my page for it. Oops. Fixed. Sorry 'bout that!
I've been using Ned Batchelder's code coverage module for a while now, and it's been great. We used a slightly hacked version for the agile testing tutorial, and now I need to do even more hacking on it.
I decided that rather than serially refactoring the code I'd swipe a few of the clever bits and do a complete rewrite. This effectively makes it a complete fork. I decided upon this tack because in my previous hacking I spent a lot of time struggling with the basic design of the module, and while the clever bits are pretty isolated and portable, the rest -- path munging, option handling, etc. -- is what I want to change in the first place.
Of course, immediately after deciding to steal some of the code, I ended up rewriting most of it. Sigh.
One of the main clever bits in coverage.py was the AST traversal code that decided which statements were potentially executable; this section used the compiler module. I'd heard somewhere that this module was deprecated, or unreliable, so I looked for some alternatives.
I put in some work on it last night, and arrived at the following function to extract interesting lines of code using tokenize:
import token
import tokenize
import sets          # the pre-2.4 'sets' module; this is Python 2 code

class _TokeneaterObj:
    def __init__(self):
        self.lines = sets.Set()
        self.start_line = None
        self.ignore = (tokenize.COMMENT, token.NEWLINE, token.INDENT,
                       token.DEDENT, token.ENDMARKER, tokenize.NL,
                       token.STRING)

    def tokeneater(self, *a):
        token_type, s, (srow, scol), (erow, ecol), logical_line = a

        if token_type == token.NEWLINE:
            # end of a logical line: record where it started.
            if self.start_line is not None:
                self.lines.add(self.start_line)
            self.start_line = None
        elif token_type not in self.ignore:
            # first interesting token on this logical line.
            if self.start_line is None:
                self.start_line = srow

def get_lines(fp):
    t = _TokeneaterObj()
    tokenize.tokenize(fp.readline, t.tokeneater)
    return t.lines
I don't know if this will be a good choice, long term. I have to write some tests... Any better ideas? (Let me know.)
My goals in this rewrite are a better interface for large projects & simplified filename handling. Switching to using sets and tokenize may be simple side-benefits, or perhaps costly diversions ;).
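For anyone trying the same approach on a modern Python, here's a sketch of the identical logic against today's tokenize API (sets.Set and the tokeneater callback are long gone; this is my translation, not the original code):

```python
import io
import token
import tokenize

# Token types that never make a line "interesting".
IGNORE = {tokenize.COMMENT, tokenize.NL, token.NEWLINE,
          token.INDENT, token.DEDENT, token.ENDMARKER, token.STRING}

def get_lines(source):
    """Return the set of starting line numbers of logical lines
    containing at least one interesting token."""
    lines = set()
    start_line = None
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == token.NEWLINE:
            # end of a logical line: record where it started.
            if start_line is not None:
                lines.add(start_line)
            start_line = None
        elif tok.type not in IGNORE:
            if start_line is None:
                start_line = tok.start[0]
    return lines
```

For example, `get_lines("x = 1\n\n# comment\ny = 2\n")` skips the blank line and the comment and reports only lines 1 and 4.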
And then, the eternal dilemma -- what should I call it? Grig, my inane bozo of a friend, suggested 'figleaf'. I like it. (Runners up were 'blanket' (Diane) and 'wet blanket' (me).)
I finally released a new version of twill, the Web scripting & testing language. This one is pretty solid, as far as I can tell; relatively few bug reports over a 2 month period. Between now and the (fairly distant) 0.9 beta release, I expect to make a number of changes to the underlying implementation, but I think the API and command-line usage is pretty stable.
In other twill news, I've moved the twill Web site to twill.idyll.org, and created a Trac site at twill.idyll.org/trac/. Check out the twill 0.9 milestone! The Trac site is intended for Wiki info, tickets, and milestones related to not only twill but also scotch and wsgi_intercept. It's not linked into the source code for the three projects because they're all in separate darcs repositories.
Let them eat Kwalitee!
I'm a fan of Grig's Cheesecake project, not least because we're both SoCal Piggies. The project aims to provide a single score representing how well a Python project is packaged. He's gotten some interestingly negative comments about the project as part of the Google SoC wrangling, and I feel obliged to comment on them.
The two comments that I disagree with the most are these: first, that this will lead to an era of pseudo-fascism on PyPI, with people 'endlessly tweaking' their Python packages to get a better Cheesecake score; and second, that unit testing is not applicable to a fairly large subset of the projects out there.
In response to the first comment: I just don't see it happening. I do expect many projects to work a bit to provide a README, a working setup.py, and the various other files. Perhaps they'll even toss in some unit tests. That's all to the good; right now I'm not aware of a single place that documents what should go in a Python package, and if you install a lot of Python software you probably believe there should be one. (Grig?) Nonetheless, just the act of attaching a score to a package isn't going to make people devote an excessive amount of time to raising that score. Perhaps if Grig were giving out ice cream to the top 10 percent of scores -- but he's not.
In response to the second comment, there's a widespread misunderstanding about unit tests that seems to crop up when people first implement them. Unit tests are not about anything external to your code. They're all about making sure that your code works, and that your code stays working. As soon as you start talking about unit testing graphics, or video, or the Web, or your database API, you're actually shifting to discuss what are known as "functional" or "integration" tests. These can run in a unit test framework, but they are not "unit tests". (If you think I'm trying to redefine "unit test", go read Kent Beck's original writings on this stuff.) So in practice all code is unit testable, and I'd be willing to bet that over 95% of the packages out there could have useful unit tests.
Anyway, that's my 2 cents -- exactly what my opinion is worth ;).
richdawe mentions an SMTP shell (e.g. twill with an SMTP extension ;), and then there's TestableEmailer. Cool stuff.
cinamod, sounds like you're having too much fun. Let me know if you show up in LA sometime and want to check out the pickup scene here.
I've spent much of the last week arguing with nose, Jason Pellerin's unit testing framework for Python.
The fruits of that labor?
First, an extended introductory article and associated demo code, introducing, demonstrating, and discussing many nose features. (It's still a bit of a rough draft, folks. Send comments.)
Second, the pinocchio project. Yep, nose extensions. (Aren't I cute?) This adds 'stopwatch' and 'decorator' extensions to nose.
Putting wsgiref in the stdlib
Ever wonder how untested modules with non-standard interfaces and little documentation get into the stdlib?
Wonder no more.
No response from PJE.
Today I noticed that the bug I pointed out had been fixed. Neither of the e-mails was answered.
I posted publicly, I sent a private e-mail. What more should I do? I only got irritated when I saw the checkin fixing the bug without any acknowledgement of the other issues raised. <snark>Well, I guess once you've got Guido's OK, you don't need to listen to anyone else, right?</snark>
I'd be less irritated if the barrier to fixing problems once modules are in the stdlib wasn't so high. wsgiref will become effectively immutable -- overcomplicated constructor and all -- once it's integrated. That's presumably why GvR asked for comments, yeh?
Oh, well. Phillip -- if you actually want any contributions from me for wsgiref, you're going to need to answer my questions. I don't fancy writing documentation for an interface that could change, and I won't exactly enjoy bug testing your code in the future if I'm going to get the silent treatment for having the temerity to ask questions. (I'm not bucking for an apology here -- there's nothing to apologize for. Just be a member of the community, please.)
A bunch o' miscellaneous links
No new comments on scotch, but I did locate and write down a bunch of other Python-relevant HTTP recorders & proxies. Pound is particularly interesting.
apenwarr has an interesting parable on testing.
Busking and educating the police about the law. Priceless.
Domain Specific languages rock. Even in Ruby ;) ;).
autotest. Apart from the amusing quote about "not needing to open a Web browser to test" -- well, first of all, neither do I, and second of all, how much do you want to bet your site doesn't actually work the way you think it does? -- this autotest phenom sounds interesting. (py.test supports similar behavior.) Might be time to hack it into nose...
First on the list of things I didn't think would ever work -- crowdsourcing R&D!? Way cool.
scotch, a WSGI-based HTTP recording proxy
I finally wrote up some preliminary docs on scotch, a project I first wrote about yesterday. scotch is my solution for recording twill scripts, as well as tracking AJAX Web calls and doing general Web site regression testing. The scotch examples page is probably the place to start, although the front page is more conversational. There are also some simple code recipes that demonstrate the potential. (You can grab scotch at the usual place.)
I had a nice e-mail conversation with Ben Bangert about the possibility of using scotch for more clever twill script making. It's always nice to have people grok the tool you just wrote ;).
all about twisted
Glyph Lefkowitz posted a link to this interesting paper on twisted. Old paper, but still good, I think.