Older blog entries for bagder (starting at number 748)

Introducing curl_multi_wait

Facebook contributes a fix to libcurl’s multi interface that overcomes the problem with more than 1024 file descriptors.

When we introduced the multi interface to libcurl about (what feels like) one hundred years ago, we kept things simple in some ways. One way it shows: an application that wants to do many transfers in parallel asks libcurl to do it, and then extracts the set of file descriptors (sockets!) to wait for from libcurl (using curl_multi_fdset) as plain fd_sets. fd_set is the variable type made for select(), so this API choice pretty much forced applications to use select(). select() has its fair share of problems, and possibly the biggest one is that it cannot handle file descriptors > 1024.
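
To illustrate, here is roughly what that classic approach looks like, as a minimal sketch with all error handling left out and the multi handle assumed to be set up already:

#include <curl/curl.h>
#include <sys/select.h>

/* sketch: wait for socket activity the old way, with curl_multi_fdset() + select() */
static void wait_the_old_way(CURLM *multi)
{
  fd_set fdread, fdwrite, fdexcep;
  int maxfd = -1;
  struct timeval timeout = { 1, 0 }; /* wait at most one second */

  FD_ZERO(&fdread);
  FD_ZERO(&fdwrite);
  FD_ZERO(&fdexcep);

  /* extract libcurl's set of sockets into plain fd_sets */
  curl_multi_fdset(multi, &fdread, &fdwrite, &fdexcep, &maxfd);

  /* the catch: select() cannot handle descriptors >= FD_SETSIZE (often 1024) */
  select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout);
}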

Later on we introduced an enhanced version of the multi interface for libcurl that allows an application to use whatever method it pleases. I tend to refer to that variation as the multi_socket API after its main function curl_multi_socket_action. That’s the high performance, event-driven API.

As you may be aware, event-driven code makes things a bit more complicated at times, so many people still prefer to use the older and simpler multi interface and thus remained stuck with select(). But now that era has ended. Now…

curl_multi_wait() is introduced!

This poll(3)-like function basically works as a replacement for curl_multi_fdset() + select(). Starting in libcurl 7.28.0 (strictly speaking in commit de24d7bd4c03ea3), this is a function that any application can use for this purpose, and thus avoid the problem with many file descriptors.

This new function doesn’t use any struct from the “real” poll() or its associated headers, to make sure that it works even on systems without a real poll() implementation. It instead uses curl’s own versions of both the struct and the defines. An application can of course also tell curl_multi_wait to wait on a set of its own private file descriptors, just like with poll() or select().
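
A minimal sketch of a transfer loop using the new function instead (again with error handling omitted and the multi handle with its easy handles assumed to be set up already):

#include <curl/curl.h>

/* sketch: drive all transfers with curl_multi_wait() instead of select() */
static void wait_the_new_way(CURLM *multi)
{
  int still_running = 0;

  curl_multi_perform(multi, &still_running);
  while(still_running) {
    int numfds = 0;

    /* wait at most 1000 ms for activity on any of libcurl's sockets;
       no fd_set is involved, so descriptors above 1024 are no problem */
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);

    curl_multi_perform(multi, &still_running);
  }
}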

The patch set that brought this function was provided by Sara Golemon, a friend from a related project.


Syndicated 2012-09-03 21:13:19 from daniel.haxx.se

ptest because “make test” is insufficient

Thanks largely to autoconf and automake we have an established, more or less standardized way to build and install tools, libraries and other software. We build them with ‘make’ and we install them with ‘make install‘. This works great, and it works equally fine even when we build stuff cross-compiled.

For testing however, the established concept and procedure is not as good. For testing we have ‘make test’ or ‘make check’, which typically first builds whatever needs to get built for the tests to run and then runs all the tests.

This is not good enough

Why? Because in lots of use cases we build software using a cross-compiler on a build system that can’t run the resulting executables. Therefore we need to first build the tests, then install the tests (somewhere reachable from the target system) and finally execute them. These steps need to be possible to run independently, since at least the building and installing will sometimes happen on a different host than the execution of the tests.

Introducing ptest

Within the Yocto project, Björn Stenberg has pushed for ptest to be the basis of this new reform and concept. The responses he’s gotten so far have been positive and there’s a pending updated patch to be posted to the upstream oe-core list soon.

The work does not end there

Even if, or rather when, this gets incorporated into OpenEmbedded and Yocto (and I really think it is a matter of when, since I believe we can work out all the flaws and quirks until virtually everyone involved is happy), the bulk of the changes really should be done upstream, in hundreds and thousands of open source packages. We (as upstream open source projects) need to start doing testing in at least two separate steps, where one step builds everything that needs to be built for the tests and a second step runs the suite. In a cross-compiled scenario the two steps can then be executed first on the host system and then on the target system.
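
As a sketch of what that split could look like for a package (the target names here are made up purely for illustration, they are not an established convention):

# step 1: on the build host, possibly with a cross-compiler: build the tests but don't run them
make buildtest

# step 2: later, on the target system where the binaries can actually execute: run the suite
make runtest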

I expect that this will mean a whole bunch of patches and scripts having to be maintained within OpenEmbedded for a while, as things are gradually merged into upstream projects, and I also foresee that a certain percentage of all projects just won’t accept this new approach and will reject all patches in this vein.

Output format

I think the most controversial part of these suggested “universal” changes is the common test suite output format. The common format is of course required so that we can “supervise” the output and results from any package without having to know any specifics.

While the ptest output format follows the automake test output syntax, I expect many projects that have already selected a particular output format to rather stick with that. Hopefully we can then get such projects to introduce a separate make target or option that runs the test suite with the standard output format.
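
For reference, that automake-style output is a plain line-per-test format, roughly like this (the test names are made up):

PASS: test-connect
FAIL: test-resolve
SKIP: test-ipv6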

One little step forward

Building full-fledged, cross-compiled Linux distributions that are completely tested on target will remain hard work for a while longer. But we are improving things, one step at a time.

Of course, the name ‘ptest’ is what the system is currently called by Björn within the yocto/OE environment. It is not supposed to be a catchy name for this idea outside of there. The ‘P’ refers to package, as opposed to, for example, system test, and to make it less generic than simply ‘test’.

Syndicated 2012-08-31 18:54:29 from daniel.haxx.se

Screen scraping expert witness

This is a slightly edited version of a genuine email I received in May 2012:

Dear Mr. Stenberg -

I recently came across the text you co-authored with Michael Schrenk, Webbots, Spiders, and Screen Scrapers, and was wondering if you might be interested in being a paid expert witness in a lawsuit we’re handling.

One of the major claims in the suit is unauthorized computer access in the form of a massive, multi-year campaign of screen scraping, and we’re looking for a qualified expert who can make the activity make sense to a jury (in the unlikely event that this matter reaches trial; fewer than 2% of cases do, in federal court).

We’re in Los Angeles, California, as is the case (and naturally would cover travel expenses, an hourly or per diem expert witness fee, etc). If you’re interested (or even if you’re not), please let me know? You can reach me via email or at (xyz) xyz-xyzx.

Many thanks,
[withheld]

Link to the book.

I responded to this mail saying that I’d rather not, due to the distance and travel it’d require, but I never heard back from them again, so I have no idea what happened in this case or who got to be the expert in the end…

Syndicated 2012-08-23 20:52:40 from daniel.haxx.se

Fixed name to dynamic IP with CNAME

Notice: this is not advanced nor secret trickery. It is just something I’ve found that even tech-savvy people around me haven’t done, so it’s worth highlighting.

When I upgraded to fiber from ADSL, I had to give up the fixed IPv4 address that I had been using for around 10 years and switch to a dynamic DNS service.

In this situation I see lots of people use dyndns.org and similar services, which dynamically assign your new IP to a fixed host name so that you and your computer-illiterate friends can still reach your home site.

I suggest a minor variation of this, which still avoids having to run your own dyndns server. It only requires that you already administer and control a DNS domain, but who doesn’t these days?

  1. Get a dyndns host name from the free provider, which keeps that name pointing at your current IP. Let’s call it home.dyndns.org.
  2. In your own DNS zone for your domain (example.com) you add an entry for your host ‘home.example.com‘ as a CNAME, pointing over to home.dyndns.org (see the zone file snippet below).
  3. Now you can ping or ssh or whatever to ‘home.example.com‘ and it will keep resolving to your home IP even when it moves.
  4. Smile and keep using that host name in your own domain for your dynamic IP.
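
The CNAME entry from step 2 could look like this in a BIND-style zone file for example.com (the names are of course just the placeholders used above):

; in the example.com zone
home    IN    CNAME    home.dyndns.org.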

Syndicated 2012-08-20 21:58:40 from daniel.haxx.se

The official 2012 Haxx towel

[image: the Haxx towel]

We may not get a very warm and sunny summer this year in Sweden, but now we at least have the Haxx towels to use during the very brief moments on the beach! It measures 140 x 70 centimeters and if you click on the image you’ll get a higher resolution version to browse.

Syndicated 2012-07-26 10:27:26 from daniel.haxx.se

curl ‘techJobs’ SanFran

[image: billboard on the US 101 near SFO, July 2012]

A few days ago I was sent this awesome curl sighting from the US 101 near SFO (click for a slightly larger version): curl on a very large billboard in an ad for Dice.

It is clearly meant to look like a Unix-style command line that invokes curl. The command line would only actually work if the user truly has two hosts named “techJobs” and “SanFran”, in which case curl would attempt to get their root HTTP documents over port 80.

It is followed by that C++/C99-style comment, which really is an odd bit of added context.

To me, all the sign shows is a company that desperately tries to look techy but, by trying too hard, fails really miserably. I’m really honored my work found its way to such a high-profile spot though!

(photo by Stan van de Burgt)

Syndicated 2012-07-20 19:39:48 from daniel.haxx.se

HTTP2 Expression of Interest: curl

For the readers of my blog, this is a copy of what I posted to the httpbis mailing list on July 12th 2012.

Hi,

This is a response to the httpbis call for expressions of interest.

BACKGROUND

I am the project leader and maintainer of the curl project. We are the open source project that makes libcurl, the transfer library, and curl, the command line tool. It is among many things a client-side implementation of HTTP and HTTPS (and some dozen other application layer protocols). libcurl is very portable and there exist around 40 different bindings to libcurl for virtually all languages/environments imaginable. We estimate we might have upwards of 500 million users or so. We’re entirely volunteer driven, without any paid developers or particular company backing.

HTTP/1.1 problems I’d like to see addressed

Pipelining – I can see how something that better deals with increasing bandwidths with stagnated RTT can improve the end users’ experience. It is not easy to implement in a nice manner and provide in a library like ours.

Many connections – to avoid the problems with pipelining and queueing on the connections, many connections are used instead, and it seems like a general waste that can be improved.

HTTP/2.0

We’ve implemented HTTP/1.1 and we intend to continue to implement any and all widely deployed transport layer protocols for data transfers that appear on the Internet. This includes HTTP/2.0 and similar related protocols.

curl has not yet implemented SPDY support, but fully intends to do so. The current plan is to provide SPDY support with the help of spindly, a separate SPDY library project that I lead.

We’ve chosen to support SPDY due to the momentum it has and the multiple existing implementations that A) have multi-company backing and B) prove it to be a truly working concept. SPDY seems to address HTTP’s pipelining and many-connections problems in a decent way that appears to work in reality too. I believe SPDY keeps enough HTTP paradigms for most parties to upgrade to it easily, and yet the ones who can’t or won’t can remain with HTTP/1.1 without too much pain. Also, while Spindly is not production-ready, it has still given me the sense that implementing a SPDY protocol engine is not rocket science and that the existing protocol specs are good.

By relying on external libs for protocol and implementation details, my hope is that we should be able to add support for other potential HTTP/2.0-ish protocols that get deployed and used in the wild. In the curl project we’re unfortunately rarely able to be very pro-active due to the nature of our contributors, which tends to make us follow the rest and implement and go with what others have already decided to go with.

I’m not aware of any competitor to SPDY that is deployed or used to any particular and notable extent on the public internet, so no other ”HTTP/2.0 protocol” has been considered by us. The two protocol details people keep mentioning that speak against SPDY are the SSL and compression requirements, yet I like both of them. I intend to continue to participate in discussions and technical arguments on the ietf-http-wg mailing list on HTTP details for as long as I have time and energy.

HTTP AUTH

curl currently supports Basic, Digest, NTLM and Negotiate for both host and proxy.

Similar to the HTTP protocol, we intend to support any widely adopted authentication protocols. The HOBA, SCRAM and Mutual auth suggestions all seem perfectly doable and fine in my perspective.

However, if there’s no proper logout mechanism provided for HTTP auth, I don’t foresee any particular desire from browser vendors or web site creators to use any of these, just like they don’t use the older ones either to any significant extent. And for automatic (non-browser) uses only, I’m not sure there’s motivation enough to add new auth protocols to HTTP, as at least historically we seem to rarely be able to pull anything through that isn’t pushed for by at least one of the major browsers.

The “updated HTTP auth” work should be kept outside of the HTTP/2.0 work as far as possible, and similar to how RFC 2617 is separate from RFC 2616 it should be this time around too. The auth mechanism should not be too tightly knit to the HTTP protocol.

The day we can cleanse connection-oriented authentications like NTLM from the HTTP world will be a happy day, as its state awareness is a pain to deal with in a generic HTTP library and code.

Syndicated 2012-07-13 17:57:27 from daniel.haxx.se

Three static code analyzers compared

I’m a fan of static code analysis. With the use of fancy scanner tools we can get detailed reports about source code mishaps and quite decently pinpoint which source code is suspicious and may contain bugs. In the old days we used different lint versions, but they were all annoying and very often just puked out far too many warnings and errors to be really useful.

By coincidence I ended up getting analyses done (by helpful volunteers) on the curl 7.26.0 source base with three different tools. An excellent opportunity for me to compare them all and to share the outcome and my insights from this with you, my friends. Perhaps I should add that the analyzed code base is 100% pure C89 compatible C code.

Some general observations

First, each of the three tools detected several issues the other two didn’t spot. I would say this indicates that these tools still have a lot of room to improve, and also that it actually is worth running multiple tools against the same source code for extra precaution.

Secondly, the libcurl source code has some known peculiarities that admittedly are hard for static analyzers to figure out without raising false positives. For example we have several macros that look like functions, and on several platforms and build combinations they evaluate to nothing, which causes dead code to be generated. Another example is that we have several cases of vararg-style functions, and these functions are documented to work in ways that the analyzers don’t always figure out (both clang-analyzer and Coverity show problems with these).
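
As an illustration of the first case (a made-up example of the pattern, not copied from libcurl), a function-like macro that expands to nothing in some build combinations easily makes an analyzer flag dead code:

#include <stdio.h>

#ifdef DEBUGBUILD
#define DEBUGF(x) x
#else
#define DEBUGF(x)              /* expands to nothing in release builds */
#endif

static void example(int fd)
{
  DEBUGF(fprintf(stderr, "checking fd %d\n", fd));
  /* in a release build the line above vanishes entirely, and anything that
     only exists to feed it looks like dead code to the analyzer */
}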

Thirdly, the same lesson we knew from the lint days is still true. Tools that generate too many false positives are really hard to work with, since going through hundreds of issues that after analysis turn out to be nothing makes your eyes sore and your head hurt.

Fortify

The first report I got was done with Fortify. I had heard about this commercial tool before but I had never seen any results from a run; now I have. The report I got was a PDF containing 629 pages listing 1924 possible issues among the 130,000 lines of code in the project.

[image: fortify-curl]

Fortify claimed 843 possible buffer overflows. I quickly got bored trying to find even one that could lead to a problem. It turns out Fortify has a very short attention span and warns very easily about lots of places where a very quick glance by a human tells us there’s nothing to be worried about. Having hundreds and hundreds of these is really tedious and hard to work with.

If we’re kind, we call them all false positives. But sometimes it is more than that; some of the alerts are plain bugs in the tool, like when it warns of a buffer overflow on this line, claiming it may write beyond the buffer. All variables are ‘int’, and as we know sscanf() writes an integer to the passed-in variable for each %d instance.

sscanf(ptr, "%d.%d.%d.%d", &int1, &int2, &int3, &int4);

I ended up finding and correcting two flaws detected with Fortify, both were cases where memory allocation failures weren’t handled properly.

LLVM, clang-analyzer

Given the exact same code base, clang-analyzer reported 62 potential issues. clang is an awesome and free tool. It really stands out in the way it clearly and very descriptively explains exactly how the code is executed and which code paths are selected when it reaches the passage it thinks might be problematic.

[image: clang-analyzer report]

The reports from clang-analyzer are in HTML and there’s a single file for each issue; it generates a nice-looking source code listing with embedded comments about which flow was followed all the way down to the problem. A little snippet from a genuine issue in the curl code is shown in the screenshot I include above.

Coverity

Given the exact same code base, Coverity detected and reported 118 issues. In this case I got the report from a friend as a text file, which I’m sure is just one of its output formats. Similar to Fortify, this is a proprietary tool.

[image: Coverity curl report]

As you can see in the example screenshot, it does provide a rather fancy and descriptive analysis of the exact code flow that leads to the problem it suggests exists in the code. The function referenced in this shot is a very large function with a state machine featuring many states.

Out of the 118 issues, many were actually the same error reached via different code paths. The report made me fix at least 4 genuine problems, and those fixes will probably silence over 20 warnings.

Coverity runs scans on open source code regularly, as I’ve mentioned before, including curl so I’ve appreciated their tool before as well.

Conclusion

From this test of a single source base, I rank them in this order:

  1. Coverity – very accurate reports and few false positives
  2. clang-analyzer – awesome reports, missed slightly too many issues and reported slightly too many false positives
  3. Fortify – the good parts drown in all those hundreds of false positives

Syndicated 2012-07-12 07:14:47 from daniel.haxx.se

curl’s new HTTP cookies docs

Whenever libcurl saves cookies to a file, it saves them in the old “Netscape cookie format”.

Since ages ago, libcurl has written a comment at the top of that cookie file (the file we refer to as the cookie jar in curl lingo) explaining that it is in the Netscape cookie file format, followed by a URL pointing to the original Netscape document describing what cookies are and how they work. To be perfectly honest, we pointed to a local copy of that document, since the original has long since been removed and is now only available through archive.org.

The web page pointed to was never a perfect match, since the page never explained the file format or its use, only the cookie protocol as it was originally described to work, which isn’t even how cookies work today.

Starting with curl 7.27.0, it will write the header to point to another URL: http://curl.haxx.se/docs/http-cookies.html
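
For reference, a cookie jar written by libcurl looks roughly like this: a few comment lines at the top, then one tab-separated line per cookie with domain, a subdomain flag, path, a secure flag, expiry time, name and value (the cookie line below is a made-up example):

# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.

.example.com	TRUE	/	FALSE	1407711459	sessionid	abc123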


Syndicated 2012-07-08 12:48:04 from daniel.haxx.se

Is there a case for a unified SSL front?

There are many, sorry, very many, different SSL libraries today that various programs may want to use. In the open source world at least, it is more and more common that programs using SSL (and other crypto) offer build options to build with at least OpenSSL or GnuTLS, and very often they also offer optional builds with NSS and possibly a few other SSL libraries.

In the curl project we just added support for SSL library number nine. In the libcurl source code we have an internal API that each SSL library backend must provide, and all the libcurl source code internally uses only that single, fixed API to do SSL and crypto operations without even knowing which backend library is actually providing the functionality. I talked about libcurl’s internal SSL API before, and I asked about this on the libcurl list back in Feb 2011.

So, a common problem should be able to find a common solution. What if we fixed this in a way that would be possible for many projects to re-use? What if one project’s ability to select from 9 different provider libraries could be leveraged by others? A single, simplified SSL API that still provides the functionality most “simple” SSL-using applications need?
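
Purely as a thought experiment (none of these names exist anywhere; they merely illustrate the rough size of API I have in mind), such a unified front could be as small as something like this:

#include <stddef.h>
#include <sys/types.h>

/* hypothetical unified SSL API - every backend would implement these */
typedef struct ssl_session ssl_session;   /* opaque, backend-specific state */

int          ssl_global_init(void);
ssl_session *ssl_connect(int sockfd, const char *hostname, int verify_peer);
ssize_t      ssl_send(ssl_session *s, const void *buf, size_t len);
ssize_t      ssl_recv(ssl_session *s, void *buf, size_t len);
void         ssl_close(ssl_session *s);
void         ssl_global_cleanup(void);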

Marc Hörsken and I have discussed this a bit, and pidgin/libpurple came up as a possible contender that could use such a single SSL library. I’ve also talked about it with Claes Jakobsson and Magnus Hagander for PostgreSQL, and I know from before that wget certainly could use it. When I’ve given my talks on the seven SSL libraries of libcurl I’ve been approached by several people who have expressed a desire to see such an externalized API, and I remember the guys from CyaSSL among them. I’m sure there will be a few other interested parties as well if this takes off.

What remains to be answered is if it is possible to make it reality in a decent way.

Upsides:

  1. will reduce code from the libcurl code base (by 10K or more lines of C code), and should mean a decreased amount of “own” code for all projects that decide to use this single SSL library
  2. will allow other projects to use one out of many SSL libraries, hopefully benefiting end users as a result
  3. should get more people involved in the code as more projects would use it, hopefully ending up in better tested and polished source code

There are several downsides with a unified library, from the angle of curl/libcurl and myself:

  1. it will undoubtedly lead to more code being added and implemented that curl/libcurl won’t need or use, thus growing the code base and the final binary
  2. the unified library’s API can’t be as tightly integrated with the libcurl internals as today’s internal API is, so some added code and logic will be needed
  3. it will require that we produce a lot of documentation for all the functions and structs in the API, which takes time and effort
  4. the above-mentioned points will take time away from my other projects (hopefully to the benefit of others, but still)

Some problems that would have to be dealt with:

  1. how to deal with libraries that don’t provide functionality that the single SSL API does
  2. how to draw the line of what functionality to offer in the API, as the individual libraries will always provide richer and more complete APIs
  3. What is actually needed to get the work on this started for real? And if we do, what would we call the project…

PS, I know the term is more accurately “TLS” these days but somehow people (me included) seem to have gotten stuck with the word SSL to cover both SSL and TLS…

Syndicated 2012-07-04 14:12:42 from daniel.haxx.se

