bagder is currently certified at Master level.

Name: Daniel Stenberg
Member since: 2000-05-10 09:34:05
Last Login: 2009-12-04 19:23:29




My blog is on



Recent blog entries by bagder


copy as curl

Using curl to perform an operation that a user just managed to do in the browser is one of the more common requests and one of the areas people most often ask for help with.

How do you get a curl command line to get a resource, just like the browser would get it, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

Load the site with Firefox’s network tools open. In the “Web Developer->Network” tool, when you see the HTTP traffic, right-click the specific request you want to repeat and select “Copy as cURL” in the menu that appears. That puts a generated curl command line on your clipboard, which you can then paste into your favorite shell window. This feature is available by default in all Firefox installations.
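
The exact command depends on the request of course, but the generated command line tends to look something like this (the URL, headers and cookie here are made up purely for illustration):

    curl 'https://example.com/' \
      -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0' \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Cookie: session=abc123' \
      --compressed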


From Chrome

Open “More tools->Developer tools” in Chrome and select the Network tab to see the HTTP traffic used to get the resources of the site. Right-click the line of the specific resource you’re interested in, select “Copy as cURL”, and it’ll put a generated command line on your clipboard. Paste that into a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations.
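
Once pasted, it is just an ordinary curl invocation, so you can edit it before running it – for instance dropping headers you don’t care about, saving the response to a file and turning on verbose output (made-up URL again; -o and -v are standard curl options):

    curl 'https://example.com/image.png' \
      -H 'Cookie: session=abc123' \
      -o image.png -v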


On Firefox, without using the devtools

If this is something you’d like to do more often, you probably find it a bit inconvenient and cumbersome to pop up the developer tools just to get a command line copied. Then cliget is the perfect add-on for you, as it adds a new option to the right-click menu, so you can get a command line generated really quickly, for example when right-clicking an image in Firefox.


Syndicated 2015-11-23 07:46:25 from

This post was not bought

At times I post blog articles that push the view counter up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or their businesses.

I also get the simpler offers of adding random ads or “text only information” to specific individual pages on my sites, pages that some SEO person out there figured could attract an audience searching for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud so possibly for a good cause, but still…

This is by no means a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get, and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions expressed are my own – even if of course you may not agree with me, and I may make mistakes and be completely wrong at times or even many times. You can rest assured that I made those mistakes on my own and that I was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing privacy intrusions and optimizing the content: what gets downloaded from my sites is what your browser needs to render the pages. No heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary, and I work on several projects that are dear to me, so of course I will show bias on some subjects. I don’t claim to have an objective view on things and I don’t even try to have that. When I write posts here, they come colored by my background and by what I am.

Syndicated 2015-11-20 08:28:26 from

The most popular curl download – by a malware

During October 2015 the curl web site sent out 1127 gigabytes of data. This was the first time we crossed the terabyte limit within a single month.

Looking at the stats a little closer, I noticed that in July 2015 a particular single package started to get very popular. The exact URL was

Curious. In October it alone was downloaded more than 300,000 times, accounting for over 70% of the site’s bandwidth. Why?

The downloads came from what appear to be different locations. They didn’t send any HTTP referer headers and they used different User-agent headers. It didn’t look like a search bot gone haywire or a malicious robot stuck in some crazy mode.

After I shared some of this data over in our IRC channel (#curl on freenode), Björn Stenberg stumbled over this AVG slide set, which describes how a particular malware works when it infects a computer. Downloading that particular file is a step in its procedure to create a trojan that will run on the host system – see slide 11 for the curl details. The slides also mention that an updated version of the malware comes bundled with the curl library already, so I guess the hits we see on the curl site come from the older versions still being run.

Of course, we can’t be completely sure this is the source of the increased downloads of this particular file, but it seems highly likely.

I renamed the file just now to see what happens.

Evil use of good code

We can of course not prevent evil uses of our code. We provide source code, and we even host some binaries of curl and libcurl, and both good and bad actors are able to take advantage of those offers.

This rename won’t prevent a dedicated hacker, but hopefully it can prevent a few new victims from getting this malware running on their machines.

Syndicated 2015-11-16 11:43:18 from

TCP tuning for HTTP

I’m the author of a brand new internet-draft that I submitted just the other day. The title is TCP Tuning for HTTP, and the intent is to gather a set of current best practices for HTTP implementers – clients, servers and intermediaries – and to share and distribute the knowledge we’ve gathered over the years, for HTTP/1.1 as well as HTTP/2.
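
Just to give a flavor of the kind of advice a document like this collects, here are two Linux knobs of that sort (shown purely as an illustration and not as the draft’s actual recommendations; the gateway address and interface name are placeholders):

    # keep the congestion window from shrinking back to its initial size
    # after an idle period, so a reused keep-alive connection stays fast
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    # raise the initial congestion window on the default route, letting the
    # first response rounds carry more data before waiting for ACKs
    ip route change default via 192.0.2.1 dev eth0 initcwnd 10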

I’m now awaiting, expecting and looking forward to feedback, criticisms and additional content for this document so that it can become the resource I’d like it to be.

How to contribute to this?

  1. ideally, send your feedback to the HTTPbis mailing list,
  2. or submit an issue or pull-request on GitHub for the
  3. or simply email me your comments: daniel <at>

I’ve been participating within the IETF over the years, first passively and then more and more actively, mostly in the HTTPbis working group. I think open protocols and open standards are important and I like being part of making them reality. I have the utmost respect and admiration for those who are involved in putting the RFCs together and thus improve the world we live in, step by step.

For a long while I’ve been wanting to step up and “pull my weight” too, to become a better participant in this area, and I’m happy to now finally take this step. Hopefully this is just the first step of many more to come.

(Psssst: While gathering feedback and updating the git version, the current work in progress version of the draft is always visible here.)

Syndicated 2015-11-07 23:17:07 from

h2 performance at Velocity NYC

Tuesday October 13th 2015 I co-presented a talk at the Velocity conference in NYC together with Ragnar Lönn of Loadimpact. Ragnar is a friend of mine and another Swede.

The presentation was split into two parts: I laid out the foundations of HTTP/2 in the first part, and Ragnar then presented the results of his performance study in the second.

I think an interesting takeaway from the study is the following.

Existing sites usually have a lot of resources that need to be downloaded. An average site has around one hundred now, and the number keeps increasing. Those resources often have dependencies or trigger subsequent transfers: an HTML file gets parsed, then a CSS file is downloaded, and once the CSS is downloaded it gets parsed and the images specified in it are downloaded. It easily gets even more “steps” like that when downloading javascript that triggers more javascript, which renders parts of the page and causes more resources to get downloaded.
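
As a made-up sketch of such a dependency chain, each transfer can only start once the previous resource has been downloaded and parsed:

    curl -O https://example.com/index.html   # parsed; references style.css
    curl -O https://example.com/style.css    # parsed; references hero.png
    curl -O https://example.com/hero.png     # the end of this particular chain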


Nothing new there, right? But when switching a site like that over to HTTP/2, the performance gain will be capped at a certain percentage no matter how much latency you have to the site, because what limits such a site’s performance is the time it takes to get to the end of the slowest “dependency chain”. It is less of an issue with HTTP/1.1, since if the resources are from the same site, browsers won’t do more than 6 requests in parallel anyway (on the 6 separate TCP connections they’ll use).

It becomes evident that in order to make such a site really benefit from HTTP/2, the site would have to be modified ever so slightly so that it would deliver its contents with shorter chains and allow the browsers to get more of the resources earlier, in parallel rather than serially.

The actual talk

Splitting a presentation into two parts with two speakers is more difficult than doing it all yourself. I think we did a decent job, and we ended the presentation early. That enabled us to answer a lot of questions, and we were actually quite bombarded with them – all relevant and well considered, and I think we managed to bring more to the room thanks to them. A lot of the questions were about more generic HTTP/2 and deployment topics though, and not all exactly about the performance study of the presentation.

The audience gave us an average score of 3.74 out of 5. Not too shabby. The room seated 360 persons but it wasn’t completely filled up.

Syndicated 2015-10-21 07:42:21 from



bagder certified others as follows:

  • bagder certified shughes as Journeyer
  • bagder certified andrei as Master
  • bagder certified kbob as Apprentice
  • bagder certified mbp as Master
  • bagder certified sussman as Journeyer
  • bagder certified mpawlo as Apprentice
  • bagder certified BrucePerens as Master
  • bagder certified rmk as Master
  • bagder certified Fefe as Journeyer
  • bagder certified gstein as Master
  • bagder certified robey as Master
  • bagder certified edd as Journeyer
  • bagder certified ask as Journeyer
  • bagder certified joe as Master
  • bagder certified alan as Master
  • bagder certified pawal as Apprentice
  • bagder certified stone as Apprentice
  • bagder certified sej as Journeyer
  • bagder certified fxn as Apprentice
  • bagder certified forrest as Apprentice
  • bagder certified wsanchez as Master
  • bagder certified zagor as Journeyer
  • bagder certified ben as Master
  • bagder certified kfogel as Master
  • bagder certified orabidoo as Master
  • bagder certified linas as Master
  • bagder certified jas as Master

Others have certified bagder as follows:

  • ib certified bagder as Master
  • chipx86 certified bagder as Master
  • rupert certified bagder as Master
  • larsu certified bagder as Master
  • mvw certified bagder as Journeyer
  • neurogato certified bagder as Journeyer
  • whytheluckystiff certified bagder as Master
  • andrei certified bagder as Journeyer
  • jbowman certified bagder as Journeyer
  • alexr certified bagder as Journeyer
  • pretzelgod certified bagder as Journeyer
  • thallgren certified bagder as Journeyer
  • execve certified bagder as Master
  • pelleb certified bagder as Master
  • GJF certified bagder as Master
  • kroah certified bagder as Master
  • jooon certified bagder as Master
  • nixnut certified bagder as Journeyer
  • jLoki certified bagder as Master
  • mpawlo certified bagder as Journeyer
  • technik certified bagder as Master
  • highgeek certified bagder as Journeyer
  • Stab certified bagder as Master
  • TheCorruptor certified bagder as Master
  • sethcohn certified bagder as Master
  • elho certified bagder as Master
  • monkeyiq certified bagder as Master
  • ebf certified bagder as Master
  • jmg certified bagder as Master
  • robey certified bagder as Master
  • edd certified bagder as Master
  • jbontje certified bagder as Journeyer
  • khazad certified bagder as Master
  • walken certified bagder as Master
  • ask certified bagder as Journeyer
  • xsa certified bagder as Master
  • shlomif certified bagder as Master
  • pawal certified bagder as Master
  • CarloK certified bagder as Master
  • stone certified bagder as Master
  • lerdsuwa certified bagder as Master
  • sej certified bagder as Master
  • nny certified bagder as Journeyer
  • ks certified bagder as Journeyer
  • fxn certified bagder as Master
  • jono certified bagder as Master
  • forrest certified bagder as Master
  • rw2 certified bagder as Master
  • wsanchez certified bagder as Master
  • dlc certified bagder as Journeyer
  • zagor certified bagder as Journeyer
  • ncm certified bagder as Master
  • redi certified bagder as Master
  • ianweller certified bagder as Master
  • bwy certified bagder as Master
  • badvogato certified bagder as Journeyer
  • mnot certified bagder as Master
  • Limbus certified bagder as Master

