I just saw a dude order a 4-shot red eye.
Lies, d… oh, forget it
According to a recent Linux Foundation study, Cisco is already contributing to Linux and currently represents 0.5 percent of changes (which is a good number). I would expect that with the AXP in the market, Cisco’s contribution rate will go up.
Now, I don’t work on AXP or anything related to ISRs, so I have no idea what those groups plan to do with respect to Linux, but it was somewhat amusing to see the Linux Foundation report cited to show how much work Cisco does on the kernel.
This isn’t the first time I’ve seen this study cited to show what a kernel development powerhouse Cisco is. In the report, Cisco is credited with 442 commits to the kernel; however, more than 400 of those commits are mine, and about 30 are Don Fry maintaining the pcnet32 net driver. So if you take away my work on InfiniBand/RDMA, Cisco’s contributions to the Linux kernel are pretty minimal.
I’m not sure if I have much of a point except that I wish we really did have more than one or two isolated developers at Cisco really engaged with the upstream kernel.
He who divides and shares is left with the best share
I’ve been talking to a lot of people about the “iWARP port sharing problem” lately, so I thought it might be a good idea to write a quick summary to point at and bring new people up to speed without constantly repeating myself.
To start with, iWARP is an RDMA (remote direct memory access) protocol that runs over TCP (or conceivably SCTP or any other stream protocol). It was defined by the IETF rddp working group, and the standard is in RFC 5040 and later RFCs. So what’s so great about RDMA?
The rationale for RDMA is laid out in great detail in RFC 4297, but the basic idea is that allowing network messages to carry information about where they should be received and allowing the NIC to place the data directly in that buffer allows fundamentally better performance.
To take a concrete example, think of iSCSI: an initiator sends a bunch of SCSI commands to a target (probably queuing up multiple commands), and the target processes the commands, possibly out of order, and returns the responses to the initiator. Without RDMA (or at least, without “direct data placement,” which is pretty much equivalent to RDMA), for each read that the initiator does, it has to receive the data from the target, look at which command the data corresponds to, and copy it into the buffer where the SCSI midlayer wants it. With RDMA and the “iSCSI Extensions for RDMA” (iSER, RFC 5046), the target can send the data in response to a read command and have it placed directly into the receive buffer on the initiator, which saves the copy and uses a third of the memory bandwidth (a huge win when the data is coming in at 10 Gb/sec).

In the SCSI world, this is nothing particularly exciting: pretty much every Fibre Channel HBA in the world already does the equivalent thing. What’s cool about iWARP is that it allows similar optimizations for NFS (the IETF nfsv4 working group is defining a standard for NFS/RDMA, and kernel 2.6.24-rc1 already has the client side of this draft protocol merged) as well as for other applications that we haven’t thought of yet.
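The difference between the two receive paths can be sketched in a few lines of Python. The steering-tag and buffer-table names here are invented purely for illustration; a real RDMA NIC does all of this in hardware:

```python
# Toy sketch of "direct data placement" (not real iWARP): if a message
# carries a steering tag naming a registered buffer, the receiver can
# place the payload exactly where the application wants it, instead of
# landing it in an anonymous buffer and copying it. Names are made up.

app_buffer = bytearray(16)            # buffer the application registered
registered = {0x1234: app_buffer}     # steering tag -> registered buffer

def receive_with_copy(payload):
    """Non-RDMA path: write to a staging buffer, then copy it over."""
    staging = bytearray(len(payload))
    staging[:] = payload                     # pass 1: NIC writes staging
    app_buffer[:len(payload)] = staging      # passes 2+3: CPU reads + writes
    return bytes(app_buffer[:len(payload)])

def receive_direct(stag, offset, payload):
    """RDMA path: the message names its destination, so one write suffices."""
    buf = registered[stag]
    buf[offset:offset + len(payload)] = payload   # pass 1, and we're done
    return bytes(buf[offset:offset + len(payload)])
```

The copy path touches the data three times (one DMA write, one read, one write), which is where the factor-of-three memory-bandwidth savings comes from.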
The way that iWARP is implemented is that RDMA NICs handle the full iWARP protocol including TCP in hardware — yes, the dreaded “TCP offload engine.” This is crucial to the performance: if the network data isn’t processed to the point of knowing where to put it on the NIC’s side of the PCI bus, then the memory-bandwidth savings from copy avoidance are lost. So while one can imagine an iWARP implementation with stateless NIC hardware using some super-fancy header splitting and chipset DMA engine tricks, it’s not clear that it would perform as well as current iWARP NICs do.
Now, in addition to handling TCP connections, iWARP NICs also have to act like normal NICs so that they can handle normal network traffic such as ARPs, pings or ssh logins. What this means is that some packets are received normally and passed up the standard network stack, while other packets that belong to iWARP connections are consumed by the NIC.
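That demultiplexing step can be modeled in a few lines; the connection table below is invented for illustration, and a real NIC keeps this state in hardware:

```python
# Toy model of how an iWARP NIC demultiplexes incoming packets:
# TCP connections the NIC owns are consumed in hardware, while
# everything else (ARP, ICMP, ordinary TCP like ssh) is passed up
# the normal network stack. All names here are made up.

# TCP 4-tuples (src ip, src port, dst ip, dst port) offloaded to the NIC
offloaded = {("10.0.0.2", 4096, "10.0.0.1", 5000)}

def classify(pkt):
    """Return which side handles the packet: 'nic' or 'host stack'."""
    if pkt.get("proto") != "tcp":
        return "host stack"        # ARP, ICMP, etc. are never offloaded
    tup = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])
    return "nic" if tup in offloaded else "host stack"
```

For example, an ARP request or an ssh connection goes to the host stack, while a packet matching an offloaded 4-tuple is consumed by the NIC and never seen by the kernel.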
This is what leads to the “port sharing problem.” One application might do a normal bind() to accept TCP connections on port X. It might even let the kernel choose a port number for it. Then another application (possibly even the same application) does an iWARP bind and tells the iWARP NIC to accept TCP connections on the same port X. This might happen because two different applications do the bind and have no way of coordinating with each other, or it might happen because one application just passes 0 in the sin_port field of its bind requests, and the kernel chooses the same port for both the normal and iWARP bind(). Whatever the reason, the end result is not good: the NIC and the network stack are left fighting for the same packets, and someone has to lose.
The reason this is an issue is because the kernel’s network stack and iWARP stack have completely separate port allocators, so there is no way for applications to prevent port collisions from happening. The obvious solution is to have normal TCP and iWARP port numbers allocated from the same space.
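The collision is easy to reproduce with ordinary sockets, because the kernel’s single TCP port allocator is exactly the thing the iWARP side lacks. This sketch (no iWARP hardware involved) shows the shared allocator rejecting a duplicate bind, which is precisely the check that can’t happen when two allocators hand out ports independently:

```python
import errno
import socket

# First socket: let the kernel's port allocator pick a free port.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]

# Second socket: try to bind the very same port. Because both binds go
# through the same allocator, the kernel detects the conflict and fails
# the second bind with EADDRINUSE instead of letting two consumers fight
# over the same packets.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    collided = False
except OSError as e:
    collided = (e.errno == errno.EADDRINUSE)
finally:
    b.close()
    a.close()
```

With separate normal-TCP and iWARP allocators, the equivalent of that `EADDRINUSE` check never fires, and the conflict only shows up later as the NIC and the network stack fighting over packets.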
Unfortunately, the Linux networking developers are not too interested in cooperating on this. It seems that some people have just decided that anyone who wants to use iWARP is wrong to want that (no matter how much better than the alternatives it is for that user’s app) and will just reflexively reject anything iWARP-related without trying to engage in constructive discussion. (Given that attitude, it’s rather ironic when the same people preach about open-mindedness and “thinking outside the box,” but let’s not get sidetracked…)
Given the current deadlock, the advice I’ve been giving to the various iWARP NIC companies is just to sell a lot of iWARP NICs and make the problem so big that we’re forced to find a solution. I don’t see any other way to force people to work together.
On social networks
With Google’s big OpenSocial announcement, I find myself thinking about social networking in general. I think I may be a generation or so too old to really “get it,” but I do use four social networking sites at least a little bit.
If Google can open all this up so I have better control of my own information and don’t have to deal with three or four different sites all the time, that would be cool. But I doubt they can pull off anything so pro-consumer….
Materials from LinuxConf.eu RDMA tutorial (at last)
At long last, after several requests, I’ve posted the slides, notes, and client and server examples from the tutorial I gave at LinuxConf.eu 2007 in Cambridge back in September. Hyper-observant readers will notice that the client program I posted does not match the listing in the notes I handed out; this is because I fixed a race condition in how completions are collected.
I’m not sure how useful all this is without me talking about it, but I guess every little bit helps. And of course, if you have questions about RDMA or InfiniBand programming, come on over to the mailing list and fire away.
InfiniBand/RDMA in 2.6.23 and 2.6.24
With yesterday’s release of kernel 2.6.23, I thought it might be a good time to look back at what significant changes are in 2.6.23, and what we have queued up for 2.6.24.
So first I looked at the kernel git log from the v2.6.22 tag to the v2.6.23 tag, and I was surprised to find that nothing really stood out. We merged something like 158 patches that touched 123 files, but I couldn’t really find any headline-worthy new features in there. There were just tons of fixes and cleanups all over, although mostly in the low-level hardware drivers. For some reason, 2.6.23 was a pretty calm development cycle for InfiniBand and RDMA, which means that at least that part of 2.6.23 should be rock solid.
2.6.24 promises to be a somewhat more exciting release for us. In my for-2.6.24 branch, in addition to the usual pile of fixes and cleanups, I have a couple of interesting changes queued up to merge as soon as Linus starts pulling things in:
Also, bonding support for IP-over-InfiniBand looks set to go in through Jeff Garzik’s tree. This is something that I’ve been wanting to see for years now; the patches allow the standard bonding module to enslave IPoIB interfaces, which means that multiple IB ports can finally be used for IPoIB high-availability failover. Moni Shoua and others did a lot of work and stuck with this for a long time, and the final set of patches turned out to be very clean and nice, so I’m really pleased to see this get merged.
Force of habit…
I went to the little mini post office in the grocery store today and bought some stamps. The conversation went something like this:
Me: “I’d like a sheet of 41-cent stamps and two sheets of 2-cent stamps, please.”
Counter person: “That will be $9. Do you need any stamps or postal supplies today?”
Me: “Um. Just the stamps I already asked for, thanks.”
Lazyweb: best American plug to UK receptacle adapter?
As I mentioned in my previous post, I’ll be traveling to Cambridge next month. I haven’t been to the UK in nearly 10 years, so I’m in the market for an electrical adapter, since even after powertop’s best efforts, I still need to charge my laptop occasionally. So I’m looking for something I can use between a North American plug and a UK receptacle. Ideally the adapter would be neither impossibly tight nor prone to coming out, and wouldn’t fall apart until after my trip. I don’t need any gold-plated active phase skew compensation or anything like that, though.
Any suggestions? Thanks….