Older blog entries for dkg (starting at number 37)

Debian NYC Workshop: What's in a Package?

Debian NYC will be holding a workshop next week: What's in a Package? will happen at 7:00pm New York time on October 27, 2010. If you're in the New York area, interested in packaging things for debian and related systems, or just want to understand the packages in your system better, you should RSVP and come on out!

This workshop will provide advanced theory useful for people modifying or creating packages. If you modify packages, you'll learn many typical motifs and about the various build systems. If you create packages, you'll be much better prepared to read and understand the guides at a deep level. This is still not a step-by-step guide to building packages, but it will get you very close.

See you there!

Syndicated 2010-10-20 21:31:00 from Weblogs for dkg

monkeysphere and distributed naming

Roland Mas writes an interesting article about decentralized naming, in which he says:

Monkeysphere aims at adding a web of trust to the SSL certificates system, but the CA chain problem seems to persist (although I must admit I'm not up to speed with the actual details).
Since i'm one of the Monkeysphere developers, i figure i should respond!

Let me clarify that Monkeysphere doesn't just work in places where X.509 (the SSL certificate system) works. It works in other places too (like SSH connections). And I don't think that the CA chain problem that remains in Monkeysphere is anything like the dangerous mess that common X.509 usage has given us. I do think that at some level, people need to think about who is introducing them to other people -- visual or human-comprehensible representations of public key material are notoriously difficult to make unspoofable.

On the subject of distributed naming: OpenPGP already allows distributed naming: every participant in the WoT is allowed to assert that any given key maps to any given identity. Duplicates and disagreements can exist just fine. How an entity decides to certify another entity's ID without a consensus global namespace is a tough question, though. If i've always been known as "John Smith" to my friends, and someone else has also been known as "John Smith" to his friends, our friends aren't actually disagreeing or in conflict -- it's just that neither of us has a unique name. The trouble comes when someone new wants to find "John Smith" -- which of us should they treat as the "correct" one?

I think the right answer probably has to do with who they're actually looking for, which has to do with why they're looking for someone named "John Smith". If they're looking for John Smith because the word on the street is that John Smith is a good knitter and they need a pair of socks, they can just examine what information we each publish about ourselves, and decide on a sock-by-sock basis which of us best suits their needs.

But if they're looking for "John Smith" because their cousin said "hey, i know this guy John Smith. I think you would like to argue politics over a beer with him", then what matters is the introduction. And OpenPGP handles that just fine -- if their cousin has only ever met a single John Smith, that's the right one. If their cousin has met several John Smiths, then the searcher would do well to ask their cousin some variant of "hey, do you mean John Smith or John Smith ", or even "do you mean the John Smith who Molly has met, or the one who Charles has met?" (assuming that Molly and Charles have each only certified one John Smith in common with the cousin, and not the same one as each other), or to get a real-time introduction to a particular John Smith, where his specific key is somehow recordable by the searcher for future conversations (or beer drinking). This is what we do in the real world anyway. We currently lack good UIs for doing this over the network, but the certification infrastructure is in place already.

What we're lacking in infrastructure, though, is a way to do distributed addressing. Roland's proposal was to publish addresses corresponding to cryptographic identities within some DNS zone, or in freenet or gnutella. Another approach (piggybacking on existing infrastructure) would be to include IP address information in the OpenPGP self-certification, so the holder of the name could claim exactly their own IP address. This could be distributed through the keyserver network, just like other updates are today, and it could be done simply and immediately with a well-defined OpenPGP notation. I'd be happy to talk to interested people about how to specify such a notation, and what possible corner cases we might run into. Drop a note here, mail the Monkeysphere mailing list, or hop onto #monkeysphere on irc.oftc.net.
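For a purely illustrative sketch: GnuPG can already attach notations to the certifications it makes via --cert-notation. The notation name and the address below are invented for this example (no such notation is specified anywhere yet), and carrying the notation in the self-certification proper would need tooling support beyond what's shown here:

```shell
# Attach a hypothetical ip-address notation to a certification.
# The notation name 'ip-address@example.org' and the value are made up
# for illustration; nothing like this is actually specified yet.
gpg --cert-notation 'ip-address@example.org=192.0.2.7' \
    --sign-key 'someone@example.net'
```

Once such a notation were standardized, keyservers would propagate it like any other signature packet, and a resolver could look up the claimed address from the self-certification.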

Syndicated 2010-10-06 23:19:00 from Weblogs for dkg

You should be using ssh-agent

If you're not using ssh-agent to authenticate yourself to SSH servers, you should be. (i'm assuming you're already using PubKeyAuthentication; if you're still using PasswordAuthentication or KbdInteractiveAuthentication, fix that please).

You should use ssh-agent for a number of reasons, actually, but the simplest is this: when you authenticate to a text-based channel on a remote server, you should never have to type anything about that authentication into the channel that will eventually be controlled by the remote server.

That's because a malicious server could simply accept your connection as an anonymous connection and print out the exact prompt you're expecting. Then, whatever you type goes to the remote server instead of into your authentication scheme. And congrats: you've just given away the passphrase for your key.

With ssh-agent, you talk first to your agent. Then, you talk to the server and your ssh client talks to the agent. Your keys and your passphrase are never exposed.

The second reason is that the agent is a much smaller piece of code than the ssh client, and it doesn't talk to the network at all (unless you force it to). It holds your key and never releases it to querying processes; it even runs in a protected memory space so other processes can't peek at it.

So if this protected, isolated agent is what holds your key, you're in much better shape than if a non-protected, larger, network-active process (the ssh client) has direct access to your secret key material.

The third reason is that it's just more convenient -- you can put a key in your agent and ask it to prompt you when its use is requested. You don't actually need to re-type your passphrase each time; you can just hit enter or type "yes".

And if that scares you security-wise then you can put the key in for a limited period of time, as well.

(btw, you should be using the ssh-agent that ships with OpenSSH, probably not the implementation offered by gnome, which doesn't offer a confirmation prompt, doesn't run in protected memory space, and links in a ton more libraries)

So how do you use the agent? It's probably already installed and running on your computer if you run a desktop with debian or another reasonable free operating system.
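Here's a quick way to check whether an agent is reachable from your current shell (a minimal sketch; the exit-status meanings come from the ssh-add manual):

```shell
#!/bin/sh
# ssh clients locate the agent through the SSH_AUTH_SOCK environment
# variable, which names the agent's unix-domain socket.
if [ -z "$SSH_AUTH_SOCK" ]; then
    # No agent advertised in this session; start one for this shell.
    eval "$(ssh-agent -s)"
fi
# ssh-add -l exits 0 when keys are loaded, 1 when the agent is empty,
# and 2 when it can't reach an agent at all.
status=0
ssh-add -l || status=$?
echo "ssh-add -l exit status: $status"
```

If you see status 2 even on a desktop session, your login environment probably isn't exporting SSH_AUTH_SOCK to that shell.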

Query what keys are in your agent:

0 dkg@pip:~$ ssh-add -l
The agent has no identities.
1 dkg@pip:~$ 

Add a standard OpenSSH secret key to your agent, prompting for confirmation after each use:

0 dkg@pip:~$ ssh-add -c ~/.ssh/id_rsa
Enter passphrase for /home/dkg/.ssh/id_rsa: your nice long passphrase here
Identity added: /home/dkg/.ssh/id_rsa (/home/dkg/.ssh/id_rsa)
The user must confirm each use of the key
0 dkg@pip:~$ 
(if you drop the -c, you will not be prompted at each use)

Add a standard OpenSSH secret key to your agent, with a lifespan of one hour (3600 seconds)

0 dkg@pip:~$ ssh-add -t 3600 ~/.ssh/id_rsa
Enter passphrase for /home/dkg/.ssh/id_rsa: your nice long passphrase here
Identity added: /home/dkg/.ssh/id_rsa (/home/dkg/.ssh/id_rsa)
Lifetime set to 3600 seconds
0 dkg@pip:~$ 
(note that you can combine the -t $SECONDS and -c flags to get a key that is time-constrained and requires a confirmation prompt at each use)

Add a monkeysphere-style key (an authentication-capable subkey from your GnuPG secret keyring) to the ssh-agent (this will prompt you for your GnuPG passphrase with a graphical ssh-askpass program during this keyload, if such a program is available), for one hour:

0 dkg@pip:~$ monkeysphere subkey-to-ssh-agent -t 3600
Identity added: Daniel Kahn Gillmor <dkg@fifthhorseman.net> (Daniel Kahn Gillmor <dkg@fifthhorseman.net>)
Lifetime set to 3600 seconds
0 dkg@pip:~$ 
If you don't already have such a subkey, but you want to use the monkeysphere, you'll need to run monkeysphere gen-subkey to create one first.

Note also that you can use both -c and -t $SECONDS with monkeysphere subkey-to-ssh-agent, just like they are used with ssh-add.

Remove all keys from your running agent:

0 dkg@pip:~$ ssh-add -D
All identities removed.
0 dkg@pip:~$ 

I hope this is helpful to people!

Tags: security, ssh, ssh-agent, tip

Syndicated 2010-09-27 20:18:00 from Weblogs for dkg

hotmail thinks powerpc means mobile

Apparently, live.com thinks that any browser coming from a ppc architecture is a mobile device. This sucks for the users of the hundreds of thousands of powerpc desktops still in service.

I don't use hotmail myself, but i do support people who use it. I set one of my clients up with debian squeeze on their PPC machine because all the proprietary vendors have basically given up on that architecture -- debian represents the best way to get modern tools on these machines (and other machines too, but that's a different argument).

However, this client couldn't get to their hotmail account, despite using the latest version of iceweasel (3.5.12). They were directed to a crippled interface that didn't include the ability to attach files, and was a gross waste of the desktop screen space. It appears to be the "mobile" version of live.com's services.

However, the same version of iceweasel on an i686 test machine could access the standard version of hotmail with no trouble. My friend jeremyb helpfully suggested fiddling with the User Agent string exported by the browser. Some experimentation shows that the presence of the string "ppc" within any parenthetical expression in the UA makes live.com show the crappy interface. You can try it yourself (if you have a hotmail account) on your x86 or amd64 machine by adding (ppc) to the default value of general.useragent.extra.firefoxComment in about:config. Stupid stupid stupid.

I'd like to have fixed this by overriding the browser's reported architecture (or simply by removing it -- why does a web server need to know the hardware architecture of my client?). But there doesn't appear to be a way to do that with the way that mozilla constructs the UA. Instead, i needed to add a new string key named general.useragent.override which is not exposed by default in about:config.

This raises some questions:

  • Why are we publishing our hardware architectures from our browsers anyway? This seems like unnecessary leakage, and not all browsers do it. For example, Arora doesn't leak this info (despite a poorly-argued request to do so). Browsers are already too identifiable by servers. This information should not be leaked by default.
  • Why does live.com insist on sending ppc users to the crappy "mobile" version? Are they trying to encourage the treadmill of hardware upgrades that proprietary vendors benefit from? Is there some less insidious explanation? Are there actually more powerpc-based mobile devices than desktops?
  • Why is there no simple way to tell Firefox/Iceweasel to override or suppress the architecture information? Having to override the useragent string entirely means that when iceweasel does eventually get upgraded, it's going to report the wrong version unless i can remember to update the override myself (i can't reasonably expect a non-techie client who never heard of user agents before today to remember how to do this correctly).

Any ideas?

Tags: browser, hotmail, powerpc, ppc, useragent, wtf

Syndicated 2010-09-21 19:55:00 from Weblogs for dkg

NYC SYEP still requires Microsoft Software

A year ago, i wrote about how New York City's Summer Youth Employment Program (SYEP) requires the use of Internet Explorer to apply online (and it even appears to require IE just to download the PDF of the application!)

Sadly, the situation has not changed, a year later. Today, I'm writing to Dan Garodnick, Chair of The City Council's Committee on Technology (and the rest of the committee members), Carole Post, Commissioner of DoITT (the city's Department of Information Technology and Telecommunications), and Jeanne B. Mullgrav, Commissioner of DYCD (the Department of Youth and Community Development, which runs SYEP).

Here's what i wrote:

For the last two years at least, the DYCD's Summer Youth Employment Program
(SYEP) has been only available to users of Internet Explorer:


Internet Explorer (IE) is only made by Microsoft, and is only available for
people running Microsoft operating systems.  Users of other operating systems,
such as GNU/Linux, Macintosh, or others cannot access the SYEP application
process.  Even users of Windows who care about their online security or simply
desire a different web browsing experience might prefer to avoid Internet
Explorer.

Not only is the online form inaccessible from browsers other than IE, even
retrieving a copy of the PDF to print out and fill in manually is unavailable
for web browsers other than IE.

What is the city's policy on access to government sites?  Is it city policy
to mandate a single vendor's software for access to city resources?  Should NYC
youth be required to purchase software from Microsoft to be able to apply
for the Summer Youth Employment Program?

The sort of data collection needed by such an application is a mainstay of the
standards-based web, and has been so for over 15 years now.  There is no reason
to require a particular client on an open platform.  I can point you toward
resources who would be happy to help you make the system functional for users
of *any* web browser, if you like.

I raised this issue over a year ago (see nyc.gov correspondence #1-1-473378926,
and a public weblog posted around the same time [0]), and got no effective
remedy.  It's worrisome to see that this is still a problem.

Please let me know what you plan to do to address the situation.



[0] https://www.debian-administration.org/users/dkg/weblog/47
Feel free to send your own message to the folks above (it especially helps if you live in or near NYC).

Finally, Carole Post, the head of DoITT, will also be present at a panel tonight in Soho, which i'm unfortunately unable to attend. If you go, you might ask her about the situation.

Tags: policy

Syndicated 2010-05-19 19:44:00 from Weblogs for dkg

Talks and tracks at debconf 10

I'm helping out on the talks committee for debconf 10 this summer in NYC (so yes, i'm going to be here for it, even though i don't have that little badge thingy). This is a call for interested folks to let us know what you want to see at debconf!


If you haven't already, submit your proposal for a talk, performance, debate, panel, BoF session, etc! We know you've got good ideas, and the final call for contributions went out yesterday: submissions are due in less than a week. Please propose your event soon!


Also, we want to introduce Tracks as a new idea for debconf this summer. A good track would thematically group a consecutive set of debconf events (talks, panels, debates, performances, etc) to encourage a better understanding of a broader theme. For this to work, we need a few good people to act as track coordinators for the areas where they are knowledgeable and engaged.

A track coordinator would have a chance to set the tone and scope for their track, schedule events, assemble panels or debates, introduce speakers, and report back at the end of debconf to the larger gathering. We also hope that a coordinator could identify potential good work being done in their area, encourage people to submit relevant events for debconf, and shepherd proposals in their track through the submission process.

Are you interested in coordinating a track on some topic? Or do you have a suggestion for someone else who might do a good job on a topic you want to see well-represented at debconf? You can contact the talk committee privately if you have questions at talks@debconf.org, or you can contact the whole team publicly at debconf-team@lists.debconf.org.

Some ideas about possible tracks:

  • Science and Mathematics in Debian
  • Debian Integration into the Enterprise
  • Media and Arts and Debian
  • Trends and Tools in Debian Packaging
  • Debian Systems and Infrastructure
  • Debian Community Outreach
  • ...your topic here...

We can't guarantee that any particular track will happen at dc10, but we can guarantee that it won't happen if no one proposes it or wrangles the relevant events together. Help us make this the best debconf ever and make sure that your own topical itch gets scratched!

Tags: debconf, debconf10

Syndicated 2010-04-25 22:29:00 from Weblogs for dkg

Avoiding Erroneous OpenPGP certifications

i'm aware that people don't always take proper measures during mass OpenPGP keysignings. Apparently, some keys even get signed with no one at the keysigning present speaking for that key (for example, if the key was submitted to the keysigning via online mechanisms beforehand, but the keyholder failed to show up). Unverified certifications are potentially erroneous, and erroneous certifications are bad for the OpenPGP web of trust. Debian and other projects rely on the OpenPGP web of trust being reasonable and healthy. People should make a habit of doing proper verifications at keysignings. People who make unverified certifications should probably be made aware of better practices.

So for future keysignings, i may introduce a key to the set under consideration and see what sort of OpenPGP certifications that key receives. I won't pretend to hold that key in person, won't speak for it, and it won't have my name attached to it. But it may be on the list.

Depending on the certifications received on that key (and the feedback i get on this blog post), i'll either publish the list of wayward certifiers, or contact the certifiers privately. Wayward certifiers should review their keysigning practices and revoke any certifications they did not adequately verify.

Remember, at a keysigning party, for each key:

  • Check that the fingerprint on your copy exactly matches the one claimed by the person in question
  • Check that the person in question is actually who they say they are (e.g. gov't ID, with a photo that looks like them, with their name matching the name in the key's User ID)
  • If the fingerprints don't match, or you don't have confidence in the name or their identity, or no one stands up to claim the key, there's no harm done in simply choosing not to certify the user IDs associated with that key. You don't even need to tell the person you've decided not to.
  • Take notes in hard copy. It will help you later.

After the keysigning, when you go to actually make your OpenPGP certifications:

  • Make sure you have the same physical document(s) that you had during the keysigning (no, downloading a file from the same URL is not the same thing)
  • Use your notes to decide which keys you actually want to make certifications over.
  • If a key has several user IDs on it, and some of them do not match the person's name, simply don't certify the non-matching user IDs. You should certify only the user IDs you have verified.
  • If a key has a user ID with an e-mail address on it that you aren't absolutely sure belongs to the person in question, mail an encrypted copy of the certification for that User ID to the e-mail address in question. If they don't control that e-mail address, they won't get the certification, and it will never become public. caff (from the signing-party package) should help you to do that.

Feedback welcome!

Tags: keysigning, openpgp, tip

Syndicated 2010-03-23 02:44:00 from Weblogs for dkg

TCP weirdness, IMAP, wireshark, and perdition

This is the story of a weirdly unfriendly/non-compliant IMAP server, and some nice interactions that arose from a debugging session around it.

Over the holidays, i got to do some computer/network debugging for friends and family. One old friend (I'll call him Fred) had a series of problems i managed to help work through, but i was ultimately stumped by the weird behavior of an IMAP server. Here are the details (names of the innocent and guilty have been changed), just in case they help other folks at least diagnose similar situations.

the diagnosis

The initial symptom was that Fred's computer was "very slow". Sadly, this was a Windows™ machine, so my list of tricks for diagnosing sluggishness is limited. I went through a series of questions, uninstalling things, etc, until we figured it would be better to just have him do his usual work while i watched, kibitzing on what seemed acceptable and what seemed slow. Quite soon, we hit a very specific failure: Fred's Thunderbird installation (version 2, FWIW) was sometimes hanging for a very long period of time during message retrieval. This was not exhaustion of the CPU, disk, RAM, or other local resource. It was pure network delay, and it was a frequent (if unpredictable) frustrating hiccup in his workflow.

One thought i had was Thunderbird's per-server max_cached_connections setting, which can sometimes cause a TB instance to hang if a remote server thinks Thunderbird is being too aggressive. After sorting out why Thunderbird was resetting the values after we'd set them to 0 (grr, thanks for the confusing UI, folks!), we set it to 1, but still had the same occasional, lengthy (about 2 minutes) hang when transferring messages between folders (including the trash folder!), or when reading new messages. Sending mail was quite fast, except for occasional (similarly lengthy) hangs writing the copy to the sent folder. So IMAP was the problem (not SMTP), and the 2-minute timeouts smelled like an issue with the networking layer to me.

At this point, i busted out wireshark, the trusty packet sniffer, which fortunately works as well on Windows as it does on GNU/Linux. Since Fred was doing his IMAP traffic in the clear, i could actually see when and where in the IMAP session the hang was happening. (BTW, Fred's IMAP traffic is no longer in the clear: after all this happened, i switched him to IMAPS (IMAP wrapped in a TLS session), because although the IMAP server in question actually supports the STARTTLS directive, it fails to advertise it in response to the CAPABILITIES query, so Thunderbird refuses to try it. arrgh.)

The basic sequence of Thunderbird's side of an initial IMAP conversation (using plain authentication, anyway) looks something like this:

1 capability
2 login "user" "pass"
3 lsub "" "*"
4 list "" "INBOX"
5 select "INBOX"
6 UID fetch 1:* (FLAGS)

What i found with this server was that if i issued commands 1 through 5, and then left the connection idle for over 5 minutes, then the next command (even if it was just a 6 NOOP or 6 LOGOUT) would cause the IMAP server to issue a TCP reset. No IMAP error message or anything, just a failure at the TCP level. But a nice, fast, responsive failure -- any IMAP client could recover nicely from that by just immediately opening a new connection. I don't mind busy servers killing inactive connections after a reasonable timeout. If it was just this, though, Thunderbird should have continued to be responsive.

the deep weirdness

But if i issued commands 1 through 6 in rapid succession (the only difference is that extra 6 UID fetch 1:* (FLAGS) command), and then let the connection idle for 5 minutes, then sent the next command: no response of any kind would come from the remote server (not even a TCP ACK or TCP RST). In this circumstance, my client OS's TCP stack would re-send the data repeatedly (staggered at appropriate intervals), until finally the client-side TCP timeout would trigger, and the OS would report the failure to the app, which could turn around and do a simple connection restart to finish up the desired operation. This was the underlying situation causing Fred's Thunderbird client to hang.

In both cases above (with or without the 6th command), the magic window for the idle cutoff was a little more than 300 seconds (5 minutes) of idleness. If the client issued a NOOP at 4 minutes, 45 seconds from the last NOOP, it could keep a connection active indefinitely.

Furthermore, i could replicate the exact same behavior when i used IMAPS -- the state of the IMAP session itself was somehow modifying the TCP session behavior characteristics, whether it was wrapped in a TLS tunnel or not.

One interesting thing about this set of data is that it rules out most common problems in the network connectivity between the two machines. Since none of the hops between the two endpoints know anything about the IMAP state (especially under TLS), and some of the failures are reported properly (e.g. the TCP RST in the 5-command scenario), it's probably safe to say that the various routers, NAT devices, and such were not themselves responsible for the failures.

So what's going on on that IMAP server? The service itself does not announce the flavor of IMAP server, though it does respond to a successful login with "You are so in", and to a logout with "IMAP server logging out, mate". A bit of digging on the 'net suggests that they are running a perdition IMAP proxy. (clearly written by an Aussie, mate!) But why does it not advertise its STARTTLS capability, even though it is capable? And why do some idle connections end up timing out without so much as an RST, when other idle connections give at least a clean break at the TCP level?

Is there something about issuing the UID command that causes perdition to hand off the connection to some other service, which in turn doesn't do proper TCP error handling? I don't really know anything about the internals of perdition, so i'm just guessing here.

the workaround

I ultimately recommended to Fred to reduce the number of cached connections to 1, and to set Thunderbird's interval to check for new mail down to 4 minutes. Hopefully, this will keep his one connection active enough that nothing will timeout, and will keep the interference to his workflow to a minimum.

It's an unsatisfactory solution to me, because the behavior of the remote server still seems so non-standard. However, i don't have any sort of control over the remote server, so there's not too much i can do to provide a real fix (other than point the server admins (and perdition developers?) at this writeup).

I don't even know the types of backend server that their perdition proxy is balancing between, so i'm pretty lost for better diagnostics even, let alone a real resolution.

some notes

I couldn't have figured out the exact details listed above just using Thunderbird on Windows. Fortunately, i had a machine with a decent OS available, and was able to cobble together a fake IMAP client from a couple of files (imapstart contained the lines above, and imapfinish contained 8 LOGOUT), bash, and socat.

Here's the bash snippet i used as a fake IMAP client:

spoolout() { while read foo; do sleep 1 && printf "%s\r\n" "$foo" ; done }

( sleep 2 && spoolout < imapstart && sleep 4 && spoolout < imapfinish && sleep 500 ) | socat STDIO TCP4:imap.fubar.example.net:143

To do the test under IMAPS, i just replaced TCP4:imap.fubar.example.net:143 with OPENSSL:imap.fubar.example.net:993.

And of course, i had wireshark handy on the GNU/Linux machine as well, so i could analyze the generated packets over there.

One thing to note about user empowerment: Fred isn't a tech geek, but he can be curious about the technology he relies on if the situation is right. He was with me through the whole process, didn't get antsy, and never tried to get me to "just fix it" while he did something else. I like that, and wish i got to have that kind of interaction more (though i certainly don't begrudge people the time if they do need to get other things done). I was nervous about breaking out wireshark and scaring him off with it, but it turned out it actually was a good conversation starter about what was actually happening on the network, and how IP and TCP traffic worked.

Giving a crash course like that in a quarter of an hour, i can't expect him to retain any concrete specifics, of course. But i think the process was useful in de-mystifying how computers talk to each other somewhat. It's not magic, there are just a lot of finicky pieces that need to fit together a certain way. And Wireshark turned out to be a really nice window into that process, especially when it displays packets during a real-time capture. I usually prefer to do packet captures with tcpdump and analyze them as a non-privileged user afterward for security reasons. But in this case, i felt the positives of user engagement (how often do you get to show someone how their machine actually works?) far outweighed the risks.

As an added bonus, it also helped Fred really understand what i meant when i said that it was a bad idea to use IMAP in the clear. He could actually see his username and password in the network traffic!

This might be worth keeping in mind as an idea for a demonstration for workshops or hacklabs for folks who are curious about networking -- do a live packet capture of the local network, project it, and just start asking questions about it. Wireshark contains such a wealth of obscure packet dissectors (and today's heterogenous public/open networks are so remarkably chatty and filled with weird stuff) that you're bound to run into things that most (or all!) people in the room don't know about, so it could be a good learning activity for groups of all skill levels.

Tags: debugging, imap, perdition, wireshark

Syndicated 2010-01-21 19:37:00 from Weblogs for dkg

January 2010 Bug-Squashing Party NYC

We're going to have a Bug-Squashing Party at the end of January 2010 in New York City. If you live in or around the tri-state area (or want to visit), are interested in learning about the process, meeting other debian folk, or just squashing some bugs in good company, you should come out and join us!

Brooklyn, New York, USA
January 29th, 30th, and maybe 31st of 2010
Because them bugs need squashing!

If you plan on coming, please either sign up on the wiki page, or at least mail one of the good folks listed there, or pop into #debian-nyc on irc.oftc.net's IRC network.

Syndicated 2009-12-21 20:54:00 from Weblogs for dkg

dd, netcat, and disk throughput

I was trying to dump a large Logical Volume (LV) over ethernet from one machine to another. I found some behavior which surprised me.

fun constraints

  • I have only a fairly minimal debian installation on each machine (which fortunately includes netcat-traditional)
  • The two machines are connected directly by a single (gigabit) ethernet cable, with no other network connection. So no pulling in extra packages.
  • I have serial console access to both machines, but no physical access.
  • The LV being transferred is 973GB in size according to lvs (fairly large, that is), and contains a LUKS volume, which itself contains a basically-full filesystem -- transferring just the "used" bytes is not going to save space/time.
  • I want to be able to check on how the transfer is doing while it's happening.
  • I want the LV to show up as an LV on the target system, and don't have tons of extra room on the target to play around with (so no dumping it to the filesystem as a disk image first).

(how do i get myself into these messes?)

This entry has been truncated; read the full entry.
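The rest of the entry is cut off here, but a minimal sketch of the kind of transfer these constraints point toward might look like the following. All VG/LV names, addresses, and the port are hypothetical, and this is a guess at the approach, not the entry's actual conclusion:

```shell
# Receiving machine: create a matching LV and listen with netcat
# (VG/LV names, port, and addresses are all made up for this sketch).
lvcreate -L 973G -n mirror vg0
nc -l -p 7878 | dd of=/dev/vg0/mirror bs=1M

# Sending machine: stream the LV across the direct ethernet link.
dd if=/dev/vg1/source bs=1M | nc -q 1 192.0.2.2 7878

# Progress check from a second serial login: SIGUSR1 makes dd
# report the bytes copied so far to stderr.
kill -USR1 "$(pgrep -x dd)"
```

This stays within the constraints: dd and netcat-traditional are in a minimal install, the target ends up as a real LV, and dd's SIGUSR1 handler gives a progress report without any extra packages.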

Syndicated 2009-12-21 06:21:00 from Weblogs for dkg

