Older blog entries for dkg (starting at number 34)

hotmail thinks powerpc means mobile

Apparently, live.com thinks that any browser coming from a ppc architecture is a mobile device. This sucks for the users of the hundreds of thousands of powerpc desktops still in service.

I don't use hotmail myself, but i do support people who use it. I set one of my clients up with debian squeeze on their PPC machine because all the proprietary vendors have basically given up on that architecture -- debian represents the best way to get modern tools on these machines (and other machines too, but that's a different argument).

However, this client couldn't get to their hotmail account, despite using the latest version of iceweasel (3.5.12). They were directed to a crippled interface that didn't include the ability to attach files, and was a gross waste of desktop screen space. It appears to be the "mobile" version of live.com's services.

However, the same version of iceweasel on an i686 test machine could access the standard version of hotmail with no trouble. My friend jeremyb helpfully suggested fiddling with the User Agent string exported by the browser. Some experimentation shows that the presence of the string "ppc" within any parenthetical expression in the UA makes live.com show the crappy interface. You can try it yourself (if you have a hotmail account) on your x86 or amd64 machine by adding (ppc) to the default value of general.useragent.extra.firefoxComment in about:config. Stupid stupid stupid.
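If you don't have a ppc machine handy, something like the following shows the difference from any shell (a rough sketch: the URL and the redirect behavior are my assumptions about live.com's front end, which may change at any time):

# request the same page with two User Agent strings differing only by "ppc"
curl -sI -A 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.12) Gecko/20100913 Iceweasel/3.5.12' https://mail.live.com/ | grep -i '^Location:'
curl -sI -A 'Mozilla/5.0 (X11; U; Linux ppc; en-US; rv:1.9.1.12) Gecko/20100913 Iceweasel/3.5.12' https://mail.live.com/ | grep -i '^Location:'

If the pattern above holds, the second request gets redirected toward the mobile interface.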

I'd like to have fixed this by overriding the browser's reported architecture (or simply by removing it -- why does a web server need to know the hardware architecture of my client?). But there doesn't appear to be a way to do that with the way that mozilla constructs the UA. Instead, i needed to add a new string key named general.useragent.override, which is not exposed by default in about:config.
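For the record, the override value i used looks something like this (the version numbers here are illustrative, and they're exactly the part that will go stale on upgrade):

general.useragent.override = Mozilla/5.0 (X11; U; Linux; en-US; rv:1.9.1.12) Gecko/20100913 Iceweasel/3.5.12

Note that there's no architecture token anywhere in the parenthetical.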

This raises some questions:

  • Why are we publishing our hardware architectures from our browsers anyway? This seems like unnecessary leakage, and not all browsers do it. For example, Arora doesn't leak this info (despite a poorly-argued request to do so). Browsers are already too identifiable by servers. This information should not be leaked by default.
  • Why does live.com insist on sending ppc users to the crappy "mobile" version? Are they trying to encourage the treadmill of hardware upgrades that proprietary vendors benefit from? Is there some less insidious explanation? Are there actually more powerpc-based mobile devices than desktops?
  • Why is there no simple way to tell Firefox/Iceweasel to override or suppress the architecture information? Having to override the useragent string entirely means that when iceweasel does eventually get upgraded, it's going to report the wrong version unless i can remember to update the override myself (i can't reasonably expect a non-techie client who had never heard of user agents before today to remember how to do this correctly).

Any ideas?

Tags: browser, hotmail, powerpc, ppc, useragent, wtf

Syndicated 2010-09-21 19:55:00 from Weblogs for dkg

NYC SYEP still requires Microsoft Software

A year ago, i wrote about how New York City's Summer Youth Employment Program (SYEP) requires the use of Internet Explorer to apply online (and it even appears to require IE just to download the PDF of the application!).

Sadly, a year later, the situation has not changed. Today, I'm writing to Dan Garodnick, Chair of the City Council's Committee on Technology (and the rest of the committee members); Carole Post, Commissioner of DoITT (the city's Department of Information Technology and Telecommunications); and Jeanne B. Mullgrav, Commissioner of DYCD (the Department of Youth and Community Development, which runs SYEP).

Here's what i wrote:

For the last two years at least, the DYCD's Summer Youth Employment Program
(SYEP) has been only available to users of Internet Explorer:

 https://application.nycsyep.com/

Internet Explorer (IE) is only made by Microsoft, and is only available for
people running Microsoft operating systems.  Users of other operating systems,
such as GNU/Linux or Macintosh, cannot access the SYEP application
process.  Even users of Windows who care about their online security or simply
desire a different web browsing experience might prefer to avoid Internet
Explorer.

Not only is the online form inaccessible from browsers other than IE; even
retrieving a copy of the PDF to print out and fill in manually is impossible
with any other web browser.

What is the city's policy on access to government sites?  Is it city policy
to mandate a single vendor's software for access to city resources?  Should NYC
youth be required to purchase software from Microsoft to be able to apply
for the Summer Youth Employment Program?

The sort of data collection needed by such an application is a mainstay of the
standards-based web, and has been so for over 15 years now.  There is no reason
to require a particular client on an open platform.  I can point you toward
resources who would be happy to help you make the system functional for users
of *any* web browser, if you like.

I raised this issue over a year ago (see nyc.gov correspondence #1-1-473378926,
and a public weblog posted around the same time [0]), and got no effective
remedy.  It's worrisome to see that this is still a problem.

Please let me know what you plan to do to address the situation.

Regards,

	--dkg

[0] https://www.debian-administration.org/users/dkg/weblog/47

Feel free to send your own message to the folks above (it especially helps if you live in or near NYC).

Finally, Carole Post, the head of DoITT, will also be present at a panel tonight in Soho, which i'll unfortunately be unable to attend. If you go there, you might ask her about the situation.

Tags: policy

Syndicated 2010-05-19 19:44:00 from Weblogs for dkg

Talks and tracks at debconf 10

I'm helping out on the talks committee for debconf 10 this summer in NYC (so yes, i'm going to be here for it, even though i don't have that little badge thingy). This is a call for interested folks to let us know what you want to see at debconf!

Talks

If you haven't already, submit your proposal for a talk, performance, debate, panel, BoF session, etc! We know you've got good ideas; the final call for contributions went out yesterday, and proposals are due in less than a week. Please propose your event soon!

Tracks

Also, we want to introduce Tracks as a new idea for debconf this summer. A good track would thematically group a consecutive set of debconf events (talks, panels, debates, performances, etc) to encourage a better understanding of a broader theme. For this to work, we need a few good people to act as track coordinators for the areas where they are knowledgeable and engaged.

A track coordinator would have a chance to set the tone and scope for their track, schedule events, assemble panels or debates, introduce speakers, and report back at the end of debconf to the larger gathering. We also hope that a coordinator could identify potential good work being done in their area, encourage people to submit relevant events for debconf, and shepherd proposals in their track through the submission process.

Are you interested in coordinating a track on some topic? Or do you have a suggestion for someone else who might do a good job on a topic you want to see well-represented at debconf? You can contact the talks committee privately at talks@debconf.org if you have questions, or you can contact the whole team publicly at debconf-team@lists.debconf.org.

Some ideas about possible tracks:

  • Science and Mathematics in Debian
  • Debian Integration into the Enterprise
  • Media and Arts and Debian
  • Trends and Tools in Debian Packaging
  • Debian Systems and Infrastructure
  • Debian Community Outreach
  • ...your topic here...

We can't guarantee that any particular track will happen at dc10, but we can guarantee that it won't happen if no one proposes it or wrangles the relevant events together. Help us make this the best debconf ever and make sure that your own topical itch gets scratched!

Tags: debconf, debconf10

Syndicated 2010-04-25 22:29:00 from Weblogs for dkg

Avoiding Erroneous OpenPGP certifications

I'm aware that people don't always take proper measures during mass OpenPGP keysignings. Apparently, some keys even get signed when no one present at the keysigning speaks for them (for example, if the key was submitted to the keysigning via online mechanisms beforehand, but the keyholder failed to show up). Unverified certifications are potentially erroneous, and erroneous certifications are bad for the OpenPGP web of trust. Debian and other projects rely on the OpenPGP web of trust being reasonable and healthy. People should make a habit of doing proper verifications at keysignings. People who make unverified certifications should probably be made aware of better practices.

So for future keysignings, i may introduce a key to the set under consideration and see what sort of OpenPGP certifications that key receives. I won't pretend to hold that key in person, won't speak for it, and it won't have my name attached to it. But it may be on the list.

Depending on the certifications received on that key (and the feedback i get on this blog post), i'll either publish the list of wayward certifiers, or contact the certifiers privately. Wayward certifiers should review their keysigning practices and revoke any certifications they did not adequately verify.

Remember, at a keysigning party, for each key:

  • Check that the fingerprint on your copy exactly matches the one claimed by the person in question
  • Check that the person in question is actually who they say they are (e.g. gov't ID, with a photo that looks like them, with their name matching the name in the key's User ID)
  • If the fingerprints don't match, or you don't have confidence in the name or their identity, or no one stands up to claim the key, there's no harm done in simply choosing not to certify the user IDs associated with that key. You don't even need to tell the person you've decided not to.
  • Take notes in hard copy. It will help you later.

After the keysigning, when you go to actually make your OpenPGP certifications:

  • Make sure you have the same physical document(s) that you had during the keysigning (no, downloading a file from the same URL is not the same thing)
  • Use your notes to decide which keys you actually want to make certifications over.
  • If a key has several user IDs on it, and some of them do not match the person's name, simply don't certify the non-matching user IDs. You should certify only the user IDs you have verified.
  • If a key has a user ID with an e-mail address on it that you aren't absolutely sure belongs to the person in question, mail an encrypted copy of the certification for that User ID to the e-mail address in question. If they don't control that e-mail address, they won't get the certification, and it will never become public. caff (from the signing-party package) should help you to do that; a minimal example follows this list.
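A minimal caff run looks something like this (the fingerprint is a made-up placeholder; substitute the one from your hard-copy notes):

caff 0123456789ABCDEF0123456789ABCDEF01234567

caff signs each user ID separately and mails each certification, encrypted, to the address in that user ID, so a certification over an address the keyholder doesn't actually control never reaches them (and never becomes public).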

Feedback welcome!

Tags: keysigning, openpgp, tip

Syndicated 2010-03-23 02:44:00 from Weblogs for dkg

TCP weirdness, IMAP, wireshark, and perdition

This is the story of a weirdly unfriendly/non-compliant IMAP server, and some nice interactions that arose from a debugging session around it.

Over the holidays, i got to do some computer/network debugging for friends and family. One old friend (I'll call him Fred) had a series of problems i managed to help work through, but i was ultimately stumped by the weird behavior of an IMAP server. Here are the details (names of the innocent and guilty have been changed), just in case they help other folks at least diagnose similar situations.

the diagnosis

The initial symptom was that Fred's computer was "very slow". Sadly, this was a Windows™ machine, so my list of tricks for diagnosing sluggishness is limited. I went through a series of questions, uninstalling things, etc, until we figured it would be better to just have him do his usual work while i watched, kibitzing on what seemed acceptable and what seemed slow. Quite soon, we hit a very specific failure: Fred's Thunderbird installation (version 2, FWIW) was sometimes hanging for a very long period of time during message retrieval. This was not exhaustion of the CPU, disk, RAM, or other local resource. It was pure network delay, and it was a frequent (if unpredictable) frustrating hiccup in his workflow.

One thought i had was Thunderbird's per-server max_cached_connections setting, which can sometimes cause a TB instance to hang if a remote server thinks Thunderbird is being too aggressive. After sorting out why Thunderbird was resetting the values after we'd set them to 0 (grr, thanks for the confusing UI, folks!), we set it to 1, but still had the same occasional, lengthy (about 2 minutes) hang when transferring messages between folders (including the trash folder!), or when reading new messages. Sending mail was quite fast, except for occasional (similarly lengthy) hangs writing the copy to the sent folder. So IMAP was the problem (not SMTP), and the 2-minute timeouts smelled like an issue with the networking layer to me.

At this point, i busted out wireshark, the trusty packet sniffer, which fortunately works as well on Windows as it does on GNU/Linux. Since Fred was doing his IMAP traffic in the clear, i could actually see when and where in the IMAP session the hang was happening. (BTW, Fred's IMAP traffic is no longer in the clear: after all this happened, i switched him to IMAPS (IMAP wrapped in a TLS session), because although the IMAP server in question actually supports the STARTTLS directive, it fails to advertise it in response to the CAPABILITY query, so Thunderbird refuses to try it. arrgh.)
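(If you want to check a server like this yourself, openssl will attempt the STARTTLS handshake whether or not the server advertises it; the hostname here is the same anonymized one i use below:)

openssl s_client -connect imap.fubar.example.net:143 -starttls imap

If you get a certificate and an open TLS session back, the server really does speak STARTTLS, whatever its CAPABILITY response claims.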

The basic sequence of Thunderbird's side of an initial IMAP conversation (using plain authentication, anyway) looks something like this:

1 capability
2 login "user" "pass"
3 lsub "" "*"
4 list "" "INBOX"
5 select "INBOX"
6 UID fetch 1:* (FLAGS)

What i found with this server was that if i issued commands 1 through 5, and then left the connection idle for over 5 minutes, then the next command (even if it was just a 6 NOOP or 6 LOGOUT) would cause the IMAP server to issue a TCP reset. No IMAP error message or anything, just a failure at the TCP level. But a nice, fast, responsive failure -- any IMAP client could recover nicely from that by just immediately opening a new connection. I don't mind busy servers killing inactive connections after a reasonable timeout. If it was just this, though, Thunderbird should have continued to be responsive.

the deep weirdness

But if i issued commands 1 through 6 in rapid succession (the only difference is that extra 6 UID fetch 1:* (FLAGS) command), and then let the connection idle for 5 minutes, then sent the next command: no response of any kind would come from the remote server (not even a TCP ACK or TCP RST). In this circumstance, my client OS's TCP stack would re-send the data repeatedly (staggered at appropriate intervals), until finally the client-side TCP timeout would trigger, and the OS would report the failure to the app, which could turn around and do a simple connection restart to finish up the desired operation. This was the underlying situation causing Fred's Thunderbird client to hang.

In both cases above (with or without the 6th command), the magic window for the idle cutoff was a little more than 300 seconds (5 minutes) of idleness. If the client issued a NOOP at 4 minutes, 45 seconds from the last NOOP, it could keep a connection active indefinitely.

Furthermore, i could replicate the exact same behavior when i used IMAPS -- the state of the IMAP session itself was somehow modifying the TCP session behavior characteristics, whether it was wrapped in a TLS tunnel or not.

One interesting thing about this set of data is that it rules out most common problems in the network connectivity between the two machines. Since none of the hops between the two endpoints know anything about the IMAP state (especially under TLS), and some of the failures are reported properly (e.g. the TCP RST in the 5-command scenario), it's probably safe to say that the various routers, NAT devices, and such were not themselves responsible for the failures.

So what's going on on that IMAP server? The service itself does not announce the flavor of IMAP server, though it does respond to a successful login with You are so in, and to a logout with IMAP server logging out, mate. A bit of digging on the 'net suggests that they are running a perdition IMAP proxy. (clearly written by an Aussie, mate!) But why does it not advertise its STARTTLS capability, even though it is capable? And why do some idle connections end up timing out without so much as an RST, when other idle connections give at least a clean break at the TCP level?

Is there something about issuing the UID command that causes perdition to hand off the connection to some other service, which in turn doesn't do proper TCP error handling? I don't really know anything about the internals of perdition, so i'm just guessing here.

the workaround

I ultimately recommended that Fred reduce the number of cached connections to 1, and set Thunderbird's interval to check for new mail down to 4 minutes. Hopefully, this will keep his one connection active enough that nothing will time out, and will keep the interference to his workflow to a minimum.
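For reference, the relevant preferences look something like this (the "server1" part is an assumption -- the actual number for the affected account varies per profile, so check the mail.server.* entries in about:config):

mail.server.server1.max_cached_connections = 1
mail.server.server1.check_time = 4

check_time is the new-mail polling interval in minutes; 4 keeps the lone connection busy inside the server's 5-minute idle window.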

It's an unsatisfactory solution to me, because the behavior of the remote server still seems so non-standard. However, i don't have any sort of control over the remote server, so there's not too much i can do to provide a real fix (other than point the server admins (and perdition developers?) at this writeup).

I don't even know the types of backend server that their perdition proxy is balancing between, so i'm pretty lost for better diagnostics even, let alone a real resolution.

some notes

I couldn't have figured out the exact details listed above just using Thunderbird on Windows. Fortunately, i had a machine with a decent OS available, and was able to cobble together a fake IMAP client from a couple of files (imapstart contained the lines above, and imapfinish contained 8 LOGOUT), bash, and socat.

Here's the bash snippet i used as a fake IMAP client:

# emit one line per second, with the CRLF line endings that IMAP expects
spoolout() { while read foo; do sleep 1 && printf "%s\r\n" "$foo" ; done }

# connect, send the opening commands, pause, log out, then hold the
# connection open long enough to watch how the server times it out
( sleep 2 && spoolout < imapstart && sleep 4 && spoolout < imapfinish && sleep 500 ) | socat STDIO TCP4:imap.fubar.example.net:143

To do the test under IMAPS, i just replaced TCP4:imap.fubar.example.net:143 with OPENSSL:imap.fubar.example.net:993.

And of course, i had wireshark handy on the GNU/Linux machine as well, so i could analyze the generated packets over there.

One thing to note about user empowerment: Fred isn't a tech geek, but he can be curious about the technology he relies on if the situation is right. He was with me through the whole process, didn't get antsy, and never tried to get me to "just fix it" while he did something else. I like that, and wish i got to have that kind of interaction more (though i certainly don't begrudge people the time if they do need to get other things done). I was nervous about breaking out wireshark and scaring him off with it, but it turned out it actually was a good conversation starter about what was actually happening on the network, and how IP and TCP traffic worked.

Giving a crash course like that in a quarter of an hour, i can't expect him to retain any concrete specifics, of course. But i think the process was useful in de-mystifying how computers talk to each other somewhat. It's not magic, there are just a lot of finicky pieces that need to fit together a certain way. And Wireshark turned out to be a really nice window into that process, especially when it displays packets during a real-time capture. I usually prefer to do packet captures with tcpdump and analyze them as a non-privileged user afterward for security reasons. But in this case, i felt the positives of user engagement (how often do you get to show someone how their machine actually works?) far outweighed the risks.
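That usual workflow is roughly the following (the interface name and capture filter are just examples):

# capture as root, but only write raw packets to a file
sudo tcpdump -i eth0 -w /tmp/imap.pcap port 143
# dissect later, as an unprivileged user
wireshark /tmp/imap.pcap

That way wireshark's large pile of dissectors (historically a source of security bugs) never runs with elevated privileges.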

As an added bonus, it also helped Fred really understand what i meant when i said that it was a bad idea to use IMAP in the clear. He could actually see his username and password in the network traffic!

This might be worth keeping in mind as an idea for a demonstration for workshops or hacklabs for folks who are curious about networking -- do a live packet capture of the local network, project it, and just start asking questions about it. Wireshark contains such a wealth of obscure packet dissectors (and today's heterogenous public/open networks are so remarkably chatty and filled with weird stuff) that you're bound to run into things that most (or all!) people in the room don't know about, so it could be a good learning activity for groups of all skill levels.

Tags: debugging, imap, perdition, wireshark

Syndicated 2010-01-21 19:37:00 from Weblogs for dkg

January 2010 Bug-Squashing Party NYC

We're going to have a Bug-Squashing Party at the end of January 2010 in New York City. If you live in or around the tri-state area (or want to visit), are interested in learning about the process, meeting other debian folk, or just squashing some bugs in good company, you should come out and join us!

Where:
Brooklyn, New York, USA
When:
January 29th, 30th, and maybe 31st of 2010
Why:
Because them bugs need squashing!

If you plan on coming, please either sign up on the wiki page, or at least mail one of the good folks listed there, or pop into #debian-nyc on irc.oftc.net's IRC network.

Syndicated 2009-12-21 20:54:00 from Weblogs for dkg

dd, netcat, and disk throughput

I was trying to dump a large Logical Volume (LV) over ethernet from one machine to another. I found some behavior which surprised me.

fun constraints

  • I have only a fairly minimal debian installation on each machine (which fortunately includes netcat-traditional)
  • The two machines are connected directly by a single (gigabit) ethernet cable, with no other network connection. So no pulling in extra packages.
  • I have serial console access to both machines, but no physical access.
  • The LV being transferred is 973GB in size according to lvs (fairly large, that is), and contains a LUKS volume, which itself contains a basically-full filesystem -- transferring just the "used" bytes is not going to save space/time.
  • I want to be able to check on how the transfer is doing while it's happening.
  • I want the LV to show up as an LV on the target system, and don't have tons of extra room on the target to play around with (so no dumping it to the filesystem as a disk image first).

(how do i get myself into these messes?)
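The skeleton of the approach, before worrying about throughput or progress reporting, is roughly this (volume group, LV names, address, and port are all hypothetical):

# on the receiving machine, after creating a same-sized LV:
nc -l -p 2222 | dd of=/dev/vg0/target bs=1M
# on the sending machine:
dd if=/dev/vg0/source bs=1M | nc 192.168.1.2 2222
# to check on progress, GNU dd prints I/O statistics when it gets SIGUSR1:
kill -USR1 $(pidof dd)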

This entry has been truncated; read the full entry.

Syndicated 2009-12-21 06:21:00 from Weblogs for dkg

dealing with entropy on a virtual machine

I've been using virtual machines (KVM, these days) as isolated environments to do things like build packages as root. Unfortunately, some of these activities require decent-sized chunks of random data (pulled from /dev/random). But /dev/random pulls from the kernel's entropy pool, which in turn is replenished from "hardware" events. A virtual machine has no actual hardware, though, and if it is only doing isolated package builds, there is very little activity to feed the kernel's entropy pool. So the builds and test suites that rely on this randomness all hang for a long long time. :(
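You can watch the pool starve in real time on the guest:

cat /proc/sys/kernel/random/entropy_avail

On an idle VM that number tends to hover near zero, and reads from /dev/random block until it climbs.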

My current way to get around this is to replace /dev/random with the /dev/urandom device, which does not block if the entropy pool is depleted:

# create a new device node with /dev/urandom's major/minor numbers (1, 9)
mknod /dev/newrandom c 1 9
# give it the same permissions as the existing /dev/random
chmod --reference=/dev/random /dev/newrandom
# atomically swap it into place
mv -f /dev/newrandom /dev/random

This has the consequence that the "randomness" these commands use doesn't have as much "real" entropy, though some operating systems (like FreeBSD) have a non-blocking /dev/random by default (and it's also questionable what "real" entropy means for a virtual machine in the first place).

I'm also using cowbuilder within these VMs to do package builds. But cowbuilder has its own /dev tree, with its own device nodes, so this needs to be fixed too. After you have successfully run cowbuilder --create, you need to modify the random device within the cowbuilder chroot as well:

# the same substitution, applied to the device nodes in the cowbuilder chroot
mknod /var/cache/pbuilder/base.cow/dev/newrandom c 1 9
chmod --reference=/var/cache/pbuilder/base.cow/dev/random /var/cache/pbuilder/base.cow/dev/newrandom
mv -f /var/cache/pbuilder/base.cow/dev/newrandom /var/cache/pbuilder/base.cow/dev/random

Hopefully this will be useful for other people using cowbuilder (or other build strategies) on isolated virtual machines. If you've worked around this problem in other ways (or if there's a security concern about this approach), i'd be happy to hear about the details.

Syndicated 2009-12-12 18:42:00 from Weblogs for dkg

Revoking the Ubuntu Community Code of Conduct

I've just revoked my signature over the Ubuntu Code of Conduct 1.0.1. I did this because Ubuntu's CoC (perhaps jokingly?) singles out Mark Shuttleworth as someone who should be held to a super-human standard (as pointed out recently by Rhonda, as well as earlier in ubuntu bug 53848).

I think that the CoC is a good document, and good guidelines in general for reasonable participation in online communities. When i originally signed the document, i thought the Shuttleworth-exceptionalism was odd, but decided i'd be willing to hold him to a higher standard than the rest of the community, if he wanted me to. That is, i figured his position as project leader meant that he could have made the CoC different than it is, thus he was (perhaps indirectly) asking me to hold him to a higher standard.

Why does this matter to me now? Shuttleworth has apparently signed the Ubuntu Code of Conduct, but as i wrote about earlier, his recent sexist comments at LinuxCon were a Bad Thing for the community, and his apparent lack of an apology or open discussion with the community concerned about it was even worse.

So i'm asking Mark Shuttleworth to abide by the following points in the Code of Conduct that he has signed:

  • Be considerate
  • Be respectful [...] It's important to remember that a community where people feel uncomfortable or threatened is not a productive one.
  • The important goal is not to avoid disagreements or differing views but to resolve them constructively. You should turn to the community and to the community process to seek advice and to resolve disagreements.
  • When you are unsure, ask for help. Nobody knows everything, and nobody is expected to be perfect in the Ubuntu community.

I've signed a revised version of the Ubuntu Code of Conduct 1.0.1 (with the Shuttleworth-exceptionalism clause removed), to reaffirm my commitment to these principles, and to acknowledge that, yes, the SABDFL can make a mistake, and to encourage him to address his mistakes in a fashion befitting a mature participant in this community we both care about.

UPDATE: It seems that Mako and Daniel Holbach have recently revised the CoC, resulting in a new version (1.1) which has just been approved by the Ubuntu Community Council. The new version 1.1 looks good to me (i like its broadening of scope beyond developers, and its lack of superhuman claims for Shuttleworth), and when it is available on Launchpad, i'll most likely sign it there. Thanks to the two of them for their work! I hope Shuttleworth will consider abiding by this new version.

Syndicated 2009-10-20 18:24:00 from Weblogs for dkg

sexist behavior in the free software community

So not even 3 months out from RMS's sexist Gran Canaria virgins remarks, we have another powerful leader in the Free Software Community making sexist remarks in a talk to developers (this time, it's Mark Shuttleworth). It's a shame that these two people have said stupid things that hurt their causes and their communities by perpetuating an unfriendly environment for women. And it's a bigger shame that neither leader appears to care enough about their community to issue a sincere public apology for their screwup (if i'm wrong about this, please point me to the apology — i've looked).

These guys are in a situation which is nowhere near as hard as writing good software or managing complex technical projects: if you make a stupid mistake, own up to it, apologize, and try not to make similar mistakes in the future.

Perhaps worst of all is the remarkable number of unreasonably fucked-up comments on the blog posts discussing these unfortunate events. If you're in the habit of defending remarks like those made by RMS and Shuttleworth on the 'net, please take a minute and ask yourself a few questions:

  • Do you think that the Free Software community today is overwhelmingly male (even by the standards of the male-dominated IT industry)? If not, thanks for playing. You are living in a fantasy world. Try some basic research.
  • Do you think that the significant under-representation of women is a problem? Let's say there are about three answers here:
    Gender disparity in Free Software is a Good Thing
    If this is your position, please announce it explicitly so we all know. Just so you know: I don't want to be part of your all-boys club. You can stop these questions now, sorry to have bothered you.
    I don't really care about gender disparity in Free Software one way or the other
    You may not care; but a significant subset of the Free Software community thinks that it's a problem and would like to address it. Please keep this in mind as you go to the next question. Also, have you thought much about the idea of privilege and how it might apply to your situation?
    I think gender disparity in Free Software is probably a Bad Thing
    Great, glad we agree on that.
  • People in our community have a problem with the current state of affairs, and point out some specific behavior that makes the bad situation worse. What should you do?
    Shout them down or attack them
    Gee, it sure is upsetting to hear people talk about problems in the community. It's almost as upsetting as getting bug reports about problems in our software. Shall we shout them down too? Maybe we should attack them! Condescension is also great. Those silly bug reporters!
    Argue them out of having a problem
    This just doesn't work very well. Someone has already volunteered to tell you about a problem that you hadn't noticed. You are unlikely to convince them that they were imagining things.
    Take them seriously
    Yes! It seems to be surprising to some commentators that this is not a witch hunt or a lynch mob (interesting that these terms, often used in defense of white men, connote specific historical traditions of the exercise of male privilege and white privilege, respectively). Well-meaning people have respectfully raised good-faith concerns about the state of our community, and made very simple suggestions about what to do to make the community more welcoming to women: lay off the sexist remarks at conferences, apologize when some nonsense does slip through — we're all struggling with various kinds of internalized oppression, you won't be perfect — and try not to do it again. Why not listen to these people? Why not support them?

Please read the Geek Feminism wiki and blog. Even if you don't agree with everything on those sites (hey, it's a wiki! and a blog! you don't have to agree with everything!), people are at least trying to address the problem of sexism in our community there. Engage constructively and don't hide or ignore problems!

Syndicated 2009-10-01 20:32:00 from Weblogs for dkg
