Older blog entries for dkg (starting at number 52)

Please use unambiguous tag names in your DVCS

One of the nice features of a distributed version control system (DVCS) like git is the ability to tag specific states of your project, and to cryptographically sign your tags.

Many projects use simple tag names with the version string like "0.35". This is a plea to ensure that the tags you make explicitly reference the project you're working on. For example, if you are releasing version 0.35 of project Foo, please make your tag "foo_0.35" for security and for future disambiguation.
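
Concretely, that's the difference between these two invocations (the project name and version here are just the example above -- adjust to your own):
# what to avoid: a signed tag carrying only a bare version string
#   git tag -s 0.35 -m 'release 0.35'
# what to do instead: the signed tag names the project too
git tag -s foo_0.35 -m 'foo 0.35 release'
# verification works exactly as before:
git tag -v foo_0.35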

There is more than one reason to care about unambiguous tags. I'll give two reasons below, but they come from the same fundamental observation: All git repositories are, in some sense, the same git repository; some just store different commits than others.

It's entirely possible to merge two disjoint git repositories, and to have two unassociated threads of development in the same repo. You can also merge the threads of development later if the projects converge.

Avoid tag replay attacks
Let's assume Alice works on two projects, Foo and Bar. She wraps up work on a new version of Foo, and creates and signs a simple tag "0.32". She publishes this tag to the Foo project's public git repo.

Bob is trying to attack the Bar project, which is currently at version 0.31. Bob can actually merge Alice's work on Foo into the Bar repository, including Alice's new tag.
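
Concretely, that can be as little as the following (the repository URL is invented for the example):
# from a clone of the Bar repository:
git remote add foo git://git.example.org/foo.git
# fetch Foo's history and tags into Bar's repo:
git fetch foo 'refs/tags/*:refs/tags/*'
# Alice's signed tag "0.32" (and the Foo history it points to)
# now lives in the Bar repository.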

Now it looks like there is a tag for version 0.32 of project Bar, and it has been cryptographically signed by Alice, a known active developer!

If she had named her tag "foo_0.32" (and if all Bar tags were of the form "bar_X.XX"), it would be clear that this tag did not belong to the Bar project.

Be able to merge projects and keep full history
Consider two related projects with separate development histories that later decide to merge (e.g. a library and its most important downstream application). If they merge their git repos, but both projects have a tag "0.1", then one tag must be removed to make room for the other.

If all tags were unambiguously named, the two repos could be merged cleanly without discarding or rewriting history.

I noticed this because of a general principle i try to keep in mind: when making a cryptographic signature, ensure that the thing you are signing is context-independent -- that is, that it cannot be easily misinterpreted when placed in a different context. For example, do not sign e-mails that just say "I approve" -- say specifically what you are approving of in the body of the signed mail. Otherwise, someone could re-send your "I approve" e-mail In-Reply-To a topic that you do not really approve of.

By extension, signing a simple tag is like saying "the source tree in this specific state (and with this specific history) is version 0.3". A sensitive, intelligent human skilled in the art of reading source code could probably figure out, from a close inspection of that state and history, which project it is. But it's much less ambiguous to say "the source tree in this specific state (and with this specific history) is version 0.3 of project Foo".

Once you start using unambiguous tag names, you make it safe for people to set up simple tools that do automated scans of your repository and can take action when a new signed tag appears. And you respect (and help preserve) the history of any future project that gets merged with yours.

Tags: git, security

Syndicated 2011-05-06 21:05:00 from Weblogs for dkg

test-driven development: refactoring, difficulties

liw recently wrote about test-driven development:

This all sounds like a lot of bureaucratic nonsense, but what I get out of this is this: once all tests pass, I have a strong confidence that the software works. As soon as I've added all the features I want to have for a release, I can push a button to push it out
To my mind, this is only one of the benefits. He doesn't describe another major benefit, which is the confidence with which you can take on re-factoring in projects with well-developed test infrastructure.

If your project has no test infrastructure at all, and you make a deep and/or potentially invasive change, you might well produce something that is heinously broken for users who have a different pattern of using the tool than you do.

But if you have a well-developed, reasonable test suite with fairly wide coverage, you can make a deep or invasive change and be confident that -- if the tests all pass -- the stuff you release isn't going to be too horrific. And if you do break something with a change the test suite didn't cover, that's an indication that the test suite is lacking. Hopefully, you can then factor the problem report into a test of its own, so that the behavior can't regress unnoticed in the future.
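
As a trivial sketch of what that can look like for a command-line tool (the tool name and the bug here are invented):
#!/bin/sh
# regression test for (hypothetical) bug #1234:
# "frobnicate --quiet still prints a banner on stderr"
set -e
stderr_output=$(frobnicate --quiet 2>&1 >/dev/null)
if [ -n "$stderr_output" ]; then
    echo "FAIL: --quiet produced output on stderr: $stderr_output" >&2
    exit 1
fi
echo "ok - --quiet is quiet"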

The upshot of more-confident re-factoring is that your development can be much bolder, you can roll out new features more quickly, and you can spend less time agonizing about whether you've got the various abstraction layers exactly right the first time through. These are all good things (though i do think the agony of abstraction perfectionism is well-warranted in some contexts, like API definitions, and wouldn't want test-driven development to make people give up on that necessary task).

when testing can't cover everything

Some things are just hard to test well. For example, if you have 20 different boolean options to a command-line tool, you can't realistically test every combination; that alone would be over a million (2^20) test cases.

User experience is also notoriously difficult to test, as are tools that rely heavily on network interaction, which can be flaky and unpredictable.

other downsides

Test suites themselves require maintenance, as the components they rely on can change over time. More than once, i've had a test suite failure that was really a failure of the test infrastructure, not the code itself. But in those same projects, that's usually followed or preceded by a test suite failure that picks out a particularly nasty or subtle bug in the tested code that might have persisted for quite a while unnoticed. So the test suite does in some sense create more work all around.

more and better testing is good for Debian

Even when we know that coverage isn't perfect, and even with its additional overhead, well-integrated testing (at the unit level and more generally) is worth the tradeoffs. It's worth doing because of the robustness and guarantees that we can give each other with regularly-tested code. It gives us a more stable foundation, which, perhaps surprisingly, also gives us more flexibility going forward. This is good for free software in general. It also helps us find our bugs before our users do, so it's better for our users. So more and better testing directly supports the two main priorities outlined in the Debian Social Contract.

Syndicated 2011-05-04 23:11:00 from Weblogs for dkg

USAA Deposit@Home: bad engineering and terrible UX

I use USAA for some of my finances. They specialize in remote banking (i've never been in a physical branch). Sadly, they can still be pretty clueless about how to use the web properly. My latest frustration with them was trying to use their Deposit@Home service, where you scan your checks and upload them.

No problem, right? I've got a scanner and a web browser and i know how to use them both. Ha ha. Upon first connecting, i'm rejected, and i find the absurd System Requirements -- only Windows and Mac, and only certain versions of certain browsers. You also need Sun's Java plugin, apparently.

Deliberately naïve, i call their helpdesk and ask them if they could just give me a link to let me upload my scanned checks. They tell me that they want 200dpi images, and then give an absurd runaround that includes references to the Gramm-Leach-Bliley Act as the reason they need to limit the system to Windows or Mac, and that security is the reason they need to control the scanner directly (apparently your browser can control your local scanner on Windows? yikes). But they let slip that Mac users don't have the scanner controlled directly, and can just upload images (apparently the federal law doesn't cover them or something). Preposterous silliness.

The Workaround

Of course, it turns out you can get it working on Debian GNU/Linux, mainly by telling them what they want to hear ("yes sir, i'm running Mac OS"), but you'll also have to run Sun's Java (non-free software) to do it, since their Java uploader fails with nasty errors when using icedtea6-plugin.

I set up a dedicated system user for this goofiness, since i'm going to be running apparently untrustworthy applets on top of non-free software. I run a separate instance of iceweasel as that user; all configuration described here is for that user's home directory. If you do this yourself, you'll need to decide how much isolation you want.

So i have the choice of installing sun-java6-plugin from non-free and having the plugin installed for all web browsers, or just doing a per-user install of java for my dedicated user (and avoiding the plugin for my normal user). I opted for the latter. As the dedicated user, I fetched the self-extracting variant from java.com, unpacked it, and added it to the iceweasel plugins:

# make the self-extracting JRE installer executable
chmod a+x ~/Download/jre-6u25-linux-i586.bin
# unpack it under ~/lib, and link the browser plugin into the mozilla plugins dir
mkdir -p ~/lib ~/.mozilla/plugins
cd ~/lib
~/Download/jre-6u25-linux-i586.bin
ln -s ~/lib/jre*/lib/*/libnpjp2.so ~/.mozilla/plugins/
Then i closed iceweasel and restarted it. In the relaunched iceweasel session, I told Iceweasel 4 to pretend that it was actually Firefox 3.6 on a Mac. I did this by going to about:config (checking the box that says i know what i'm doing), right-clicking, and choosing "new >> string". The new variable name is general.useragent.override, and i found that setting it to the following (insane, false) value worked for me:
Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.9.2.3) Gecko/20060426 Firefox/3.5.18
Note that this configuration might make hotmail think that you are a mobile device. :P If you try to use this browser profile for anything other than visiting USAA, you might want to remove this setting, or install xul-ext-useragentswitcher to be able to set it more easily than using about:config.
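
If you're scripting the setup of this dedicated user, the same override can be pre-seeded as a user.js pref instead of clicking through about:config -- substitute the profile directory iceweasel actually created for the placeholder below:
# "xxxxxxxx.default" is a placeholder for your real profile directory
cat >> ~/.mozilla/firefox/xxxxxxxx.default/user.js <<'EOF'
user_pref("general.useragent.override", "Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.9.2.3) Gecko/20060426 Firefox/3.5.18");
EOF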

Once these changes were made, I was able to log into USAA and use the Deposit@Home service to upload my 200dpi grayscale scans. I guess they think i'm a Mac user now.

The Followup

After completing the upload, I wrote them this review (i doubt they'll post it):
The Deposit@Home service has great potential. Unfortunately, it currently appears to be overengineered and unnecessarily restrictive.

The service requires two scans (front and back) of the deposited check, at 200dpi, in grayscale or black-and-white, in jpeg format, reasonably-cropped. The simplest way to do this would be to show some examples of good scans and bad scans, and provide two file upload forms.

Once the user uploaded their images, the web site could run its own verification, and display them back for the user to confirm, optionally using a simple javascript-based image-cropper if any image seems wrong-sized. This would work fine with any reasonable browser on any OS.

Instead, Deposit@Home requires the user to present a User-Agent header claiming to be from specific versions of Mac or Windows, running certain (older) versions of certain browsers, and requires the use of Sun's Java plugin.

Entirely unnecessary system requirements to do a simple task. Disappointing. :(

Acknowledgements

I found good background for this approach on the ubuntu forums.

The Takeaway

I continue to be frustrated and annoyed by organizations that haven't yet embraced the benefits of the open web. Clearly, USAA has spent a lot of money engineering what they think is a certain experience for their users. However, they didn't design with standard web browsers in mind, so they appear to have boxed themselves into a corner where they think they have to test and approve of the entire software stack running on the end-user's machine.

This is not only foolish -- it's impossible. When you're designing a web-based application, just design it for the web. Keep it simple. If you want to offer some snazzy java-hooked-into-your-scanner insanity, i will only have a mild objection: it seems like a waste of time and engineering effort. My objection is much stronger if your snazzy/incompatible absurdity is the only way to use your service. A simple, web-based, browser-agnostic interface should be available to all your clients.

Even more aggravating is the claim that they don't think they should engineer for everyone. I was told during the runaround that they would only support Linux if 4% of their users were using Linux (which they don't think is the case at the moment -- if you are a USAA customer and you use something other than Mac or Windows, please tell them so). I tried to tell them that I wasn't asking for Linux support; i was asking for everyone support. If you just do generic engineering in the first place, there's no special-casing of other users needed, and no extra expense. But they couldn't understand.

And now, since i'll need to lie to them in my User Agent string every time i want to deposit a check online, those visits won't even show up in their logs to be counted. "Our web site deliberately disables itself for $foo users; we haven't written it for them; but that's OK, we don't have any $foo users anyway" is a nasty self-fulfilling prophecy. Why would you do that?

Syndicated 2011-04-28 06:59:00 from Weblogs for dkg

The bleeding edge: systemd

Curious about these shiny new things i keep hearing about, i set up a test desktop system using systemd as the init system (yes, that means using systemd-sysv in place of sysvinit -- so i had to remove an essential package for this to work).

A system-crippling bug was naturally the result of living on the bleeding edge, but fortunately it was resolved with a trivial patch.

In all, i'm pretty happy with some of the things that systemd seems to get right. For example, native supervision of daemon processes, clean process states, elimination of initscript copy-paste-isms, and socket-based service initiation are all obvious, marked improvements over the existing sysvinit legacy cruft.
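
For anyone who hasn't played with it: socket-based initiation means PID 1 itself holds a service's listening socket and only launches the daemon when a client actually connects. A rough, inetd-style sketch -- the service name, port, and paths are all invented here -- would be something like:
# systemd holds the socket; nothing runs until a client connects
cat > /etc/systemd/system/example.socket <<'EOF'
[Socket]
ListenStream=7777
Accept=yes

[Install]
WantedBy=sockets.target
EOF

# one instance of the (invented) daemon per connection, inetd-style
cat > /etc/systemd/system/example@.service <<'EOF'
[Service]
ExecStart=/usr/sbin/exampled
StandardInput=socket
EOF

systemctl daemon-reload
systemctl start example.socket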

But i'm a bit concerned about other parts. For example, all the above-mentioned features fit really well within a tightly-tuned, well-managed server. But systemd also appears to rely heavily on complex userland systems like dbus and policykit that would be much more at home on a typical desktop machine. I've never seen a well-managed server installation that warranted either policykit or dbus. Also, as the bug i ran into shows, when PID 1 aborts due to a silly assertion, your system is well and truly horked. Shouldn't a lot more attention to detail be happening? I'd think that recovering gracefully from failed assertions would be one of the first things to target for a would-be PID 1.

I'm also concerned about the Linux-centrism of systemd. I understand that features like cgroups and reliance on the spiffiness of inotify are part of the appeal, but i also really don't want to see us stuck with only One Single Kernel as an option. The kFreeBSD folks (and the HURD folks) have done a lot of work to get us close to having some level of choice at this critical layer of infrastructure. It'd be good to see that possibility realized, to help us avoid the creeping monoculture. I worry that systemd's over-reliance on Linux-specific features is heading in the wrong direction.

So my question is: why is this all being presented as a package deal? I'd be pretty happy if i could get just the "server-side" features without incurring the dbus/policykit/etc bloat. I already run servers with runit as pid 1 -- they're lean and quite nice, but runit doesn't have systemd's socket-based initiation (or the level of heavyweight backing that systemd seems to have picked up, for that matter).

I understand that Lennart is resistant to UNIX's traditional "do one thing; do it well" philosophy. I can understand some of his reasoning, but i think he might be doing his work and his tools a disservice by taking it this far. Wouldn't systemd be better if it was clearer how to take advantage of parts of it without having to subscribe to the entire thing?

Of course, i might be missing some nice ways that systemd can be effectively tuned and pared down. But i've read his blog posts about systemd and i haven't seen how to get some of the nice features without the parts i don't want. I'd love to be pointed to some explanations that show me how i'm wrong :)

Syndicated 2011-04-27 08:07:00 from Weblogs for dkg

EDAC i5000 non-fatal errors

I've got a Debian GNU/Linux lenny installation (2.6.26-2-vserver-amd64 kernel) running on a Dell Poweredge 2950 with BIOS 2.0.1 (2007-10-27).

It has two Intel(R) Xeon(R) CPU 5160 @ 3.00GHz processors (according to /proc/cpuinfo), 8 1GiB 667MHz DDR2 ECC modules (part number HYMP512F72CP8N3-Y5, according to dmidecode), and an Intel Corporation 5000X Chipset Memory Controller Hub (rev 12) (according to lspci).

The machine has been running stably for many months.

On the morning of March 31st, i started getting the following messages from the kernel, on the order of one pair of lines every 3 seconds:

Mar 31 07:04:38 zamboni kernel: [16883514.141275] EDAC i5000 MC0: NON-FATAL ERRORS Found!!! 1st NON-FATAL Err Reg= 0x800
Mar 31 07:04:38 zamboni kernel: [16883514.141278] EDAC i5000: 	NON-Retry  Errors, bits= 0x800
A bit of digging turned up a redhat bug report that seems to suggest that these warnings are just noise and should be ignorable. Another link suggests it's a conflict with IPMI, though i don't think this model actually has an IPMI subsystem.

However, i also notice from munin logs that at the same time the error messages started, the machine exhibited a marked change in CPU activity (including in-kernel activity) and local timer interrupts: [Individual interrupts - by month]

[CPU Usage - by month]

I also note that more rescheduling interrupts started happening, and fewer megasas interrupts at about the same time. I'm not sure what this means.

A review of other logs and graphs on the system turns up no other evidence of interaction that might cause this kind of elevated activity.

One thought was that the elevated activity was just due to writing out a bunch more logs. So i tried removing the i5000_edac module just to keep dmesg and /var/log/kern.log cleaner. Leaving that turned off doesn't lower the CPU utilization or change the interrupts, though.
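
For the record, removing it was just something like the following (the blacklist filename is arbitrary):
# unload the module now...
modprobe -r i5000_edac
# ...and keep it from loading again at the next boot:
echo 'blacklist i5000_edac' >> /etc/modprobe.d/i5000_edac-blacklist.conf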

Any suggestions on what might be going on, or further diagnostics i should run? The machine is in production, and I'd really rather not take it down for an extended period to do a lengthy memory test. But i also don't want to see this kind of extra CPU usage (more than double the machine's baseline).

Tags: i5000_edac, troubleshooting

Syndicated 2011-04-09 00:00:00 from Weblogs for dkg

Debian on Thecus N8800Pro

I recently set up debian on a Thecus N8800Pro. It seems to be a decent rackmount 2U chassis with 8 3.5" hot-swap SATA drive bays, dual gigabit NICs, a standard DB9 RS-232 serial port, one eSATA port, a bunch of USB 2.0 connectors, and dual power supplies. I'm happy to be able to report that Thecus appears to be attempting to fulfill their obligations under the GPL with this model (i'm fetching their latest source tarball now to have a look).

Internally, it's an Intel Core 2 Duo processor, two DIMMs of DDR RAM, and a PCIe slot for extensions if you like. It has two internally-attached 128MiB SSDs that it wants to boot off of.

The downsides as i see them:

BIOS
The BIOS only runs over VGA, which is not easy to access. I wish it ran over the serial port. It's also not clear how one would find an upgrade to the BIOS. dmidecode reports the BIOS as:
        Vendor: Phoenix Technologies, LTD
        Version: 6.00 PG
        Release Date: 02/24/2009
RAM
The mainboard has only two DIMM sockets. It came pre-populated with two 2GiB DDR DIMMs. memtest86+ reports those DIMMs clearly, but sees only 3070MiB in aggregate. The linux kernel also sees ~3GiB of RAM, despite dmidecode and lshw reporting that the two DIMM modules are each only 1GiB in size. I'm assuming this is a BIOS problem. Anyone know how to get access to the full 4GiB? Should i be reporting bugs on any of these packages?
SSDs
The internal SSDs are absurdly small and very slow for both reading and writing. This isn't a big deal because i'm basically only using them for the bootloader, kernel, and initramfs. But i'm surprised that they could even find 128MiB devices in these days of $10 1GiB flash units.
processor
/proc/cpuinfo reports two cores of an Intel(R) Core(TM)2 Duo CPU T5500 @ 1.66GHz, each with ~3333 bogomips, and apparently without hardware virtualization support. It's not clear to me whether the lack of virtualization support is something that could be fixed with a BIOS upgrade.

I used Andy Millar's nice description of how to get access to the BIOS -- I found i didn't need a hacksaw to modify the VGA cable i had, just pliers to remove the outer metal shield on the side of the connector that i plugged into the open socket.

lshw sees four SiI 3132 Serial ATA Raid II Controller PCI devices, each of which appears to support two SATA ports. I'm currently using only four of the 8 SATA bays, so i've tried to distribute the disks so that each of them is attached to an independent PCI device (instead of having two disks on one PCI device, and another PCI device idle). I don't know if this makes a difference, though, in the RAID10 configuration i'm using.
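
For reference, a quick way to see which PCI device each disk hangs off of is its sysfs path -- the controller's PCI address shows up in the resolved link:
for d in /sys/block/sd?; do
    echo "$d:"
    readlink -f "$d/device"
done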

There's also a neat 2-line character-based LCD display on the front panel which apparently can be driven by talking to a serial port, though i haven't tried. Apparently there are also LEDs that might be controllable directly from the kernel via ICH7 and GPIO, but i haven't tried that yet either.

Any suggestions on how to think about or proceed with a BIOS upgrade to get access to the full 4GiB of RAM, BIOS-over-serial, and/or enable hardware virtualization?

Tags: n8800pro, thecus

Syndicated 2011-04-05 20:07:00 from Weblogs for dkg

auto-built debirf images

jrollins and i recently did a bunch of cleanup work on debirf, with the result that debirf 0.30 can now build all the shipped example profiles without error (well, as long as debootstrap --variant=fakechroot is working properly -- apparently that's not the case for fakechroot 2.9 in squeeze right now, which is why i've uploaded a backport of 2.14).

To try to avoid getting into a broken state again, we set up an autobuilder to create the images from the three example profiles (minimal, rescue, and xkiosk) for amd64 systems. The logs for these builds are published (with changes) nightly at:

git://debirf.cmrg.net/debirf-autobuilder-logs
But even better, we are also publishing the auto-generated debirf images themselves. So, for example, if you've got an amd64-capable machine with a decent amount of RAM (512MiB is easily enough), you can download a rescue kernel and image into /boot/debirf/, add a stanza to your bootloader, and be able to reboot to it cleanly, without having to sort out the debirf image creation process yourself.
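
For grub2, the stanza can just go at the end of /etc/grub.d/40_custom; the kernel and image filenames below are placeholders -- use whatever you actually downloaded into /boot/debirf/:
cat >> /etc/grub.d/40_custom <<'EOF'
menuentry "debirf rescue" {
        # adjust paths: if /boot is a separate partition, drop the /boot prefix
        linux /boot/debirf/vmlinuz
        initrd /boot/debirf/debirf-rescue.cgz
}
EOF
update-grub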

We're also providing ISOs so people who still use optical media don't have to format their own.

Please be sure to verify the checksums of the files you download. The checksums themselves are signed by the OpenPGP key for Debirf Autobuilder <debirf@cmrg.net>, which i've certified and published to the keyserver network.
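
The checksum filenames and hash below may differ from what's actually published alongside the images, but the dance is roughly:
# verify the OpenPGP signature on the checksum file...
gpg --verify SHA256SUMS.asc
# ...then compare the downloaded kernel/image/ISO against the listed sums:
sha256sum -c SHA256SUMS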

What's next? It would be nice to have auto-built images for i386 and other architectures. And if someone has a good idea for a new example profile that we should also be auto-building, please submit a bug to the BTS so we can try to sort it out.

Tags: debirf

Syndicated 2011-03-29 08:00:00 from Weblogs for dkg

CryptLib and the OpenSSL License

I spent part of today looking at packaging Peter Gutmann's CryptLib for debian. My conclusion: it may not be worth my time.

One of the main reasons i wanted to package it for debian is because i would like to see more OpenPGP-compliant free software available to debian users and developers, particularly if the code is GPL-compatible.

Cryptlib makes it quite clear that it intends to be GPL-compatible:

cryptlib is distributed under a dual license that allows free, open-source use under a GPL-compatible license and closed-source use under a standard commercial license.
However, a significant portion of the cryptlib codebase (particularly within the bn/ and crypt/ directories) appears to derive directly from OpenSSL, and it retains Eric Young's copyright and licensing. This licensing retains the so-called "advertising clause" that is generally acknowledged to be deliberately incompatible with the GPL. (A common counterargument for this incompatibility is that OpenSSL should be considered a "System Library" for GPL'ed tools that link against it; whether or not you believe this for tools linked against OpenSSL, this counterargument clearly does not hold for a project that embeds and ships OpenSSL code directly, as CryptLib does)

This does not mean that CryptLib is not free software (it is!), nor does it mean that you cannot link it against GPL'ed code (you can!). However, you probably can't distribute the results of linking CryptLib against any GPL'ed code, because the GPL is incompatible with the OpenSSL license.

The un-distributability of derived GPL-covered works makes the CryptLib package much less appealing to me, since i want to be able to write distributable code that links against useful libraries, and many of the libraries i care about happen to be under the GPL.

I also note that Peter Gutmann and CryptLib Security Software (apparently some sort of company set up to distribute CryptLib) may be in violation of Eric Young's License, which states:

 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *    "This product includes cryptographic software written by
 *     Eric Young (eay@cryptsoft.com)"
 *    The word 'cryptographic' can be left out if the rouines from the library
 *    being used are not cryptographic related :-).
A Google search for Eric Young's name on cryptlib.com yields no hits. I do find his name on page 340 of the Cryptlib manual, but that hardly seems like it covers "All advertising materials mentioning features or use of this software". I suppose it's also possible that CryptLib has negotiated an alternate license with Eric Young that I simply don't know about.

In summary: i think CryptLib could go into debian, but its incorporation of OpenSSL-licensed code makes it less useful than i was hoping it would be.

I have a few other concerns about the package after looking it over in more detail. I suspect these are fixable one way or another, but i haven't sorted them out yet. I'm not sure i'll spend the time to do so based on the licensing issues i sorted out above.

But in case anyone feels inspired, the other concerns i see are:

embedded code copies
In addition to embedded (and slightly-modified) copies of bn/ and crypt/ from OpenSSL, CryptLib contains an embedded copy of zlib. Debian has good reasons for wanting to avoid embedded code copies. I've got CryptLib building against the system copy of zlib, but the bits copied from OpenSSL seem to be more heavily patched, which might complicate relying on the system version.
executable stack
lintian reports shlib-with-executable-stack for the library after it is built. I don't fully understand what this means or how to fix it, but i suspect it derives from the following set of object files that also have an executable stack:
0 dkg@pip:~/src/cryptlib/cryptlib$ for foo in $(find . -iname '*.o'); do
>  if ! readelf -S "$foo" | grep -q .note.GNU-stack ; then
>   echo "$foo"
>  fi
> done
./shared-obj/desenc.o
./shared-obj/rc5enc.o
./shared-obj/md5asm.o
./shared-obj/castenc.o
./shared-obj/bfenc.o
./shared-obj/rmdasm.o
./shared-obj/sha1asm.o
./shared-obj/bn_asm.o
./shared-obj/rc4enc.o
0 dkg@pip:~/src/cryptlib/cryptlib$ 
They probably all need to be fixed, unless there's a good reason for them to be that way (in which case they need to be documented).
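
If the issue is just that those assembly sources lack a .note.GNU-stack annotation, i believe the flag on the finished library can also be cleared after the fact with the execstack(8) utility, though annotating the .S files themselves is the cleaner fix. The shared-library filename below is just illustrative:
# query the flag ("X" in the output means an executable stack is requested):
execstack -q libcl.so.3.4.0
# clear it:
execstack -c libcl.so.3.4.0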
non-PIC code
lintian reports shlib-with-non-pic-code on the library after it is built. I don't fully understand what this means, or how to resolve it cleanly, but i imagine this is just a question of sorting out the details.
Anyone interested in using my packaging attempts as a jumping off point can start with:
git clone git://lair.fifthhorseman.net/~dkg/cryptlib
Oh, and please correct me about anything in this post; I'd be especially happy to hear that my analysis of the licensing issues above is wrong :)

Tags: cryptlib, gpl, licensing, openssl

Syndicated 2011-03-16 04:40:00 from Weblogs for dkg

PHP, MySQL, and SSL?

Is there a way to access a MySQL database over the network from PHP and be confident that

  • the connection is actually using SSL, and
  • the server's X.509 certificate was successfully verified?

As far as i can tell, when using the stock MySQL bindings, setting MYSQL_CLIENT_SSL is purely advisory (i.e. it won't fail if the server doesn't advertise SSL support). This means it won't defend against an active network attacker performing the equivalent of sslstrip.

Even if MYSQL_CLIENT_SSL were stronger than an advisory flag, i can't seem to come up with a way to tell PHP's basic MySQL bindings the equivalent of the --ssl-ca flag to the mysql client binary. Without being able to configure this, a "man-in-the-middle" could intercept the connection by offering their own certificate on their endpoint and otherwise relaying the traffic. A client that does not verify the server's identity would be none the wiser.
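
For comparison, the command-line client at least lets you express this, and lets you check afterward whether SSL is actually in use (the hostname and paths here are made up):
mysql --ssl-ca=/etc/mysql/cacert.pem -h db.example.net -u someuser -p \
      -e "SHOW STATUS LIKE 'Ssl_cipher';"
# an empty Ssl_cipher value means the connection is NOT actually encrypted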

One option to avoid a MITM attack would be for the server to require client-side certs via the REQUIRE option for a GRANT statement, but the basic MySQL bindings for php don't seem to support that either.

PHP's mysqli bindings (MySQL Improved, i think) feature a method called ssl_set() which appears to allow client-side certificate support. But its documentation isn't clear on how it handles an invalid/expired/revoked server certificate (let alone a server that announces that it doesn't support SSL), and it also mentions:

This function does nothing unless OpenSSL support is enabled.
Given that debian MySQL packages don't use OpenSSL because of licensing incompatibilities with the GPL, i'm left wondering if packages built against yaSSL support this feature. And i'm more than a little bit leery that i have no way of telling whether my configuration request succeeded, or whether this function just happily did nothing because the interpreter got re-built with the wrong flags. Shouldn't the function fail explicitly if it cannot meet the user's request?

Meanwhile, the PDO bindings for MySQL apparently don't support SSL connections at all.

What's going on here? Does no one use MySQL over the network via PHP? Given the number of LAMP-driven data centers, this seems pretty unlikely. Do PHP+MySQL users just not care about privacy or integrity of their data?

Or (please let this be the case) have i just somehow missed the obvious documentation?

Tags: mysql, php, question, security, ssl, tls

Syndicated 2011-03-07 06:02:00 from Weblogs for dkg

Turning my back on the IEEE

I used to maintain a membership with IEEE. I no longer do.

The latest nonsense to come from them is a change for the worse in their copyright assignment policy, which i thought was already problematic.

Other engineering societies (IETF, USENIX, etc) continue to do socially relevant, useful work without attempting to control copyright of work contributed to them. This just makes IEEE look like a power- and money-hungry organization, rather than a force for positive advancement of technology.

For shame, IEEE.

I will not consider reinstating my membership unless the organization changes their copyright assignment and publication policy to better reflect the spirit of scientific inquiry and technological advancement they should stand for.

Tags: freedom, ieee, policy

Syndicated 2011-03-01 18:32:00 from Weblogs for dkg
