glyph is currently certified at Master level.

Name: Glyph Lefkowitz
Member since: 2001-03-12 20:33:03
Last Login: 2015-07-07 07:26:30




I don't use my advogato diary here much. Mostly I just troll about the trust metric and flame other developers for being whiny. If you find this tiresome and want to see me saying something a little more intelligent, you can take a look at my vanity mailing list.


Articles Posted by glyph

Recent blog entries by glyph

Syndication: RSS 2.0

Stop Working So Hard

Recently, I saw this tweet where John Carmack posted to a thread on Hacker News about working hours. As this post propagated a good many bad ideas about working hours, particularly in the software industry, I of course had to reply. After some further back-and-forth on Twitter, Carmack followed up.

First off, thanks to Mr. Carmack for writing such a thorough reply in good faith. I suppose internet arguments have made me a bit cynical in that I didn't expect that. I still definitely don't agree, but I think there's a legitimate analysis of the available evidence there now, at least.

When trying to post this reply to HN, I was told that the comment was too long, and I suppose it is a bit long for a comment. So, without further ado, here are my further thoughts on working hours.

... if only the workers in Greece would ease up a bit, they would get the productivity of Germany. Would you make that statement?

Not as such, no. This is a hugely complex situation mixing together finance, culture, management, international politics, monetary policy, and a bunch of other things. That study, and most of the others I linked to, is interesting in that it confirms the general model of ability-to-work (i.e. "concentration" or "willpower") as a finite resource that you exhaust throughout the day; not in that "reduction in working hours" is a panacea solution. Average productivity-per-hour-worked would definitely go up.

However, I do believe (and now we are firmly off into interpretation-of-results territory, I have nothing empirical to offer you here) that if the average Greek worker were less stressed to the degree of the average German one, combining issues like both overwork and the presence of a constant catastrophic financial crisis in the news, yes; they'd achieve equivalent productivity.

Total net productivity per worker, discounting for any increases in errors and negative side effects, continues increasing well past 40 hours per week. ... Only when you are so broken down that even when you come back the following day your productivity per hour is significantly impaired, do you open up the possibility of actually reducing your net output.

The trouble here is that you really cannot discount for errors and negative side effects, especially in the long term.

First of all, the effects of overwork (and attendant problems, like sleep deprivation) are cumulative. While productivity on a given day increases past 40 hours per week, if you continue to work more, your productivity will continue to degrade. So, the case where "you come back the following day ... impaired" is pretty common... eventually.

Since none of this epidemiological work tracks individual performance longitudinally, there are few conclusive demonstrations of this fact, but lots of compelling indications; in the past, I've collected quantitative data on myself (and my reports, back when I used to be a manager) that strongly corroborates this hypothesis. So encouraging someone to work one sixty-hour week might be a completely reasonable trade-off to address a deadline; but building a culture where people are asked to work nights and weekends as a matter of course is inherently destructive. Once you get into the area where people are losing sleep (and for people with other responsibilities, it's not hard to get to that point), overwork starts impacting stuff like the ability to form long-term memories, which means that not only do you do less work, you also consistently improve less.

Furthermore, errors and negative side effects can have a disproportionate impact.

Let me narrow the field here to two professions I know a bit about and that are germane to this discussion: one, health care, which the original article here starts off by referencing, and two, software development, with which we are both familiar (since you already raised the Mythical Man Month).

In medicine, you can do a lot of valuable life-saving work in a continuous 100-hour shift. And in fact residents are often required to do so as a sort of professional hazing ritual. However, you can also make catastrophic mistakes that would cost a person their life; this happens routinely. Study after study confirms this, and minor reforms happen, but residents are still routinely abused and made to work in inhumane conditions that have catastrophic outcomes for their patients.

In software, defects can be extremely expensive to fix. Not only are they hard to fix, they can also be hard to detect. The phenomenon of the Net Negative Producing Programmer also indicates that not only can productivity drop to zero, it can drop below zero. On the anecdotal side, anyone who has had the unfortunate experience of cleaning up after a burnt-out co-worker can attest to this.

There are a great many tasks where inefficiency grows significantly with additional workers involved; the Mythical Man Month problem is real. In cases like these, you are better off with a smaller team of harder working people, even if their productivity-per-hour is somewhat lower.

The specific observation from the Mythical Man Month was that the number of communication links on a fully connected graph of employees increases geometrically whereas additional productivity (in the form of additional workers) increases linearly. If you have a well-designed organization, you can add people without requiring that your communication graph be fully connected.
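
To put rough numbers on "geometrically": a fully connected graph of n people has n(n-1)/2 communication links, so the links pile up much faster than the headcount. A quick illustrative sketch:

def links(n):
    # pairwise communication links in a fully connected team of n people
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, "people:", links(n), "links")
# 5 people: 10 links; 10: 45; 20: 190; 40: 780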

But of course, you can't always do that. And specifically you can't do that when a project is already late: you already figured out how the work is going to be divided. Brooks' Law is formulated as: "Adding manpower to a late software project makes it later." This is indubitable. But one of the other famous quotes from this book is "The bearing of a child takes nine months, no matter how many women are assigned."

The bearing of a child also takes nine months no matter how many hours a day the woman is assigned to work on it. So "in cases like these" my contention is that you are not "better off with ... harder working people": you're just screwed. Some projects are impossible and you are better off acknowledging the fact that you made unrealistic estimates and you are going to fail.

You called my post “so wrong, and so potentially destructive”, which leads me to believe that you hold an ideological position that the world would be better if people didn’t work as long. I don’t actually have a particularly strong position there; my point is purely about the effective output of an individual.

I do, in fact, hold such an ideological position, but I'd like to think that said position is strongly justified by the data available to me.

But, I suppose calling it "so potentially destructive" might have seemed glib, if you are really just looking at the microcosm of what one individual might do on one given week at work, and not at the broader cultural implications of this commentary. After all, as this discussion shows, if you are really restricting your commentary to a single person on a single work-week, the case is substantially more ambiguous. So let me explain why I believe it's harmful, as opposed to merely being incorrect.

First of all, the problem is that you can't actually ignore the broader cultural implications. This is Hacker News, and you are John Carmack; you are practically a cultural institution yourself, and by using this site you are posting directly into the broader cultural implications of the software industry.

Software development culture, especially in the USA, has a long-standing problem with chronic overwork. Startup developers in their metaphorical (and sometimes literal) garages are lionized and then eventually mythologized for spending so many hours on their programs. Anywhere that it is celebrated, this mythology rapidly metastasizes into a severe problem: the Death March.

Note that although the term "death march" is technically general to any project management, it applies "originally and especially in software development", because this problem is worse in the software industry (although it has been improving in recent years) than almost anywhere else.

So when John Carmack says on Hacker News that "the effective output of an individual" will tend to increase with hours worked, that sends a message to many young and impressionable software developers. This is the exact same phenomenon that makes pop-sci writing terrible: your statement may be, in some limited context, and under some tight constraints, empirically correct, but it doesn't matter because when you expand the parameters to the full spectrum of these people's careers, it's both totally false and also a reinforcement of an existing cognitive bias and cultural trope.

I can't remember the name of this cognitive bias (and my Google-fu is failing me), but I know it exists. Let me call it the "I'm fine" bias. I know it exists because I have a friend who had the opportunity to go on a flight with NASA (on the Vomit Comet), and one of the more memorable parts of this experience that he related to me was the hypoxia test. The test involved basic math and spatial reasoning skills, but that test wasn't the point: the real test was that they had to notice and indicate when the oxygen levels were dropping and indicate that to the proctor. Concentrating on the test, many people failed the first few times, because the "I'm fine" bias makes it very hard to notice that you are impaired.

This is true of people who are drunk, or people who are sleep deprived, too. Their abilities are quantifiably impaired, but they have to reach a pretty severe level of impairment before they notice.

So people who are overworked might feel generally bad but they don't notice their productivity dropping until they're way over the red line.

Combine this with the fact that most people, especially those already employed as developers, are actually quite hard-working and earnest (laziness is much more common as a rhetorical device than as an actual personality flaw), and you end up in a scenario where a good software development manager is responsible much more for telling people to slow down, to take breaks, and to be more realistic in their estimates than for telling them to speed up, work harder, and put in more hours.

The trouble is, this goes against the manager's instincts as well. When you're a manager you tend to think of things in terms of resources: hours worked, money to hire people, and so on. So there's a constant nagging temptation, as a manager, to encourage people to work more hours in a day, so you can get more output (hours worked) out of your input (hiring budget). The problem here is that while all hours are equal, some hours are more equal than others. Managers have to fight against their own sense that a few more worked hours will be fine, against their employees' tendency to overwork because they're not noticing their own burnout, and against upper management's tendency to demand more.

It is into this roiling stew of the relentless impulse to "work, work, work" that we are throwing our commentary about whether it's a good idea or not to work more hours in the week. The scales are weighted very heavily on one side already - which happens to be the wrong side in the first place - and while we've come back from the unethical and illegal brink we were at as an industry in the days of ea_spouse, software developers still generally work far too much.

If we were fighting an existential threat, say an asteroid that would hit the earth in a year, would you really tell everyone involved in the project that they should go home after 35 hours a week, because they are harming the project if they work longer?

Going back to my earlier explanation in this post about the cumulative impact of stress and sleep deprivation - if we were really fighting an existential threat, the equation changes somewhat. Specifically, the part of the equation where people can have meaningful downtime.

In such a situation, I would still want to make sure that people are as well-rested and as reasonably able to focus as they possibly can be. As you've already acknowledged, there are "increases in errors" when people are working too much, and we REALLY don't want the asteroid-targeting program that is going to blow apart the asteroid that will wipe out all life on earth to have "increased errors".

But there's also the problem that, faced with such an existential crisis, nobody is really going to be able to go home and enjoy a fine craft beer and spend some time playing with their kids and come back refreshed at 100% the next morning. They're going to be freaking out constantly about the comet, they're going to be losing sleep over that whether they're working or not. So, in such a situation, people should have the option to go home and relax if they're psychologically capable of doing so, but if the option for spending their time that makes them feel the most sane is working constantly and sleeping under their desk, well, that's the best one can do in that situation.

This metaphor is itself also misleading and out of place, though. There is also a strong cultural trend in software, especially in the startup ecosystem, to over-inflate the importance of what the company is doing - it is not "changing the world" to create a website for people to order room-service for their dogs - and thereby to catastrophize any threat to that goal. The vast majority of the time, it is inappropriate either to sacrifice -- or to ask someone else to sacrifice -- health and well-being for short-term gains. Remember, given the cumulative effects of overwork, that's all you even can get: short-term gains. This sacrifice often has a huge opportunity cost in other areas, as you can't focus on more important things that might come along later.

In other words, while the overtime situation is complex and delicate in the case of an impending asteroid impact, there's also the question of whether, at the beginning of Project Blow Up The Asteroid, I want everyone to be burnt out and overworked from their pet-hotel startup website. And in that case, I can say, unequivocally, no. I want them bright-eyed and bushy-tailed for what is sure to be a grueling project, no matter what the overtime policy is, that absolutely needs to happen. I want to make sure they didn't waste their youth and health on somebody else's stock valuation.

Syndicated 2016-01-17 04:06:00 from Deciphering Glyph

Taking Issue With Paul Graham’s Premises

Paul Graham has recently penned an essay on income inequality. Holly Wood wrote a pretty good critique of this piece, but it is addressing a huge amount of pre-existing context, as well as ignoring large chunks of the essay that nominally agree with her.

Eevee has already addressed how the “simplified version” didn’t substantively change anything that the longer one was saying, so I’m not going to touch on that much. However, it’s worth noting that the reason Paul Graham says he wrote that is that he thinks that “adventurous interpretations” are to blame for criticism of his “controversial” writing; in other words, that people are misinterpreting his argument because the conclusion is politically unacceptable.

Personally, I am deeply ambivalent about the political implications of his writing. I believe, strongly, in the power of markets to solve social problems that planning cannot. I don’t think “capitalist” is a slur. But, neither do I believe that markets are inherently good; capitalist economic theory assumes an environment of equal initial opportunity which demonstrably does not exist. I am, personally, very open to ideas like the counter-intuitive suggestion that economic inequality might not be such a bad thing, if the case were well-made. I say this because I want to be clear that what bothers me about Paul Graham’s writing is not its “controversial” content.

What bothers me about Paul Graham’s writing is that the reasoning is desperately sloppy. I sometimes mentor students on their writing, and if this mess were handed to me by one of my mentees, I would tell them to rewrite it from scratch. Although the “thanks” section at the end of each post on his blog implies that he gets editing feedback, it must be so uncritical of his assumptions as to be useless.

Initially, my entirely subjective impression is that Paul Graham is not a credible authority on the topic of income inequality. He doesn’t demonstrate any grasp of its causes, or indeed the substance of any proposed remedy. I would say that he is attacking a straw-man, but he doesn’t even bother to assemble the straw-man first, saying only:

... the thing that strikes me most about the conversations I overhear ...

What are these “conversations” he “overhears”? What remedies are they proposing which would target income inequality by eliminating any possible reward for entrepreneurship? Nothing he’s arguing against sounds like anything I’ve ever heard someone propose, and I spend a lot of time in the sort of conversation that he imagines overhearing.

His claim to credentials in this area doesn’t logically follow, either:

I’ve become an expert on how to increase economic inequality, and I’ve spent the past decade working hard to do it. ... In the real world you can create wealth as well as taking it from others.

This whole passage is intended to read logically as: “I increase economic inequality, which you might assume is bad, but it’s not so bad, because it creates wealth in the process!”.

Hopefully what PG has actually been trying to become an expert in is creating wealth (a net positive), not in increasing economic inequality (a negative, or, at best, neutral by-product of that process). If he is focused on creating wealth, as so much of the essay purports he is, then it does not necessarily follow that the startup founders will be getting richer than their customers.

Of course, many goods and services provide purely subjective utility to their consumers. But in a properly functioning market, the whole point of engaging in transactions is to improve efficiency.

To borrow from PG’s woodworker metaphor:

A woodworker creates wealth. He makes a chair, and you willingly give him money in return for it.

I might be buying that chair to simply appreciate its chair-ness and bask in the sublime beauty of its potential for being sat-in. But equally likely, I’m buying that chair for my office, where I will sit in it, and produce some value of my own while thusly seated. If the woodworker hadn’t created that chair for me, I’d have to do it myself, and it (presumably) would have been more expensive in terms of time and resources. Therefore, by producing the chair more efficiently, the woodworker would have increased my wealth as well as his own, by increasing the delta between my expenses (which include the chair) and my revenue (generated by tripping the light pythonic or whatever).

Note that “more efficient” doesn’t necessarily mean “lower cost”. Sitting in a chair is a substantial portion of my professional activity. A higher-quality chair that costs the same amount might improve the quality of my sitting experience, which might improve my own productivity at writing code, allowing me to make more income myself.

Even if the value of the chair is purely subjective, it is still an expense, and making it more efficient to make chairs would still increase my net worth.

Therefore, if startups really generated wealth so reliably, rather than simply providing a vehicle for transferring it, we would expect to see decreases in economic inequality, as everyone was able to make the most efficient use of their own time and resources, and was able to make commensurately more money.

... variation in productivity is accelerating ...

Counterpoint: no it isn’t. It’s not even clear that it’s increasing, let alone that its derivative is increasing. This doesn’t appear to be something that much data is collected on, and in the absence of any citation, I have to assume that it is a restatement of the not only false, but harmful, frequently debunked 10x programmer myth.

Most people who get rich tend to be fairly driven.

This sounds obvious: of course, if you “get” rich, you have to be doing something to “get” that way. First of all, though, this ignores people who simply are rich - those whose wealth comes from inheritance or rent-seeking - which is a pretty substantial number of rich people to discount.

But it is implicitly making a bolder claim: that people who get rich are more driven than other people; i.e. those who don’t get rich.

In my personal experience, the opposite is true. People who get rich do work hard, and are determined, but really poor people work a lot harder and are a lot more determined. A startup founder who is eating rice and beans to try to keep their burn rate low and their runway long may indeed be making sacrifices and working hard. They may be experiencing emotional turmoil. But implicitly, such a person always has the safety net of high-value skills they can use to go find another job if their attempt doesn’t work out.

But don’t take my word for it; think about it for yourself. Consider a single mother working three minimum-wage jobs and eating rice and beans because that’s the only way she can feed her children. Would you imagine she is less determined and will work less hard to keep her children alive than our earlier hypothetical startup founder would work to keep their valuation high?

One of the most important principles in Silicon Valley is that “you make what you measure.” It means that if you pick some number to focus on, it will tend to improve, but that you have to choose the right number, because only the one you choose will improve.

A closely-related principle from outside of Silicon Valley is Goodhart’s Law. It states, “When a measure becomes a target, it ceases to be a good measure”. If you pick some number to focus on, the number as measured will improve, but since it’s often cheaper to subvert the mechanisms for measuring than to actually make progress, the improvement will often be meaningless. It is a dire mistake to assume that, as long as you select the right metric in a political process, you can really improve it.

The Silicon Valley version - assuming the number will genuinely increase, and all you have to do is choose the right one - really only works when the things producing the numbers are computers, and the people collecting them have clearly circumscribed reasons not to want to cheat. This is why people tend to select numbers like income inequality to optimize: it gives people a reason to want to avoid cheating.

It’s still possible to get rich by buying politicians (though even that is harder than it was in 1880)

The Sunlight Foundation published a report in 2014 indicating that the return on investment of political spending is approximately 76,000%. While the Sunlight Foundation didn't exist in 1880, a similar report from 2009 put the figure at 22,000%, which suggests that this number is going up, not down; i.e. over time, it is getting easier, not harder, to get rich by buying politicians.

Meanwhile, the ROI of venture capital, while highly variable, is, on average, at least two orders of magnitude lower than that. While outright “buying” a politician is a silly straw-man, manipulating government remains a far more reliable and lucrative source of income than doing anything productive, with technology or otherwise.

The rate at which individuals can create wealth depends on the technology available to them, and that grows exponentially.

In what sense does technology grow “exponentially”? Let’s look at a concrete example of increasing economic output that’s easy to quantify: wheat yield per acre. Here is what one report on winter wheat yields has to say about it:

Winter wheat yields have trended higher since 1960. We find that a linear trend is the best fit to actual average yields over that period and that yields have increased at a rate of 0.4 bushel per acre per year...

(emphasis mine)

In other words, when I go looking for actual, quantifiable evidence of the benefit of improving technology, it is solidly linear, not exponential.

What have we learned?

Paul Graham frequently writes essays in which he makes quantifiable, falsifiable claims (technology growth is “exponential”, “an exponential curve that has been operating for thousands of years”, “there are also a significant number who get rich by creating wealth”) but rarely, if ever, provides any data to back up those claims. When I look for specific examples to test his claims, as with the crop yield example above, it often seems to me that his claims are exaggerated, entirely imagined, or, worse yet, completely backwards from the truth of the matter.

Graham frequently uses the language of rationality, data, science, empiricism, and mathematics. This is a bad habit shared by many others immersed in Silicon Valley culture. However, simply adopting an unemotional tone and co-opting words like “exponential” and “factor”, or almost-quantifiable weasel words like “most” and “significant”, is no substitute for actually doing the research, assembling the numbers, fitting the curves, and trying to understand if the claims are valid.

This continues to strike me as a real shame, because PG’s CV clearly shows he is an intelligent and determined fellow, and he certainly has a fair amount of money, status, and power at this point. More importantly, his other writings clearly indicate he cares a lot about things like “niceness” and fairness. If he took the trouble to approach socioeconomic problems like income inequality and poverty more humbly, and to really familiarize himself with existing work in the field, he could put his mind to a solution. He might be able to make some real change. Instead, he continues to use misleading language and rhetorical flourishes to justify decisions he’s already made. In doing so, he remains, regrettably, a blowhard.

Syndicated 2016-01-05 16:29:00 from Deciphering Glyph

Your Text Editor Is Malware

Are you a programmer? Do you use a text editor? Do you install any 3rd-party functionality into that text editor?

If you use Vim, you’ve probably installed a few vimballs from a website only available over HTTP. Vimballs are fairly opaque; if you’ve installed one, chances are you didn’t audit the code.

If you use Emacs, you’ve probably installed some packages from ELPA or MELPA using package.el; in Emacs’s default configuration, ELPA is accessed over HTTP, and until recently MELPA’s documentation recommended HTTP as well.

When you install un-signed code into your editor that you downloaded over an unencrypted, unauthenticated transport like HTTP, you might as well be installing malware. This is not a joke or exaggeration: you really might be.[1] You have no assurance that you’re not being exploited by someone on your local network, by someone on your ISP’s network, the NSA, the CIA, or whoever else.

The solution for Vim is relatively simple: use vim-plug, which fetches stuff from GitHub exclusively via HTTPS. I haven’t audited it conclusively but its relatively small codebase includes lots of https:// and no http:// or git:// that I could see.[2]

I’m relatively proud of my track record of being a staunch advocate for improved security in text editor package installation. I’d like to think I contributed a little to the fact that MELPA is now available over HTTPS and instructs you to use HTTPS URLs.

But the situation still isn’t very good in Emacs-land. Even if you manage to get your package sources from an authenticated source over HTTPS, it doesn’t matter, because Emacs won’t verify TLS.

Although package signing is implemented, practically speaking, none of the packages are signed.[3] Therefore, you absolutely cannot trust package signing to save you. Plus, even if the packages were signed, why is it the NSA’s business which packages you’re installing, anyway? TLS is shorthand for The Least Security (that is acceptable); whatever other security mechanisms, like package signing, are employed, you should always at least have HTTPS.

With that, here’s my unfortunately surprise-filled step-by-step guide to actually securing Emacs downloads, on Windows, Mac, and Linux.

Step 1: Make Sure Your Package Sources Are HTTPS Only

By default, Emacs ships with its package-archives list as '(("gnu" . "")), which is obviously no good. You will want to both add MELPA (which you surely have done anyway, since it’s where all the actually useful packages are) and change the ELPA URL itself to be HTTPS. Use M-x customize-variable to change package-archives to:

`(("gnu" . "")
  ("melpa" . ""))

Step 2: Turn On TLS Trust Checking

There’s another custom variable in Emacs, tls-checktrust, which checks trust on TLS connections. Go ahead and turn that on, again, via M-x customize-variable tls-checktrust.

Step 3: Set Your Trust Roots

Now that you’ve told Emacs to check that the peer’s certificate is valid, Emacs can’t successfully fetch HTTPS URLs any more, because Emacs does not distribute trust root certificates. Although the set of cabforum certificates are already probably on your computer in various forms, you still have to acquire them in a format usable by Emacs somehow. There are a variety of ways, but in the interests of brevity and cross-platform compatibility, my preferred mechanism is to get the certifi package from PyPI, with python -m pip install --user certifi or similar. (A tutorial on installing Python packages is a little out of scope for this post, but hopefully my little website about this will help you get started.)
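
Once certifi is installed, you can quickly check where its CA bundle lives; this is the same path that the elisp below obtains by shelling out to python -m certifi:

import certifi
# certifi.where() returns the path to the CA bundle that ships with certifi,
# e.g. .../site-packages/certifi/cacert.pem
print(certifi.where())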

At this point, M-x customize-variable fails us, and we need to start just writing elisp code; we need to set tls-program to a value computed from the output of running a program, and if we want this to work on Windows we can’t use Bourne shell escapes. Instead, do something like this in your .emacs or wherever you like to put your start-up elisp:[4]

(let ((trustfile
       (replace-regexp-in-string
        "\\\\" "/"
        (replace-regexp-in-string
         "\n" ""
         (shell-command-to-string "python -m certifi")))))
  (setq tls-program
        (list
         (format "gnutls-cli%s --x509cafile %s -p %%p %%h"
                 (if (eq window-system 'w32) ".exe" "") trustfile))))

This will run gnutls-cli on UNIX, and gnutls-cli.exe on Windows.

You’ll need to install the gnutls-cli command line tool, which of course varies per platform:

  • On OS X, of course, Homebrew is the best way to go about this: brew install gnutls will install it.
  • On Windows, the only way I know of to get GnuTLS itself over TLS is to go directly to this mirror. Download one of these binaries and unzip it next to Emacs in its bin directory.
  • On Debian (or derivatives), apt-get install gnutls-bin
  • On Fedora (or derivatives), yum install gnutls-utils

Great! Now we’ve got all the pieces we need: a tool to make TLS connections, certificates to verify against, and Emacs configuration to make it do those things. We’re done, right?



Step 4: Test Your Setup

It turns out there are two ways to tell Emacs to really actually really secure the connection (really), but before I tell you the second one or why you need it, let’s first construct a little test to see if the connection is being properly secured. If we make a bad connection, we want it to fail. Let’s make sure it does.

This little snippet of elisp will use a helpful test site to give you some known-bad and known-good certificates (assuming nobody’s snooping on your connection):

(if (condition-case e
        (progn
          (url-retrieve ""
                        (lambda (retrieved) t))
          (url-retrieve ""
                        (lambda (retrieved) t)))
      ('error nil))
    (error "tls misconfigured")
  (url-retrieve ""
                (lambda (retrieved) t)))

If you evaluate it and you get an error, either your trust roots aren’t set up right and you can’t connect to a valid site, or Emacs is still blithely trusting bad certificates. Why might it do that?

Step 5: Configure the Other TLS Verifier

One of Emacs’s compile-time options is whether to link in GnuTLS or not. If GnuTLS is not linked in, it will use whatever TLS program you give it (which might be gnutls-cli or openssl s_client, but since only the most recent version of openssl s_client can even attempt to verify certificates, I’d recommend against it). That is what’s configured via tls-checktrust and tls-program above.

However, if GnuTLS is compiled in, it will totally ignore those custom variables, and honor a different set: gnutls-verify-error and gnutls-trustfiles. To make matters worse, installing the packages which supply the gnutls-cli program also install the packages which might satisfy Emacs’s dynamic linking against the GnuTLS library, which means this code path could get silently turned on because you tried to activate the other one.

To give these variables the correct values as well, we can re-visit the previous trust setup:

(let ((trustfile
       (replace-regexp-in-string
        "\\\\" "/"
        (replace-regexp-in-string
         "\n" ""
         (shell-command-to-string "python -m certifi")))))
  (setq tls-program
        (list
         (format "gnutls-cli%s --x509cafile %s -p %%p %%h"
                 (if (eq window-system 'w32) ".exe" "") trustfile)))
  (setq gnutls-verify-error t)
  (setq gnutls-trustfiles (list trustfile)))

Now it ought to be set up properly. Try the example again from Step 4 and it ought to work. It probably will. Except, um...

Appendix A: Windows is Weird

Presently, the official Windows builds of Emacs seem to be linked against version 3.3 of GnuTLS rather than the latest 3.4. You might need to download the latest micro-version of 3.3 instead. As far as I can tell, it’s supposed to work with the command-line tools (and maybe it will for you) but for me, for some reason, Emacs could not parse gnutls-cli.exe’s output no matter what I did. This does not appear to be a universal experience, others have reported success; your mileage may vary.


We nerds sometimes mock the “normals” for not being as security-savvy as we are. Even if we’re considerate enough not to voice these reactions, when we hear someone got malware on their Windows machine, we think “should have used a UNIX, not Windows”. Or “should have been up to date on your patches”, or something along those lines.

Yet, nerdy tools that download and execute code - Emacs in particular - are shockingly careless about running arbitrary unverified code from the Internet. And we are often equally shockingly careless to use them, when we should know better.

If you’re an Emacs user and you didn’t fully understand this post, or you couldn’t get parts of it to work, stop using package.el until you can get the hang of it. Get a friend to help you get your environment configured properly. Since a disproportionate number of Emacs users are programmers or sysadmins, you are a high-value target, and you are risking not only your own safety but that of your users if you don’t double-check that your editor packages are coming from at least cursorily authenticated sources.

If you use another programmer’s text editor or nerdy development tool that routinely installs software onto your system, make sure that it’s at least securing those installations with properly verified TLS.

  1. Technically speaking, of course, you might always be installing malware; no defense is perfect. And HTTPS is a fairly weak one at that. But it is significantly stronger than “no defense at all”. 

  2. Never, ever, clone a repository using git:// URLs. As explained in the documentation: “The native transport (i.e. git:// URL) does no authentication and should be used with caution on unsecured networks.”. You might have heard that git uses a “cryptographic hash function” and thought that had something to do with security: it doesn’t. If you want security you need signed commits, and even then you can never really be sure. 

  3. Plus, MELPA accepts packages on the (plain-text-only) Wiki, which may be edited by anyone, and from CVS servers, although they’d like to stop that. You should probably be less worried about this, because that’s a link between two datacenters, than about the link between you and MELPA, which is residential or business internet at best, and coffee-shop WiFi at worst. But still maybe be a bit worried about it and go comment on that bug. 

  4. Yes, that let is a hint that this is about to get more interesting... 

Syndicated 2015-11-12 08:51:00 from Deciphering Glyph

Python Option Types

NULL has, rightly, been called a “billion dollar mistake”. If that is so, then None is a hundred million dollar mistake, at least.

Forgetting, for the moment, about the numerous pitfalls of a C-style NULL, Python’s None has a very significant problem of its own. Of course, the problem is not None itself; the fact that the default return value of a function is None is (in my humble opinion, at least) fine; it’s just a marker that means “nothing to see here, move along”. The problem arises from values which might be None, or might be some other, useful thing.

APIs present values in a number of ways. A value might be exposed as the return value of a method, an attribute of an object, or an entry in a collection data structure such as a list or dictionary. If a value presented in an API might be None or it might be something else, every single client of that API needs to check the type of the value it gets back before doing anything with it.

Since it is rude to use a “simple suite” (a line of code like if x: y() with no newline after the colon), that means the minimum number of lines of code for interacting with your API is now 4: one for the if statement, one for the then clause, one for the else statement, and one for the else clause.
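
Concretely, every caller ends up writing something like this (get_value and the fallback print are just illustrative stand-ins, not a real API):

def use_api(api):
    value = api.get_value()        # might be None, might be something useful
    if value is None:              # one line for the if statement...
        print("no value today")    # ...one for the then clause...
    else:                          # ...one for the else statement...
        value.method()             # ...and one for the else clause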

Worse than the code-bloat required here, the default behavior, if you forget to do this checking, is that it works sometimes (like when you’re testing it), and that other times (like when you put it into production), you get an unhelpful exception like this:

Traceback (most recent call last):
  File "<your code>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'method'

Of course NoneType doesn’t have an attribute called method, but why is value a NoneType? Science may never know.

In languages with static type declarations, there’s a concept of an Option type. Simply put, in a language with option types, the API declares its result value as “maybe something, maybe null”, and then if the caller fails to account for the “null” case, it is a compile-time error.

Python doesn’t have this kind of ahead-of-time checking though, so what are we to do? In order of my own personal preference, here are three strategies for getting rid of maybe-None-maybe-not data types in your Python code.

1: Just Say No

Some APIs - especially those that require building deeply complex nested trees of data structures - use None as a way to provide a convenient mechanism for leaving a space for a future value to be filled out. Using such an API sometimes looks like this:

value = MyValue() = 1 = 2
value.baz = 3

In this case, the way to get rid of None is simple: just stop doing that. Instead of .foo having an implicit type of “int or None”, just make it always be int, like this:

value = MyValue(
    foo=1, bar=2, baz=3
)

Or, if do_something is the only method you’re going to call with this data structure, opt for the even simpler:

do_something_with_value(foo=1, bar=2, baz=3)

If MyValue has dozens of fields that need to be initialized with different subsystems, so you actually want to pass around a partially-initialized value object, consider the Builder pattern, which would make this code look like the following:

builder = MyValueBuilder()
foo = builder.with_foo(1)
bar = foo.with_bar(2)
baz = bar.with_baz(3)
value =

This acknowledges that the partially-constructed MyValueBuilder is a different type than MyValue, and, crucially, if you look at its API documentation, it does not misleadingly appear to support the do_something operation which in fact requires foo, bar, and baz all be initialized.

Wherever possible, just require values of the appropriate type be passed in in the first place, and don’t ever default to None.

2: Make The Library Handle The Different States, Not The Caller

Sometimes, None is a placeholder indicating an implicit state machine, where the states are “initialized” and “not initialized”.

For example, imagine an RPC Client which may or may not be connected. You might have an API you have to use like this:

def ask_question(self):
    message = {"question":
               "what is the air speed velocity of an unladen swallow?"}
    if self.rpc_client.connection is None:
        # hypothetical handling: stash the message until we're connected
        self.pending_messages.append(message)
    else:
        self.rpc_client.connection.send(message)

By leaking through the connection attribute, rpc_client is providing an incomplete abstraction and foisting off too much work onto its callers. Instead, callers should just have to do this:

def ask_question(self):
    message = {"question":
               "what is the air speed velocity of an unladen swallow?"}
    self.rpc_client.send_message(message)

Internally, rpc_client still has to maintain a private _connection attribute which may or may not be present, but by hiding this implementation detail, we centralize the complexity associated with managing that state in one place, rather than polluting every caller with it, which makes for much better API design.

Hopefully you agree that this is a good idea, but this is more what to do rather than how to do it, so here are two strategies for achieving “make the library do it”:

2a: Use Placeholder Implementations

However, rather than using None as the _connection attribute, rpc_client’s internal implementation could instead use a placeholder which provides the same interface. Let’s say the expected interface of _connection in this case is just a send method that takes some bytes. We could initialize it with this:

class NotConnectedConnection(object):
    def __init__(self):
        self._buffer = b""
    def send(self, data):
        self._buffer += data

This allows the code within rpc_client itself to blindly call self._connection.send whether it’s actually connected already or not; upon connection, it could un-buffer that data onto the ready connection.
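
Here's a minimal sketch of that hand-off; the method names are illustrative rather than taken from any particular library:

class RPCClient(object):
    def __init__(self):
        self._connection = NotConnectedConnection()

    def send(self, data):
        # Identical code path whether we're connected yet or not.
        self._connection.send(data)

    def connection_established(self, real_connection):
        # Flush whatever the placeholder buffered, then swap in the real thing.
        buffered = self._connection._buffer
        self._connection = real_connection
        if buffered:
            self._connection.send(buffered)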

2b: Use an Explicit State Machine

Sometimes, you actually have quite a few states you need to manage, and this starts looking like an ugly proliferation of lots of weird little flags; various values which may be True or False, or None or not-None.

In those cases it’s best to be clear about the fact that there are multiple states, and to enumerate the valid transitions between them. Then, expose a method which always has the same signature and return type.

Using a state machine library like ClusterHQ’s “machinist” or my Automat can allow you to automate the process of checking all the states.

Automat, in particular, goes to great lengths to make your objects look like plain old Python objects. Providing an input is just calling a method on your object: my_state_machine.provide_an_input() and receiving an output is just examining its return value. So it’s possible to refactor your code away from having to check for None by using this library.

For example, the connection-handling example above could be dealt with in the RPC client using Automat like so:

from automat import MethodicalMachine

class RPCClient(object):
    _machine = MethodicalMachine()
    @_machine.state()
    def _connected(self):
        "We have a connection."
    @_machine.state(initial=True)
    def _not_connected(self):
        "We have no connection."
    @_machine.input()
    def send_message(self, message):
        "Send a message now if we're connected, or later if not."
    @_machine.output()
    def _send_message_now(self, message):
        "Send a message immediately."
        # ...
    @_machine.output()
    def _send_message_later(self, message):
        "Enqueue a message for when we are connected."
        # ...
    @_machine.output()
    def _send_queued_messages(self, connection):
        "send all messages enqueued by _send_message_later"
        # ...
    @_machine.input()
    def connection_established(self, connection):
        "A connection was established."
    _connected.upon(send_message, enter=_connected,
                    outputs=[_send_message_now])
    _not_connected.upon(send_message, enter=_not_connected,
                        outputs=[_send_message_later])
    _not_connected.upon(connection_established, enter=_connected,
                        outputs=[_send_queued_messages])

3: Make The Caller Account For All Cases With Callbacks

The absolute lowest-level way to deal with multiple possible states is to, instead of exposing an attribute that the caller has to retrieve and test, expose a function which takes multiple callbacks, one for each case. This way you can provide clear and immediate error feedback if the caller forgets to handle a case - meaning that they forgot to pass a callback. This is only really suitable if you can’t think of any other way to handle it, but it does at least provide a very clear expectation of the interface.

To re-use our connection-handling logic above, you might do something like this:

def try_to_send(self):
    def connection_present(connection):
        connection.send(self.message)    # hypothetical use of the connection
    def connection_not_present():
        self.note_send_failure()         # hypothetical handling of its absence
    self.rpc_client.with_connection(connection_present, connection_not_present)

Notice that while this is slightly awkward, it has the nice property that the connection_present callback receives the value that it needs, whereas the connection_not_present callback doesn’t receive anything, because there’s nothing for it to receive.
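
And here is a sketch of what the library side of such an API might look like; with_connection is an invented name for illustration, not an existing API:

class RPCClient(object):
    def __init__(self):
        self._connection = None

    def with_connection(self, connection_present, connection_not_present):
        # Both callbacks are required arguments, so forgetting one is an
        # immediate TypeError at the call site rather than a latent None crash.
        if self._connection is None:
            return connection_not_present()
        return connection_present(self._connection)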

The Zeroth Strategy

Of course, the best strategy, if you can get away with it, may be the non-strategy: refuse the temptation to provide a maybe-None, and just raise an exception when you are in a state where you can’t handle the request. If you intentionally raise a specific, meaningful exception type with a good error message, it will be a lot more pleasant to use your API than if return codes that the caller has to check for pop up all over the place, None or otherwise.
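
For example (NotConnectedError is a name invented for this sketch):

class NotConnectedError(Exception):
    """Raised when a message is sent before any connection exists."""

class RPCClient(object):
    def __init__(self):
        self._connection = None

    def send_message(self, message):
        if self._connection is None:
            raise NotConnectedError("call connect() before send_message()")
        self._connection.send(message)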

The Principle Of The Thing

The underlying principle here is the same: when designing an API, always provide a consistent interface to your callers. An API is 1 to N: you have 1 API implementation to N callers. As N→∞, it becomes more important that any task that needs performing frequently is performed on the “1” side of the equation, and that you don’t force callers to repeat the same error checking over and over again. None is just one form of this, but it is a particularly egregious form.

Syndicated 2015-09-17 22:04:00 from Deciphering Glyph

Connecting To A Server On The Internet [Nightmare Difficulty]

The Setup

I recently switched internet service providers, from Comcast to WebPass. Comcast had been progressively upgrading our connection, and I honestly didn’t have any complaints, but WebPass is four times faster. Eight times faster, maybe? It’s so fast that I could demonstrate that my router hardware was the bottleneck and I needed to buy something physically capable of more bits per second. There’s gigabit Ethernet coming straight out of my wall, no modem required. So that’s exciting. The not-exciting part is that WebPass uses what is euphemistically called “carrier grade” NAT to deliver IPv4 service. This means I have no public IPv4 address at all.

This is exactly the grim meathook future that I predicted over a decade ago.

One interesting consequence of having no public IPv4 address is that I could no longer make inbound connections to my home network. When I’m roving around with my laptop, I like to be able to check in with my home servers. Plus, I’m like, a networking dude, or something, so I should know how this stuff works, and being able to connect to networks is a good first step to doing interesting things with them.

The Gear

I’m using OpenWrt (Chaos Calmer) firmware on my home router (although this also works on Barrier Breaker). If you configure its built-in IPv6 stuff naively, connecting to your home network is no problem at all! Every device on your network suddenly has its own public IP address on the Internet.

The Problem

In case it’s not clear: don’t do that. I can guarantee you that all of your computers and some things which aren’t obviously computers (game consoles, “smart” scales, various handheld devices, your “cloud” thermostat, probably your refrigerator, maybe some knives or guns that you didn’t know had internet access: anything that can connect to WiFi) are listening on a surprising array of ports, and every single one of them has some interesting security vulnerability which an attacker could potentially use to do bad stuff. You should of course keep your operating systems and device firmware updated to make sure you’ve always got the latest security patches, but even if you’re perfect at that, many devices simply don’t get updates. So the implicit firewall that NAT gave us is still important; except now we need it as an explicit firewall that actually blocks traffic, and not as an accident of the fact that devices don’t have their own externally-available addresses.

The Requirements

Given these parameters, the desired state of my router (and, I hope, some of yours) is:

  1. Explicitly deny all inbound IPv6 connections to all devices that aren’t expecting them, such as embedded devices and laptops.
  2. Explicitly allow all inbound IPv6 connections to specific devices and ports that are expecting them, like home servers.

Casual Difficulty: Rejecting Incoming Traffic

OpenWrt has a web interface, called “LuCI”, which has a nice firewall configuration application. You’ll want to use it to reject IPv6 inputs, like so:

Reject IPv6 Inputs

It also has a “port forwarding” interface, but forwarding a port is something that only really makes sense in an IPv4 NAT scenario. In an IPv6 scenario you don’t need to “forward” since you already have your own address. Accordingly, the interface populates the various IP address selectors with only your internal IPv4 addresses, and doesn’t work on IPv6 interfaces on the router.

Now we get to the really tricky part: allowing inbound traffic to that known device.

Normal Difficulty: Home Server Naming

First, we need to be able to identify that device. Since residential IPv6 prefix allocations are generally temporary, you can’t rely on your IP address to stay the same forever. This means we need to have some kind of dynamic DNS that updates periodically.

Personally, I have accomplished this with a little script using Apache libcloud that I run from a crontab. The key part of that script is this command line, which works only on Linux:

ip -o -d -6 addr show primary scope global br0

that enumerates all globally-routable IPv6 addresses on a given interface (where the interface is br0 in that script, but may be something else on your computer; most likely eth0).

In my crontab, I have something like this:

@hourly ./.virtualenvs/libcloud/bin/python .../ \
    rackspace_us my.dns.user

And I ran the script once manually to initialize the keyring entry with my API key in it.
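
For the curious, here is a rough sketch of that approach using libcloud's DNS API; my actual script differs, and the Rackspace driver, zone lookup, and record handling below are simplified for illustration:

import subprocess

from libcloud.dns.providers import get_driver
from libcloud.dns.types import Provider, RecordType

def current_global_v6(interface="br0"):
    # Same "ip" invocation as above; the address/prefix is the fourth field.
    out = subprocess.check_output(
        ["ip", "-o", "-d", "-6", "addr", "show", "primary",
         "scope", "global", interface]).decode()
    return out.split()[3].split("/")[0]

def update_aaaa(username, api_key, zone_name, record_name):
    # Illustrative only: find the AAAA record and point it at our new address.
    driver = get_driver(Provider.RACKSPACE)(username, api_key)
    zone = [z for z in driver.list_zones() if z.domain == zone_name][0]
    record = [r for r in zone.list_records()
              if == record_name and r.type == RecordType.AAAA][0]
    record.update(data=current_global_v6())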

Note: I would strongly recommend creating a restricted user that only has access to DNS records for this, if you’re doing it on a general cloud provider like Rackspace or Amazon. On Rackspace, this can be accomplished by going to the “Account” menu at the top right, “User Management”, “Create User”, and selecting the “Custom” radio button in the “Product Access” section. Then, log in as that user, go to the “Account Settings”, and copy the API key.

Nightmare Difficulty: ip6tables suffix matching

Now we can look up our home server's address from the public Internet, but it doesn’t do us much good; it’s still firewalled off. And since the prefix allocation changes regularly, we can’t simply open up a port in the firewall with a hard-coded address as you might in a regular data-center environment.

However, we can use a quirk of IPv6 addressing to our advantage.

In case you’re not already familiar, let me briefly explain the structure of IPv6 addresses. An IPv6 address is a 128-bit number. That is a really, really big number, which means a really, really big set of addresses; unlike IPv4, your ISP doesn’t need to jealously hoard them and hand them out sparingly, one at a time.

One of the features of IPv6 is the ability to allocate your own individual addresses. When your provider gives you an IPv6 address, they give you a so-called “prefix allocation”, which is the first half or so - 64 bits - of an address. Devices on your network may then generate their own addresses that begin with those 64 bits.

Devices on your network then decide on their addresses in one of two ways:

  1. If they’re making an outgoing connection, they choose a random 64-bit number and use that as the suffix; a so-called “temporary” address.
  2. If they’re listening for incoming connections, they generate a 64-bit number based on their Ethernet card’s MAC address; a so-called “primary” address.

It is this primary address that the above ip command will print out. (If you see two addresses, and one starts with an f, it’s the other one; this post is already long enough without explaining the significance of that other address.)

Within iptables, the --source and --destination (or -s and -d) matching options are documented as having the syntax “address[/mask]”. The “mask” there is further documented as “... either an ... network mask (for iptables) or a plain number, specifying the number of 1’s at the left side of the network mask.” Normally, sources and destinations match network prefixes, because that’s what you would normally want to allow or deny traffic from: a prefix representing a network, which in turn represents a specific group of computers. But, given that the IPv6 address for an interface on a given Ethernet card with a given MAC address will have a stable suffix derived from that MAC address, and a prefix which changes based on your current (dynamic) prefix allocation, we can write an ip6tables rule to allow connections based on the suffix, by constructing an address that has a mask matching the last 64 bits of the address.

So, let’s say your ip command, above, output an address like 2001:1111:2222:3333:1234:5678:abcd:ef01. This means the stable part of your address is that trailing 1234:5678:abcd:ef01. And that means we want iptables to allow inbound traffic to a specific port. To allow incoming HTTPS on port 443, in the OpenWRT control panel, go to Network → Firewall, click the “Custom Rules” tab, and input a line something like this:

ip6tables -A forwarding_rule \
    -d ::1234:5678:abcd:ef01/::ffff:ffff:ffff:ffff \
    -p tcp -m tcp \
    --dport 443 \
    -m comment --comment "Inbound-HTTPS" \
    -o br-lan -j ACCEPT

This says that we are going to append to the forwarding_rule chain (a chain specific to OpenWRT), a rule matching packets whose destination is our suffix-matching mask with the suffix of the server we want to forward to, for the TCP protocol using the tcp matcher, whose destination port is 443, giving it a comment of “Inbound-HTTPS” so it’s identifiable when we’re looking at the firewall later, sending it out the br-lan interface (again, somewhat specific to OpenWRT), and jumping to the ACCEPT target, because we should let this through.
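
If you'd rather not work out that suffix and mask by hand, here is a small, purely illustrative Python helper that derives the -d argument from a full address:

import ipaddress

def suffix_match(address):
    # Keep only the low 64 bits (the stable, MAC-derived suffix) and pair it
    # with a mask that ignores the (changing) 64-bit prefix.
    suffix = int(ipaddress.IPv6Address(address)) & ((1 << 64) - 1)
    return "-d %s/::ffff:ffff:ffff:ffff" % ipaddress.IPv6Address(suffix)

print(suffix_match("2001:1111:2222:3333:1234:5678:abcd:ef01"))
# -d ::1234:5678:abcd:ef01/::ffff:ffff:ffff:ffff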

Game Over

Congratulations! You can now connect to a server on a port, in the wonderful world of tomorrow. I hope you look forward as much as I do to the day when this blog post will be totally irrelevant, and all home routers will simply come with a little UI that lets you connect a little picture of a device to a little picture of a cloud representing the internet, and maybe type a port number in.

Syndicated 2015-09-13 06:30:00 from Deciphering Glyph



glyph certified others as follows:

  • glyph certified glyph as Journeyer
  • glyph certified jhermann as Journeyer
  • glyph certified Nafai77 as Apprentice
  • glyph certified carmstro as Journeyer
  • glyph certified washort as Journeyer
  • glyph certified moshez as Journeyer
  • glyph certified itamar as Journeyer
  • glyph certified dnm as Journeyer
  • glyph certified faassen as Journeyer
  • glyph certified Krelin as Journeyer
  • glyph certified spiv as Journeyer
  • glyph certified z3p as Journeyer
  • glyph certified jml as Master
  • glyph certified raph as Master
  • glyph certified jwz as Master
  • glyph certified eikeon as Journeyer
  • glyph certified mulix as Journeyer
  • glyph certified mglazer as Apprentice
  • glyph certified wichert as Master
  • glyph certified calderone as Master
  • glyph certified rasmus as Apprentice
  • glyph certified Jerub as Master

Others have certified glyph as follows:

  • glyph certified glyph as Journeyer
  • carmstro certified glyph as Journeyer
  • jhermann certified glyph as Journeyer
  • washort certified glyph as Journeyer
  • superuser certified glyph as Journeyer
  • lerdsuwa certified glyph as Journeyer
  • moshez certified glyph as Journeyer
  • echuck certified glyph as Journeyer
  • demoncrat certified glyph as Journeyer
  • faassen certified glyph as Master
  • dnm certified glyph as Journeyer
  • Krelin certified glyph as Journeyer
  • grant certified glyph as Apprentice
  • Nafai77 certified glyph as Master
  • aaronsw certified glyph as Journeyer
  • ghaering certified glyph as Master
  • itamar certified glyph as Master
  • Cantanker certified glyph as Master
  • spiv certified glyph as Master
  • z3p certified glyph as Master
  • jdub certified glyph as Master
  • ChrisMcDonough certified glyph as Master
  • jml certified glyph as Master
  • raph certified glyph as Master
  • eikeon certified glyph as Journeyer
  • calderone certified glyph as Master
  • mglazer certified glyph as Journeyer
  • realblades certified glyph as Journeyer
  • fxn certified glyph as Journeyer
  • pasky certified glyph as Journeyer
  • Omnifarious certified glyph as Journeyer
  • nbm certified glyph as Master
  • strider certified glyph as Master
  • garym certified glyph as Journeyer
  • gt3 certified glyph as Master
  • juancpaz certified glyph as Master
  • mesozoic certified glyph as Master
  • oubiwann certified glyph as Master
  • karlb certified glyph as Master

