Recent blog entries

20 Apr 2014 etbe   » (Master)

Sociological Images 2012

In 2011 I wrote a post that was inspired by the Sociological Images blog [1]. After some delay, here’s another one. I plan to continue documenting such things.

Playground

gender segregated playground in 1918

In 2011 I photographed a plaque at Flagstaff Gardens in Melbourne. It shows a picture of the playground in 1918 with segregated boys and girls sections. It’s interesting that the only difference between the two sections is that the boys have horizontal bars and a trapeze. Do they still have gender segregated playgrounds anywhere in Australia? If so, what is the difference between the sections?

Aborigines

The Android game Paradise Island [2] has a feature where you are supposed to stop Aborigines from stealing. It plays on old racist stereotypes about Aborigines, which are used to hide the historical record that it’s always been white people stealing from the people they colonise.

Angry face icons over Aborigines
Aborigines described as thieves

There is also another picture showing the grass skirts. Nowadays the vast majority of Aborigines don’t wear such clothing; the only time they do is when doing some sort of historical presentation for tourists.

I took those pictures in 2012, but apparently the game hasn’t changed much since then.

Lemonade

lemonade flavored fizzy drink

Is lemonade a drink or a flavour? Most people at the party where I took the above photo regarded lemonade as a drink and found the phrase “Lemonade Flavoured Soft Drink” strange when it was pointed out to them. Incidentally the drink on the right tastes a bit like the US version of lemonade (which is quite different from the Australian version). For US readers, the convention in Australia is that “lemonade” has no flavour of lemons.

Not Sweet

maybe gender queer people on bikes

In 2012 an apple cider company ran a huge advertising campaign featuring people who might be gender queer; above is a picture of a bus stop poster, and there were also TV ads. The adverts gave no information at all about what the drink might taste like apart from not being “as sweet as you think”. So it’s basically an advertising campaign with no substance other than a joke about people who don’t conform to gender norms.

Also it should be noted that some women naturally grow beards and have religious reasons for not shaving [3].

Episode 2 of the TV documentary series “Am I Normal” has an interesting interview with a woman with a beard.

Revolution

communist revolution Schweppes drinks

A violent political revolution is usually a bad thing, so using such revolutions to advertise sugar drinks seems like a bad idea. But it seems particularly interesting to note the different attitudes to such things in various countries. In 2012 Schweppes in Australia ran a marketing campaign based on imagery related to a Communist revolution (the above photo was taken at Southern Cross station in Melbourne); I presume that Schweppes in the US didn’t run that campaign. I wonder whether global media will stop such things; presumably that campaign has the potential to do more harm in the US than good in Australia.

Racist Penis Size Joke at Southbank

racist advert in Southbank paper

The above advert was in a free newspaper at Southbank in 2012. Mini Movers thought that this advert was a good idea and so did the management of Southbank who approved the advert for their paper. Australia is so racist that people don’t even realise they are being racist.

Related posts:

  1. Sociological Images I’ve recently been reading the Sociological Images blog [1]. That...
  2. LCA 2012 LCA 2013 [1] is starting so it seems like time...
  3. Links July 2012 The New York Times has an interesting article about “hacker...

Syndicated 2014-04-20 02:00:31 from etbe - Russell Coker

19 Apr 2014 Stevey   » (Master)

I was beaten to the punch, but felt nothing

A while back I mentioned github-backed DNS hosting.

Turns out NameCast.net does that already, and there is an interesting writeup on the design of something similar, from the same authors in 2009.

Fun to read.

In other news applying for jobs is a painful annoyance.

Should anybody wish to employ an Edinburgh-based system administrator, with a good Debian record, then please do shout at me. Remote work is an option, as is a local office, if you're nearby.

Now I need to go hide from the sun, lest I get burned again...

Good news? Going on holiday to Helsinki in a week or so, for Vappu. Anybody local who wants me should feel free to grab me, via the appropriate channels.

Syndicated 2014-04-19 16:27:32 (Updated 2014-04-19 19:14:14) from Steve Kemp's Blog

19 Apr 2014 joey   » (Master)

propellor-driven DNS and backups

Took a while to get here, but Propellor 0.4.0 can deploy DNS servers and I just had it deploy mine. Including generating DNS zone files.

Configuration is dead simple, as far as DNS goes:

     & Dns.secondary hosts "joeyh.name"
     & Dns.primary hosts "example.com"
             ( Dns.mkSOA "ns1.example.com" 100
                     [ NS (AbsDomain "ns1.example.com")
                     , NS (AbsDomain "ns2.example.com")
                     ]
             ) []

The awesome thing is that propellor fills in all the other information in the zone file by looking at the properties of the hosts it knows about.

 , host "blue.example.com"
        & ipv4 "192.168.1.1"
        & ipv6 "fe80::26fd:52ff:feea:2294"

        & alias "example.com"
        & alias "www.example.com"
        & alias "example.museum"
        & Docker.docked hosts "webserver"
            `requires` backedup "/var/www"

When it sees this host, Propellor adds its IP addresses to the example.com DNS zone file, for both its main hostname ("blue.example.com"), and also its relevant aliases. (The .museum alias would go into a different zone file.)

Multiple hosts can define the same alias, and then you automatically get round-robin DNS.
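For instance, if a second host also declared `alias "www.example.com"`, the generated zone could end up with entries along these lines (illustrative values only, not output from a real config):

blue.example.com.   IN A   192.168.1.1
www.example.com.    IN A   192.168.1.1
www.example.com.    IN A   192.168.1.2

A resolver querying www.example.com then gets both addresses, which is all round-robin DNS is.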

The web server part of the blue.example.com config can be cut and pasted to another host in order to move its web server to the other host, including updating the DNS. That's really all there is to it: just cut, paste, and commit!

I'm quite happy with how that worked out. And curious if Puppet etc have anything similar.


One tricky part of this was how to ensure that the serial number automatically updates when changes are made. The way this is handled is that Propellor starts with a base serial number (100 in the example above), and then it adds to it the number of commits in its git repository. The zone file is only updated when something in it besides the serial number needs to change.
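In rough terms the serial calculation amounts to something like this (a sketch of the idea only, not Propellor's actual code):

import System.Process (readProcess)

-- base serial plus the number of commits in the config repository
zoneSerial :: Integer -> IO Integer
zoneSerial base = do
    out <- readProcess "git" ["rev-list", "--count", "HEAD"] ""
    return (base + read (head (lines out)))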

The result is nice small serial numbers that don't risk overflowing the (so '90s) 32 bit limit, and will be consistent even if the configuration has Propellor setting up multiple independent master DNS servers for the same domain.


Another recent feature in Propellor is that it can use Obnam to back up a directory. With the awesome feature that if the backed up directory is empty/missing, Propellor will automatically restore it from the backup.
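That restore-on-empty behaviour is roughly equivalent to this sketch (hypothetical and simplified; the real logic lives in Propellor's Obnam property):

import Control.Monad (when)
import System.Directory (doesDirectoryExist, getDirectoryContents)
import System.Process (callProcess)

restoreIfEmpty :: FilePath -> String -> IO ()
restoreIfEmpty dir repo = do
    exists <- doesDirectoryExist dir
    entries <- if exists then getDirectoryContents dir else return []
    -- getDirectoryContents includes the "." and ".." entries
    when (null (filter (`notElem` [".", ".."]) entries)) $
        callProcess "obnam" ["restore", "--repository=" ++ repo, "--to=" ++ dir]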

Here's how the backedup property used in the example above might be implemented:

backedup :: FilePath -> Property
backedup dir = Obnam.backup dir daily
    [ "--repository=sftp://rsync.example.com/~/webserver.obnam"
    ] Obnam.OnlyClient
    `requires` Ssh.keyImported SshRsa "root"
    `requires` Ssh.knownHost hosts "rsync.example.com" "root"
    `requires` Gpg.keyImported "1B169BE1" "root"

Notice that the Ssh.knownHost makes root trust the ssh host key belonging to rsync.example.com. So Propellor needs to be told what that host key is, like so:

 , host "rsync.example.com"
        & ipv4 "192.168.1.4"
        & sshPubKey "ssh-rsa blahblahblah"

Which of course ties back into the DNS and gets this hostname set in it. But also, the ssh public key is available for this host and visible to the DNS zone file generator, and that could also be set in the DNS, in an SSHFP record. I haven't gotten around to implementing that, but hope at some point to make Propellor support DNSSEC, and then this will all combine even more nicely.


By the way, Propellor is now up to 3 thousand lines of code (not including Utility library). In 20 days, as a 10% time side project.

Syndicated 2014-04-19 07:08:45 from see shy jo

19 Apr 2014 etbe   » (Master)

Swap Space and SSD

In 2007 I wrote a blog post about swap space [1]. The main point of that article was to debunk the claim that Linux needs a swap space twice as large as main memory (in summary, such advice is based on BSD Unix systems, has never applied to Linux, and most storage devices aren’t fast enough for large swap anyway). That post was picked up by Barrapunto (Spanish Slashdot) and became one of the most popular posts I’ve written [2].

In the past 7 years things have changed. Back then 2G of RAM was still a reasonable amount and 4G was a lot for a desktop system or laptop. Now there are even phones with 3G of RAM, 4G is about the minimum for any new desktop or laptop, and desktop/laptop systems with 16G aren’t that uncommon. Another significant development is the use of SSDs which dramatically improve speed for some operations (mainly seeks).

As SATA SSDs for desktop use start at about $110 I think it’s safe to assume that everyone who wants a fast desktop system has one. As a major limiting factor in swap use is the seek performance of the storage the use of SSDs should allow greater swap use. My main desktop system has 4G of RAM (it’s an older Intel 64bit system and doesn’t support more) and has 4G of swap space on an Intel SSD. My work flow involves having dozens of Chromium tabs open at the same time, usually performance starts to drop when I get to about 3.5G of swap in use.

While SSDs generally have excellent random IO performance, their contiguous IO performance often isn’t much better than that of hard drives. My Intel SSDSC2CT12 300i 128G can do over 5000 random seeks per second but for sustained contiguous filesystem IO can only do 225M/s for writes and 274M/s for reads. The contiguous IO performance is less than twice as good as a cheap 3TB SATA disk. It also seems that the performance of SSDs isn’t as consistent as that of hard drives; when a hard drive delivers a certain level of performance it can generally do so 24*7, but an SSD will sometimes reduce performance to move blocks around (the erase block size is usually a lot larger than the filesystem block size).

It’s obvious that SSDs allow significantly better swap performance and therefore make it viable to run a system with more swap in use but that doesn’t allow unlimited swap. Even when using programs like Chromium (which seems to allocate huge amounts of RAM that aren’t used much) it doesn’t seem viable to have swap be much bigger than 4G on a system with 4G of RAM. Now I could buy another SSD and use two swap spaces for double the overall throughput (which would still be cheaper than buying a PC that supports 8G of RAM), but that still wouldn’t solve all problems.
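As an aside, the Linux kernel stripes swap allocations across devices that are configured with equal priority, so the two-SSD setup would need nothing more than a pair of /etc/fstab entries like the following (device names are examples only):

/dev/sda2   none   swap   sw,pri=10   0   0
/dev/sdb2   none   swap   sw,pri=10   0   0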

One issue I have been having on occasion is BTRFS failing to allocate kernel memory when managing snapshots. I’m not sure if this would be solved by adding more RAM as it could be an issue of RAM fragmentation – I won’t file a bug report about this until some of the other BTRFS bugs are fixed. Another problem I have had is that when running Minecraft the driver for my ATI video card fails to allocate contiguous kernel memory; this is one that almost certainly wouldn’t be solved by just adding more swap – but it might be solved if I tweaked the kernel to be more aggressive about swapping out data.

In 2007 when using hard drives for swap I found that the maximum space that could be used with reasonable performance for typical desktop operations was something less than 2G. Now with a SSD the limit for usable swap seems to be something like 4G on a system with 4G of RAM. On a system with only 2G of RAM that might allow the system to be usable with swap being twice as large as RAM, but with the amounts of RAM in modern PCs it seems that even SSD doesn’t allow using a swap space larger than RAM for typical use unless it’s being used for hibernation.

Conclusion

It seems that nothing has significantly changed in the last 7 years. We have more RAM, faster storage, and applications that are more memory hungry. The end result is that swap still isn’t very usable for anything other than hibernation if it’s larger than RAM.

It would be nice if application developers could stop increasing the use of RAM. Currently it seems that the RAM requirements for Linux desktop use are about 3 years behind the RAM requirements for Windows. This is convenient as a PC is fully depreciated according to the tax office after 3 years. This makes it easy to get 3 year old PCs cheaply (or sometimes for free as rubbish) which work really well for Linux. But it would be nice if we could be 4 or 5 years behind Windows in terms of hardware requirements to reduce the hardware requirements for Linux users even further.

Related posts:

  1. Swap Space There is a wide-spread myth that swap space should be...
  2. Modern Swap Use A while ago I wrote a blog post debunking the...
  3. Killing Servers with Virtualisation and Swap The Problem: A problem with virtual machines is the fact...

Syndicated 2014-04-19 04:58:15 from etbe - Russell Coker

19 Apr 2014 etbe   » (Master)

Phone Based Lectures

Early this month at a LUV meeting I gave a talk with only my mobile phone to store notes. I used Google Keep to write the notes as it’s one of the easiest ways of writing a note on a PC and quickly transferring it to a phone – if I keep doing this I will find some suitable free software for this task. Owncloud seems promising [1], but at the moment I’m more concerned with people issues than software.

Over the years I’ve experimented with different ways of presenting lectures. I’m now working with the theory that presenting the same data twice (by speaking and text on a projector) distracts the audience and decreases learning.

Editing and Viewing Notes

Google Keep is adequate for maintaining notes; it’s based on notes that are a list of items (like a shopping list), which is fine for lecture notes. It probably has lots of other functionality but I don’t care much about that. Keep is really fast at updating notes; I can commit a change on my laptop and have it visible on my phone in a few seconds over 3G.

Most of the lectures that I’ve given have involved notes on a laptop. My first laptop was a Thinkpad 385XD with a 12.1″ display and all my subsequent laptops have had a bigger screen. When a laptop with a 12″ or larger screen is on a lectern I can see the notes at a glance without having to lean forward when 15 or fewer lines of text are displayed on the screen. 15 lines of text is about the maximum that can be displayed on a slide for the audience to read, and at the width of a computer display or projector that is enough for a reasonable quantity of text.

When I run Keep on my Galaxy Note 2 it displays about 20 rather short lines of text in a “portrait” orientation (5 points for a lecture) and 11 slightly longer lines in a “landscape” orientation (4 points). In both cases the amount of text displayed on a screen is less than that with a laptop while the font is a lot smaller. My aim is to use free software for everything, so when I replace Keep with Owncloud (or something similar) I will probably have some options for changing the font size. But that means having less than 5 points displayed on screen at a time and thus a change in the way I present my talks (I generally change the order of points based on how well the audience seem to get the concepts so seeing multiple points on screen at the same time is a benefit).

The Samsung Galaxy Note 2 has a 5.5″ display which is one of the largest displays available in a phone. The Sony Xperia Z Ultra is one of the few larger phones with a 6.44″ display – that’s a large phone but still not nearly large enough to have more than a few points on screen with a font readable by someone with average vision while it rests on a lectern.

The most obvious solution to the problem of text size is to use a tablet. Modern 10″ tablets have resolutions ranging from 1920*1080 to 2560*1600 and should be more readable than the Thinkpad I used in 1998 which had a 12″ 800*600 display. Another possibility that I’m considering is using an old phone; a Samsung Galaxy S weighs 118 to 155 grams and is easier to hold up than a Galaxy Note 2 which weighs 180g. While 60g doesn’t seem like much difference, if I’m going to hold a phone in front of me for most of an hour the smaller and lighter phone will be easier and maybe less distracting for the audience.

Distributing URLs

When I give a talk I often want to share the addresses of relevant web sites with the audience. When I give a talk with the traditional style lecture notes I just put the URLs on the final page (sometimes using tinyurl.com) for people to copy during question time. When I use a phone I have to find another way.

I did a test with QR code recognition and found that a code that takes up most of the width of the screen of my Galaxy Note 2 can be recognised by a Galaxy S at a distance of 50cm. If I ran the same software on a 10″ tablet then it would probably be readable at a distance of a meter, if I had the QR code take up the entire screen on a tablet it might be readable at 1.5m away, so it doesn’t seem plausible to hold up a tablet and allow even the first few rows of the audience to decode a QR code. Even if newer phones have better photographic capabilities than the Galaxy S that I had available for testing there are still lots of people using old phones who I want to support. I think that if QR codes are to be used they have to be usable by at least the first three rows of the audience for a small audience of maybe 50 people as that would allow everyone who’s interested to quickly get in range and scan the code at the end.

Chris Samuel has a photo (taken at the same meeting) showing how a QR code from a phone could be distributed to a room [2]. But that won’t work for all rooms.

One option is to just have the QR code on my phone and allow audience members to scan it after the lecture. As most members of the audience won’t want the URLs it should be possible for the interested people to queue up to scan the QR code(s).

Another possibility I’m considering is to use a temporary post on my documents blog (which isn’t syndicated) for URLs. The WordPress client for Android works reasonably well so I could edit the URL list at any time. That would work reasonably well for talks that have lots of URLs – which is quite rare for me.

A final option is to use Twitter; at the end of a talk I could just tweet the URLs with suitable descriptions. A good portion of the Tweets that I have written are URLs for web sites that I find interesting, so this isn’t a change. This is probably the easiest option, but with the usual caveat of using a proprietary service as an interim measure until I get a free software alternative working.

Any suggestions?

Please comment if you have any ideas about ways of addressing these issues.

Also please let me know if anyone is working on a distributed Twitter replacement. Please note that anything which doesn’t support followers on multiple servers and re-tweets and tweeting to users on other servers isn’t useful in this regard.

Related posts:

  1. Questions During Lectures An issue that causes some discussion and debate is the...
  2. Choosing an Android Phone My phone contract ends in a few months, so I’m...
  3. Sex and Lectures about Computers I previously wrote about the appropriate references to porn in...

Syndicated 2014-04-19 03:49:21 from etbe - Russell Coker

17 Apr 2014 marnanel   » (Journeyer)

I was just at Tesco

I was just at Tesco. I did not previously know the checkout person.

CHECKOUT PERSON: So, that'll be £16.48.
MARN: (long pause) What happened in 1648? I thought it was the Spanish Armada. But that sounds like it should have been in 1548.
CHECKOUT PERSON: Yeah, that's definitely the Tudors. It was under Henry, wasn't it? The Mary Rose and all that.
MARN: I thought it was Elizabeth. Didn't Philip of Spain send the Armada because he wanted her to marry him?
CHECKOUT PERSON: Well, what you've gotta remember is, Spain as such didn't exist at the time. There were, like, two or three different states there, and then you've got the Holy Roman Empire making things more complicated...

(discussion continues for a while)

More of this, please.

This entry was originally posted at http://marnanel.dreamwidth.org/294738.html. Please comment there using OpenID.

Syndicated 2014-04-17 20:19:11 (Updated 2014-04-17 21:12:31) from Monument

17 Apr 2014 Rich   » (Master)

ApacheCon NA 2014 Keynotes

This year at ApacheCon, I had the unenviable task of selecting the keynotes. This is always difficult, because you want to pick people who are inspirational, exciting speakers, but people who haven't already been heard by everyone at the event. You also need to give some of your sponsors the stage for a bit, and hope that they don't take the opportunity to bore the audience with a sales pitch.

I got lucky.

(By the way, videos of all of these talks will be on the Apache YouTube channel very soon - https://www.youtube.com/user/TheApacheFoundation)

We had a great lineup, covering a wide range of topics.

Day One:

0022_ApacheCon

We started with Hilary Mason, talking about Big Data. Unlike a lot of droney Big Data talks, she defined Big Data in terms of using huge quantities of data to solve actual human problems, and gave a historical view of Big Data going back to the first US Census. Good stuff.

0084_ApacheCon

Next, Samisa Abeysinghe talked about Apache Stratos, and the services and products that WSO2 is building on top of it. Although he had the opportunity to do nothing more than promote his (admittedly awesome) company, Samisa talked more about the Stratos project and the great things that it's doing in the Platform As A Service space. We love WSO2.

0127_ApacheCon

And to round out the first day of keynotes, James Watters from Pivotal talked about the CloudFoundry foundation that he's set up, and why he chose to do that rather than going with an existing foundation, among other things. I had talked some with James prior to the conference about his talk, and he came through with a really great talk.

Day Two:

0602.ApacheCon

Day Two started with something a little different. Upayavira talked about the tool that geeks seldom mention - their minds - and how to take care of them. He talked about mindfulness - the art of being where you are when you are, and noticing what is going on around you. He then led us through several minutes of quiet contemplation and focusing of our minds. While some people thought this was a little weird, most people I talked with appreciated this calm, centering way to start the morning.

0635.ApacheCon

Mark Hinkle, from Citrix, talked about community and code, and made a specific call to the foundation to revise its sponsorship rules to permit companies like Citrix to give us more money in a per-project targeted fashion.

0772.ApacheCon

And Jim Zemlin rounded out the day two keynotes by talking about what he does at the Linux Foundation, and how different foundations fill different niches in the Open Source software ecosystem. This is a talk I personally asked him to do, so I was very pleased with how it turned out. Different foundations do things differently, and I wanted him to talk some about why, and why some projects may fit better in one or another.

At the end of day three, we had two closing keynotes. We've done closing keynotes before with mixed results - a lot of people leave before the end. But we figured that with more content on the days after that, people would stay around. So it was disappointing to see how empty the rooms were. But the talks were great.

1052_ApacheCon

Allison Randal, a self-proclaimed Unix Graybeard (no, really!), talked about the cloud, and how it's just the latest incarnation of a steady series of small innovations over the last 50 years or so, and what we can look for in the coming decade. She spoke glowingly about Apache and its leadership role in that space.

1105_ApacheCon

Then Jason Hibbets finished up by talking about his work in Open Source Cities, and how Open Source methodologies can work in real-world collaboration to make your home town so much better. I'd heard this presentation before, but it was still great to hear the things that he's been doing in his town, and how they can be done in other places using the same model.

So, check the Apache YouTube channel in a week or so - https://www.youtube.com/user/TheApacheFoundation - and make some time to watch these presentations. I was especially pleased with Hilary's and Upayavira's talks, and recommend you watch those if you are short on time and want to pick just a few.

Syndicated 2014-04-17 16:05:50 from Notes In The Margin

17 Apr 2014 sye   » (Journeyer)

http://www.advogato.org/person/dkg/diary/80.html

17 Apr 2014 badvogato   » (Master)

writing an 'Op-Ed' for Good Friday post. Here's the 'Preamble'..


'DON'T they consult the 'Victims,' though?"

I said, "They should, by rights,

Give them a chance - because, you know,

The tastes of people differ so,

Especially in Sprites."

The Phantom shook his head and smiled.

"Consult them? Not a bit!

'Twould be a job to drive one wild,

To satisfy one single child

There'd be no end to it! "

"Of course you can't leave children free,"

Said I, "to pick and choose:

But, in the case of men like me,

I think 'Mine Host' might fairly be

Allowed to state his views."
He said "It really wouldn't pay --

Folk are so full of fancies.

We visit for a single day,

And whether then we go, or stay,

Depends on circumstances.

"And, though we don't consult ' Mine Host'

Before the thing's arranged,

Still, if he often quits his post,

Or is not a well-mannered Ghost,

Then you can have him changed.

"But if the host's a man like you --

I mean a man of sense;

And if the house is not too new --"

"Why, what has that ," said I, "to do

With Ghost's convenience ? "

"A new house does not suit, you know --

It's such a job to trim it:

But, after twenty years or so,

The wainscotings begin to go,

So twenty is the limit."

"To trim" was not a phrase I could

Remember having heard:

"perhaps," I said, "you'll be so good

As tell me what is understood

Exactly by that word?"

"It means the loosening all the doors,"

The Ghost replied, and laughed:

"It means the drilling holes by scores

In all the skirting-boards and floors,

To make a thorough draught.

"You'll sometimes find that one or two

Are all you really need

To let the wind come whistling through --

But here there'll be a lot to do!"

I faintly gasped "Indeed!

"If I'd been rather later, I'll

Be bound," I added, trying

(Most unsuccessfully) to smile,

"You'd have been busy all this while,

Trimming and beautifying?"

"Why, no," said he; "perhaps I should

Have stayed another minute --

But still no Ghost, that's any good,

Without an introduction would

Have ventured to begin it.

"The proper thing, as you were late,

Was certainly to go:

But, with the roads in such a state,

I got the Knight-Mayor's leave to wait

For half an hour or so."

"Who's the Knight-Mayor?" I cried. Instead

Of answering my question,

"Well, if you don't know that ," he said,

"Either you never go to bed,

Or you've a grand digestion !

"He goes about and sits on folk

That eat too much at night:

His duties are to pinch, and poke,

And squeeze them till they nearly choke."

( I said "It serves them right! )

"And folk who sup on things like these -- "

He muttered, "eggs and bacon --

Lobster -- and duck -- and toasted cheese --

If they don't get an awful squeeze,

I'm very much mistaken!

"He is immensely fat, and so

Well suits the occupation:

In point of fact, if you must know,

We used to call him years ago,

The Mayor and Corporation!

"The day he was elected Mayor

I know that every Sprite meant

To vote for me, but did not dare --

He was so frantic with despair

And furious with excitement.

"When it was over, for a whim,

He ran to tell the King;

And being the reverse of slim,

A two-mile trot was not for him

A very easy thing.

"So, to reward him for his run

(As it was baking hot,

And he was over twenty stone),

The King proceeded, half in fun,

To knight him on the spot."

"'Twas a great liberty to take! "

( I fired up like a rocket.)

"He did it just for punning's sake:

' The man," says Johnson, 'that would make

A pun, would pick a pocket! '"

"A man," said he, "is not a King."

I argued for a while,

And did my best to prove the thing --

The Phantom merely listening

With a contemptuous smile.

At last, when, breath and patience spent,

I had recourse to smoking --

"Your aim," he said, "is excellent:

But - when you call it argument --

Of course you're only joking?"

Stung by his cold and snaky eye,

I roused myself at length

To say, "At least I do defy

The veriest sceptic to deny

That union is strength!"

"That's true enough," said he, "yet stay --"

I listened in all meekness --

" Union is strength, I'm bound to say;

In fact, the thing's as clear as day;

But onions are a weakness."

16 Apr 2014 marnanel   » (Journeyer)

Aubergine song

Most of my set last night wasn't quite this lewd, but this was the only song that got recorded!



This entry was originally posted at http://marnanel.dreamwidth.org/294578.html. Please comment there using OpenID.

Syndicated 2014-04-16 22:03:12 from Monument

16 Apr 2014 caolan   » (Master)

Printing comments in margins

Because a fellow Red Hat employee requested it on Friday, LibreOffice Writer 4.3 will be able to print comments in the margin effectively as they appear on screen, which should take care of the old fdo#36815 feature request. There is now an additional "place comments in margin" option in the print dialog (and writer print options). On screen the comments are placed outside the real page area, so to actually get them onto the paper when printing, the contents of the page need to be scaled down to approximately 75% of their original size to make space to fit the comments in.

Here's the additional comment placement option in the print dialog


Here's some sample pdf output

Syndicated 2014-04-16 10:16:00 (Updated 2014-04-16 10:16:17) from Caolán McNamara

16 Apr 2014 vicious   » (Master)

Putin vs. Godwin

I call Godwin’s law on Russia.  So, by the rules of Usenet, Russia has lost the argument.

I think the security council should adopt Godwin’s law.  Any time you call anyone a Nazi during an argument, you lose your veto power for that issue.


Syndicated 2014-04-16 05:18:22 from The Spectre of Math

16 Apr 2014 hypatia   » (Journeyer)

The Sydney Project: Powerhouse Museum

This year is my son’s last year before he begins full time schooling in 2015. Welcome to our year of child-focussed activities in Sydney.

This was our second visit to the Powerhouse Museum, both times on a Monday, a day on which it is extremely quiet.

Bendy mirror

The Powerhouse seems so promising. It’s a tech museum, and we’re nerd parents, which ought to make this a family paradise. But not so. Partly, it’s that V is not really a nerdy child. His favourite activities involve things like riding his bike downhill at considerable speeds and dancing. He is not especially interested in machinery, intricate steps of causation, or whimsy, which removes a lot of the interest of the Powerhouse. Museums are also a surprising challenge in conveying one fundamental fact about recent history: that the past was not like the present in significant ways. V doesn’t really seem to know this, nor is he especially interested in it, which removes a lot of the hooks one could use in explaining, eg, the steam powered machines exhibit.

We started at The Oopsatoreum, a fictional exhibition by Shaun Tan about the works of failed inventor Henry Mintox. This didn’t last long; given that V doesn’t understand the fundamental conceit of museums and is not especially interested in technology, an exhibit that relies on understanding museums and having affection for technology and tinkering was not going to hold his attention. He enjoyed the bendy mirrors and that’s about it.

V v train

I was hoping to spend a moment in The Oopsatoreum, but he dragged me straight back out to his single favourite exhibit: the steam train parked on the entrance level. But it quickly palled too, because he wanted to climb on and in it, and all the carriages have perspex covering their doors so you can see it but not get in. There’s a bigger exhibit of vehicles on the bottom floor, including — most interestingly to me — an old-fashioned departures board showing trains departing to places that don’t even have lines any more, but we didn’t spend long there because V’s seen it before. He also sped through the steam machines exhibit pretty quickly, mostly hitting the buttons that set off the machines and then getting grumpy at the amount of noise they make.

Gaming, old-style

He was much more favourably struck with the old game tables that are near the steam train. He can’t read yet, and parenting him recently has been a constant exercise in learning exactly how many user interfaces assume literacy (TV remote controls, for example, and their UIs now as well). The games were like this to an extent too; he can’t read “Press 2 to start” and so forth, so I kept having to start the games for him. He didn’t do so well as he didn’t learn to operate the joystick and press a button to fire at the same time. He could only do one or the other. And whatever I was hoping V would get out of this visit, I don’t think marginally improved gaming skills were it, much as I think they’re probably going to be useful to him soon.

Big red car

We spent the most time in the sinkhole of the Powerhouse, the long-running Wiggles exhibition. This begins with the annoying feature that prams must be left outside, presumably because on popular days one could hardly move in there for prams. But we were the only people in there and it was pretty irritating to pick up my two month old baby and all of V’s and her various assorted possessions and lump them all inside with me. I’m glad V is not much younger, or I would have been fruitlessly chasing him around in there with all that stuff in my arms.

Car fixing

It’s also, again, not really the stereotypical educational museum experience. There’s a lot of memorabilia that’s uninteresting to children, such as their (huge) collection of gold and platinum records and early cassette tapes and such. There’s also several screens showing Wiggles videos, which is what V gravitates to. If I wanted him to spend an hour watching TV, I can organise that without leaving my house. He did briefly “repair” a Wiggles car by holding a machine wrench against it.

Overall, I think we’re done with the Powerhouse for a few years.

Cost: $12 adults, $6 children 4 and over, younger children free.

Recommended: for my rather grounded four year old, no. Possibly more suited to somewhat older children, or children who have an interest in a specific exhibit. (If that interest is steam trains, I think Train Works at Thirlmere is a better bet, although we cheated last year by going to a Thomas-franchise focussed day.)

More information: Powerhouse website.

Syndicated 2014-04-16 03:41:59 from lecta

14 Apr 2014 tampe   » (Journeyer)

guile-log 0.4.1 released

I'm really proud of this release. It sports a logic programming environment that previously had an interface designed by myself, plus the famous kanren interface that you'll grok if you have read The Reasoned Schemer. In this release a fairly complete implementation of ISO Prolog has been churned out. That was a huge effort, but the ride was interesting and gave me a lot of new insights into computer programming. The release also sports proper namespace handling, proper closures, proper delimited continuation goals, the kanren interleaving constructs, and a framework enabled by functional data structures and state handling: vhashes and vlists. And the coolest thing of all: you can save state and restore state quite cheaply and seamlessly, and with great power if you learn the system. By seamlessly I mean that we do not have proper functional data structures everywhere, due to semantic needs in accumulators and delimited continuation goals especially, and the logical variables may also be used in a mutative fashion, for two reasons: 1. to enable GC of Prolog variables, and 2. it is maybe 3-4 times faster than a vhash based version, which is also possible. The vhash version is thread safe (I'm not using Guile's internal vhash, but a modded version in C).
Anyhow, seamlessly handling state in all this is really a delicate affair. Cheaply refers to the fact that I tried hard to enable state storage and retrieval inside algorithms, meaning that a save is much more intelligent than saving the whole state of the Prolog engine. In all I strongly recommend anybody interested in logic programming to study these features more deeply; I believe there are some good lessons to learn there. And finally, by power I mean that the system provides an internal tool that makes difficult algorithms possible.

Let's play with it


scheme@(guile-user)> ,L prolog
Happy hacking with Prolog! To switch back, type `,L scheme'.
prolog@(guile-user)> .[use-modules (logic guile-log iso-prolog)]
prolog@(guile-user)> .[use-modules (logic guile-log guile-prolog interpreter)]
prolog@(guile-user)> user_set(1,1),stall,user_set(1,2),stall.
stalled
/* We are at the first stall */
prolog@(guile-user)> .h

HELP FOR PROLOG COMMANDS
---------------------------------------------------------------------
(.n <n>) try to find n solutions
(.all | .* ) try to find all solutions
(.once | .1 ) try to find one solution
(.mute | .m ) no value output is written.
---------------------------------------------------------------------
(.save | .s <ref>) associate current state with name ref
(.load | .l <ref>) restore associated state with name ref
(.cont | .c ) continue the execution from last stall point
(.lold | .lo) restore the last state at a stall
(.clear ) clear the prolog stack and state
---------------------------------------------------------------------
(.ref <ref>) get value of reference user variable ref
(.set <ref> <val>) set user variable ref to value val
---------------------------------------------------------------------
prolog@(guile-user)> .ref 1
$1 = 1
prolog@(guile-user)> .s 1
prolog@(guile-user)> .c
$2 = stalled
/* we are at the second stall */
prolog@(guile-user)> .ref 1
$3 = 2
prolog@(guile-user)> .s 2
prolog@(guile-user)> .c
yesmore (y/n/a) > n
$4 = ()
prolog@(guile-user)> .l 1
prolog@(guile-user)> .ref 1
$5 = 1
prolog@(guile-user)> .l 2
prolog@(guile-user)> .ref 1
$6 = 2
prolog@(guile-user)> .c
yesmore (y/n/a) > n
$7 = ()
prolog@(guile-user)>

To play with it, check out the v0.4.1 tag at guile-log and read the manual at manual

Have fun

14 Apr 2014 dkg   » (Master)

OTR key replacement (heartbleed)

I'm replacing my OTR key for XMPP because of heartbleed (see below).

If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP dkg@jabber.org
===========================================
Date: 2014-04-14

My main XMPP account is dkg@jabber.org.

I prefer OTR [0] conversations when using XMPP for private
discussions.

I was using irssi to connect to XMPP servers, and irssi relies on
OpenSSL for the TLS connections.  I was using it with versions of
OpenSSL that were vulnerable to the "Heartbleed" attack [1].  It's
possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account.

The new, correct OTR fingerprint for the XMPP account at dkg@jabber.org is:

  F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints.  Secure
communication is important not only to protect yourself, but also to
protect your friends, their friends and so on.

Happy Hacking,

  --dkg  (Daniel Kahn Gillmor)

Notes:

[0] OTR: https://otr.cypherpunks.ca/
[1] Heartbleed: http://heartbleed.com/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----

Syndicated 2014-04-14 18:43:00 from Weblogs for dkg

14 Apr 2014 Rich   » (Master)

ApacheCon North America 2014

Last week I had the honor of chairing ApacheCon North America 2014 in Denver Colorado. I could hardly be any prouder of what we were able to do on such an incredibly short timeline. Most of the credit goes to Angela Brown and her amazing team at the Linux Foundation who handled the logistics of the event.

My report to the Apache Software Foundation board follows:

ApacheCon North America 2014 was held April 7-9 in Denver, Colorado, USA. Despite the very late start, we had higher attendance than last year, and almost everyone that I have spoken with has declared it an enormous success. Attendees, speakers and sponsors have all expressed approval of the job that Angela and the Linux Foundation did in the production of the event. Speaking personally, it was the most stress-free ApacheCon I have ever had.

Several projects had dedicated hackathon spaces, while the main hackathon room was unfortunately well off of the beaten path, and went unnoticed by many attendees. We plan to have the main hackathon space much more prominently located in a main traffic area, where it cannot be missed, in Budapest, as I feel that the hackathon should remain a central part of the event, for its community-building opportunities.

Speaking of Budapest, on the first day of the event, we announced ApacheCon Europe, which will be held November 17-21 2014 in Budapest. The website for that is up at http://apachecon.eu/ and the CFP is open, and will close June 25, 2014. We plan to announce the schedule on July 28, 2014, giving us nearly 4 months lead time before the conference. We have already received talk submissions, and a few conference registrations. I will try to provide statistics each month between now and the conference.

As with ApacheCon NA, there will be a CloudStack Collaboration Conference co-located with ApacheCon. We are also discussing the possibility of a co-located Apache OpenOffice user-focused event on the 20th and 21st, or possibly just one day.

We eagerly welcome proposals from other projects which wish to have similar co-located events, or other more developer- or PMC-focused events like the Traffic Server Summit, which was held in Denver.

Discussion has begun regarding a venue for ApacheCon North America 2015, with Austin and Las Vegas early favorites, but several other cities being considered.

I'll be posting several more things about it, because they deserve individual attention. Also, we'll be posting video and audio from the event on the ApacheCon website in the very near future.

Syndicated 2014-04-14 16:03:27 from Notes In The Margin

14 Apr 2014 yeupou   » (Master)

March 31st, Karen Sandler: “Financially the (GNOME) Foundation is in good shape”

I wanted to post this as a side note. But that’s a bit too much.

I dropped GNOME years ago. Back in the days when they dropped tons of cash on people creating shitty confusing companies like Eazel and HelixCode. I said Nautilus would never amount to anything and it never did. I said Miguel de Icaza was taking a very questionable path and he ended up writing proprietary software. If it weren’t so sad, it would be kind of funny to see that nothing changed since then. Their Foundation is going more or less bankrupt while their financial reports show that, for instance in 2012, they spent 1/4 of their resources on the pet project of their “executive director” Karen Sandler, some sexist bullshit called “Women’s Outreach” (I’m waiting for the “Black’s Outreach”, etc).

You don’t know who Karen Sandler is? Typical GNOME character. That’s just someone that never achieved anything related to computing but has been selected to be some sort of speaker nonetheless. I’m not saying only people that produced something that actually serves or served a purpose are entitled to speak. But to put people in a position of “director”/whatever, at some point there should be some knowledge, abilities, even just ideas, that make the person stand out enough to be entitled to represent or lead the others.

So what could she speak of? About bad management?

More like, on GNOME.org “Announcing her departure, Karen said: “Working as the GNOME Foundation Executive Director has been one of the highlights of my career.” She also spoke of the achievements during her time as Executive Director: “I’ve helped to recruit two new advisory board members… and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time.”” 

Typical GNOME bullshit? Indeed: pompous titles, bragging, claiming. “Successful funding campaigns”? Seriously? “Amazing work”. “Mind blowing”. It’s sad for the few GNOME developers that are worth it, because the main thing is a fucking joke. It’s just empty words, no damn facts that matter or are even slightly true.

Not convinced? Too harsh maybe? Keep on reading. On her blog you’ll get her statement. The one quoted on GNOME.org.

“I think I have made some important contributions to the project while I have been Executive Director. I’ve helped to recruit two new advisory board members, and we recently received a one time donation of considerable size (the donor did not want to be identified). Financially the Foundation is in good shape, and we have run the last three years in the black. We’ve held some successful funding campaigns, particularly around privacy and accessibility. We have a mind-blowingly fantastic Board of Directors, and the Engagement team is doing amazing work. The GNOME.Asia team is strong, and we’ve got an influx of people, more so than I’ve seen in some time.
I hope that I have helped us to get in touch with our values during my time as ED, and I think that GNOME is more aware of its guiding mission than ever before.”

Yes, you can skip over the fact that she considers recruiting advisory board members an achievement (!!!). It seems that she thinks a Foundation should focus on itself and not on the project it is derived from; she does not even for a second mention anything that the software project GNOME would benefit from directly.

GNOME.org quoted her with three dots, skipping “Financially the Foundation is in good shape”, and this just one week before we’re told they are definitely not.

She’s right on one thing though: now GNOME is definitely “more aware of its guiding mission than ever before”, since they are forced to cut all unnecessary expenses like the ones she promoted.

I’m not sure I understand why someone as smart as Bradley Kuhn recruited her at the Software Freedom Conservancy.


Syndicated 2014-04-14 15:15:24 from # cd /scratch

14 Apr 2014 yeupou   » (Master)

Synchronizing your (Roundcube) webmail and (KDE) desktop with a (Android) phone

So I finally got an Android-based phone. I thought about waiting for the Ubuntu/Firefox stuff to be released, but my current one (Bada-based: never ever) died.

First, I learned that you actually need to lock your phone with a Google account for life. It just confirmed that the sane and proper first step with this is to remove anything linked to Google.

First place to go is to F-Droid. From there, instead of getting tons of shitty freeware from Google Play/Apps/whatever, you get Free Software, as in freedom even though I like free beer.

Using ownCloud? From F-Droid, get DavDroid. Yes, that works perfectly and is easy to set up, unlike the Dav-related crap on Google Apps. The only thing you have to take care of, if your SSL certificate (trendy topic these days) is self-signed, is to make a certificate the specific way Android accepts them. For now, they recommend doing it like this:

# see http://vimeo.com/89205175

KEY=fqdn.servername.net

# create a ~10-year self-signed certificate and key in PEM format
openssl req -new -x509 -days 3550 -nodes -out $KEY.pem -keyout $KEY.key

# convert the certificate to DER, the format Android imports
openssl x509 -in $KEY.pem -outform der -out $KEY.crt

Apart from that, everything is straightforward. You just add your IMAPS, CalDav and CardDav info like you did with KDE and Roundcube. And you can obviously also use Mozilla Sync through your ownCloud.


Syndicated 2014-04-14 14:21:54 from # cd /scratch

14 Apr 2014 salmoni   » (Master)

Still working on Salstat from time to time. Latest work involves charting and importing from spreadsheets using xlrd (for Excel files) and ezodf (for Libre Office Calc files). Both libraries had similar interfaces so I cobbled together a lot of common code for both rather than having 2 separate routines.

I've also coded a CSV importer. Python's csv module only seems to allow a single delimiter, but my users sometimes need to handle multiple ones (particularly with files composed of several files from different sources). I wrote my own CSV parser that handles multiple delimiters and key characters within quotes too. The core routine is in here as a Gist (heavily commented too, for when I have to trudge my lonely way back to the code to change it). It's not the fastest importer but it does the job accurately with some of the gnarly test data I threw at it.
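The core idea can be sketched briefly; here it is in Haskell rather than Python (the real implementation is the Gist linked above, and this sketch ignores escaped quotes): split a record on any of a set of delimiters while treating quoted sections as opaque.

splitRecord :: [Char] -> String -> [String]
splitRecord delims = go []
  where
    go acc []         = [reverse acc]
    go acc ('"':rest) =
        -- consume everything up to the closing quote as literal text
        let (quoted, rest') = span (/= '"') rest
        in go (reverse quoted ++ acc) (drop 1 rest')
    go acc (c:rest)
        | c `elem` delims = reverse acc : go [] rest
        | otherwise       = go (c:acc) rest

For example, splitRecord ",;" applied to the record a;b,"c,d";e yields ["a", "b", "c,d", "e"].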

Salstat code at GitHub

14 Apr 2014 mjg59   » (Master)

Real-world Secure Boot attacks

MITRE gave a presentation on UEFI Secure Boot at SyScan earlier this month. You should read the presentation and paper, because it's really very good.

It describes a couple of attacks. The first is that some platforms store their Secure Boot policy in a run time UEFI variable. UEFI variables are split into two broad categories - boot time and run time. Boot time variables can only be accessed while in boot services - the moment the bootloader or kernel calls ExitBootServices(), they're inaccessible. Some vendors chose to leave the variable containing firmware settings available during run time, presumably because it makes it easier to implement tools for modifying firmware settings at the OS level. Unfortunately, some vendors left bits of Secure Boot policy in this space. The naive approach would be to simply disable Secure Boot entirely, but that means that the OS would be able to detect that the system wasn't in a secure state[1]. A more subtle approach is to modify the policy, such that the firmware chooses not to verify the signatures on files stored on fixed media. Drop in a new bootloader and victory is ensured.

But that's not a beautiful approach. It depends on the firmware vendor having made that mistake. What if you could just rewrite arbitrary variables, even if they're only supposed to be accessible in boot services? Variables are all stored in flash, connected to the chipset's SPI controller. Allowing arbitrary access to that from the OS would make it straightforward to modify the variables, even if they're boot time-only. So, thankfully, the SPI controller has some control mechanisms. The first is that any attempt to enable the write-access bit will cause a System Management Interrupt, at which point the CPU should trap into System Management Mode and (if the write attempt isn't authorised) flip it back. The second is to disable access from the OS entirely - all writes have to take place in System Management Mode.

The MITRE results show that around 0.03% of modern machines enable the second option. That's unfortunate, but the first option should still be sufficient[2]. Except the first option requires the SMI actually firing. And, conveniently, Intel's chipsets have a bit that allows you to disable all SMI sources[3], and then have another bit to disable further writes to the first bit. Except 40% of the machines MITRE tested didn't bother setting that lock bit. So you can just disable SMI generation, remove the write-protect bit on the SPI controller and then write to arbitrary variables, including the SecureBoot enable one.

This is, uh, obviously a problem. The good news is that this has been communicated to firmware and system vendors and it should be fixed in the future. The bad news is that a significant proportion of existing systems can probably have their Secure Boot implementation circumvented. This is pretty unsurprising - I suggested that the first few generations would be broken back in 2012. Security tends to be an iterative process, and changing a branch of the industry that's historically not had to care into one that forms the root of platform trust is a difficult process. As the MITRE paper says, UEFI Secure Boot will be a genuine improvement in security. It's just going to take us a little while to get to the point where the more obvious flaws have been worked out.

[1] Unless the malware was intelligent enough to hook GetVariable, detect a request for SecureBoot and then give a fake answer, but who would do that?
[2] Impressively, basically everyone enables that.
[3] Great for dealing with bugs caused by YOUR ENTIRE COMPUTER BEING INTERRUPTED BY ARBITRARY VENDOR CODE, except unfortunately it also probably disables chunks of thermal management and stops various other things from working as well.


Syndicated 2014-04-14 03:22:28 from Matthew Garrett

14 Apr 2014 Stevey   » (Master)

Is lumail a stepping stone?

I'm pondering a rewrite of my console-based mail-client.

While it is "popular" it is not popular.

I suspect "console-based" is the killer.

I like console, and I ssh to a remote server to use it, but having different front-ends would be neat.

In the world of mailpipe, etc, is there room for a graphic console client? Possibly.

The limiting factor would be the lack of POP3/IMAP.

Reworking things such that there is a daemon to which a GUI, or a console client, could connect seems simple. The hard part would obviously be working the IPC and writing the GUI. Any toolkit selected would rule out 40% of the audience.

In other news I'm stalling on replying to emails. Irony.

Syndicated 2014-04-14 23:21:20 from Steve Kemp's Blog

13 Apr 2014 nutella   » (Master)

Well that makes a change
Everything centred is a little better than everything in bold.

13 Apr 2014 zeenix   » (Journeyer)

Location hackfest

I'm organising a hackfest in London from May 23 to 25 2014. The plan is to improve our location-related components and to make them useful to other OSs: KDE, Jolla and hopefully also Ubuntu phone. If you are doing (or want to do) anything related to location and want to attend, please do add yourself to the wiki page as soon as possible so I can notify our hosts if we'd need a bigger room.

Oh and if you need a place to stay, do contact me!

I'm thankful to the awesome Mozilla folks for hosting this event and for providing an awesome open geolocation service to everyone.



13 Apr 2014 dmarti   » (Master)

Surveillance Marketing pays

Katrina Lerman of Communispace explains how surveillance marketing pays. First of all, people don't like being tracked in general.

We found that consumers overwhelmingly prefer anonymity online: 86 percent of consumers would click a “do not track” button if it were available and 30 percent of consumers would actually pay a 5 percent surcharge if they could be guaranteed that none of their information would be captured.

What would get them over their resistance? Discounts, of course.

On the flip side, consumers may be willing to share their data if there’s a clear value exchange: 70 percent said they would voluntarily share personal data with a company in exchange for a 5 percent discount.

Got it? This is some heavy Chief-Marketing-Officer-level stuff here, so pay attention. Yes, you'll be spending a lot of money on Big Data and all the highly paid surveillance marketing consultants and IT experts who go with it. (Big Data experts are a rare breed, and feed primarily on between-sessions croissants at Big Data conferences.)

But look what you get for that increase in the marketing budget. You get to cut your price to get people to sign up for it.

Somewhere this all makes sense. Maybe Bob Hoffman can explain it.

Syndicated 2014-04-13 14:52:44 from Don Marti

12 Apr 2014 etbe   » (Master)

Replacement Credit Cards and Bank Failings

I just read an interesting article by Brian Krebs about the difficulty in replacing credit cards [1].

The main reason that credit cards need to be replaced is that they have a single set of numbers that is used for all transactions. If credit cards were designed properly for modern use (i.e. since 2000 or so) they would act as a smart-card as the recommended way of paying in store. Currently I have a Mastercard and an Amex card; the Mastercard (issued about a year ago) has no smart-card feature, and as Amex is rejected by most stores I've never had a chance to use the smart-card part of a credit card. If all American credit cards had a smart-card feature which was recommended by store staff then the problems that Brian documents would never have happened: the attacks on Target and other companies would have got very few card numbers, and the companies that make cards wouldn't have a backlog of orders.

If a bank was to buy USB smart-card readers for all their customers then they would be very cheap (the hardware is simple and therefore the unit price would be low if purchasing a few million). As banks are greedy they could make customers pay for the readers and even make a profit on them. Then for online banking at home the user could use a code that’s generated for the transaction in question and thus avoid most forms of online banking fraud – the only possible form of fraud would be to make a $10 payment to a legitimate company become a $1000 payment to a fraudster but that’s a lot more work and a lot less money than other forms of credit card fraud.

A significant portion of all credit card transactions performed over the phone are made from the customer’s home. Of the ones that aren’t made from home a significant portion would be done from a hotel, office, or other place where a smart-card reader might be conveniently used to generate a one-time code for the transaction.

The main remaining problem seems to be the use of raised numbers. Many years ago it was common for credit card purchases to involve some form of "carbon paper", with the raised numbers making an impression on the credit card transfer form. I don't recall ever using a credit card in that way; I've only had credit cards for about 18 years, and my memories of raised numbers being used to make an impression on paper involve watching my parents pay when I was young. It seems likely that someone who likes paying by credit card and does so at small companies might have some recent experience of "carbon paper" payment, but anyone who prefers EFTPOS and cash probably wouldn't.

If the credit card number (used for phone and Internet transactions in situations where a smart-card reader isn't available) wasn't raised then it could be changed by posting a sticker with a new number that the customer could apply to their card. The customer wouldn't even need to wait for the post before their card could be used again, as the smart-card part would never be invalid. The magnetic stripe on the card could be changed at any bank, and there's no reason why an ATM couldn't identify a card by its smart-card and then write a new magnetic stripe automatically.

These problems aren’t difficult to solve. The amounts of effort and money involved in solving them are tiny compared to the costs of cleaning up the mess from a major breach such as the recent Target one, the main thing that needs to be done to implement my ideas is widespread support of smart-card readers and that seems to have been done already. It seems to me that the main problem is the incompetence of financial institutions. I think the fact that there’s no serious competitor to Paypal is one of the many obvious proofs of the incompetence of financial companies.

The effective operation of banks is essential to the economy and the savings of individuals are guaranteed by the government (so when a bank fails a lot of tax money will be used). It seems to me that we need to have national banks run by governments with the aim of financial security. Even if banks were good at their business (and they obviously aren’t) I don’t think that they can be trusted with it, an organisation that’s “too big to fail” is too big to lack accountability to the citizens.

Related posts:

  1. Football Cards and Free Kittens My cousin Greg Coker has created an eBay auction for...
  2. The Millennium Seed Bank Jonathan Drori gave an interesting TED talk about the Millennium...
  3. systemd – a Replacement for init etc The systemd project is an interesting concept for replacing init...

Syndicated 2014-04-12 00:25:39 from etbe - Russell Coker

11 Apr 2014 Stevey   » (Master)

Putting the finishing touches to a nodejs library

For the past few years I've been running a simple service to block blog/comment-spam, which is (currently) implemented as a simple JSON API over HTTP, with a minimal core and all the logic in a series of plugins.

One obvious thing I wasn't doing until today was paying attention to the anchor-text used in hyperlinks, for example:

  <a href="http://fdsf.example.com/">buy viagra</a>

Blocking on the anchor-text is less prone to false positives than blocking on keywords in the comment/message bodies.

Unfortunately there seem to be no simple nodejs modules for extracting all the links, and associated anchor text, from an arbitrary HTML string. So I had to write such a module, but .. given how small it is there seems little point in sharing it. I guess this is one of the reasons why there are often large gaps in the module ecosystem.
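For illustration, here's a minimal sketch of the sort of helper I mean (a regex-based toy - the function name and approach are illustrative, not the actual module, and real HTML deserves a real parser):

  // Extract {href, anchor} pairs from an HTML string.
  function extractLinks(html) {
      var re = /<a\s[^>]*href=["']([^"']+)["'][^>]*>([\s\S]*?)<\/a>/gi;
      var out = [], m;
      while ((m = re.exec(html)) !== null) {
          out.push({
              href: m[1],
              // strip nested tags and collapse whitespace in the anchor text
              anchor: m[2].replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim()
          });
      }
      return out;
  }

  // extractLinks('<a href="http://fdsf.example.com/">buy viagra</a>')
  //   => [ { href: 'http://fdsf.example.com/', anchor: 'buy viagra' } ]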

(Equally some modules are essentially applications; great that the authors shared, but virtually unusable, unless you 100% match their problem domain.)

I've written about this before when I had to construct, and publish, my own cidr-matching module.

Anyway, expect an upload soon; currently I "parse" HTML and BBCode. Possibly Markdown to follow, since I have an interest in Markdown.

Syndicated 2014-04-11 14:14:32 from Steve Kemp's Blog

11 Apr 2014 joey   » (Master)

propellor introspection for DNS

In the just-released Propellor 0.3.0, I've improved Propellor's config file DSL significantly. Now properties can set attributes of a host, that can be looked up by its other properties, using a Reader monad.

This saves needing to repeat yourself:

hosts = [ host "orca.kitenet.net"
        & stdSourcesList Unstable
        & Hostname.sane -- uses hostname from above

And it simplifies docker setup, with no longer a need to differentiate between properties that configure docker vs properties of the container:

 -- A generic webserver in a Docker container.
    , Docker.container "webserver" "joeyh/debian-unstable"
        & Docker.publish "80:80"
        & Docker.volume "/var/www:/var/www"
        & Apt.serviceInstalledRunning "apache2"

But the really useful thing is, it allows automating DNS zone file creation, using attributes of hosts that are set and used alongside their other properties:

hosts =
    [ host "clam.kitenet.net"
        & ipv4 "10.1.1.1"

        & cname "openid.kitenet.net"
        & Docker.docked hosts "openid-provider"

        & cname "ancient.kitenet.net"
        & Docker.docked hosts "ancient-kitenet"
    , host "diatom.kitenet.net"
        & Dns.primary "kitenet.net" hosts
    ]

Notice that hosts is passed into Dns.primary, inside the definition of hosts! Tying the knot like this is a fun haskell laziness trick. :)

Now I just need to write a little function to look over the hosts and generate a zone file from their hostname, cname, and address attributes:

extractZoneFile :: Domain -> [Host] -> ZoneFile
extractZoneFile = gen . map hostAttr
  where gen = -- TODO

The eventual plan is that the cname property won't be defined as a property of the host, but of the container running inside it. Then I'll be able to cut-n-paste move docker containers between hosts, or duplicate the same container onto several hosts to deal with load, and propellor will provision them, and update the zone file appropriately.


Also, Chris Webber had suggested that Propellor be able to separate values from properties, so that eg, a web wizard could configure the values easily. I think this gets it much of the way there. All that's left to do is two easy functions:

overrideAttrsFromJSON :: Host -> JSON -> Host

exportJSONAttrs :: Host -> JSON

With these, propellor's configuration could be adjusted at run time using JSON from a file or other source. For example, here's a containerized webserver that publishes a directory from the external host, as configured by JSON that it exports:

demo :: Host
demo = Docker.container "webserver" "joeyh/debian-unstable"
    & Docker.publish "80:80"
    & dir_to_publish "/home/mywebsite" -- dummy default
    & Docker.volume (getAttr dir_to_publish ++":/var/www")
    & Apt.serviceInstalledRunning "apache2"

main = do
    json <- readJSON "my.json"
    let demo' = overrideAttrsFromJSON demo json
    writeJSON "my.json" (exportJSONAttrs demo')
    defaultMain [demo']

Syndicated 2014-04-11 05:05:54 from see shy jo

10 Apr 2014 Stevey   » (Master)

A small assortment of content

Today I took down my KVM-host machine, rebooting it and restarting all of my guests. It has been a while since I'd done so and I was a little nervous; as it turned out this nervousness was prophetic.

I'd forgotten to hardwire the use of proxy_arp so my guests were all broken when the systems came back online.

If you're curious this is what my incoming graph of email SPAM looks like:

I think it is obvious where the downtime occurred, right?

In other news I'm awaiting news from the system administration job I applied for here in Edinburgh, if that doesn't work out I'll need to hunt for another position..

Finally I've started hacking on my console based mail-client some more. It is a modal client which means you're always in one of three states/modes:

  • maildir - Viewing a list of maildir folders.
  • index - Viewing a list of messages.
  • message - Viewing a single message.

As a result of a lot of hacking there is now a fourth mode/state, "text", which allows you to view arbitrary text: for example scrolling up and down a file on-disk, reading the manual, or viewing messages in interesting ways.

Support is still basic at the moment, but both of these work:

  --
  -- Show a single file
  --
  show_file_contents( "/etc/passwd" )
  global_mode( "text" )

Or:

function x()
   txt = { "${colour:red}Steve",
           "${colour:blue}Kemp",
           "${bold}Has",
           "${underline}Definitely",
           "Made this work" }
   show_text( txt )
   global_mode( "text" )
end

x()

There will be a new release within the week, I guess, I just need to wire up a few more primitives, write more of a manual, and close some more bugs.

Happy Thursday, or as we say in this house, Hyvää torstai!

Syndicated 2014-04-10 15:34:02 from Steve Kemp's Blog

10 Apr 2014 joey   » (Master)

Kite: a server's tale

My server, Kite, is finishing its 20th year online.

It started as kite.resnet.cornell.edu, a 486 under the desk in my dorm room. Early on, it bounced around the DNS -- kite.ithaca.ny.us, kite.ml.org, kite.preferred.com -- before landing on kite.kitenet.net. The hardware has changed too, from a succession of desktop machines, it eventually turned into a 2u rack-mount server in the CCCP co-op. And then it went virtual, and international, spending a brief time in Amsterdam, before relocating to England and the kvm-hosting co-op.

Through all this change, and no few reinstalls from scratch, it's had a single distinct personality. This is a multi-user unix system, of the old school, carefully (and not-so-carefully) configured and administered to perform a grab-bag of functions. Whatever the users need.

I read the olduse.net hacknews newsgroup, and I see, in their descriptions of their server in 1984, the prototype of Kite and all its ilk.

It's consistently had a small group of users, a small subset of my family and friends. Not quite big enough to really turn into a community, and we wall and talk less than we once did.


Exhibit: Kite as it appeared in the 90's

[Intentionally partially broken, being able to read the cgi source code is half the fun.]

Kite was an early server on the WWW, and garnered mention in books and print articles. Not because it did anything important, but because there were few enough interesting web sites that it slightly stood out.


Many times over these 20 years I've wondered what will be the end of Kite's story. It seemed like I would either keep running it indefinitely, or perhaps lose interest. (Or funding -- it's eaten a lot of cash over the years, especially before the current days of $5/month VPS hosting.) But I failed to anticipate what seems to really be happening to it. Just as I didn't fathom, when kite was perched under my desk, that it would one day be some virtual abstract machine in an unknown computer in another country.

Now it seems that what will happen to Kite is that most of the important parts of it will split off into a constellation of specialized servers. The website, including the user sites, has mostly moved to branchable.com. The DNS server, git server and other crucial stuff is moving to various VPS instances and containers. (The exhibit above is just one more automatically deployed, soulless container..) A large part of Kite has always been about me playing with bleeding-edge stuff and installing random new toys; that has moved to a throwaway personal server at cloudatcost.com which might be gone tomorrow (or might keep running for free for years).

What it seems will be left is a shell box, with IMAP access to a mail server, and a web server for legacy /~user/ sites, and a few tools that my users need (including that pine program some of them are still stuck on.)

Will it be worth calling that Kite?


[ Kite users: This transition needs to be done by December when the current host is scheduled to be retired. ]

Syndicated 2014-04-10 15:17:38 from see shy jo

10 Apr 2014 joolean   » (Journeyer)

I've been working on game-related stuff, time permitting. I'm at a point where I can roughly synchronize the movement of a little naked guy walking around a green field (thanks, Liberated Pixel Cup!) between the server and connected clients, and I wanted to add some spatial occlusion to the mix: Areas of the map that both the client and the server understand to be blocked. I knew this wasn't a trivial problem to solve efficiently, so I started doing research on spatial indexing, and found out about...

R-trees

An R-tree is a container structure for rectangles and associated user data. You search the tree by specifying a target rectangle and a visitor function that gets called for every rectangle in the tree that overlaps your target. Like all tree-based structures, the advantage you get when searching an R-tree derives from the use of branches to hierarchically partition the search space. R-trees use intermediate, covering rectangles to recursively group clusters of spatially-related rectangles. If your target rectangle overlaps a given covering rectangle, it may also overlap one of its covered leaf rectangles; if it doesn't overlap that covering rectangle, you can safely prune that branch from the search. The secret sauce of a particular R-tree implementation is in the rebalancing algorithm, which generates these covering nodes. A common approach seems to be to iteratively generate some number of covering rectangles that partition their underlying set of constituent rectangles as evenly as possible while minimizing the overlap of the covering set.
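To make the search half of that concrete, here is a minimal hand-rolled sketch (the struct layout and names are invented for brevity - this is not the Melinda Green code, nor either of my implementations):

  /* Recurse only into branches whose covering rectangle overlaps
   * the target; everything else is pruned. */
  typedef struct { double x0, y0, x1, y1; } rect;

  typedef struct rnode {
      rect mbr;                 /* covering (minimum bounding) rectangle */
      int is_leaf;
      int nchildren;
      struct rnode **children;  /* interior nodes only */
      void *data;               /* leaf nodes only */
  } rnode;

  static int overlaps(const rect *a, const rect *b)
  {
      return a->x0 <= b->x1 && b->x0 <= a->x1 &&
             a->y0 <= b->y1 && b->y0 <= a->y1;
  }

  /* Calls visit(rectangle, data) for every leaf that overlaps target. */
  static void rtree_search(const rnode *n, const rect *target,
                           void (*visit)(const rect *, void *))
  {
      if (!overlaps(&n->mbr, target))
          return;               /* prune this entire branch */
      if (n->is_leaf) {
          visit(&n->mbr, n->data);
          return;
      }
      for (int i = 0; i < n->nchildren; i++)
          rtree_search(n->children[i], target, visit);
  }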

I whipped up a couple of implementations -- one in C with GLib dependencies, one in Scheme in terms of gzochi managed records -- based on my reading of the source code by Melinda Green, available here.

r6rs-protobuf

My own usage of this library uncovered another embarrassing issue: Deserializing a message with an embedded message field in r6rs-protobuf 0.6 doesn't work reliably, on account of the way the Protocol Buffers wire protocol directs the deserializer to handle what it perceives as unknown fields (throw 'em away). The solution is that you have to tell a delegate message deserializer exactly how much of a stream it's allowed to read, either explicitly (by passing the delimited length) or by preemptively consuming those bytes and wrapping an in-memory port around them -- which is what I did, to get a patch out as quickly as possible. Find version 0.7 here, if you need it.

10 Apr 2014 marnanel   » (Journeyer)

airlock

It's an airlock!



yes, I am pathetic.

This entry was originally posted at http://marnanel.dreamwidth.org/294314.html. Please comment there using OpenID.

Syndicated 2014-04-09 23:25:50 from Monument

9 Apr 2014 Skud   » (Master)

You don’t need to change all your passwords

This is probably going to be a wildly unpopular opinion and IDGAF. So many of my non-technical friends are freaking out that I feel the need to provide a bit of reassurance/reality.

First, an analogy.

In 2004 we learned that you could open a Kryptonite U-lock with a ballpoint pen. Everyone freaked out and changed their bike locks ASAP. Remember that?

Now, I wasn’t riding a bike at the time, but I started riding a bike a few years later in San Francisco, and I know how widespread bike theft is there. I used multiple levels of protection for my bike: a good lock, fancy locking posts on the seat and handlebars, and I parked my bike somewhere secure (work, home) about 90% of the time and only locked it up in public for short periods. Everywhere I went I saw sad, dismembered bike frames hanging forlornly from railings, reminding me of the danger. Those were paranoid times, and if I’d been riding in SF in 2005 you can bet I would have been first in line to replace my U-lock.

These days I live in Ballarat, a country town in Victoria, Australia. Few people ride bikes here and even fewer steal them. I happily leave my bike unlocked on friends’ front porches, dump it under a tree while I watch birds on the lake, lean it against the front of a shop just locked to itself while I grab a coffee, or park it outside divey music venues while I attend gigs late at night. I have approximately zero expectation of anything happening to it. If I heard that my bike lock had been compromised, I wouldn’t be in too desperate a hurry to change it.

Here’s the thing: if you are an ordinary Jane or Joe living the Internet equivalent of my cycling life in Ballarat, you don’t need to freak out about this thing.

Here are some websites I use where I’m not going to bother changing my password:

  • The place where I save interesting recipes
  • The one I go to to look at gifs of people in bands
  • That guitar forum
  • The one with the cool jewelry
  • The wiki I edit occasionally
  • The social network I only signed up for out of a sense of obligation but never use

Why? Because a) probably nobody’s going to bother trying to steal the passwords from there, and b) even if they did, so what?

This Heartbleed bug effectively reduces the privacy of an SSL-protected site (one whose URL starts with https://, which will probably show a lock in your browser’s address bar) to that of one without. Would you log in to a site without SSL? Do you even know if the site uses SSL? If you’d log in to your pet/recipe/knitting/music site anyway — if you’d do it from a coffee shop or airport — if you’d do it from a laptop or tablet or phone that doesn’t have a strong password on it — if you don’t use two-factor authentication or don’t know what that means — then basically this won’t matter to you.

(I’m not saying it shouldn’t matter. You should probably set strong passwords and use VPNs and two-factor authentication. Just like you should probably lock your bike up everywhere you go, floss, and get your pap smears on the regular. Right? Right? *crickets*)

So if you’re a regular Jane — not working in IT security, not keeping state secrets, etc — here’s where you really need to change your passwords:

  • Any site you use to login to other sites (eg. Google, Facebook)
  • Any site that gives access to a good chunk of your money with just your password (eg. your bank, PayPal, Amazon)

(To do this: use this site to check if the site in question is affected, then if it’s “all clear” change your password. Don’t bother changing your password on a still-affected site, as that defeats the purpose. Oh, and you should probably change your passwords on those sites semi-regularly anyway, like maybe when you change the batteries in your smoke alarm. Which I just realised I should have done the other day and didn’t. Which tells you everything, really.)

Beyond those couple of key websites, you need to do a little risk assessment. Ask yourself questions like:

  • Has anyone ever heard of this site? Does anyone care? Is it likely to be a target of ominous dudes in balaclavas?
  • If I lost my login to this site, or someone could snoop what I had on that account, what is the worst that could happen?

If your answer is “I’d lose my job” or “I absolutely cannot survive without my extensive collection of Bucky/Steve fanart” then by all means change your password.

If your answer is “Eh, I’d sign up for a new one” or “Wait, even I’d forgotten that site existed” then you can probably stop freaking out quite so much.


DISCLAIMER: I am not an Internet security expert, just a moderately well-informed techhead. Some people, including better-informed ones, will disagree with me. You take this advice at your own risk. La la la what the fuck ever, you’ll most likely be fine.

Syndicated 2014-04-09 00:21:46 from Infotropism

8 Apr 2014 Skud   » (Master)

Seeking a volunteer for 3000 Acres (Melbourne, Australia)

As you might know, I’ve been working on 3000 Acres over the last few months. My time there is almost up and they’re looking for volunteers to continue developing the site. If anyone in the Melbourne area is interested in working with me on this, and then taking it over, please get in touch! It would be a great way to get involved in a tech project for sustainability/social good, and the 3000 Acres team are lovely people with a great vision. Feel free to drop me an email or ping me via whatever other means is convenient, and please help us get the word out.


3000 Acres connects people with vacant land to help them start community gardens. In 2013 3000 Acres was the winner of the VicHealth Seed Challenge, and is supported by VicHealth and The Australian Centre for Social Innovation (TACSI) along with a range of partners from the sustainability, horticulture, and urban planning fields. We are in the process of incorporating as a non-profit.

Our website, which is the main way people interact with us, launched in February 2014. The site helps people map vacant lots, connect with other community members, and find community garden resources. Since our launch we have continued to improve and add features to our site.

So far, our web development has been done by one part-time developer. We are looking for another (or multiple) volunteer developers to help us continue to improve the site, and to help make our code ready to roll out to other cities.

We’re looking for someone with the following skills and experience:

  • Intermediate level Rails experience (or less Rails experience but strong backend web experience in general). You should be comfortable using an MVC framework, designing data structures, coding complex features, etc.
  • Comfort with CSS and Javascript (we mostly use Bootstrap 3.0 and Leaflet.js) and with light design work (eg. layout, icons)
  • Familiarity with agile software development, including iteration planning, test driven development, continuous integration, etc.
  • Strong communication skills: you’ll particularly use them for writing web copy, advising on information architecture, and project management.
  • You should be in Melbourne or able to travel regularly to Melbourne to meet with us. Phone, Skype, and screen sharing may also be used — our current developer is based in Ballarat.

We welcome applications from people of diverse backgrounds, and are flexible in our requirements; if you think you have skills that would work, even if they don’t match the above description exactly, please get in touch.

We envision this role being around 8 hours a week ongoing (somewhat flexible, and mostly from your own location). Initially you will work closely with our current developer, who can provide in-depth training/mentoring and documentation on our existing infrastructure and processes. Over the next 3 months you will become increasingly independent, after which time you will be expected to be able to create and maintain high-quality code without close technical supervision.

For more information you can check out:

If you’re interested in working with us, please drop Alex an email at skud@growstuff.org. No resume required — just let us know a bit about yourself, your experience, and why you want to work with us. If you can show us an example of some relevant work you’ve done in the past, that would be fantastic.

Syndicated 2014-04-08 04:12:33 from Infotropism

7 Apr 2014 sye   » (Journeyer)

it's Dr. Kaul, not Karl.

to read

6 Apr 2014 StevenRainwater   » (Master)

Road Trip to the Future

Ed Emshwiller cover art from the 1962 edition of Marion Zimmer Bradley’s “The Planet Savers”

Susan and I make the drive to work together at least three days a week. Lately we’ve been listening to audio books for fun. We started out with the 1973 BBC radio dramatization of Asimov’s Foundation Trilogy. It’s available at no cost (though not free as in free speech: it’s still under a proprietary license). The audio has not held up well and we found some parts of it wholly unintelligible. Fortunately, having read it a few times, I knew it so well I could fill in the missing bits for Susan from memory.

From there we moved on to a more modern audio book, Graphic Audio’s full-scale dramatization of Texas author Elizabeth Moon’s series, Vatta’s War. It’s a series of five books with a total audio running time of 57 hours, so it kept us entertained for quite a while. The series is hard science fiction, and all the more enjoyable because Elizabeth Moon has a military background and has put a good deal of thought into the strategies that might evolve when managing large space battles within the limits of light-speed communications. How, for example, do you deal with a multi-minute light lag that would affect not only communications but sensor data? Once a space battle is started, how do you keep track of the expanding spheres of debris that create navigational hazards as dangerous as enemy weapons?

Also, bonus points for being the first science fiction book I can recall with mention of a Shiner Bock beer. The audio quality of the Graphic Audio production was excellent and it’s a complex production with multiple actors voicing the characters as well as sound effects and music. I highly recommend either the audiobook or printed versions of the Vatta’s War series.

Our most recent audio book is a LibriVox production of The Planet Savers, Marion Zimmer Bradley’s first Darkover novel, which seems to have passed into the public domain already despite being published in 1958. This audio book is truly free (both as in “free beer” and as in “free speech”). It’s a reasonably high-quality production, though more primitive than the Graphic Audio ones: just a simple recording of someone reading the book.

If anyone else has an audio book recommendation, comments are welcome.

Syndicated 2014-04-06 22:08:10 from Steevithak of the Internet

6 Apr 2014 Stevey   » (Master)

So that distribution I'm not-building?

The other week I was toying with using GNU stow to build an NFS-share, which would allow remote machines to boot from it.

It worked. It worked well. (Standard stuff, PXE booting with an NFS-root.)

Then I started wondering about distributions, since in one sense what I'd built was a minimal distribution.

On that basis yesterday I started hacking something more minimal:

  • I compiled a monolithic GNU/Linux kernel.
  • I created a minimal initrd image, using busybox.
  • I built a static version of the tcc compiler.
  • I got the thing booting, via KVM.

Unfortunately here is where I ran out of patience. Using tcc and the static C library I can compile code. But I can't link it.

$ cat > t.c <<EOF
#include <stdio.h>
int main ( int argc, char *argv[] )
{
        printf("OK\n" );
        return 0;
}
EOF
$ /opt/tcc/bin/tcc t.c
tcc: error: file 'crt1.o' not found
tcc: error: file 'crti.o' not found
..

Attempting to fix this up resulted in nothing much better:

$ /opt/tcc/bin/tcc t.c -I/opt/musl/include -L/opt/musl/lib/

And because I don't have a full system I cannot compile t.c to t.o and use ld to link (because I have no ld.)

I had a brief flirt with the portable c-compiler, pcc, but didn't get any further with that.

I suspect the real solution here is to install gcc onto my host system, with something like --prefix=/opt/gcc, and then rsync that into my (suddenly huge) initramfs image. Then I have all the toys.
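(One thing that might be worth trying before going the gcc route, though I haven't verified it: musl ships its own crt1.o and crti.o, and tcc's -B option changes where it looks for such support files, so something along these lines might get the link working - treat the exact paths as guesses:)

  $ /opt/tcc/bin/tcc -B/opt/musl/lib -I/opt/musl/include t.c -o t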

Syndicated 2014-04-06 14:35:27 from Steve Kemp's Blog

6 Apr 2014 etbe   » (Master)

Finding Corrupt Files that cause a Kernel Error

There is a BTRFS bug in kernel 3.13 which is triggered by Kmail and causes Kmail index files to become seriously corrupt. Another bug in BTRFS causes a kernel GPF when an application tries to read such a file, which results in a SEGV being sent to the application. After that the kernel ceases to operate correctly for any files on that filesystem, and no command other than “reboot -nf” (hard reset without flushing write-back caches) can be relied on to work correctly. The second bug should be fixed in Linux 3.14; I’m not sure about the first one.

In the mean time I have several systems running Kmail on BTRFS which have this problem.

(strace tar cf - . |cat > /dev/null) 2>&1|tail

To discover which file is corrupt I run the above command after a reboot. Below is a sample of the typical output of that command, which shows that the file named “.trash.index” is corrupt. After discovering the file name I run “reboot -nf” and then delete the file (the file can be deleted on a clean system but not after a kernel GPF). Recently I’ve been doing this about once every 5 days, so on average each Kmail/BTRFS system has been getting disk corruption every two weeks. Fortunately every time the corruption has been on an index file so I don’t need to restore from backups.

newfstatat(4, ".trash.index", {st_mode=S_IFREG|0600, st_size=33, …}, AT_SYMLINK_NOFOLLOW) = 0
openat(4, ".trash.index", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 5
fstat(5, {st_mode=S_IFREG|0600, st_size=33, …}) = 0
read(5,  <unfinished …>
+++ killed by SIGSEGV +++

Related posts:

  1. Bizarre “No space left on device” error from Xen What should have been a routine “remove DIMMs and run...
  2. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  3. Kernel Security vs Uptime For best system security you want to apply kernel security...

Syndicated 2014-04-06 11:55:48 from etbe - Russell Coker

6 Apr 2014 sye   » (Journeyer)

Option 1: "Some constitutional scholars, who call themselves textualists, say that the only source of meaning in constitutional law should be the text of the Constitution itself. What do you think of this? Consider the arguments Professor Amar makes for when we can go beyond the text and whether you find them persuasive."

Option 2:

Option 3: "Pick one of the famous decisions from the Warren Court that Professor Amar discusses (such as Brown v. Board of Education, Reynolds v. Simms, or New York Times v. Sullivan). Either defend or criticize the decision the Court came to using the types of constitutional argument that Professor Amar has described in his lectures."

Option 3: New York Times v. Sullivan is most pertinent to our current time. I think I'll dig more into THAT case.

6 Apr 2014 mikal   » (Journeyer)

Initial play with wood turning

I've been going to the ACT Woodcraft Guild for the last year or so learning to turn wood on a lathe. I'm by no means an expert, but here are some of my early efforts.


Tags for this post: wood turning 20140406-woodturning photo


Syndicated 2014-04-05 17:17:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

5 Apr 2014 dmarti   » (Master)

Movie plot

(Entry for Bruce Schneier's Seventh Movie-Plot Threat Contest)

Ann has completed Agency training for a job as a non-official cover agent at an international oil firm. But now she's assigned to the release engineering team at Aloodo, a large Internet company where the source is open, the culture is wild and free, and release engineering, without management's knowledge, installs back doors for the Agency. A change in the company's elaborate list of security checks means the Agency needs one more inside person, fast, and Ann is the only NOC-qualified agent available.

Hijinks ensue as Ann must make it through the technical interview with a flaky radio connection to an Aloodo-employed NOC agent for support. When it fails, she aces the interview by dropping some petroleum science.

Ann struggles to keep up with both her release engineering work and her Agency responsibilities. But when a series of intricate heists has police baffled, she realizes that the gang is using information that could only come from within Aloodo. Do the back doors have back doors? Who are her new co-workers really working for? Is there anyone she can trust?

Syndicated 2014-04-05 13:54:10 from Don Marti

4 Apr 2014 bagder   » (Master)

curl and proxy headers

Starting in the next curl release, 7.37.0, the curl tool supports the new command line option --proxy-header. (Completely merged at this commit.)

It works exactly like --header does, but will only include the headers in requests sent to a proxy, while the opposite is true for --header: those headers will only be sent in requests that go to the end server. But of course, if you use an HTTP proxy and do a normal GET, for example, curl will include headers for both the proxy and the server in the request. The bigger difference is when using CONNECT to a proxy, which then only will use proxy headers.
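For example (hypothetical proxy and header names), with an https:// URL curl issues a CONNECT to the proxy carrying only the proxy headers, while the server headers travel inside the tunnel:

  $ curl --proxy http://proxy.example.com:8080 \
         --proxy-header "X-Proxy-Only: yes" \
         --header "X-Server-Only: yes" \
         https://example.com/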

libcurl

For libcurl, the story is slightly different and more complicated, since we keep things backwards compatible there. The new libcurl still works exactly like the former one by default.

CURLOPT_PROXYHEADER is the new proxy header option, and it should be set up exactly like CURLOPT_HTTPHEADER is.

CURLOPT_HEADEROPT is then what an application uses to set how libcurl should use the two header options. Again, by default libcurl will keep working like before and use the CURLOPT_HTTPHEADER list in all HTTP requests. To change that behavior and use the new functionality instead, set CURLOPT_HEADEROPT to CURLHEADER_SEPARATE.

Then the two header lists are handled separately. An application can switch back to the old behavior, with a single unified header list, by setting CURLOPT_HEADEROPT to CURLHEADER_UNIFIED.
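A minimal sketch of the separated mode (hypothetical header names, error handling omitted):

  #include <curl/curl.h>

  int main(void)
  {
      CURL *curl = curl_easy_init();
      struct curl_slist *proxy_hdrs = NULL, *server_hdrs = NULL;

      proxy_hdrs  = curl_slist_append(proxy_hdrs,  "X-Proxy-Only: yes");
      server_hdrs = curl_slist_append(server_hdrs, "X-Server-Only: yes");

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");

      /* opt in to separate header lists for proxy and server */
      curl_easy_setopt(curl, CURLOPT_HEADEROPT, CURLHEADER_SEPARATE);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, server_hdrs);
      curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxy_hdrs);

      curl_easy_perform(curl);

      curl_slist_free_all(proxy_hdrs);
      curl_slist_free_all(server_hdrs);
      curl_easy_cleanup(curl);
      return 0;
  }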

Syndicated 2014-04-04 15:44:46 from daniel.haxx.se

3 Apr 2014 mjg59   » (Master)

Mozilla and leadership

A post I wrote back in 2012 got linked from a couple of the discussions relating to Brendan Eich being appointed Mozilla CEO. The tldr version is "If members of your community don't trust their leader socially, the leader's technical competence is irrelevant". That seems to have played out here.

In terms of background[1]: in 2008, Brendan donated money to the campaign for Proposition 8, a Californian constitutional amendment that expressly defined marriage as being between one man and one woman[2]. Both before and after that he had donated money to a variety of politicians who shared many political positions, including the definition of marriage as being between one man and one woman[3].

Mozilla is an interesting organisation. It consists of the for-profit Mozilla Corporation, which is wholly owned by the non-profit Mozilla Foundation. The Corporation's bylaws require it to work to further the Foundation's goals, and any profit is reinvested in Mozilla. Mozilla developers are employed by the Corporation rather than the Foundation, and as such the CEO is responsible for ensuring that those developers are able to achieve those goals.

The Mozilla Manifesto discusses individual liberty in the context of use of the internet, not in a wider social context. Brendan's appointment was very much in line with the explicit aims of both the Foundation and the Corporation - whatever his views on marriage equality, nobody has seriously argued about his commitment to improving internet freedom. So, from that perspective, he should have been a fine choice.

But that ignores the effect on the wider community. People don't attach themselves to communities merely because of explicitly stated goals - they do so because they feel that the community is aligned with their overall aims. The Mozilla community is one of the most diverse in free software, at least in part because Mozilla's stated goals and behaviour are fairly inspirational. People who identify themselves with other movements backing individual liberties are likely to identify with Mozilla. So, unsurprisingly, there's a large number of socially progressive individuals (LGBT or otherwise) in the Mozilla community, both inside and outside the Corporation.

A CEO who's donated money to strip rights[4] from a set of humans will not be trusted by many who believe that all humans should have those rights. It's not just limited to individuals directly affected by his actions - if someone's shown that they're willing to strip rights from another minority for political or religious reasons, what's to stop them attempting to do the same to you? Even if you personally feel safe, do you trust someone who's willing to do that to your friends? In a community that's made up of many who are either LGBT or identify themselves as allies, that loss of trust is inevitably going to cause community discomfort.

The first role of a leader should be to manage that. Instead, in the first few days of Brendan's leadership, we heard nothing of substance - at best, an apology for pain being caused rather than an apology for the act that caused the pain. And then there was an interview which demonstrated remarkable tone deafness. He made no attempt to alleviate the concerns of the community. There were repeated non-sequiturs about Indonesia. It sounded like he had no idea at all why the community that he was now leading was unhappy.

And, today, he resigned. It's easy to get into hypotheticals - could he have compromised his principles for the sake of Mozilla? Would an initial discussion of the distinction between the goals of members of the Mozilla community and the goals of Mozilla itself have made this more palatable? If the board had known this would happen, would they have made the same choice - and if they didn't know, why not?

But that's not the real point. The point is that the community didn't trust Brendan, and Brendan chose to leave rather than do further harm to the community. Trustworthy leadership is important. Communities should reflect on whether their leadership reflects not only their beliefs, but the beliefs of those that they would like to join the community. Fail to do so and you'll drive them away instead.

[1] For people who've been living under a rock
[2] Proposition 8 itself was a response to an ongoing court case that, at the point of Proposition 8 being proposed, appeared likely to support the overturning of Proposition 22, an earlier Californian ballot measure that legally (rather than constitutionally) defined marriage as being between one man and one woman. Proposition 22 was overturned, and for a few months before Proposition 8 passed, gay marriage was legal in California.
[3] http://www.theguardian.com/technology/2014/apr/02/controversial-mozilla-ceo-made-donations-right-wing-candidates-brendan-eich
[4] Brendan made a donation on October 25th, 2008. This postdates the overturning of Proposition 22, and as such gay marriage was legal in California at the time of this donation. Donating to Proposition 8 at that point was not about supporting the status quo, it was about changing the constitution to forbid something that courts had found was protected by the state constitution.


Syndicated 2014-04-03 22:42:26 from Matthew Garrett

3 Apr 2014 badvogato   » (Master)

The day my article went on the front page was the day that site went offline AGAIN. Is that a good or a bad omen?

From help kuro5hin.org Thu Apr 3 02:33:21 2014

A story that you submitted titled "Jerry Jeff Walker 'LET OUR MIKE GO'" on kuro5hin.org has been posted.

If you would like to view the story, it is available at the following URL:

http://www.kuro5hin.org/story/2014/3/31/94752/9683

3 Apr 2014 Stevey   » (Master)

Tagging images, and maintaining collections?

I'm an amateur photographer, although these days I tend to drop the amateur prefix, given that I shoot people for cash at least once a month.

(It isn't my main job, and I'd never actually want it to be, because I'm certain I'd become unhappy hustling for jobs and doing the promotion thing.)

Anyway over the years I've built up a large library of images, mostly organized in a hierarchy of directories beneath ~/Images.

Unlike most photographers I don't use aperture, lighttable, or any similar library-management software. I shoot my images in RAW, convert to JPG via rawtherapee, and keep both versions of the images.

In short I don't want to mix the "library management" functions with the "RAW conversion" because I do regard them as two separate steps. That said I'm reaching a point where I do want to start tagging images, and finding them more quickly.

In the past I wrote a couple of simple tools to inject tags into the EXIF data of images, and then indexed them. But that didn't work so well in practice. I'm starting to think instead I should index images into sqlite:

  • Size.
  • Date.
  • Content hash.
  • Tags.
  • Path.

The downside is that this breaks utterly as soon as you move images around on-disk - which is something my previous EXIF manipulation was designed to avoid.

Anyway I'm thinking at the moment, but I know that the existing tools such as F-Spot, shotwell, DigiKam, and similar aren't suitable. So I either need to go standalone and use EXIF tags, accepting that the tags I enter won't be visible to other tools, or cope with the file-rename issues by attempting to update an existing sqlite database via hash/size/etc. - something like the sketch below.
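Here's a minimal sketch of that second approach, assuming SHA-256 as the content hash (the schema and names are invented for illustration, and tags are left out): because the hash identifies the content, a moved file just gets its path updated instead of becoming a new row:

  import hashlib, os, sqlite3

  def file_hash(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              h.update(chunk)
      return h.hexdigest()

  def index_image(db, path):
      st = os.stat(path)
      digest = file_hash(path)
      row = db.execute("SELECT id FROM images WHERE hash=?",
                       (digest,)).fetchone()
      if row:
          # Same content seen before: assume a rename and fix the path.
          db.execute("UPDATE images SET path=?, size=? WHERE id=?",
                     (path, st.st_size, row[0]))
      else:
          db.execute("INSERT INTO images (path, size, date, hash) "
                     "VALUES (?, ?, ?, ?)",
                     (path, st.st_size, st.st_mtime, digest))
      db.commit()

  db = sqlite3.connect("images.db")
  db.execute("CREATE TABLE IF NOT EXISTS images "
             "(id INTEGER PRIMARY KEY, path TEXT, size INTEGER, "
             "date REAL, hash TEXT UNIQUE)")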

Syndicated 2014-04-03 11:02:30 from Steve Kemp's Blog

3 Apr 2014 berend   » (Journeyer)

As an update to my last post on weird Ubuntu 12.04 NFS4 server load: repairing the ext file system actually didn't really work.

Redid the test, problems came right back.

Next thing I did was turn the root volume into xfs: better, but still writing 1MB/s, with 50% i/o utilisation for the root disk.

So perhaps it's a Linux kernel thing. I created a 13.10 NFS server and the problem disappeared.

2 Apr 2014 oubiwann   » (Journeyer)

Hash Maps in LFE: Request for Comment

As you may have heard, hash maps are coming to Erlang in R17. We're all pretty excited about this. The LFE community (yes, we have one... hey, being headquartered on Gutland keeps us lean!) has been abuzz with excitement: do we get some new syntax for Erlang maps? Or just record-like macros?

That's still an open question. There's a good chance that if we find an elegant solution, we'll get some new syntax.

In an effort to (re)start this conversation and get us thinking about the possibilities, I've drawn together some examples from various Lisps. At the end of the post, we'll review some related data structures in LFE... as a point of contrast and possible guidance.

Note that I've tried to keep the code grouped in larger gists, not split up with prose wedged between them. This should make it easier to compare and contrast whole examples at a glance.

Before we dive into the Lisps, let's take a look at maps in Erlang:

Erlang Maps
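(The embedded gist hasn't survived syndication here; as a stand-in, a small hand-written illustration of the R17 map syntax under discussion - not the original gist:)

  %% Create, update, and match on a map in Erlang R17.
  M0 = #{},                         % empty map
  M1 = #{name => "Robert", lang => lfe},
  M2 = M1#{lang := erlang},         % := updates an existing key
  #{name := Name} = M2,             % binds Name to "Robert"
  maps:get(lang, M2).               % => erlang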
