Recent blog entries

25 Jan 2015 mikal   » (Journeyer)

A walk in the San Mateo historic red woods

James, Tony and I went for a little post long haul flight walk this afternoon in some red woods. Very nice.


Interactive map for this route.

Tags for this post: blog pictures 20150124-san_mateo photo california bushwalk sunset
Related posts: Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; VTA station for the Santa Clara Convention Center; Public transport to San Francisco from Santa Clara


Syndicated 2015-01-25 06:48:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

25 Jan 2015 joey   » (Master)

making propellor safer with GADTs and type families

Since July, I have been aware of an ugly problem with propellor. Certain propellor configurations could have a bug. I've tried to solve the problem at least a half-dozen times without success; it's eaten several weekends.

Today I finally managed to fix propellor so it's impossible to write code that has the bug, bending the Haskell type checker to my will with the power of GADTs and type-level functions.

the bug

Code with the bug looked innocuous enough. Something like this:

foo :: Property
foo = property "foo" $
    unlessM (liftIO $ doesFileExist "/etc/foo") $ do
        bar <- liftIO $ readFile "/etc/foo.template"
        ensureProperty $ setupFoo bar

The problem comes about because some properties in propellor have Info associated with them. This is used by propellor to introspect over the properties of a host, and do things like set up DNS, or decrypt private data used by the property.

At the same time, it's useful to let a Property internally decide to run some other Property. In the example above, that's the ensureProperty line, and the setupFoo Property is run only sometimes, and is passed data that is read from the filesystem.

This makes it very hard, indeed probably impossible, for Propellor to look inside the monad, realize that setupFoo is being used, and add its Info to the host.

Probably, setupFoo doesn't have Info associated with it -- most properties do not. But it's hard to tell, when writing such a Property, whether it's safe to use ensureProperty. And worse, setupFoo could later be changed to have Info.

Now, in most languages, once this problem was noticed, the solution would probably be to make ensureProperty notice when it's called on a Property that has Info, and print a warning message. That's Good Enough in a sense.

But it also really stinks as a solution. It means that building propellor isn't good enough to know you have a working system; you have to let it run on each host, and watch out for warnings. Ugh, no!

the solution

This screams for GADTs. (Well, it did once I learned what GADTs are and what they can do.)

With GADTs, Property NoInfo and Property HasInfo can be separate data types. Most functions will work on either type (Property i) but ensureProperty can be limited to only accept a Property NoInfo.

data Property i where
    IProperty :: Desc -> ... -> Info -> Property HasInfo
    SProperty :: Desc -> ... -> Property NoInfo

data HasInfo
data NoInfo

ensureProperty :: Property NoInfo -> Propellor Result

Then the type checker can detect the bug, and refuse to compile it.
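To make the idea concrete, here's a minimal, self-contained sketch of the same trick. The names Desc, Info, safeProp and infoProp are simplified stand-ins for propellor's real types, and a plain IO action stands in for the Propellor monad:

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}
-- Sketch only: simplified stand-ins for propellor's real types.

data HasInfo
data NoInfo

type Desc = String
type Info = String

data Property i where
    IProperty :: Desc -> Info -> Property HasInfo
    SProperty :: Desc -> IO () -> Property NoInfo

-- Works on either flavour, because the index is left polymorphic.
propertyDesc :: Property i -> Desc
propertyDesc (IProperty d _) = d
propertyDesc (SProperty d _) = d

-- Only accepts the Info-free flavour, so the buggy pattern from the
-- start of the post becomes a compile-time error.
ensureProperty :: Property NoInfo -> IO ()
ensureProperty (SProperty _ act) = act

safeProp :: Property NoInfo
safeProp = SProperty "setup foo" (putStrLn "setting up foo")

infoProp :: Property HasInfo
infoProp = IProperty "dns" "some dns info"

main :: IO ()
main = do
    ensureProperty safeProp
    -- ensureProperty infoProp   -- rejected: HasInfo is not NoInfo
    putStrLn (propertyDesc infoProp)
```

Uncommenting the call on infoProp fails with a type error along the lines of "Couldn't match type HasInfo with NoInfo", which is exactly the class of bug being ruled out before the code ever runs.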


Except ...

Property combinators

There are a lot of Property combinators in propellor. These combine two or more properties in various ways. The most basic one is requires, which only runs the first Property after the second one has successfully been met.

So, what's its type when used with the GADT Property?

requires :: Property i1 -> Property i2 -> Property ???

It seemed I needed some kind of type class, to vary the return type.

class Combine x y r where
    requires :: x -> y -> r

Now I was able to write 4 instances of Combine, one for each combination of 2 Properties with HasInfo or NoInfo.

It type checked. But, type inference was busted. A simple expression like "foo requires bar" blew up:

     No instance for (Requires (Property HasInfo) (Property HasInfo) r0)
      arising from a use of `requires'
    The type variable `r0' is ambiguous
    Possible fix: add a type signature that fixes these type variable(s)
    Note: there is a potential instance available:
      instance Requires
                 (Property HasInfo) (Property HasInfo) (Property HasInfo)
        -- Defined at Propellor/Types.hs:167:10

To avoid that, it needed "(foo requires bar) :: Property HasInfo" -- I didn't want the user to need to write that.

I got stuck here for a long time, well over a month.

type level programming

Finally today I realized that I could fix this with a little type-level programming.

class Combine x y where
    requires :: x -> y -> CombinedType x y

Here CombinedType is a type-level function, that calculates the type that should be used for a combination of types x and y. This turns out to be really easy to do, once you get your head around type level functions.

type family CInfo x y
type instance CInfo HasInfo HasInfo = HasInfo
type instance CInfo HasInfo NoInfo = HasInfo
type instance CInfo NoInfo HasInfo = HasInfo
type instance CInfo NoInfo NoInfo = NoInfo
type family CombinedType x y
type instance CombinedType (Property x) (Property y) = Property (CInfo x y)

And, with that change, type inference worked again! \o/
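Put together as a self-contained sketch (with a Property that carries only a description, standing in for propellor's real type), the combinator now gets its result type computed from its argument types:

```haskell
{-# LANGUAGE GADTs, TypeFamilies, EmptyDataDecls,
             MultiParamTypeClasses, FlexibleInstances #-}
-- Sketch only: a simplified Property, plus the type family and class
-- described above.

data HasInfo
data NoInfo

data Property i where
    IProperty :: String -> Property HasInfo
    SProperty :: String -> Property NoInfo

propertyDesc :: Property i -> String
propertyDesc (IProperty d) = d
propertyDesc (SProperty d) = d

-- Type-level function: the combination has Info if either side does.
type family CInfo x y
type instance CInfo HasInfo HasInfo = HasInfo
type instance CInfo HasInfo NoInfo  = HasInfo
type instance CInfo NoInfo  HasInfo = HasInfo
type instance CInfo NoInfo  NoInfo  = NoInfo

type family CombinedType x y
type instance CombinedType (Property x) (Property y) = Property (CInfo x y)

class Combine x y where
    requires :: x -> y -> CombinedType x y

-- One instance per index combination; each picks the right constructor.
instance Combine (Property NoInfo) (Property NoInfo) where
    x `requires` y = SProperty (propertyDesc x ++ " requires " ++ propertyDesc y)
instance Combine (Property HasInfo) (Property NoInfo) where
    x `requires` y = IProperty (propertyDesc x ++ " requires " ++ propertyDesc y)
instance Combine (Property NoInfo) (Property HasInfo) where
    x `requires` y = IProperty (propertyDesc x ++ " requires " ++ propertyDesc y)
instance Combine (Property HasInfo) (Property HasInfo) where
    x `requires` y = IProperty (propertyDesc x ++ " requires " ++ propertyDesc y)

-- The signature documents the computed type, but it could be omitted:
-- CombinedType is a function of the argument types, so nothing is ambiguous.
combined :: Property NoInfo
combined = SProperty "foo" `requires` SProperty "bar"

main :: IO ()
main = putStrLn (propertyDesc combined)
```

Because the return type is calculated rather than left as a free type variable, the ambiguity error from the earlier multi-parameter-class attempt cannot arise at use sites.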

(Bonus: I added some more instances of CombinedType for combining things like RevertableProperties, so propellor's property combinators got more powerful too.)

Then I just had to make a massive pass over all of Propellor, fixing the types of each Property to be Property NoInfo or Property HasInfo. I frequently picked the wrong one, but the type checker was able to detect and tell me when I did.

A few of the type signatures got slightly complicated, to provide the type checker with sufficient proof to do its thing...

before :: (IsProp x, Combines y x, IsProp (CombinedType y x)) => x -> y -> CombinedType y x
before x y = (y `requires` x) `describe` (propertyDesc x)

onChange
    :: (Combines (Property x) (Property y))
    => Property x
    -> Property y
    -> CombinedType (Property x) (Property y)
onChange = -- 6 lines of code omitted

fallback :: (Combines (Property p1) (Property p2)) => Property p1 -> Property p2 -> Property (CInfo p1 p2)
fallback = -- 4 lines of code omitted

.. This mostly happened in property combinators, which is an acceptable tradeoff when you consider that the type checker is now being used to prove that propellor can't have this bug.

Mostly, things went just fine. The only other annoying thing was that some code uses a [Property], and since a Haskell list can only contain a single type, while Property HasInfo and Property NoInfo are two different types, that needed to be dealt with. Happily, I was able to extend propellor's existing (&) and (!) operators to work in this situation, so a list can be constructed of properties of several different types:

propertyList "foos" $ props
    & foo
    & foobar
    ! oldfoo    
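One way such a mixed list can work is to hide the type index behind an existential wrapper. This is an illustrative sketch only, not propellor's actual implementation: the real props, (&) and (!) do more (in particular, (!) reverts a property rather than just collecting it):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}
-- Sketch: collecting differently-indexed Properties in one list.

data HasInfo
data NoInfo

data Property i where
    IProperty :: String -> Property HasInfo
    SProperty :: String -> Property NoInfo

propertyDesc :: Property i -> String
propertyDesc (IProperty d) = d
propertyDesc (SProperty d) = d

-- Existential wrapper: forgets the index so both flavours fit in a list.
data AnyProperty where
    AnyProperty :: Property i -> AnyProperty

props :: [AnyProperty]
props = []

(&) :: [AnyProperty] -> Property i -> [AnyProperty]
ps & p = ps ++ [AnyProperty p]

(!) :: [AnyProperty] -> Property i -> [AnyProperty]
ps ! p = ps ++ [AnyProperty p]  -- propellor's real (!) reverts the property

main :: IO ()
main = mapM_ (\(AnyProperty p) -> putStrLn (propertyDesc p)) $ props
    & SProperty "foo"
    & IProperty "foobar"
    ! SProperty "oldfoo"
```

Once wrapped, the list can only be consumed by pattern matching the wrapper back off, which is why the Info of each member can still be collected at that point.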


The resulting 4000 lines of changes will be in the next release of propellor. Just as soon as I test that it always generates the same Info as before, and perhaps works when I run it. (eep)

These uses of GADTs and type families are not new; this is merely the first time I used them. It's another Haskell leveling up for me.

Anytime you can identify a class of bugs that can impact a complicated code base, and rework the code base to completely avoid that class of bugs, is a time to celebrate!

Syndicated 2015-01-25 03:54:14 from see shy jo

24 Jan 2015 dmarti   » (Master)

QoTD: Zoë Keating

It’s one thing for individuals to upload all my music for free listening (it doesn’t bother me). It’s another thing entirely for a major corporation to force me to. I was encouraged to participate and now, after I’m invested, I’m being pressured into something I don’t want to do.

Zoë Keating

Syndicated 2015-01-24 17:20:29 from Don Marti

24 Jan 2015 amits   » (Journeyer)

Get ready for FUDCon APAC 2015 in Pune, India!

Mark your calendars for Jun 26 – 28 for FUDCon Pune.  Start making travel arrangements. Think of topics to speak on, workshops and hackfests to organise, and have fun with friends.

FUDCon Pune is being hosted at MIT COE.  They have excellent infrastructure and an amazing team of people who have been really helpful in addressing our needs to host a large conference.

Hop on to #fedora-india on freenode and the mailing list for information on volunteering.  The etherpad has all the to-do items, feel free to jump in and help!  The Twitter, Google+ and Facebook pages will have announcements and Planet Fedora will have blog posts from various people involved with the FUDCon.

It’s going to be a blast organising a FUDCon again!

Syndicated 2015-01-24 07:37:31 (Updated 2015-01-24 07:41:14) from Think. Debate. Innovate.

23 Jan 2015 berend   » (Journeyer)

Continually see this in svn 1.8.11 server log currently:

Provider encountered an error while streaming a REPORT response.  [500, #0]
A failure occurred while driving the update report editor [500, #103]

Get this when doing a checkout/update. Client says:

svn: E120190: Error retrieving REPORT: An error occurred during authentication

Many reports of the same problem over many years on the internet. No solution.

The only thing that works is svn 1.6 client. Go figure.

Setup is an Apache 2.2 server, WanDisco's latest 1.8 svn, and an NFS-mounted svn repository. But that doesn't matter: even when making the repository local it doesn't work.

Upgraded my Ubuntu 14.04 Trusty Tahr to the WanDisco 1.8.11 from 1.8.8; doesn't help.

23 Jan 2015 dan   » (Master)

The Invisible AUDIO element

I said this morning that I was going to replace the browser-native audio controls with something which looks (approximately, at least) consistent everywhere. There’s another couple of reasons for wanting to revisit the way we render the audio element:

  • on the default Android browser, we don’t get an ended event when the player gets to the end of the track, which means every five minutes I have to pick the phone up and unlock it and press ‘remove’ in the play queue to trigger the next track
  • when the screen is sleeping or the tab is hidden, the requestAnimationFrame handler that triggers Om repaints is called late or not at all. Again, time to pick the phone up and unlock and …
  • I want/need to make it run on Windows, which does not support Ogg in the audio element. Although in principle I could hang multiple source children onto the audio element and let the browser choose which one it likes best, rewriting the DOM after it has been parsed is said to be not a good idea, which means using info from the JS canPlayType method to choose the best format for each track from those available for that track.

The nice thing about Om application state is that it’s also a perfectly ordinary Clojure atom and we can call add-watch on it to have a perfectly ordinary Clojure(Script) function called whenever it changes. So what we’re going to do is

  • a new key in app-state to contain the desired player state
  • an Om component to render a player UI, and update this desired state when buttons are clicked
  • some event handlers to get news from the audio element and figure out what it might be doing (principally, has it reached the end of the track, or is it having connectivity issues) and update the desired state correspondingly
  • a watch on app-state that calls the function currently named sync-player-state, which compares the desired state to what the audio element is actually doing, and updates the audio element appropriately

Syndicated 2015-01-22 19:26:47 from diary at Telent Netowrks

22 Jan 2015 mikal   » (Journeyer)

Harcourt and Rogers Trigs

I needed to visit someone in deepest darkest North Canberra yesterday, and there was an hour to kill between that meeting and the local Linux User's Group meeting. It seemed silly to have driven all that way and to not see a couple of trigs, so I visited these two. Both these trigs were easy to get to and urban. Frankly a little boring.

Harcourt trig is in what I will call a cow paddock -- it doesn't have a lot of trees happening and feels a bit like left over land. Access to the nature reserve wasn't very obvious to me from the suburban streets, but the KML file below might help others to work it out. It wasn't too bad once I'd navigated the maze of streets and weird paved areas.


Interactive map for this route.

Rogers was similar, except access was more obvious because it is in an older suburb. This is a nicer reserve than Harcourt's, with a nice peak and some walking opportunities around the base of the hill. I think I'll probably end up coming back to this one as my wife is nostalgic about growing up backing on to this reserve.


Tags for this post: blog pictures 20150122-harcourt_and_rogers photo canberra gungahlin belconnen bushwalk trig_point
Related posts: A walk around Mount Stranger; Taylor Trig; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to Tuggeranong Trig; Wanniassa Trig


Syndicated 2015-01-22 13:10:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

22 Jan 2015 dmarti   » (Master)

mobile ad revenue fail

Arel Lidow has a look at Mary Meeker's Internet Trends report and writes, Each year, the gap between dollars spent on mobile advertising versus time spent on mobile devices increases: in 2011, the implied gap was about $14 billion; in 2013, it was about $28 billion. So why is the gap in mobile ad spend so damn large? And when will those billions of dollars come flooding in?

I plotted the same data, and put the numbers for print, web, and mobile, across several years, on the same graph.

Clearly, Lidow is right. Mobile is remarkably disappointing, compared to web. But what is going on with print?

Even as the fraction of user time spent on print falls, it's worth more to advertisers than mobile is.

This isn't much of a surprise, if you look at advertising history. More targetable ad media such as junk fax and email spam tend to fall in value, while non-targetable ad media tend to hold or gain value. (Seems paradoxical until you look at the economics behind it.)

But here's Lidow's recommendation: If you could wave a magic wand and provide a perfect attribution system with widespread usage by marketers and agencies, the mobile ad landscape would change quickly, and ad spend would increase.

So wait a minute. Take a low-value ad medium and make it more valuable by doing more of what makes it less valuable? Wouldn't you want to figure out how to go the other way?

I don't get it. More and more I'm starting to think that this whole surveillance marketing trend is more about selling Marketing to the rest of the company than about selling stuff to customers.

Syndicated 2015-01-22 15:12:23 from Don Marti

22 Jan 2015 slef   » (Master)

Outsourcing email to Google means SPF allows phishing?

I expect this is obvious to many people but bahumbug To Phish, or Not to Phish? just woke me up to the fact that if Google hosts your company email then its Sender Policy Framework might make other Google-sent emails look legitimate for your domain. When combined with the unsupportive support of the big free webmail hosts, is this another black mark against SPF?

Syndicated 2015-01-22 03:57:00 from Software Cooperative News » mjr

20 Jan 2015 mikal   » (Journeyer)

Another lunch time walk

My arm still hurts, so no gym again. Instead, another lunch time walk although this one was shorter. The skies were dramatic, but no rain unfortunately. I found GC1DEFB during this walk.


Tags for this post: blog pictures 20150120-geocaching photo canberra tuggeranong bushwalk geocaching
Related posts: Lunchtime geocaching; A walk around Mount Stranger; Taylor Trig; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to William Farrer's grave


Syndicated 2015-01-19 22:45:00 (Updated 2015-01-20 10:06:58) from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

20 Jan 2015 hacker   » (Master)

HOWTO: Enable Docker API through firewalld on CentOS 7.x (el7)

Playing more and more with Docker across multiple Linux distributions has taught me that not all Linux distributions are treated the same. There’s a discord right now in the Linux community about systemd vs. SysV init. In our example, CentOS 7.x uses systemd, where all system services are spawned and started. I am using this […]

Related posts:
  1. Tuesday Tip: rsync Command to Include Only Specific Files I find myself using rsync a lot, both for moving...
  2. SOLVED: VMware Tools __devexit_p Error on Linux Kernel 3.8 and Earlier If you run a current version of VMware Workstation, VMware...
  3. Using fdupes to Solve the Data Duplication Problem: I’ve got some dupes! Well, 11.6 hours later after scanning the NAS with fdupes,...

Syndicated 2015-01-20 05:04:14 from random neuron misfires

19 Jan 2015 mikal   » (Journeyer)

Lunchtime geocaching

Woke up this morning with a sore left arm, which ruled out going to the gym. Instead, I decided to go for a geocaching walk at lunch time. I found these caches: GC235FM; GC56N78; GC5B9WT; GC5F6G3; and GC5F0PE. A nice walk.


Tags for this post: blog pictures 20150119-geocaching photo canberra tuggeranong bushwalk geocaching
Related posts: A walk around Mount Stranger; Taylor Trig; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to William Farrer's grave; A quick walk to Tuggeranong Trig


Syndicated 2015-01-18 20:30:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

18 Jan 2015 broonie   » (Journeyer)

Heating the Internet of Things

Internet of Things seems to be trendy these days; people like the shiny apps for controlling things, and typically there are claims that the devices will perform better than their predecessors by offloading things to the cloud – but this makes some people worry that there are potential security issues, and it’s not always clear that internet usage is actually delivering benefits over something local. One of the more widely deployed applications is smart thermostats for central heating, which is something I’ve been playing with. I’m using Tado; there are also at least Nest and Hive who do similar things, all relying on being connected to the internet for operation.

The main thing I’ve noticed has been that the temperature regulation in my flat is better: my previous thermostat allowed the temperature to vary by a couple of degrees around the target temperature in winter, which got noticeable, while with this the temperature generally seems to vary by a fraction of a degree at most. That does use the internet connection to get the temperature outside, though I’m fairly sure that most of this is just a better algorithm (the thermostat monitors how quickly the flat heats up when heating and uses this to decide when to turn off, rather than waiting for the temperature to hit the target and then seeing it rise further as the radiators cool down) and performance would still be substantially improved without it.

The other thing that these systems deliver which does benefit much more from the internet connection is that it’s easy to control them remotely. This in turn makes it a lot easier to do things like turn the heating off when it’s not needed – you can do it remotely, and you can turn the heating back on without being in the flat so that you don’t need to remember to turn it off before you leave or come home to a cold building. The smarter ones do this automatically based on location detection from smartphones so you don’t need to think about it.

For example, when I started this post I was sitting in a coffee shop, so the heating had been turned off based on me taking my phone with me, and as a result the temperature had gone down a bit. By the time I got home the flat was back up to normal temperature, all without any meaningful intervention or visible difference on my part. This is particularly attractive for me given that I work from home – I can’t easily set a schedule to turn the heating off during the day like someone who works in an office, so the heating would be on a lot of the time. Tado and Nest will to varying extents try to do this automatically; I don’t know about Hive. The Tado one at least works very well, I can’t speak to the others.

I’ve not had a bill for a full winter yet but I’m fairly sure looking at the meter that between the two features I’m saving a substantial amount of energy (and hence money and/or the environment depending on what you care about) and I’m also seeing a more constant temperature within the flat, my guess would be that most of the saving is coming from the heating being turned off when I leave the flat. For me at least this means that having the thermostat internet connected is worthwhile.

Syndicated 2015-01-18 21:23:58 from Technicalities

18 Jan 2015 mikal   » (Journeyer)

Taylor Trig

At the top of Mount Taylor lies the first trig point defeated by a group walk I've been on. Steve * 3, Erin, Michael *2, Andrew, Cadell, Maddie, Mel, Neill and Jenny all made it to the top of this one, so I'm super proud of us as a group. A nice walk and Mount Taylor clearly has potential for other walks as well, so I am sure I'll return here again.


Tags for this post: blog pictures 20150118-mount_taylor photo canberra tuggeranong bushwalk trig_point
Related posts: A walk around Mount Stranger; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to Tuggeranong Trig; Wanniassa Trig; A quick walk to William Farrer's grave


Syndicated 2015-01-17 23:50:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

16 Jan 2015 Killerbees   » (Journeyer)

Privacy In Public III

Google Glass Explorer Program Shuts Down

I'm not going to say a lot about this, except that I'm glad, and I hope that the next step for this tech takes seriously into account the privacy-in-public implications of people walking around with cameras streaming whatever they see.
Anyone who was ever concerned with the level of surveillance in modern society by CCTV, helmet cams, the hacking of web-cams, and the use this can be put to by the nefarious activities of GCHQ and the US NSA, will be pleased that the headlong rush to turn us all into autonomous surveillance drones has paused for thought.
Let's hope Google use the pause to reflect on this.

Syndicated 2015-01-16 12:58:00 (Updated 2015-01-16 12:58:24) from Danny Angus

16 Jan 2015 marnanel   » (Journeyer)

rm -rf /

I said elsewhere that "rm -rf /" is special-cased to fail under Linux, and some people asked me about it. FTR here's my answer:

I'd thought rm was a bash builtin, but it isn't. The rm in GNU coreutils, however, does check for the root directory as of 2003-11-09 (by inode number, not by name); the warning message is "it is dangerous to operate recursively on /". You can override this using "--no-preserve-root", though I don't know why you'd want to.
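The inode-based comparison is easy to sketch. Here's an illustrative Haskell version using the unix package's System.Posix.Files (an assumption for illustration, not coreutils' actual C code): compare device and inode numbers rather than names.

```haskell
-- Sketch: decide whether a path refers to the root directory by
-- comparing device and inode numbers, not by string comparison.
import System.Posix.Files (getFileStatus, deviceID, fileID)

isRootDir :: FilePath -> IO Bool
isRootDir path = do
    st   <- getFileStatus path
    root <- getFileStatus "/"
    -- Same device and same inode means it *is* the root directory,
    -- however it was spelled.
    return (deviceID st == deviceID root && fileID st == fileID root)

main :: IO ()
main = do
    r <- isRootDir "/.."   -- "/.." resolves to "/" itself on POSIX
    putStrLn (if r then "refusing: it is dangerous to operate recursively on /"
                   else "ok to proceed")
```

Comparing inodes catches every spelling of the root directory ("/", "//", "/..", and so on), which a plain string comparison against "/" would miss.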


Syndicated 2015-01-16 09:44:34 from Monument

16 Jan 2015 bagder   » (Master)

Changing networks with Linux

A rather long time ago I blogged about my work to better deal with changing networks while Firefox is running, and the change was then pushed for Android and I subsequently pushed the same functionality for Firefox on Mac.

Today I’ve landed yet another change, which detects network changes on Firefox OS and Linux.

As Firefox OS uses a Linux kernel, I ended up doing the same fix for the Firefox OS devices as for Firefox on the Linux desktop: I open a socket in the AF_NETLINK family and listen on the stream of messages the kernel sends when there are network updates. This way we’re told when the routing tables update or when we get a new IP address etc. I consider this way better than the NotifyIpInterfaceChange() API Windows provides, as this allows us to filter what we’re interested in. The Windows API makes that rather complicated, and in fact a lot of the time when we get the notification on Windows it isn’t clear to me why!

The Mac API way is what I would consider even more obscure, but then I’m not at all used to their way of doing things and how you add things to the event handlers etc.

The journey to the landing of this particular patch was once again long and bumpy and full of sweat, in the tradition that seems to be my destiny, and this time I ran into problems with the Firefox OS emulator, which seems to have some interesting bugs that cause my code to not work properly; as a result our automated tests failed: occasionally data sent over a pipe or socketpair doesn’t end up in the receiving end. In my case this means that my signal to the child thread to die would sometimes not be noticed, and thus the thread wouldn’t exit and die as intended.

I ended up implementing a work-around that makes it work even if the emulator eats the data by also checking a shared should-I-shutdown-now flag every once in a while. For more specific details on that, see the bug.

Syndicated 2015-01-16 07:28:24 from

16 Jan 2015 mikal   » (Journeyer)

Another Nova spec update

I started chasing down the list of spec freeze exceptions that had been requested, and that resulted in the list of specs for Kilo being updated. That updated list is below, but I'll do a separate post with the exception requests highlighted soon as well.


  • Add more detailed network information to the metadata server: review 85673 (approved).
  • Add separated policy rule for each v2.1 api: review 127863 (requested a spec exception).
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696 (approved).
  • Consolidate all console access APIs into one: review 141065 (approved).
  • Expose the lock status of an instance as a queryable item: review 127139 (abandoned); review 85928 (approved).
  • Extend api to allow specifying vnic_type: review 138808 (requested a spec exception).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Improve the return codes for the instance lock APIs: review 135506.
  • Microversion support: review 127127 (approved).
  • Move policy validation to just the API layer: review 127160 (approved).
  • Nova Server Count API Extension: review 134279 (fast tracked).
  • Provide a policy statement on the goals of our API policies: review 128560 (abandoned).
  • Sorting enhancements: review 131868 (fast tracked, approved, implemented).
  • Support JSON-Home for API extension discovery: review 130715 (requested a spec exception).
  • Support X509 keypairs: review 105034 (approved).


  • Expand support for volume filtering in the EC2 API: review 104450.
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).


  • Actively hunt for orphan instances and remove them: review 137996 (abandoned); review 138627.
  • Add totalSecurityGroupRulesUsed to the quota limits: review 145689.
  • Check that a service isn't running before deleting it: review 131633.
  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Implement a daemon version of rootwrap: review 105404 (requested a spec exception).
  • Log request id mappings: review 132819 (fast tracked).
  • Monitor the health of hypervisor hosts: review 137768.
  • Remove the assumption that there is a single endpoint for services that nova talks to: review 132623.

Block Storage

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.
  • Cache data from volumes on local disk: review 138292 (abandoned); review 138619.
  • Enhance iSCSI volume multipath support: review 134299 (requested a spec exception).
  • Failover to alternative iSCSI portals on login failure: review 137468 (requested a spec exception).
  • Give additional info in BDM when source type is "blank": review 140133.
  • Implement support for a DRBD driver for Cinder block device access: review 134153 (requested a spec exception).
  • Poll volume status: review 142828 (abandoned).
  • Refactor ISCSIDriver to support other iSCSI transports besides TCP: review 130721 (approved).
  • StorPool volume attachment support: review 115716 (approved, requested a spec exception).
  • Support Cinder Volume Multi-attach: review 139580 (approved).
  • Support iSCSI live migration for different iSCSI target: review 132323 (approved).


Containers Service


  • Develop and implement a profiler for SQL requests: review 142078 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved, implemented).
  • Nova db purge utility: review 132656.
  • Online schema change options: review 102545 (approved).
  • Support DB2 as a SQL database: review 141097 (fast tracked, approved).
  • Validate database migrations and model: review 134984 (approved).

Hypervisor: Docker

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved, implemented).
  • Instance hot resize: review 141219.

Hypervisor: Ironic

Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283 (requested a spec exception).
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).

Hypervisor: libvirt

Instance features


  • A lock-free quota implementation: review 135296 (approved).
  • Automate the documentation of the virtual machine state transition graph: review 94835.
  • Fake Libvirt driver for simulating HW testing: review 139927 (abandoned).
  • Flatten Aggregate Metadata in the DB: review 134573 (abandoned).
  • Flatten Instance Metadata in the DB: review 134945 (abandoned).
  • Implement a new code coverage API extension: review 130855.
  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Move to polling for cinder operations: review 135367.
  • PCI test cases for third party CI: review 141270.
  • Transition Nova to using the Glance v2 API: review 84887 (abandoned).
  • Transition to using glanceclient instead of our own home grown wrapper: review 133485 (approved).


  • Enable lazy translations of strings: review 126717 (fast tracked, approved).


  • Add a new linuxbridge VIF type, macvtap: review 117465 (abandoned).
  • Add a plugin mechanism for VIF drivers: review 136827 (abandoned).
  • Add support for InfiniBand SR-IOV VIF Driver: review 131729 (requested a spec exception).
  • Neutron DNS Using Nova Hostname: review 90150 (abandoned).
  • New VIF type to allow routing VM data instead of bridging it: review 130732 (approved, requested a spec exception).
  • Nova Plugin for OpenContrail: review 126446 (approved).
  • Refactor of the Neutron network adapter to be more maintainable: review 131413.
  • Use the Nova hostname in Neutron DNS: review 137669.
  • Wrap the Python NeutronClient: review 141108.


  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.


  • A nested quota driver API: review 129420.
  • Add a filter to take into account hypervisor type and version when scheduling: review 137714.
  • Add an IOPS weigher: review 127123 (approved, implemented); review 132614.
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Add soft affinity support for server group: review 140017 (approved).
  • Allow extra spec to match all values in a list by adding the ALL-IN operator: review 138698 (fast tracked, approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Allow the remove of servers from server groups: review 136487.
  • Cache aggregate metadata: review 141846.
  • Convert get_available_resources to use an object instead of dict: review 133728 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610 (approved).
  • Decouple services and compute nodes in the SQL database: review 126895 (approved).
  • Distribute PCI Requests Across Multiple Devices: review 142094.
  • Enable adding new scheduler hints to already booted instances: review 134746.
  • Fix the race conditions when migration with server-group: review 135527 (abandoned).
  • Implement resource objects in the resource tracker: review 127609 (approved, requested a spec exception).
  • Improve the ComputeCapabilities filter: review 133534 (requested a spec exception).
  • Isolate Scheduler DB for Filters: review 138444 (requested a spec exception).
  • Isolate the scheduler's use of the Nova SQL database: review 89893 (approved).
  • Let schedulers reuse filter and weigher objects: review 134506 (abandoned).
  • Move select_destinations() to using a request object: review 127612 (approved).
  • Persist scheduler hints: review 88983.
  • Refactor allocate_for_instance: review 141129.
  • Stop direct lookup for host aggregates in the Nova database: review 132065 (abandoned).
  • Stop direct lookup for instance groups in the Nova database: review 131553 (abandoned).
  • Support scheduling based on more image properties: review 138937.
  • Trusted computing support: review 133106.



Security

  • Make key manager interface interoperable with Barbican: review 140144 (fast tracked, approved).
  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked, approved).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507 (approved).

Service Groups


Syndicated 2015-01-15 19:16:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

15 Jan 2015 dmarti   » (Master)

Perfect storm for web ads in 2015?

Is it just me, or is all this stuff hitting the web ad business all at once?


The market couldn't sustain a zillion different 8-bit microcomputers or web portals, back when those were a thing. And it has always seemed unlikely that the market can keep supporting a zillion lookalike adtech firms. Jack Marshall of the Wall Street Journal writes, "A shakeout is under way in the online advertising industry, where dozens of startups—often with seemingly undifferentiated services and limited scale—face the reality that there isn’t enough room for everyone."

Google and Facebook are eating the ecosystem. Michael Eisenberg writes, "Today, most adtech companies are exploiting features that are missing on the core platforms of Google, Facebook, and many of the already public companies. They are optimising and brokering between technology platforms (mobile and web), exchanges and advertisers. However, information is nearing perfection in this market, making it difficult to build a moat around businesses and maintain margins."

Large agencies that plan to make a living helping clients navigate a confusing list of technology partners are probably on the wrong side of the trend here. They're like Unix ISVs who planned to keep building the same basic product on dozens of basically identical but incompatible Unix variants. A difficult feat of management and tech integration, but not really the way that mature technology markets tend to go.

Fraud crisis

You know how, when a lot of people are starting committees to talk about how something is an industry-wide problem and it's everyone's responsibility to fix it, that means the problem is about to go away?

Me either.

Bob Hoffman explains this one best.

Blocking keeps going up, tracking protection emerges

Ad blocking is trending up, but it's not for everyone. Many users have a basic fairness expectation around advertising: if you look at the content, you should also accept the ads that support it.

Tracking protection, though, is a situation where fairness norms point away from adtech. A 2014 survey found that 87 percent of users choose not to be tracked by default. Tracking protection products such as Disconnect and Privacy Badger are using a different message from crude ad blocking to reach more users. Disconnect is positioning its tracking protection product as basic Internet security software—"Join over 3 million people who use our open source software to protect their identities and sensitive personal info from hackers and trackers"—not a way to get something for nothing.

Browser built-in tracking protection is coming along, too. Apple Safari already blocks third-party cookies, MSIE has tracking protection lists (which lump adtech in with socially-engineered malware) and Firefox is getting its own tracking protection too.

The holdout is Google Chrome, and that's a whole other story. Google as a whole would certainly do better on an all-tracking-protected web, because if everyone's less able to track users, Google's expertise in parsing content matters more. But it's hard for information packrats to walk away from shiny, tempting information.

Tech-aware publishers

The typical adtech/publisher relationship has more in common with one-sided record contracts than with typical advertising. Publishers haven't understood the technology as well as adtech firms, and so have signed away their valuable audiences in pursuit of surveillance marketing woo-woo.

But that's changing. New publishers have web skills from the ground up. Vox Media is a good example. And existing publishers are getting better at defending their interests. Quartz, an Atlantic Media site, runs ads that look and work more like expensive magazine ads than like ratty web display ads. And, most important, Quartz ads are intact for users running Disconnect.

The near-term effect of VC investment in web publishing startups is that many publishers will have the breathing space to turn down the short-term revenue from crappy, targeted "click the monkey" or "one weird trick" ads, and pursue other options. Tracking protection for a site's audience is the kind of "moat" that investors tend to look for.

Put it together

The fun part isn't any one of these trends, or even the fact that they're hitting at the same time, but how they interact.

Fraud helps drive consolidation. Consolidation, with more accurate tracking, encourages more users to try tracking protection. Tracking protection and fraud drive ad spending to quality publishers. Success for quality publishers means more investment in tracking protection. And around it goes.

What a fun year this is going to be.

Syndicated 2015-01-15 13:35:48 from Don Marti

15 Jan 2015 bagder   » (Master)

http2 explained 1.8

I’ve been updating my “http2 explained” document every now and then since my original release of it back in April 2014. Today I put up version 1.8, which is one of the bigger updates in a while:

http2 explained

The HTTP/2 Last Call within the IETF ended yesterday and the wire format of the protocol has remained fixed for quite some time now, so it seemed like a good moment.

I updated some graphs and images to make them look better and be more personal, I added some new short sections in 8.4 and I refreshed the language in several places. Also, now all links mentioned in footnotes and elsewhere should be properly clickable to make following them a more pleasant experience.

As always, do let me know if you find errors, have questions on the content or think I should add something!

Syndicated 2015-01-15 12:57:18 from

14 Jan 2015 broonie   » (Journeyer)

Kernel build times for automated builders

Over the past year or so various people have been automating kernel builds with the aim of both setting the standard that things should build reliably and using the resulting builds for automated testing. This has been having good results; it’s especially nice to compare the results for older stable kernel builds with current ones and notice how much happier everything is.

One of the challenges with doing this is that for good coverage you really need to include allmodconfig or allyesconfig builds to ensure coverage of as much kernel code as possible, but that’s fairly resource intensive given the size of the kernel, especially when you want to cover several architectures. It’s also fairly important to get prompt results; development trees are changing all the time, and the longer the gap between a problem appearing and it being identified, the more likely the report is to be redundant.

Since I was looking at my own setup and I know of several people who’ve done similar benchmarking, I thought I’d publish some ballpark numbers for from-scratch allmodconfig builds on a single architecture:

i7-4770 with SSD 20 minutes
linode 2048 1.25 hours
EC2 m3.medium 1.5 hours
EC2 c3.medium 2 hours
Cubietruck with SSD 20 hours

All with the number of tasks spawned by make set to the number of execution threads the system has and no speedups from anything like ccache. I may keep this updated in future with further results.
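To put those numbers in perspective, the slowdown relative to the fastest machine can be computed from the table (a throwaway sketch of mine; times converted to minutes):

```shell
# Slowdown of each machine versus the i7-4770 baseline (20 minutes),
# using the allmodconfig times from the table above.
awk 'BEGIN {
  base = 20                      # i7-4770 with SSD
  t["linode 2048"]     = 75      # 1.25 hours
  t["EC2 m3.medium"]   = 90      # 1.5 hours
  t["EC2 c3.medium"]   = 120     # 2 hours
  t["Cubietruck SSD"]  = 1200    # 20 hours
  for (m in t) printf "%-15s %5.1fx slower\n", m, t[m] / base
}'
```

The Cubietruck comes out a full sixty times slower than the desktop, which is why native builds on small ARM boards are rarely worth it for this kind of automation.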

Obviously there are tradeoffs beyond the time, especially for someone like me doing this at home with their own resources – my desktop is substantially faster than anything else I’ve tried, but I’m also using it interactively for my work, it’s not easily accessible when not at home, and the fans spin up during builds, while EC2 starts to cost noticeable money as you add more builds.

Syndicated 2015-01-14 22:37:52 from Technicalities

14 Jan 2015 bagder   » (Master)

My talks at FOSDEM 2015


Saturday 13:30, embedded room (Lameere)

Title: Internet all the things – using curl in your device

Embedded devices are very often network connected these days. Network connected embedded devices often need to transfer data to and from them as clients, using one or more of the popular internet protocols.

libcurl is the world’s most used and most popular internet transfer library, already used in every imaginable sort of embedded device out there. How did this happen and how do you use libcurl to transfer data to or from your device?

Sunday, 09:00 Mozilla room (UD2.218A)

Title: HTTP/2 right now

HTTP/2 is the new version of the web’s most important and used protocol. Version 2 is due to be out very soon after FOSDEM and I want to inform the audience about what’s going on with the protocol, why it matters to most web developers and users, and not least what its status is at the time of FOSDEM.

Syndicated 2015-01-14 14:48:14 from

14 Jan 2015 benad   » (Apprentice)

Alpha: My First PC

The PC port of Final Fantasy VII that I recently completed was the first of many PC-only games I wanted to play, but queued up because playing PC games is inconvenient. I have a 2011 Mac mini that I can dual-boot in Windows, which is what I mostly used for FF VII, but rebooting was slow, the mini was noisy, and its graphics card simply unable to properly play games made after 2010. I have a late-2013 MacBook Pro, but I keep using it for work, it's inconvenient for playing on a TV, and its graphics card could have been better.

I insisted on using Macs, even for PC games, because "gaming PCs" are just too much trouble. Almost all small-form-factor PCs sacrifice graphics performance for size and quieter fans, including the mini. On the other end, even your average "gaming PC" is an expensive, bulky tower with neon lights that requires manual assembly. Here's the thing: I can do all of that without problem, from building a PC server to maintaining Windows Server. But that's what I do at work. It's as if there is no such thing as a "casual gaming PC for your TV". Well, at least until the Alienware Alpha, essentially a small-form-factor gaming PC.

The Alienware Alpha is presented as a kind of video game console. While it runs Windows 8.1, its default user account is running a modified version of XBMC that replaces the Windows desktop, and lets you run Steam in "Big Picture" mode. The entire setup can be done (a bit clumsily) using the provided XBox 360 controller (oddly, with its USB dongle for wireless use). For me, though, I already had my wireless mouse and keyboard (and a USB mouse with a long USB extension for FPS games), because I want to play older PC games made for a mouse and keyboard, so I ultimately disabled that "full screen" account and set up a standard desktop Windows account.

And you have to accept that the Alienware Alpha is a PC that isn't that user-friendly and requires tweaking to play games. For example, the frame rate of "Metro: Last Light" was terrible because it was using outdated nvidia libraries; updating the library files made the game much faster. Or Geometry Wars 3 had terrible lag issues, until you run it in windowed mode or manually edit its settings file. Actually, the simple fact that the Alpha's nvidia card is "too new" to be recognized by older games is enough to force you to tweak all the settings. I'm still curious about dual-booting into SteamOS, a Linux distribution of Steam that has a proper "console feel", though most games I want to play are PC-only or not in Steam in the first place (from GOG, actually).

With all that said, the Alpha is a pretty good PC. I was able to play all the games at maximum settings at at least 30 frames per second, and much more on games made before 2012. It's well optimized for 1080p, which is less than the 4K support of current-gen 3D gaming cards, but is perfect for TV use. The hard drive is slower than my MacBook Pro's SSD, but the 3D card is so much better on the Alpha that I don't mind the extra load time. You can still easily replace the hard drive in the Alpha with an SSD, and you can upgrade pretty much everything else but the motherboard and 3D chip, with detailed service manuals. It has an HDMI passthrough, digital optical audio output, many USB 2 and 3 ports (and even a hidden USB port underneath, perfect for my wireless keyboard dongle). Finally, its price is competitive, meaning absurdly cheap compared to similar specifications from Apple.

What I'm saying is that the Alienware Alpha is a good "entry-level" casual gaming PC for use on a TV, without the hassle of a typical PC tower. That, and I now have a PC. I still feel a bit weird about that.

Syndicated 2015-01-14 00:33:59 from Benad's Blog

14 Jan 2015 jas   » (Master)

Replicant 4.2 0003 on I9300

The Replicant project released version 4.2 0003 recently. I have been using Replicant on a Samsung SIII (I9300) for around 14 months now. Since I have blogged about issues with NFC and Wifi earlier, I wanted to give a status update after upgrading to 0003. I’m happy to report that my NFC issue has been resolved in 0003 (the way I suggested; reverting the patch). My issues with Wifi have been improved in 0003, with my merge request being accepted. What follows below is a standalone explanation of what works and what doesn’t, as a superset of similar things discussed in my earlier blog posts.

What works out of the box: Audio, Telephony, SMS, Data (GSM/3G), Back Camera, NFC. 2D Graphics is somewhat slow compared to the stock ROM, but I’m using it daily and can live with that, so it isn’t too onerous. Stability is fine, similar to other Android devices I’m used to. Video playback does not work (due to non-free media decoders?), which is not a serious problem for me but still likely the biggest outstanding issue except for freedom concerns. 3D graphics apparently doesn’t work, and I believe it is what prevents Firefox from working properly (it crashes). I’m having one annoying but strange problem with telephony: when calling one person I get scrambled audio around 75% of the time. I can still hear what the other person is saying, but can barely make anything out of it. This only happens over 3G, so my workaround when calling that person is to switch to 2G before and switch back after. I talk with plenty of other people, and have never had this problem with anyone else, and it has never happened when she talks with anyone else but me. If anyone has a suggestion on how to debug this, I’m all ears.

Important apps to get through daily life for me includes K9Mail (email), DAVDroid (for ownCloud CalDav/CardDAV), CalDav Sync Adapter (for Google Calendars), Conversations (XMPP/Jabber chat), FDroid (for apps), ownCloud (auto-uploading my photos), SMS Backup+, Xabber (different XMPP/Jabber accounts), Yubico Authenticator, MuPDF and oandbackup. A couple of other apps I find useful are AdAway (remove web ads), AndStatus, Calendar Widget, NewsBlur and ownCloud News Reader (RSS readers), Tinfoil for Facebook, Twidere (I find its UI somewhat nicer than AndStatus’s), and c:geo.

A number of things requires non-free components. As I discussed in my initial writeup from when I started using Replicant I don’t like this, but I’m accepting it temporarily. The list of issues that can be fixed by adding non-free components include the front camera, Bluetooth, GPS, and Wifi. After flashing the Replicant ROM image that I built (using the fine build instructions), I’m using the following script to add the missing non-free files from Cyanogenmod.

# Download Cyanogenmod 10.1.3 (Android 4.2-based) binaries:
# wget
# echo "073a464a9f5129c490502c77374495c38a25ba790c10e27f51b43845baeba6bf" | sha256sum -c 
# unzip

adb root
adb remount
adb shell mkdir /system/vendor/firmware
adb shell chmod 755 /system/vendor/firmware

# Front Camera
adb push cm-10.1.3-i9300/system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/fimc_is_fw.bin
adb push cm-10.1.3-i9300/system/vendor/firmware/setfile.bin /system/vendor/firmware/setfile.bin
adb shell chmod 644 /system/vendor/firmware/fimc_is_fw.bin /system/vendor/firmware/setfile.bin

# Bluetooth
adb push cm-10.1.3-i9300/system/bin/bcm4334.hcd /system/vendor/firmware/
adb shell chmod 644 /system/vendor/firmware/bcm4334*.hcd

# GPS
adb push cm-10.1.3-i9300/system/bin/gpsd /system/bin/gpsd
adb shell chmod 755 /system/bin/gpsd
adb push cm-10.1.3-i9300/system/lib/hw/ /system/lib/hw/
adb push cm-10.1.3-i9300/system/lib/ /system/lib/
adb shell chmod 644 /system/lib/hw/ /system/lib/

# Wifi
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_apsta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_mfg.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_p2p.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b0 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b1 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/bcmdhd_sta.bin_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_mfg.txt_semcosh /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_murata_b2 /system/vendor/firmware/
adb push cm-10.1.3-i9300/system/etc/wifi/nvram_net.txt_semcosh /system/vendor/firmware/

I hope this helps others switch to a better phone environment!

Syndicated 2015-01-13 23:17:28 from Simon Josefsson's blog

13 Jan 2015 bagder   » (Master)

My first year at Mozilla

January 13th 2014 I started my first day at Mozilla. One year ago exactly today.

It still feels like it was just a very short while ago and I keep having this sense of being a beginner at the company, in the source tree and all over.

One year of networking code work that really at least during periods has not progressed as quickly as I would’ve wished for, and I’ve had some really hair-tearing problems and challenges that have taken me sweat and tears to get through. But I am getting through and I’m enjoying every (oh well, let’s say almost every) moment.

During the year I’ve had the chance to meet up with my team mates twice (in Paris and in Portland) and I’ve managed to attend one IETF (in London) and two special HTTP2 design meetings (in London and NYC). The commit log counts 47 commits by me in Firefox and that feels like counting high. Bugzilla has tracked activity by me in 107 bug reports through the year.

I’ve barely started. I’ll spend the next year as well improving Firefox networking, hopefully with a higher turnout this year. (I don’t mean to make this sound as if Firefox networking is just me, I’m just speaking for my particular part of the networking team and effort and I let the others speak for themselves!)

Onwards and upwards!

Syndicated 2015-01-13 08:49:14 from

13 Jan 2015 bagder   » (Master)

My table tennis racket sized phone

I upgraded my Nexus 5 to a Nexus 6 the other day. It is a biiiig phone, and just to show you how big I made a little picture showing all my Android phones so far using the correct relative sizes. It certainly isn’t very far away from a table tennis racket in size now. My Android track record so far goes like this: HTC Magic, HTC Desire HD, Nexus 4, Nexus 5 and now Nexus 6.


As shown, this latest step is probably the biggest relative size change in a single go. If the next step would be as big, imagine the size that would require! (While you think about that, I’ve already done the math: the 6 is 159.3 mm tall, 15.5% taller than the 5’s 137.9 mm, so adding 15.5% to the Nexus 6 ends up at 184 mm – only 16 mm shorter than a Nexus 7 in portrait mode… I don’t think I could handle that!)
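That arithmetic is easy to double-check (a throwaway awk sketch of mine using the two heights quoted above):

```shell
# Nexus 5 is 137.9 mm tall, Nexus 6 is 159.3 mm tall.
awk 'BEGIN {
  n5 = 137.9; n6 = 159.3
  printf "Nexus 6 is %.1f%% taller than the 5\n", (n6 / n5 - 1) * 100   # 15.5%
  printf "another such step would reach %.0f mm\n", n6 * 1.155          # 184 mm
}'
```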

After the initial size shock, I’m enjoying the large size. It is a bit of a clunker to cram down into my left front-side jeans pocket where I’m used to carrying around my device. It is still doable, but not as easy as before, and it easily gets uncomfortable when sitting down. I guess I need to sit less or change my habit somehow.

This largest phone ever ironically switched to the smallest SIM card size so my micro-SIM had to be replaced with a nano-SIM.

Borked upgrade procedure

Not a single non-Google app got installed in my new device in the process. I strongly suspect it was that “touch the back of another device to copy from” thing that broke it, because it didn’t work at all – and when it failed, it did not offer to restore a copy from backup, which I later learned it does if I skip the touch-back step. I ended up manually re-installing my additional 100 or so apps…

My daughter then switched from her Nexus 4 to my (by then) clean-wiped 5.  For her, we skipped that broken back-touch process and she got a nice backup from the 4 restored onto the 5. But she got another nasty surprise: basically over half of her contacts were just gone when she opened the contacts app on the 5, so we had to manually go through the contact list on the old device and re-add them into the new one. The way we did (not even do) it in the 90s…

The Android device installation (and data transfer) process is not perfect yet. Although my brother says he did his two upgrades perfectly smoothly…

Syndicated 2015-01-13 07:34:53 from

13 Jan 2015 etbe   » (Master)

Systemd Notes

A few months ago I gave a lecture about systemd for the Linux Users of Victoria. Here are some of my notes reformatted as a blog post:

Scripts in /etc/init.d can still be used, they work the same way as they do under sysvinit for the user. You type the same commands to start and stop daemons.

To get a result similar to changing runlevel use the “systemctl isolate” command. Runlevels were never really supported in Debian (unlike Red Hat where they were used for starting and stopping the X server) so for Debian users there’s no change here.

The command systemctl with no params shows a list of loaded services and highlights failed units.

The command “journalctl -u UNIT-PATTERN” shows journal entries for the unit(s) in question. The pattern uses wildcards not regexes.

The systemd journal includes the stdout and stderr of all daemons. This solves the problem of daemons that don’t log all errors to syslog and leave the sysadmin wondering why they don’t work.

The command “systemctl status UNIT” gives the status and last log entries for the unit in question.

A program can use ioctl(fd, TIOCSTI, …) to push characters into a tty buffer. If the sysadmin runs an untrusted program with the same controlling tty then it can cause the sysadmin shell to run hostile commands. The system call setsid() to create a new terminal session is one solution but managing which daemons can be started with it is difficult. The way that systemd manages start/stop of all daemons solves this. I am glad to be rid of the run_init program we used to use on SE Linux systems to deal with this.

Systemd has a mechanism to ask for passwords for SSL keys and encrypted filesystems etc. There have been problems with that in the past but I think they are all fixed now. While there is some difficulty during development the end result of having one consistent way of managing this will be better than having multiple daemons doing it in different ways.

The commands “systemctl enable” and “systemctl disable” enable/disable daemon start at boot which is easier than the SysVinit alternative of update-rc.d in Debian.
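To make the enable/disable mechanism concrete, here is a minimal unit file sketch (hypothetical names, not from the talk). “systemctl enable” works by symlinking the unit into the .wants directory of the target named in its [Install] section:

```ini
# /etc/systemd/system/example.service -- hypothetical minimal unit
[Unit]
Description=Example daemon
After=network.target

[Service]
# With the default Type=simple, systemd expects the daemon to stay in the foreground.
ExecStart=/usr/sbin/example-daemon --foreground
Restart=on-failure

[Install]
# "systemctl enable example.service" symlinks this file into
# /etc/systemd/system/multi-user.target.wants/
WantedBy=multi-user.target
```

Disabling simply removes the symlink again; the unit file itself stays in place.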

Systemd has built in seat management, which is not more complex than consolekit which it replaces. Consolekit was installed automatically without controversy so I don’t think there should be controversy about systemd replacing consolekit.

Systemd improves performance by parallel start and autofs style fsck.

The command systemd-cgtop shows resource use for cgroups it creates.

The command “systemd-analyze blame” shows what delayed the boot process and “systemd-analyze critical-chain” shows the critical path in boot delays.

Systemd also has security features such as service private /tmp and restricting service access to directory trees.


For basic use things just work, you don’t need to learn anything new to use systemd.

It provides significant benefits for boot speed and potentially security.

It doesn’t seem more complex than other alternative solutions to the same problems.

Related posts:

  1. systemd – a Replacement for init etc The systemd project is an interesting concept for replacing init...
  2. Some Notes on DRBD DRBD is a system for replicating a block device across...
  3. licence for lecture notes While attending LCA it occurred to me that the lecture...

Syndicated 2015-01-12 18:07:00 from etbe - Russell Coker

12 Jan 2015 mikal   » (Journeyer)

Kilo Nova deploy recommendations

What would a Nova developer tell a deployer to think about before their first OpenStack install? This was the question I wanted to answer for my OpenStack miniconf talk, and writing this essay seemed like a reasonable way to take the bullet point list of ideas we generated and turn it into something that was a cohesive story. Hopefully this essay is also useful to people who couldn't make the conference talk.

Please understand that none of these are hard rules -- what I seek is for you to consider your options and make informed decisions. It's really up to you how you deploy Nova.

Operating environment

  • Consider what base OS you use for your hypervisor nodes if you're using Linux. I know that many environments have standardized on a given distribution, and that many have a preference for a long term supported release. However, Nova is at its most basic level a way of orchestrating tools packaged by your distribution via APIs. If those underlying tools are buggy, then your Nova experience will suffer as well. Sometimes we can work around known issues in older versions of our dependencies, but often those work-arounds are hard to implement (and therefore likely to be less than perfect) or have performance impacts. There are many examples of the problems you can encounter, but hypervisor kernel panics and disk image corruption are just two examples. We are trying to work with distributions on ensuring they back port fixes, but the distributions might not always be willing to do that. Sometimes upgrading the base OS on your hypervisor nodes might be a better call.
  • The version of Python you use matters. The OpenStack project only tests with specific versions of Python, and there can be bugs between releases. This is especially true for very old versions of Python (anything older than 2.7) and new versions of Python (Python 3 is not supported for example). Your choice of base OS will affect the versions of Python available, so this is related to the previous point.
  • There are existing configuration management recipes for most configuration management systems. I'd avoid reinventing the wheel here and use the community supported recipes. There are definitely resources available for chef, puppet, juju, ansible and salt. If you're building a very large deployment from scratch consider triple-o as well. Please please please don't fork the community recipes. I know it's tempting, but contribute to upstream instead. Invariably upstream will continue developing their stuff, and if you fork you'll spend a lot of effort keeping in sync.
  • Have a good plan for log collection and retention at your intended scale. The hard reality at the moment is that diagnosing Nova often requires that you turn on debug logging, which is very chatty. Whilst we're happy to take bug reports where we've gotten the log level wrong, we haven't had a lot of success at systematically fixing this issue. Your log infrastructure therefore needs to be able to handle the demands of debug logging when it's turned on. If you're using central log servers, think seriously about how much disk they require. If you're not doing centralized syslog logging, perhaps consider something like logstash.
  • Pay attention to memory usage on your controller nodes. OpenStack python processes can often consume hundreds of megabytes of virtual memory space. If you run many controller services on the same node, make sure you have enough RAM to deal with the number of processes that will, by default, be spawned for the many service endpoints. After a day or so of running a controller node, check in on the virtual memory used by the python processes and make any adjustments needed to your "workers" configuration settings.
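To make that last point concrete, here is a sketch of the relevant nova.conf knobs (option names as in Kilo-era Nova; the values are made-up examples, not recommendations):

```ini
# nova.conf -- illustrative worker and database pool tuning (example values only)
[DEFAULT]
# These default to the number of CPUs; lower them if controller RAM is tight.
osapi_compute_workers = 4
metadata_workers = 2

[conductor]
workers = 4

[database]
# Keep the sum of pool sizes across all services below MySQL's max_connections.
max_pool_size = 10
```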

  • Estimate your final scale now. Sure, you're building a proof of concept, but these things have a habit of becoming entrenched. If you are planning a deployment that is likely to end up being thousands of nodes, then you are going to need to deploy with cells. This is also possibly true if you're going to have more than one hypervisor or hardware platform in your deployment -- it's very common to have a cell per hypervisor type or per hardware platform. Cells is relatively cheap to deploy for your proof of concept, and it helps when that initial deploy grows into a bigger thing. So should you be deploying cells from the beginning? It should be noted however that not all features are currently implemented in cells. We are working on this at the moment though.
  • Consider carefully what SQL database to use. Nova supports many SQL databases via sqlalchemy, but some are better tested and more widely deployed than others. For example, the Postgres back end is rarely deployed and is less tested. I'd recommend a variant of MySQL for your deployment. Personally I've seen good performance on Percona, but I know that many use the stock MySQL as well. There are known issues at the moment with Galera as well, so exercise caution there. There is active development happening on the select-for-update problems with Galera at the moment, so that might change by the time you get around to deploying in production. You can read more about our current Galera problems on Jay Pipes' blog.
  • We support read only replicas of the SQL database. Nova supports offloading read only SQL traffic to read only replicas of the main SQL database, but I do not believe this is widely deployed. It might be of interest to you though.
  • Expect a lot of SQL database connections. While Nova has the nova-conductor service to control the number of connections to the database server, other OpenStack services do not, and you will quickly outpace the number of default connections allowed, at least for a MySQL deployment. Actively monitor your SQL database connection counts so you know before you run out. Additionally, there are many places in Nova where a user request will block on a database query, so if your SQL back end isn't keeping up this will affect performance of your entire Nova deployment.
  • There are options with message queues as well. We currently support rabbitmq, zeromq and qpid. However, rabbitmq is the original and by far the most widely deployed. rabbitmq is therefore a reasonable default choice for deployment.
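On the connection-count point above: `SHOW STATUS LIKE 'Threads_connected'` on the MySQL side is the obvious check, but when you only have shell access to a host, a rough proxy is to count established sockets on the MySQL port. A sketch (Linux-only, and port 3306 is an assumption -- adjust it for your deployment):

```shell
# Count ESTABLISHED TCP sockets involving the default MySQL port
# (3306 decimal == 0CEA hex) by reading /proc/net/tcp directly.
# Field 4 of each entry is the socket state; 01 means ESTABLISHED.
awk 'NR > 1 && $4 == "01" && ($2 ~ /:0CEA$/ || $3 ~ /:0CEA$/) {n++} END {print n + 0}' /proc/net/tcp
```

Feed that into whatever monitoring system you already run, and alert well before you hit the server's max_connections.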

  • Not all hypervisor drivers are created equal. Let's be frank here -- some hypervisor drivers just aren't as actively developed as others. This is especially true for drivers which aren't in the Nova code base -- at least the ones the Nova team manage are updated when we change the internals of Nova. I'm not a hypervisor bigot -- there is a place in the world for many different hypervisor options. However, the start of a Nova deploy might be the right time to consider what hypervisor you want to use. I'd personally recommend drivers in the Nova code base with active development teams and good continuous integration, but ultimately you have to select a driver based on its merits in your situation. I've included some more detailed thoughts on how to evaluate hypervisor drivers later in this post, as I don't want to go off on a big tangent during my nicely formatted bullet list.
  • Remember that the hypervisor state is interesting debugging information. For example with the libvirt hypervisor, the contents of /var/lib/nova/instances is super useful for debugging misbehaving instances. Additionally, all of the existing libvirt tools work, so you can use those to investigate as well. However, I strongly recommend you only change instance state via Nova, and not go directly to the hypervisor.

  • Avoid new deployments of nova-network. nova-network has been on the deprecation path for a very long time now, and we're currently working on the final steps of a migration plan for nova-network users to neutron. If you're a new deployment of Nova and therefore don't yet depend on any of the features of nova-network, I'd start with Neutron from the beginning. This will save you a possibly troublesome migration to Neutron later.

Testing and upgrades
  • You need a test lab. For a non-trivial deployment, you need a realistic test environment. It's expected that you test all upgrades before you do them in production, and rollbacks can sometimes be problematic. For example, some database migrations are very hard to roll back, especially if new instances have been created in the time it took you to decide to roll back. Perhaps consider turning off API access (or putting the API into a read only state) while you are validating a production deploy post upgrade, that way you can restore a database snapshot if you need to undo the upgrade. We know this isn't perfect and are working on a better upgrade strategy for information stored in the database, but we will always expect you to test upgrades before deploying them.
  • Test database migrations on a copy of your production database before doing them for real. Another reason to test upgrades before doing them in production is because some database migrations can be very slow. It's hard for the Nova developers to predict which migrations will be slow, but we do try to test for this and minimize the pain. However, aspects of your deployment can affect this in ways we don't expect -- for example if you have large numbers of volumes per instance, then that could result in database tables being larger than we expect. You should always test database migrations in a lab and report any problems you see.
  • Think about your upgrade strategy in general. While we now support having the control infrastructure running a newer release than the services on hypervisor nodes, we only support that for one release (so you could have your control plane running Kilo for example while you are still running Juno on your hypervisors; you couldn't run Icehouse on the hypervisors though). Are you going to upgrade every six months? Or are you going to do it less frequently but step through a series of upgrades in one session? I suspect the latter option is more risky -- if you encounter a bug in a previous release we would need to back port a fix, which is a much slower process than fixing the most recent release. There are also deployments which choose to "continuously deploy" from trunk. This gives them access to features as they're added, but means that the deployments need to have more operational skill and a closer association with the upstream developers. In general continuous deployers are larger public clouds as best as I can tell.

libvirt specific considerations
  • For those intending to run the libvirt hypervisor driver, not all libvirt hypervisors are created equal. libvirt implements pluggable hypervisors, so if you select the Nova libvirt hypervisor driver, you then need to select what hypervisor to use with libvirt as well. It should be noted however that some hypervisors work better than others, with kvm being the most widely deployed.
  • There are two types of storage for instances. There is "instance storage", which is block devices that exist for the life of the instance and are then cleaned up when the instance is destroyed. There is also block storage provided by Cinder, which is persistent and arguably easier to manage than instance storage. I won't discuss storage provided by Cinder any further however, because it is outside the scope of this post. Instance storage is provided by a plug in layer in the libvirt hypervisor driver, which presents you with another set of deployment decisions.
  • Shared instance storage is attractive, but it comes at a cost. Shared instance storage is an attractive option, but isn't required for live migration of instances using the libvirt hypervisor. Think about the costs of shared storage though -- for example putting everything on network attached storage is likely to be expensive, especially if most of your instances don't need the facility. There are other options such as Ceph, but the storage interface layer in libvirt is one of the areas of code where we need to improve testing so be wary of bugs before relying on those storage back ends.

Thoughts on how to evaluate hypervisor drivers

As promised, I also have some thoughts on how to evaluate which hypervisor driver is the right choice for you. First off, if your organization has a lot of experience with a particular hypervisor, then there is always value in that. If that is the case, then you should seriously consider running the hypervisor you already have experience with, as long as that hypervisor has a driver for Nova which meets the criteria below.

What's important is to be looking for a driver which works well with Nova, and a good measure of that is how well the driver development team works with the Nova development team. The obvious best case here is where both teams are the same people -- which is true for drivers that are in the Nova code base. I am aware there are drivers that live outside of Nova's code repository, but you need to remember that the interface these drivers plug into isn't a stable or versioned interface. The risk of those drivers being broken by the ongoing development of Nova is very high. Additionally, only a very small number of those "out of tree" drivers contribute to our continuous integration testing. That means that the Nova team also doesn't know when those drivers are broken. The breakages can also be subtle, so if your vendor isn't at the very least doing tempest runs against their out of tree driver before shipping it to you then I'd be very worried.

You should also check out how many bugs are open in LaunchPad for your chosen driver (this assumes the Nova team is aware of the existence of the driver I suppose). Here's an example link to the libvirt driver bugs currently open. As well as total bug count, I'd be looking for bug close activity -- it's nice if there is a very small number of bugs filed, but perhaps that's because there aren't many users. It doesn't necessarily mean the team for that driver is super awesome at closing bugs. The easiest way to look into bug close rates (and general code activity) would be to check out the code for Nova and then look at the log for your chosen driver. For example for the libvirt driver again:

$ git clone
$ cd nova/nova/virt/libvirt
$ git log .

That will give you a report on all the commits ever for that driver. You don't need to read the entire report, but it will give you an idea of what the driver authors have recently been thinking about.

Another good metric is the specification activity for your driver. Specifications are the formal design documents that Nova adopted for the Juno release, and they document all the features that we're currently working on. I write regular summaries of the current state of Nova specs, including one posted shortly before this essay. You should also check how much your driver authors interact with the core Nova team. The easiest way to do that is probably to keep an eye on the Nova team meeting minutes, which are posted online.

Finally, the OpenStack project believes strongly in continuous integration testing. Testing has clear value in the number of bugs it finds in code before our users experience them, and I would be very wary of driver code which isn't continuously integrated with Nova. Thus, you need to ensure that your driver has well maintained continuous integration testing. This is easy for "in tree" drivers, as we do that for all of them. For out of tree drivers, continuous integration testing is done with a thing called "third party CI".

How do you determine if a third party CI system is well maintained? First off, I'd start by determining if a third party CI system actually exists by looking at OpenStack's list of known third party CI systems. If the third party isn't listed on that page, then that's a very big warning sign. Next you can use Joe Gordon's lastcomment tool to see when a given CI system last reported a result:

$ git clone
$ ./ --name "DB Datasets CI"
last 5 comments from 'DB Datasets CI'
[0] 2015-01-07 00:46:33 (1:35:13 old) 'Ignore 'dynamic' addr flag on gateway initialization' 
[1] 2015-01-07 00:37:24 (1:44:22 old) 'Use session with neutronclient' 
[2] 2015-01-07 00:35:33 (1:46:13 old) 'libvirt: Expanded test libvirt driver' 
[3] 2015-01-07 00:29:50 (1:51:56 old) 'ephemeral file names should reflect fs type and mkfs command' 
[4] 2015-01-07 00:15:59 (2:05:47 old) 'Support for ext4 as default filesystem for ephemeral disks' 

You can see here that the most recent run is 1 hour 35 minutes old when I ran this command. That's actually pretty good given that I wrote this while most of America was asleep. If the most recent run is days old, that's another warning sign. If you're left in doubt, then I'd recommend appearing in the OpenStack IRC channels on freenode and asking for advice. OpenStack has a number of requirements for third party CI systems, and I haven't discussed many of them here. There is more detail on what OpenStack considers a "well run CI system" on the OpenStack Infrastructure documentation page.

General operational advice

Finally, I have some general advice for operators of OpenStack. There is an active community of operators who discuss their use of the various OpenStack components at the openstack-operators mailing list, if you're deploying Nova you should consider joining that mailing list. While you're welcome to ask questions about deploying OpenStack at that list, you can also ask questions at the more general OpenStack mailing list if you want to.

There are also many companies now which will offer to operate an OpenStack cloud for you. For some organizations engaging a subject matter expert will be the right decision. Probably the most obvious way to evaluate which of those companies to use is to look at their track record of successful deployments, as well as their overall involvement in the OpenStack community. You need a partner who can advocate for you with the OpenStack developers, as well as keeping an eye on what's happening upstream to ensure it meets your needs.


Thanks for reading so far! I hope this document is useful to someone out there. I'd love to hear your feedback -- are there other things we wish deployers would consider before committing to a plan? Am I simply wrong somewhere? Finally, this is the first time that I've posted an essay form of a conference talk instead of just the slide deck, and I'd be interested to hear whether people find this format more useful than a post-conference YouTube video. Please drop me a line and let me know if you find this useful!

Tags for this post: openstack nova
Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


Syndicated 2015-01-12 14:11:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

12 Jan 2015 sye   » (Journeyer)

I'll be traveling from this Thursday for 10 days. May Peace and wonder surround y'all.

11 Jan 2015 StevenRainwater   » (Master)

Rainwater Reptile Ranch Chili

It’s hard to believe I’ve been blogging for more than a decade and never posted the recipe for the Rainwater Reptile Ranch chili. The weather has been cold this weekend. We just made a big batch of it last night and the recipe card is sitting here in front of me. This recipe has evolved and changed over the years. It started out when I was dating Susan and wanted her to experience this staple of Texas cuisine. That early attempt was based on reverse engineering the ingredients listed on a package of Wick Fowler’s 2 Alarm Chili that I spotted in a grocery store. I figured if I had the right stuff in approximately the right proportions and threw it all in a pot, I’d get close. Every batch changed for a few years as we consulted other chili recipes and made patches. It eventually began to stabilize into this recipe. The cinnamon is a nod to the Cincinnati variety of chili Susan experienced in her college days.

It will be obvious as you read the ingredients list that we are not chili purists by any means. We add beans, tomatoes, and even use alternative meats in our chili. I blame my parents. They raised me on a chili recipe that would barely fit even the most liberal definition today. It was more like a ground beef stew or soup than actual chili. I’ve had a lot of different bowls of chili in my life and loved most of them. So don’t worry too much about purity and give it a try!

Rainwater Reptile Ranch Texas Chili
Revision level unknown [September 1993]

2.5 lbs ground turkey – we use 2 packages of Jennie-O extra lean. You can substitute ground beef or other types of meat if preferred.
1 or 2 large yellow onions, diced
32 oz salt-free tomato sauce (if you’re using canned, 4 8oz cans)
8 oz red wine
5 large fresh tomatoes without skins (or two 14oz. cans tomatoes)
1 cup dried pinto or pink beans (or two cans of ranch style beans, or no beans at all if you’re a purist)
2 tsp cumin
1.5 tsp paprika
.5 tsp ground mustard
.25 tsp cinnamon
.5 tsp garlic powder
1/2 cup chili powder (that stuff from the grocery store simply will not do, and it’s mostly salt. For great chili you want the good stuff from someplace like Pendery’s – we use a combination of Pecos red and several other varieties – darker flavors with medium heat are our favorite)
dash of cayenne pepper
1 tbsp oregano
1 tsp cilantro
1 tbsp cornmeal

1 diced white onion
your favorite type of cheese

Prepare the beans first. Dump 1 cup of dried beans into a 3 quart or larger pot of water. Bring it to a boil, turn it down to medium low for two hours. Check the water level occasionally and add water if it gets too low. In the last half hour drop in .5 tsp salt.

In a 6 quart pot or dutch oven, put in half of the diced onions, the tomato sauce, wine and tomatoes. Put the pot on low heat. Stir in the dried spices and chili powder. Don’t put the cornmeal in yet.

Prepare the meat. Add a little olive oil to a frying pan. Set heat on medium-high. Put a quarter of the diced onions into the pan and let them sizzle a little bit. Add half the ground meat and brown it. With turkey and some other meats, you’ll need to use the spatula to break the meat into the granularity you want in the final chili while you’re browning it. I prefer fine granularity of meat in my chili but others prefer larger chunks of meat. Once the meat is browned, dump the entire contents of the pan into your chili pot. With 2 lbs of meat, you’ll need to repeat this process twice. If you’re using a meat other than turkey, you may need to adjust the spicing. You may also want to add a little salt with some meats.

When the beans are ready, drain and dispose of the water they boiled in, rinse them in a colander, then add them to the big chili pot. By the time your meat and beans have been added, your chili should have reached a boil. Let it boil for ten minutes, then turn the heat on the pot down to simmer. If the consistency is too thick, add a little water. Let the pot simmer for several hours, stirring occasionally. The taste will continue to improve the longer it simmers. About 10 minutes before serving, stir in the cornmeal. With meats that tend to be a bit greasy, like ground beef, the cornmeal will greatly improve the consistency of the chili.

Serve in a bowl and garnish with diced white onions and cheese on top. Serve with crackers or fritos. Enjoy with friends on a cold day whenever possible.

Syndicated 2015-01-11 17:41:27 from Steevithak of the Internet

10 Jan 2015 dmarti   » (Master)

Fedora 21 note: grub2 prompt

Fedora 21 installed on my main laptop (Thinkpad, formerly Fedora 20). Came up at a GRUB2 prompt instead of booting normally. Used this to fix:

  grub2-mkconfig -o /boot/grub/grub.cfg
  grub2-install --target=x86_64-efi

(Found on Gentoo Forums. Appreciating the Gentoo scene right now.) I'm still not noticing a lot of differences yet, but at least I won't be showing up at SCALE this year without the shiny new thing.

Syndicated 2015-01-10 15:49:41 from Don Marti

8 Jan 2015 bagder   » (Master)

curl 7.40.0: unix domain sockets and smb

curl and libcurl 7.40.0 was just released this morning. Here's a closer look at some of the perhaps more noteworthy changes. As usual, you can find the entire changelog on the curl web site.

HTTP over unix domain sockets

So just before the feature window closed for the pending 7.40.0 release of curl, Peter Wu's patch series was merged, bringing to curl and libcurl the ability to do HTTP over unix domain sockets. This is a feature that's been mentioned many times through the history of curl but never previously truly implemented. Peter also very nicely adjusted the test server and made two test cases that verify the functionality.

To use this with the curl command line, you specify the socket path with the new --unix-socket option and, assuming your local HTTP server listens on that socket, you'll get the response back just as with an ordinary TCP connection.

Doing the operation from libcurl means using the new CURLOPT_UNIX_SOCKET_PATH option.

This feature is actually not limited to HTTP, you can do all the TCP-based protocols except FTP over the unix domain socket, but it is to my knowledge only HTTP that is regularly used this way. The reason FTP isn’t supported is of course its use of two connections which would be even weirder to do like this.
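To see the new option in action you need something listening on a unix domain socket. A sketch -- the little Python helper is only scaffolding for the demo (my own invention, not part of curl), serving one hard-coded response:

```shell
# Sketch: exercise curl's new --unix-socket option against a throwaway
# one-shot HTTP server bound to a unix domain socket.
SOCK="/tmp/unixdemo-$$.sock"
python3 - "$SOCK" <<'PYEOF' &
import socket, sys
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sys.argv[1])
srv.listen(1)
srv.settimeout(10)  # give up rather than hang if no client connects
try:
    conn, _ = srv.accept()
    conn.recv(4096)  # read and discard the request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")
    conn.close()
except socket.timeout:
    pass
PYEOF
sleep 1  # let the server bind the socket before curl connects
RESP=$(curl --silent --unix-socket "$SOCK" http://localhost/)
echo "$RESP"
wait
rm -f "$SOCK"
```

The host name in the URL is ignored for connection purposes here; it only ends up in the Host: header.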


SMB

SMB, also known as CIFS, is an old network protocol from the Microsoft world for accessing files. curl and libcurl now support this protocol with SMB:// URLs thanks to work by Bill Nagel and Steve Holme.

Security Advisories

Last year we had a large amount of security advisories published (eight to be precise), and this year we start out with two fresh ones already on the 8th day… The ones this time were of course discovered and researched already last year.

CVE-2014-8151 is a way we accidentally allowed an application to bypass the TLS server certificate check if a TLS Session-ID was already cached for a non-checked session – when using the Mac OS SecureTransport SSL backend.

CVE-2014-8150 is a URL request injection. When letting curl or libcurl speak over an HTTP proxy, it would copy the URL verbatim into the HTTP request going to the proxy, which means that if you craft the URL and insert CRLFs (carriage returns and linefeed characters) you can insert your own second request or even custom headers into the request that goes to the proxy.

You may enjoy taking a look at the curl vulnerabilities table.

Bugs bugs bugs

The release notes mention no less than 120 specific bug fixes, which in comparison to other releases is more than average.


Syndicated 2015-01-08 20:24:36 from

8 Jan 2015 waffel   » (Journeyer)

update wrong timestamp for a page in mediawiki

I had the problem that one of my VMs, serving our wiki, got the wrong system date. The date was pointing to the year 2018.

Some users of our wiki changed content while the VM had the wrong date. After I noticed the problem I updated the VM and re-connected it to NTP, and the time was corrected.

But the pages changed in the “wrong” timeframe are stored with those wrong timestamps in the wiki database.

Now every call to recent changes shows the pages changed in the wrong timeframe at the top (because of their future dates).

Now I found a way to “fix” this in the database. To do this, you need access to your wiki DB with RW rights. I have done this on MySQL (but these SQL statements should also work on other database systems):

update revision set rev_timestamp='20150101000000' where rev_timestamp > '20150108115205';

update recentchanges set rc_timestamp='20150101000000' where rc_timestamp > '20150108115205';
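MediaWiki stores these timestamps as 14-digit YYYYMMDDHHMMSS strings in UTC, so if you'd rather substitute the current time than a fixed value, you can generate one straight from the shell:

```shell
# Print the current UTC time in MediaWiki's 14-digit timestamp format.
date -u +%Y%m%d%H%M%S
```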

Einsortiert unter:administration Tagged: mediawiki

Syndicated 2015-01-08 14:54:57 from waffel's Weblog

8 Jan 2015 waffel   » (Journeyer)

find and remove dead links

I asked myself the question, what would be the best way under linux to find and remove dead links?

Here is the short answer (only tested on bash):

find . -type l -exec sh -c "file -b {} | grep -q ^broken" \; -print | tr "\n" "\0" | xargs -0 rm
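With GNU find the whole pipeline can be avoided: -xtype l matches symlinks whose target does not resolve. This relies on GNU extensions (-xtype and -delete), so it won't work with BSD find:

```shell
# Delete every dangling symlink below the current directory (GNU find).
find . -xtype l -delete
```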

Einsortiert unter:administration Tagged: linux

Syndicated 2015-01-08 11:17:00 from waffel's Weblog

8 Jan 2015 etbe   » (Master)

Conference Suggestions

LCA 2015 is next week so it seems like a good time to offer some suggestions for other delegates based on observations of past LCAs. There’s nothing LCA specific about the advice, but everything is based on events that happened at past LCAs.

Don’t Oppose a Lecture

Question time at the end of a lecture isn’t the time to demonstrate that you oppose everything about the lecture. Discussion time between talks at a mini-conf isn’t a time to demonstrate that you oppose the entire mini-conf. If you think a lecture or mini-conf is entirely wrong then you shouldn’t attend.

The conference organisers decide which lectures and mini-confs are worthy of inclusion and the large number of people who attend the conference are signalling their support for the judgement of the conference organisers. The people who attend the lectures and mini-confs in question want to learn about the topics in question and people who object should be silent. If someone gives a lecture about technology which appears to have a flaw then it might be OK to ask one single question about how that issue is resolved, apart from that the lecture hall is for the lecturer to describe their vision.

The worst example of this was between talks at the Haecksen mini-conf last year when an elderly man tried at great length to convince me that everything about feminism is wrong. I’m not sure to what degree the Haecksen mini-conf is supposed to be a feminist event, but I think it’s quite obviously connected to feminism – which of course was why he wanted to pull that stunt. After he discovered that I was not going to be convinced and that I wasn’t at all interested in the discussion he went to the front of the room to make a sexist joke and left.

Consider Your Share of Conference Resources

I’ve previously written about the length of conference questions [1]. Question time after a lecture is a resource that is shared among all delegates. Consider whether you are asking more questions than the other delegates and whether the questions are adding benefit to other people. If not then send email to the speaker or talk to them after their lecture.

Note that good questions can add significant value to the experience of most delegates. For example when a lecturer appears to be having difficulty in describing their ideas to the audience then good questions can make a real difference, but it takes significant skill to ask such questions.

Dorm Walls Are Thin

LCA is one of many conferences that is typically held at a university with dorm rooms offered for delegates. Dorm rooms tend to have thinner walls than hotel rooms so it’s good to avoid needless noise at night. If one of your devices is going to make sounds at night please check the volume settings before you start it. At one LCA I was startled at about 2AM by the sound of a very loud porn video from a nearby dorm room. The volume was reduced within a few seconds, but it’s difficult to get to sleep quickly after that sort of surprise.

If you set an alarm then try to avoid waking other people. If you set an early alarm and then just get up then other people will get back to sleep, but pressing “snooze” repeatedly for several hours (as has been done in the past) is anti-social. Generally I think that an alarm should be at a low volume unless it is set for less than an hour before the first lecture – in which case waking people in other dorm rooms might be doing them a favor.

Phones in Lectures

Do I need to write about this? Apparently I do because people keep doing it!

Phones can be easily turned to vibrate mode, most people who I’ve observed taking calls in LCA lectures have managed this but it’s worth noting for those who don’t.

There are very few good reasons for actually taking a call when in a lecture. If the hospital calls to tell you that they have found a matching organ donor then it’s a good reason to take the call, but I can’t think of any other good example.

Many LCA delegates do system administration work and get calls at all times of the day and night when servers have problems. But that isn’t an excuse for having a conversation in the middle of the lecture hall while the lecture is in progress (as has been done). If you press the green button on a phone you can then walk out of the lecture hall before talking, it’s expected that mobile phone calls sometimes have signal problems at the start of the call so no-one is going to be particularly surprised if it takes 10 seconds before you say hello.

As an aside, I think that the requirement for not disturbing other people depends on the number of people who are there to be disturbed. In tutorials there are fewer people and the requirements for avoiding phone calls are less strict. In BoFs the requirements are less strict again. But the above is based on behaviour I’ve witnessed in mini-confs and main lectures.


Smoking

It is the responsibility of people who consume substances to ensure that their actions don’t affect others. For smokers that means smoking far enough away from lecture halls that it’s possible for other delegates to attend the lecture without breathing in smoke. Don’t smoke in the lecture halls or near the doorways.

Also using an e-cigarette is still smoking, don’t do it in a lecture hall.


Photography

Unwanted photography can be harassment. I don’t think there’s a need to ask for permission to photograph people who harass others or break the law, but photographing people who merely break the social agreement as to what should be done in a lecture probably isn’t justified. At a previous LCA a man wanted to ask so many questions at a keynote lecture that he had a page of written notes (seriously). That was obviously outside the expected range of behaviour – but it probably didn’t justify the many people who photographed him.

A Final Note

I don’t think that LCA is in any way different from other conferences in this regard. Also I don’t think that there’s much that conference organisers can or should do about such things.

Related posts:

  1. A Linux Conference as a Ritual Sociological Images has an interesting post by Jay Livingston PhD...
  2. Suggestions and Thanks One problem with the blog space is that there is...
  3. Length of Conference Questions After LCA last year I wrote about “speaking stacks” and...

Syndicated 2015-01-08 12:02:52 from etbe - Russell Coker

8 Jan 2015 slef   » (Master)

Social Network Wishlist

All I want for 2015 is a Free/Open Source Software social network which is:

  • easy to register on (no reCaptcha disability-discriminator or similar, a simple openID, activation emails that actually arrive);
  • has an email help address or online support or phone number or something other than the website which can be used if the registration system causes a problem;
  • can email me when things happen that I might be interested in;
  • can email me summaries of what’s happened last week/month in case they don’t know what they’re interested in;
  • doesn’t email me too much (but this is rare);
  • interacts well with other websites (allows long-term members to post links, sends trackbacks or pingbacks to let the remote site know we’re talking about them, makes it easy for us to dent/tweet/link to the forum nicely, and so on);
  • isn’t full of spam (has limits on link-posting, moderators are contactable/accountable and so on, and the software gives them decent anti-spam tools);
  • lets me back up my data;
  • is friendly and welcoming and trolls are kept in check.

Is this too much to ask for? Does it exist already?

Syndicated 2015-01-08 04:10:54 from Software Cooperative News » mjr

7 Jan 2015 mikal   » (Journeyer)

A quick walk to William Farrer's grave

This was a Canberra Bushwalking Club walk led by John Evans. Not very long, but I would never have found this site without John's leadership, so much appreciated.


Tags for this post: blog pictures 20150107-william_farrers_grave photo canberra tuggeranong bushwalk historical grave
Related posts: A walk around Mount Stranger; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to Tuggeranong Trig; Wanniassa Trig; Two more weeks to go


Syndicated 2015-01-07 01:53:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

6 Jan 2015 hacker   » (Master)

Tuesday Tip: rsync Command to Include Only Specific Files

I find myself using rsync a lot: for moving data around, for creating backups using rsnapshot (yes, even on Windows!) and for mirroring public Open Source projects and repositories. I used to create all sorts of filters and scripts to make sure I was getting only the files I wanted and needed, but I […]

Related posts:
  1. SOLVED: VMware Tools __devexit_p Error on Linux Kernel 3.8 and Earlier If you run a current version of VMware Workstation, VMware...
  2. Using fdupes to Solve the Data Duplication Problem: I’ve got some dupes! Well, 11.6 hours later after scanning the NAS with fdupes,...
  3. Updating Legacy Fedora Linux Distributions to Use Archive Repositories I run a VMware ESXi server here that hosts ~500...

Syndicated 2015-01-06 19:35:18 from random neuron misfires

6 Jan 2015 Stevey   » (Master)

Here we go again.

Once upon a time I worked from home for seven years, for a company called Bytemark. Then, due to a variety of reasons, I left. I struck out for adventures and pastures new, enjoyed the novelty of wearing clothes every day, and left the house a bit more often.

Things happened. A year passed.

Now I'm working for Bytemark again, although there are changes and the most obvious one is that I'm working in a shared-space/co-working setup, renting a room in a building near my house instead of being in the house.

Shame I have to get dressed, but otherwise it seems to be OK.

Syndicated 2015-01-06 00:00:00 from Steve Kemp's Blog

6 Jan 2015 marnanel   » (Journeyer)

Gentle Readers: happy new year

Gentle Readers
a newsletter made for sharing
volume 3, number 1
5th January 2015: happy new year
What I’ve been up to

Working on getting better. They've put me on a new medication, lamotrigine, and they're ramping me up slowly at 25mg a fortnight. It's not at the full dose yet, but I think it's helping already.

The other day I went to visit some friends, and they had a harp! So of course I asked to play it. Even though I'd never played before, after about two hours it was sounding rather tuneful. I think I'll save up for one and learn to play it properly.
Photo thanks to Kit.

A poem of mine

Here's a poem about ringing in the new year. It's the earliest sonnet of mine I think is any good: I wrote it when I was about 18.


Look to your Lord who gives you life.
This year must end as all the years.
You live here in the vale of tears.
This year brought toil, the next year strife.
For too, too soon we break our stay.
The end of things may be a birth.
The clouds will fade and take the earth.
Make fast your joy on New Year's Day.
When dies a friend we weep and mourn.
When babes are born we drink with cheer.
But no man mourns when dies the year.
When dies the age, may you be born.
Your death, your birth, are close at hand.
In him we trust. In him we stand.

A picture

Caption: two wise men and a cow visit Mary.
First wise man: I bring gold!
Second wise man: I bring frankincense!

Something wonderful

A group called Africa2Moon announced today that it's organising an Africa-wide effort to go to the moon. Many people have objected that Africa has many problems which need work and money, and that a moon shot will only distract from more urgent priorities. The organisation's answer was rather interesting: in a way, the moon landing itself is a sort of McGuffin. The real story is about getting there: training up the scientists and the engineers, and building the systems needed, should reduce the brain drain to the west and improve life across the continent as a side-effect-- not unlike the effects of the US space programme a generation earlier.

And this set me thinking about parallels: we all live in communities that need investments of time and effort and money, from food banks to counsellors. When and how is it possible to create something within these communities, something that everyone can collaborate on and be inspired by?

Something from someone else

by W B Yeats

Had I the heaven's embroidered cloths,
Enwrought with golden and silver light,
The blue and the dim and the dark cloths
Of night and light and the half-light;
I would spread the cloths under your feet:
But I, being poor, have only my dreams;
I have spread my dreams under your feet;
Tread softly because you tread on my dreams.


Gentle Readers is published on Mondays and Thursdays, and I want you to share it. The archives are at, and so is a form to get on the mailing list. If you have anything to say or reply, or you want to be added or removed from the mailing list, I’m at and I’d love to hear from you. The newsletter is reader-supported; please pledge something if you can afford to, and please don't if you can't. ISSN 2057-052X. Love and peace to you all.
This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-01-06 01:45:17 from Monument

6 Jan 2015 joey   » (Master)

a bug in my ear

True story: Two days ago, as I was about to drift off to sleep at 2 am, a tiny little bug flew into my ear. Right down to my eardrum, which it fluttered against with its wings.

It was a tiny little moth-like bug, the kind you don't want to find in a bag of flour, and it had been beating against my laptop screen a few minutes before.

This went on for 20 minutes, in which I failed to get it out with a Q-tip and by shaking my head. It is very weird to have a bug flapping in your head.

I finally gave up and put in eardrops, and stopped the poor thing flapping. I happen to know these little creatures mass almost nothing, and rapidly break up into nearly powder when dead. So while I've not had any bug bits come out, I'm going by the way my ear felt a little stopped up yesterday, and just fine today, and guessing it'll be ok. Oh, and I've been soaking it in the tub and putting in eardrops for good measure.

If I've seemed a little distracted lately, now you know why!

Syndicated 2015-01-06 01:36:24 from see shy jo

5 Jan 2015 marnanel   » (Journeyer)


Odd trivia question: consider the archipelago immediately northwest of France. The two largest islands by population are Great Britain and Ireland. What's the third?

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-01-05 23:03:02 from Monument

5 Jan 2015 pixelbeat   » (Journeyer)

coreutils inbox - Dec 2014

Latest news from the coreutils project

Syndicated 2014-12-31 00:00:00 from

4 Jan 2015 mikal   » (Journeyer)

Wanniassa Trig

I walked up to Wanniassa Trig this afternoon. It was a nice walk; the nature park is in the middle of suburban Canberra, but you couldn't tell that from within much of the park. The nature park also has excellently marked fire trails. There were really cool thunderstorms on the ranges as I walked, though I managed to avoid getting rained on.


Tags for this post: blog pictures 20150104-wanniassa_trig photo canberra tuggeranong bushwalk trig_point
Related posts: A walk around Mount Stranger; Urambi Trig; Walk up Tuggeranong Hill; A quick walk to Tuggeranong Trig; Two more weeks to go; In Canberra


Syndicated 2015-01-04 02:00:00 from : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

4 Jan 2015 marnanel   » (Journeyer)

The Interior Castle

Further snark from St Teresa:

"A rich man, without son or heir, loses part of his property, but still has more than enough to keep himself and his household. If this misfortune grieves and disquiets him as though he were left to beg his bread, how can our Lord ask him to give up all things for His sake? This man will tell you he regrets losing his money because he wished to bestow it on the poor."

This entry was originally posted at Please comment there using OpenID.

Syndicated 2015-01-04 07:39:35 from Monument

4 Jan 2015 Skud   » (Master)

I feel like Arnold Rimmer with his study timetable.

I’m terrible at New Year’s resolutions, year-in-review posts, “theme word for the year”, or anything along those lines. My best resolution of all time, back in 2002 or 2003, was “eat better quality cheese”, and I’ll never hope to match it again. Still, things are a mess for me at present and something needs to change, and today, before the “work year” starts, seems like a good day to take stock.

I’m not going to make resolutions, because everyone knows they don’t stick (except the cheese one). What I’m trying to do is prompt myself to be a bit more thoughtful about my time and energy. So, today I spent a bit of time working through some questions like:

  • How do I spend my time? How do I want to be spending it?
  • How can I tell whether I’m spending my time the way I want to be?
  • How can I be more thoughtful about each day?
  • How can I avoid spinning my wheels?

I started with a spreadsheet entitled Why I have no time, which I’ve shared publicly. In it I broke down my work and non-work time in an “ideal” situation, noting how many hours a week I’d like to spend on various things. Of course the distinction between “work” and “non-work” is a bit blurred for someone who’s self-employed, does lots of voluntary stuff, and has personal interests that cross over with professional ones, but it’s a rough breakdown.

screencap of my time spreadsheet

Is this anything like reality? Time to find out.

Then I updated Toggl, which I’ve been using for time tracking throughout 2014, so that my “Projects” matched the spreadsheet, in terms of general categorisation and colour coding. My Toggl Projects are:

  • Paid contract work (split by client for convenience)
  • Growstuff – development
  • Growstuff – other
  • Professional development/research
  • Work email/catchups
  • Work – writing/other projects
  • Work – planning
  • Work travel
  • Business admin/paperwork
  • Meals
  • Life admin and domestic miscellanea
  • Health
  • Social events/activities
  • Personal projects
  • Personal blogging/writing
  • Relaxation – crafts/tv/reading
  • Internet/social media/chat

I know I’m reasonably good at using Toggl to track my time, so this will let me see whether my “ideal” matches reality or not. If not, then I’m going to have to reflect on whether the way I’m spending my time is in keeping with my goals and values, or not. It’ll be interesting to see how that works out.

Finally, in an attempt to be more thoughtful about each day and avoid spinning my wheels, I’ve come up with a couple of worksheets to help myself. They are:

  • The breakfast worksheet (one page, ~5 minutes) which I hope to fill in over breakfast each morning, to give a bit of shape to my day.
  • The weekly worksheet (1 page, maybe 10-15 mins) which I hope to do on Sunday/Monday, to give shape to the week ahead.

On the back of the weekly worksheet is a checklist of achievements that I can check off throughout the week. My checklist’s pretty idiosyncratic, and I’ve given myself lots of easy ones to get the kick of checking them off easily — you’ll see that the first checkbox is for having filled in the front of the worksheet. The left column is for work stuff, and the right column for personal (but see the caveats above). Some of them are non-specific, like “work meeting” or “self care” or “left the house” and there are multiple checkboxes, so I can have a tick whenever I do something relevant and leave a note about the details if I want to.

screencap of part of my achievement checklist

I’m glad I have some easy wins on the checklist.

I’ve revised the worksheets already, just an hour or so after I created them, and I expect I’ll keep adapting them as I use them. I’ll be interested to see which questions/prompts are most useful to me, and which ones I can usefully drop.

Please feel free to copy/re-use any of these ideas if you find them useful!

Syndicated 2015-01-04 00:41:44 from Infotropism

3 Jan 2015 sye   » (Journeyer)

3 Jan 2015 hacker   » (Master)

HOWTO: Run boot2docker in VMware Fusion and ESXi with Shipyard to Manage Your Containers

This took me a while to piece together, and I had to go direct to the maintainers of several of these components to get clarity on why some things worked while others did not, even following the explicit instructions. Here, I present the 100% working HOWTO: I started with a post I found written by someone […]

Related posts:
  1. SOLVED: VMware Tools __devexit_p Error on Linux Kernel 3.8 and Earlier If you run a current version of VMware Workstation, VMware...
  2. HOWTO: Properly install native VMware Tools in pfSense 2.0.3 (FreeBSD 8.1) If you’re anything like me, you take security seriously. With...
  3. AT&T Locks Horns with Hurricane Sandy and my data-only MiFi We were hit pretty hard by Hurricane Sandy out here...

Syndicated 2015-01-03 06:10:11 from random neuron misfires

2 Jan 2015 jas   » (Master)

OpenPGP Smartcards and GNOME

The combination of GnuPG and an OpenPGP smartcard (such as the YubiKey NEO) has been implemented and working well for around a decade. I recall starting to use it when I received an FSFE Fellowship card a long time ago. Sadly there have been some regressions when using them under GNOME recently. I recently reinstalled my laptop with Debian Jessie (beta2), and took the time to work through the issue and write down a workaround.

To work with GnuPG and smartcards you install the GnuPG agent, scdaemon, pcscd and pcsc-tools. On Debian you can do it like this:

apt-get install gnupg-agent scdaemon pcscd pcsc-tools

Use the pcsc_scan command line tool to make sure pcscd recognizes the smartcard before continuing; if it doesn’t, nothing beyond this point will work. The next step is to make sure you have the following line in ~/.gnupg/gpg.conf:


Logging out and into GNOME should start gpg-agent for you, through the /etc/X11/Xsession.d/90gpg-agent script. In theory, this should be all that is required. However, when you start a terminal and attempt to use the smartcard through GnuPG you would get an error like this:

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: unknown command
gpg: OpenPGP card not available: general error

The reason is that the GNOME Keyring hijacks the GnuPG agent’s environment variables and effectively replaces gpg-agent with gnome-keyring-daemon, which does not support smartcard commands (Debian bug #773304). GnuPG uses the environment variable GPG_AGENT_INFO to find the location of the agent socket, and when the GNOME Keyring is active it will typically look like this:

jas@latte:~$ echo $GPG_AGENT_INFO 

If you use GnuPG with a smartcard, I recommend disabling GNOME Keyring’s GnuPG and SSH agent emulation code. This used to be easy to achieve in older GNOME releases (e.g., the one included in Debian Wheezy), through the gnome-session-properties GUI. Sadly there is no longer any GUI for disabling this functionality (Debian bug #760102). The GNOME Keyring GnuPG/SSH agent replacement functionality is invoked through the XDG autostart mechanism, and the documented way to disable such system-wide services for a normal user account is to invoke the following commands.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-gpg.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-gpg.desktop 
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 

You now need to logout and login again. When you start a terminal, look at the GPG_AGENT_INFO environment variable again; everything should now be working.

jas@latte:~$ echo $GPG_AGENT_INFO 
jas@latte:~$ echo $SSH_AUTH_SOCK 
jas@latte:~$ gpg --card-status
Application ID ...: D2760001240102000060000000420000
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:006000000042

That’s it. Resolving this properly involves 1) adding smartcard code to the GNOME Keyring, 2) disabling the GnuPG/SSH replacement code in GNOME Keyring completely, 3) reordering the startup so that gpg-agent supersedes gnome-keyring-daemon instead of vice versa, so that people who installed gpg-agent really get it instead of the GNOME default, or 4) something else. I don’t have a strong opinion on how to solve this, but 3) sounds like a simple way forward.

Syndicated 2015-01-02 20:46:40 from Simon Josefsson's blog

2 Jan 2015 ade   » (Journeyer)

Awkward questions for those boarding the microservices bandwagon

Why isn't this a library?
What heuristics do you use to decide when to build (or extract) a service versus building (or extracting) a library?

How do you plan to deploy your microservices?
What is your deployable unit?
Will you be deploying each microservice in isolation or deploying the set of microservices needed to implement some business functionality?
Are you capable of deploying different instances (where an instance may represent multiple processes on multiple machines) of the same microservice with different configurations?

Is it acceptable for another team to take your code and spin up another instance of your microservice?
Can team A use team B's microservice or are they only used within rather than between teams?
Do you have consumer contracts for your microservices or is it the consumer's responsibility to keep up with the changes to your API?

Is each microservice a snowflake or are there common conventions?
How are these conventions enforced?
How are these conventions documented?
What's involved in supporting these conventions?
Are there common libraries that help with supporting these conventions?

How do you plan to monitor your microservices?
How do you plan to trace the interactions between different microservices in a production environment?

What constitutes a production-ready microservice in your environment?
What does the smallest possible deployable microservice look like in your environment?

Syndicated 2015-01-02 19:31:00 (Updated 2015-01-02 19:31:12) from Ade Oshineye

2 Jan 2015 etbe   » (Master)


reason=”verification failed; insecure key”

I’ve recently noticed OpenDKIM on systems I run giving the above message when trying to verify a DKIM message from my own domain. According to Google searches this is due to DNSSEC not being enabled. I’m not certain that I really need DNSSEC for this reason (I can probably make DKIM work without it), but the lack of it does decrease the utility of DKIM and DNSSEC is generally a good thing to have.

Client (Recursive) Configuration

The Debian Wiki page about DNSSEC is really good for setting up recursive resolvers [1]. Basically, if you install the bind9 package on Debian/Wheezy (current stable) it will work by default. If you have upgraded from an older release then it might not work (i.e., if you modified the BIND configuration and didn’t allow the upgrade to overwrite your changes). The Debian Wiki page is also quite useful if you aren’t using Debian; most of it is more Linux-specific than Debian-specific.

dig +short TXT | tail -1

After you have enabled DNSSEC on a recursive resolver the above command should return “Yes, you are using DNSSEC”.

dig +noall +comments

The above command queries a zone that’s deliberately misconfigured; the query will fail if DNSSEC validation is working correctly.
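A related spot-check, assuming standard dig output: a validating resolver sets the ad (authenticated data) flag in the response header. A small helper to grep for it (the canned header at the end is illustrative, not live dig output):

```shell
#!/bin/sh
# Sketch: detect the "ad" (authenticated data) flag in the header that
# `dig +noall +comments` prints, which a validating resolver sets.
has_ad_flag() {
  grep -q 'flags:[^;]* ad[ ;]'
}

# Live usage (needs network):
#   dig +noall +comments example.com | has_ad_flag && echo validated
# Canned header for illustration:
printf ';; flags: qr rd ra ad; QUERY: 1, ANSWER: 1\n' | has_ad_flag
```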

Signing a Zone

Digital Ocean has a reasonable tutorial on signing a zone [2].

dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE

The above command creates a Zone Signing Key.

dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE

The above command creates a Key Signing Key. This will take a very long time if you don’t have a good entropy source; on my systems it took a couple of days. Run it from screen or tmux.


When you have created the ZSK and KSK you need to add something like the above to your zone file to include the DNSKEY records.


%.signed: %
        dnssec-signzone -A -3 $(shell head -c 100 /dev/random | sha1sum | cut -b 1-16) -k $(shell echo ksk/K$<*.key) -N INCREMENT -o $< -t $< $(shell echo zsk/K$<*.key)
        rndc reload

Every time you change your signed zone you need to create a new signed zone file. Above is the Makefile I’m currently using to generate the signed file. This relies on storing the KSK files in a directory named ksk/ and the ZSK files in a directory named zsk/. Then BIND needs to be configured to use instead of

The Registrar

Every time you sign the zone a file with a name like will be created; it will have the same contents every time: the DS entries you send to the registrar to have your zone publicly known as being signed.

Many registrars don’t support DNSSEC, if you use such a registrar (as I do) then you need to transfer your zone before you can productively use DNSSEC. Without the DS entries being signed by a registrar and included in the TLD no-one will recognise your signatures on zone data.

ICANN has a list of registrars that support DNSSEC [3]. My next task is to move some of my domains to such registrars, unfortunately they cost more so I probably won’t transfer all my zones. Some of my zones don’t do anything that’s important enough to need DNSSEC.

Related posts:

  1. Time Zones and Remote Servers It’s widely regarded that the best practice is to set...
  2. Dynamic DNS The Problem My SE Linux Play Machine has been down...

Syndicated 2015-01-02 14:08:37 from etbe - Russell Coker

1 Jan 2015 oubiwann   » (Journeyer)

Scientific Computing and the Joy of Language Interop

The scientific computing platform for Erlang/LFE has just been announced on the LFE blog. Though written in the Erlang Lisp syntax of LFE, it's fully usable from pure Erlang. It wraps the new py library for Erlang/LFE, as well as the ErlPort project. More importantly, though, it wraps Python 3 libs (e.g., math, cmath, statistics, and more to come) and the ever-eminent NumPy and SciPy projects (those are in-progress, with matplotlib and others to follow).

(That LFE blog post is actually a tutorial on how to use lsci for performing polynomial curve-fitting and linear regression, adapted from the previous post on Hy doing the same.)

With the release of lsci, one can now start to easily and efficiently perform computationally intensive calculations in Erlang/LFE (and any other Erlang Core-compatible language, e.g., Elixir, Joxa, etc.) That's super-cool, but it's not quite the point ...

While working on lsci, I found myself experiencing a great deal of joy. It wasn't just the fact that supervision trees in a programming language are insanely great. Nor just the fact that scientific computing in Python is one of the best in any language. It wasn't only being able to use two syntaxes that I love (LFE and Python) cohesively, in the same project. And it wasn't the sum of these either ;-) You probably see where I'm going with this ;-) The joy of these and many other great aspects of inter-operation between multiple powerful computing systems is truly greater than the sum of its parts.

I've done a bunch of Julia lately and am a huge fan of this language as well. One of the things that Julia provides is explicit interop with Python. Julia is targeted at the world of scientific computing, aiming to be a compelling alternative to Fortran (hurray!), so their recognition of the enormous contribution the Python scientific computing community has made to the industry is quite wonderful to see.

A year or so ago I did some work with Clojure and LFE using Erlang's JInterface. Around the same time I was using LFE on top of  Erjang, calling directly into Java without JInterface. This is the same sort of Joy that users of Jython have, and these are more examples of languages and tools working to take advantage of the massive resources available in the computing community.

Obviously, language inter-op is not new. Various FFIs have existed for quite some time (I'm a big fan of the Common Lisp CFFI), but what is new (relatively, that is ... as I age, anything in the past 10 years is new) is that we are seeing this not just for programs reaching down into C/C++, but reaching across, to other higher-level languages, taking advantage of their great achievements -- without having to reinvent so many wheels.

When this level of cooperation, credit, etc., is done in the spirit of openness, peer-review, code-reuse, and standing on the shoulders of giants (or enough people to make giants!), we get joy. Beautiful, wonderful coding joy.

And it's so much greater than the sum of the parts :-)

Syndicated 2015-01-01 20:56:00 (Updated 2015-01-01 21:01:38) from Duncan McGreggor

1 Jan 2015 dmarti   » (Master)

QoTD: Doc Searls

Nobody is writing with more insight and depth on the subject of online advertising, and doing the work required to understand what kinds of advertising best support (and hurt) what's left of professional journalism in the networked world.

Doc Searls

(This is about me, believe it or not. if I can't get a conference speaking slot out of that...)

Syndicated 2015-01-01 16:48:38 from Don Marti

1 Jan 2015 dmarti   » (Master)

Predictions for 2015

(No long list of predictions, just a news story that we might see this year...)

Go For Bro? Maybe later

SAN FRANCISCO (Apr. 1, 2015) is a hot new startup that helps people handle those routine chores that nobody has time for. But the online community-building drama has us filing this one under "maybe later."

When I needed my old DVD collection ripped and Ebay-ed, I hit the Broconomy app, which is well-designed and snappy (see screenshot). It quoted me a fifteen-minute wait time and dispatched a worker to my apartment. So far so good, but twenty minutes later, no worker.

It's not just me. Flaky is par for the course, according to disappointed app store comments. Broconomy CEO J.R. Dobbs Jr. blames an "online anti-tech hate campaign" by loosely organized Internet trolls. The best-known of the troll groups calls itself "International Workers of the Web." "Our workers appreciate the opportunity to make some extra income, and only a few online trolls are trying to make things worse for everyone," Dobbs said in an email interview. The online troll campaign is not associated with the Industrial Workers of the World.

According to one "union" forum, members are instructed to sign up for sharing economy sites, and subscribe to alerts of "flash strikes" on particular companies and ZIP codes. (The forum is listed as a hate site, so I can't link to it here.)

Social media expert Prof. Jane Brooklyn said in an interview that the troll campaign uses familiar methods of Internet humor to keep members engaged. Participants often post screenshots of tasks on the Broconomy app along with captions mocking the customers. "Some common themes are inability to recognize common food items or failure to complete toilet training," Brooklyn said.

The most attention-getting posts are those where a worker cancels a job at the last minute, then intercepts the new worker on the way to the customer site and makes a new recruit for the "union" campaign. "Instead of threatening or insulting workers who don't participate, they typically offer a stuffed toy animal, a cupcake, and the same small payment that the worker would have received for the original job, in cash," Brooklyn said.

Last fall, Broconomy was the subject of an investigation by the California Department of Labor. After a worker was trapped in the collapse of a customer's "Hobbit"-themed birthday party, and later rescued, the state accused the company of failing to carry workers' compensation insurance. The complaint was settled this year. Terms of the settlement are confidential, but all Broconomy workers in California were required to re-register as independent contractors for Broconomy's Qatari affiliate.

The Good

  • Clean, intuitive app design.

  • Low price

The Bad

  • Trolls, trolls, trolls! There ought to be a law...

Syndicated 2015-01-01 15:05:26 from Don Marti

1 Jan 2015 badvogato   » (Master)

Happy YE Year to all y'all.

Three things I remembered: ttyl

31 Dec 2014 crhodes   » (Master)

a year in review

A brief retrospective, partly brought to you by grep:

  • CATS credits earnt: 30 (15 + 7.5 + 7.5) at level 7
  • crosswords solved: >=40
  • words blogged: 75k
  • words blogged, excluding crosswords: 50k
  • SBCL releases made: 12 (latest today!)
  • functioning ARM boards sitting on my desk: 3 (number doing anything actually useful, beyond SBCL builds: 0 so far, working on it)
  • emacs packages worked on: 2 (iplayer, squeeze)
  • public engagement events co-organized: 1
  • ontological inconsistencies resolved: 1
  • R packages made: 1 (University PR departments offended: not enough)

Blogging’s been a broad success; slightly tailed off of late, what with

  • Crawl Dungeon Sprint wins: 2 (The Pits, Thunderdome: both GrFi)

so there’s an obvious New Year’s Resolution right there.

Happy New Year!

Syndicated 2014-12-31 22:27:37 from notes

31 Dec 2014 dmarti   » (Master)

2015: the year to save web advertising?

I spent some time with targeted online advertising from the advertiser side this year, looking at a data-packed dashboard and tweaking all kinds of stuff. Did not get to spend that much time on it, since I have to do a lot of different things for work, but did get to learn and try it out. And managing targeted ads is like all the most habit-forming parts of solving crossword puzzles, gambling for real money, checking and re-checking social sites, and getting sucked into a real-time strategy game. All at the same time.

But in the long run...

The more targetable an ad medium is, the more it provokes filters and regulation. (Do Not Call, the junk fax ban, spam filters, web adblock...)

The less targetable an ad medium is, the more it can support content, build brands, and get attention. (Even people who spent perfectly good money on a TiVo still watch most of the TV commercials.)

In 2015, we have an opportunity to save web advertising, by moving toward less targetability. While database marketers are all fired up about ads in native mobile apps, the main web browsers all have decent tracking protection available to users who choose to turn it on.

Individual sites and brands can't unilaterally give up the Big Data habit all at once, but I can help make my own site's users less trackable, which helps me a little bit right away and a lot more later.

Now is the chance to inform, nudge, and tempt users into doing what's right for publishers and brands. Some users like the "getting away with something" feeling of running an ad blocker, but IMHO most people will feel better about helping their favorite sites by getting tracking protection turned on.

Please have a look at and let me know what you think. JavaScript bug reports and pull requests welcome.

End of year bonus links

Mathew Ingram: It’s getting harder to tell what’s a real Silicon Valley startup and what’s a parody

JR Hennessy: The tech utopia nobody wants: why the world nerds are creating will be awful

Lisa Vaas: Cat stalker knows where your kitty lives (and it's your fault)

BOB HOFFMAN: Why Your Social Media Strategy Sucks

Lauren Kirchner: Amway Journalism

Kashmir Hill: Forget Glass. Here Are Wearables That Protect Your Privacy.

Derek Thompson: The New York Times Is a Great Company in a Terrible Business

Robinson Meyer: I Drank a Cup of Hot Coffee That Was Overnighted Across the Country SHUT UP AND TAKE MY MONEY.

Rebecca J. Rosen: Actually, Some Material Goods Can Make You Happy

Dana: Browsewrap Agreements Must Be Brought to Users’ Attention

Richard Byrne Reilly: NSA spying might have affected U.S. tech giants more than we thought

AdExchanger: Beware Of Publishers’ Walled Gardens

Angèle Christin: When it comes to chasing clicks, journalists say one thing but feel pressure to do another

AdExchanger: Answering A Squirrelly Question: 'What Is PII?'

Sarah Sluis: Ghostery and IPONWEB Team Up To Bring Fraud Detection To RTB

John Robb: The BEST grocery store brand in the US right now is Market Basket. Here’s their secret.

Sean Blanchfield: 2014 Report – Adblocking Goes Mainstream

eaon pritchard: influencer theory is the wrong end of the stick

ronan: ‘The Only Way To Combat Ad Fraud Is With Real-Time Transparency’

Allison Schiff: Fraud-day With Dstillery: Everyone Is Responsible For Fighting Fraud (In the long run, fighting fraud means fixing tracking.)

Tauriq Moosa: Comment sections are poison: handle with care or remove them

Matthew Garrett: My free software will respect users or it will be bullshit

Katerina Pavlidis: The day I realised my personal data was no longer mine

sil: The next big thing is privacy

Stephany Fan: Do Beacons Track You? No, You Track Beacons

ronan: Facebook Poses Further Threat To Google With Full FAN Roll Out

Alex Hern: Sir Tim Berners-Lee speaks out on data ownership

ronan: Ad Tech Firms Prepare For Bolstered US Profile

Doc Searls: How Radio Can Defend the Dashboard

Federal Trade Commission: Online ads roll the dice ("Could different groups of people, including “protected classes,” see entirely different ads? If the offer and group are subject to legal protections, could the result have a disproportionate adverse impact? Even if they are not subject to legal protection, can some ads be offensive or harmful to some audiences?")

BOB HOFFMAN: Amazing Tale Of Online Ad Fraud

AdExchanger: Facebook And Google Are Bringing Walled Gardens Back

Erika Napoletano: Why Copyblogger Is Killing Its Facebook Page (via Street Fight)

ronan: Domain Identity Theft Is The Fraudsters’ Latest Ponzi Scheme

Computerworld: Hackers strike defense companies through real-time ad bidding

Amanda Tomas: My Day Interviewing For The Service Economy Startup From Hell (via The Awl)

Malvertising Campaign on Yahoo, AOL, Triggers CryptoWall Infections

jonathan: How Verizon’s Advertising Header Works (via The Not-So Private Parts)

Ad blocker that clicks on the ads (via OneAndOneIs2)

BOB HOFFMAN: Hypocrisy By Proxy

Susan Crawford: Jammed (via Doc Searls Weblog)

Silicon Valley will destroy your job: Amazon, Facebook and our sick new economy

Matthew Yglesias: Car dealers are awful. It's time to kill the dumb laws that keep them in business.

Rachel Goodman, Staff Attorney, ACLU Racial Justice Program: FTC Needs to Make Sure Companies Aren’t Using Big Data to Discriminate

Gregory Ferenstein: Study: more Americans fear spying from corporations than the government (also, clowns)

Andrew Casale: A Better Programmatic Supply Chain Will Root Out Fraud

Bourree Lam: Newspaper Ad Revenue Fell $40 Billion in a Decade

Ben Williams: Acceptable Advertising – before and after (via Adblock Plus and (a little) more - Blog)

Jim Edwards: The Guardian Is Being Swamped With 'Dark Traffic' And No One Knows Where It's Coming From ("Dark" traffic is worthless to low-reputation sites, though. The less information available on individual users, the more that site reputation matters.)

Greg Sterling: First Half Ad Revenue: Search Dominates PC Ads But Not Mobile

Napier Lopez: Reuters moves conversation to social media as it kills comments on its news stories

MediaPost | Garfield at Large: These 15 Hottest Naked Celebrity Diets For Getting Audience Attention Will Shock You

Nicholas Nethercote: Quantifying the effects of Firefox’s Tracking Protection

Bloomberg: High-Speed Ad Traders Profit by Arbitraging Your Eyeballs

Pew Research Center's Internet & American Life Project: Americans Consider Certain Kinds of Data to be More Sensitive than Others

Baekdal Plus: Clickbait is the Greatest Threat To Your Future Success (by @baekdal)

eaon pritchard: sincerity is bullshit

Digg Top Stories: Somebody’s Already Using Verizon’s ID To Track Users

Daniel Nazer: Victory! Court Finally Throws Out Ultramercial’s Infamous Patent on Advertising on the Internet

Doc Searls: Some thoughts on App Based Car Services (ABCS)

Brian Merchant: How the World's Largest PR Firm Uses Big Data to Grow Its Astroturf Campaigns

marks: Dark social traffic in the mobile app era -- Fusion (via Digiday)

Darren: My journey in becoming a Mozillian

Simon Phipps: A shadowy consortium opposes your Internet privacy

Michael Sebastian: Major Ad Buyer Tells Magazines It Won't Buy Tablet Circ Like It's Print Any More

Frédéric Filloux: The Rise of AdBlock Reveals A Serious Problem in the Advertising Ecosystem

Kelly Jackson Higgins: Online Ad Fraud Exposed: Advertisers Losing $6.3 Billion To $10 Billion Per Year

TheMediaBriefing Analysis: Guardian CEO: 'The idea we will survive by becoming a technology company is garbage'

Chris Smith: How publishers combat ad blockers

BOB HOFFMAN: Charts, Graphs, Facts, and Fiction

Ed Lee: Getting Tiles Data Into Firefox

Daniel Terdiman: Facebook goes all-in on advertising after years of laying groundwork

Ben Williams: Adblock Plus is best defense against 'malvertising’ according to new study (via Adblock Plus and (a little) more - Blog)

Dylan Tweney: Tech VCs poured millions into media companies in 2014 — but it’s not clear why

Leo Mirani: The secret to the Uber economy is wealth inequality

Bruce Schneier: Over 700 Million People Taking Steps to Avoid NSA Surveillance

Antonio Cangiano: Don’t Count on Ads

Barry Levine: Sorenson releases Spark tools to take TV to the next interactive level

Brian Braiker: Michael Wolff on digital media in 2015: ‘A deluge of crap’ (via Doc Searls Weblog)

Gretchen Shirm: The push: product placement in fiction

Jay Rosen: When to quit your journalism job (via The Big Picture)

Ryan Gantz: Bad community is worse than no community

Alisha Ramos: Reporters, designers, and developers become BFFs (via Pressthink)

Timothy Geigner: Librarians Are Continuing To Defend Open Access To The Web As A Public Service

Justin Peters: The Sony Emails Are Fair Game

Ricardo Bilton: 4 publishers that killed their comment sections in 2014 (via Marketing Land » Marketing Day)

John Koetsier: Mobile ad tech to 2017: App-centric, private markets, native ads, and closed loops

george tannenbaum: Writing copy.

Steven Englehardt: How cookies can be used for global surveillance

AdExchanger: The Publisher’s Guide To Domain Spoofing

Jeffrey Zeldman: Unexamined Privilege is the real source of cruelty in Facebook’s “Your Year in Review” (via One Foot Tsunami and swissmiss)

Mark Copyranter: IT’S THE END OF ADVERTISING CREATIVITY AS WE KNOW IT (and you should not feel fine).

RichardStacy: Organic social media is dead: but was it ever alive?

Syndicated 2014-12-31 16:05:40 from Don Marti

31 Dec 2014 eMBee   » (Journeyer)

learning smalltalk with Google Code In

For years i have been meaning to learn smalltalk. My first exploration started about 10 years ago while teaching two children to make a game with squeak. Then i worked through a tutorial about making a simple game. Unfortunately it didn't capture my interest, so my attempts to learn smalltalk stalled as i searched for a project that i could do with it.

Why do i want to learn smalltalk? Because it is the first object-oriented language. Many of the OO concepts were invented in smalltalk. There is also the concept of working in an image that not only contains my code but also a full IDE which is used to update my code at runtime. Updating code at runtime is a concept that has been with me for more than 20 years now, ever since i started programming MUDs in LPC and writing modules for the Spinner/Roxen webserver in Pike. Pike allows recompiling classes at runtime. Any new instances will be made from the new class, while old instances remain as is. If the compilation fails, the class is not replaced and the old class continues to work. This way it is possible to make changes on a live server without restarting and disrupting ongoing requests. A decade later i discovered sTeam, the platform that also drives this very website. It takes this process even further: sTeam persists code and objects in a database. While in Roxen objects live as long as it takes to process a request, in sTeam objects are permanent, much like in a smalltalk image. sTeam then adds the capability to update live objects with new class implementations. The image concept of smalltalk is therefore already very familiar, and the major difference is smalltalk's GUI.

Recently a friend asked me what it would take to build a text search application for the Baha'i writings in chinese. There is one for english and other western languages, but not for chinese, and it does not run on mobile devices. It is also not Free Software, so i can't use it as a base to improve. But i didn't really want to take on a new project either so i just filed the idea for the time being.

One of my customers manages access to several internal resources through htaccess and htpasswd files. Because they have many interns who need access to some of these resources, and because the resources are now spread over multiple servers, managing the files by hand is becoming more and more cumbersome. It also does not help that the salt module we could use for this depends on apache's helper tools, which we cannot install because apache conflicts with the nginx we are running. So i started exploring alternatives. One alternative is a different way for nginx to verify access: it can make a request to an external service, which then grants or rejects access depending on the resource and credentials. This could be implemented as a webservice with a webinterface to manage the users. I looked for existing applications that would get me part of the way, but found nothing suitable.
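
The nginx mechanism described here is the auth_request subrequest (provided by the ngx_http_auth_request_module): for each protected request, nginx asks a separate HTTP service whether to allow it, and a 2xx reply allows while 401/403 denies. A minimal configuration sketch, where the protected path, the /auth location, and the user-management service on 127.0.0.1:9000 are all hypothetical names for illustration:

```nginx
location /internal-docs/ {
    # Before serving anything under this path, nginx issues a
    # subrequest to /auth; 2xx allows, 401/403 denies.
    auth_request /auth;
}

location = /auth {
    # Internal-only endpoint that forwards the check to the
    # user-management webservice.
    internal;
    proxy_pass http://127.0.0.1:9000/check;
    # The subrequest carries no body; pass the original URI
    # along as a header instead so the service knows the resource.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

Credentials arrive at the service via the usual Authorization header, so basic auth keeps working for the interns while the passwords live in one place instead of scattered htpasswd files.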

Enter Google Code-In: FOSSASIA invited the BLUG to join them as mentors.

At first i put up tasks for the community-calendar project, but then i realized that this was an opportunity to explore new ideas. Figuring that teaching is the best way to learn, i put up those project ideas as tasks for the students. I could ask students to learn and explore, and finally work on those projects. I would pick the technology and guide the students through a sequence of tasks to acquire the skills needed to implement the actual applications. This was my chance to get back into smalltalk. Since Code-In targets middle and high school students, it is quite unlikely that any of them already know smalltalk, or have even heard of it. So in a way this will introduce a few students to smalltalk. I picked pharo because i feel it is going in the right direction, trying to improve itself and also adding things like commandline support.

The desktop application was straightforward: find out how to embed text documents in the image and make them searchable.

The web application took more exploration. I wanted to do it with a RESTful api and a javascript frontend. Again, the frontend was easy to define: create a user management interface. For the backend, the question was which web framework to use. AIDA/web has built-in user management and REST-style url support by default. Seaside includes a REST module, but both are strong on generating html, which i am not interested in. Then there is iliad, which appears more lightweight. Eventually i figured i could just let the students explore each, and i created a task for each tutorial that i could find:

(Some of these i repeated because the student who did them the first time didn't pick up the follow-up tasks.)

Finally i discovered that Zinc, the HTTP server used by most frameworks, is powerful enough to build a RESTful API without all the templating extras that the above frameworks provide. I also discovered Teapot, a microframework that might be useful.
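
To show why a microframework on top of Zinc is enough, here is a minimal Teapot sketch of the kind of REST endpoint the students would build. The /users routes and the Dictionary-backed user store are invented for this example; Teapot's cascade of route definitions and its <name> URL placeholders follow its documentation, but treat the details as a sketch rather than a finished design:

```smalltalk
"Sketch only: assumes Teapot has been loaded into the Pharo image.
The routes and the Dictionary-backed user store are hypothetical."
| users |
users := Dictionary new.
users at: 'alice' put: 'intern'.
users at: 'bob' put: 'staff'.

Teapot on
	"List the known user names, one per line."
	GET: '/users' -> [ :req |
		String streamContents: [ :s |
			users keysDo: [ :k | s nextPutAll: k; cr ] ] ];
	"Look a single user up via the <name> URL placeholder."
	GET: '/users/<name>' -> [ :req |
		users at: (req at: #name) ifAbsent: [ 'unknown user' ] ];
	start.
```

With `Teapot on` the server listens on Teapot's default port, so the routes can be exercised with a plain `curl http://localhost:1701/users` while the code is edited live in the image.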

Once the students are familiar with the smalltalk environment, they can move on to the next steps:

Of course there are also tasks for the front-end

Related is also this task about a file editor, which i believe should make it easier to edit static assets like html and css pages from within the image:

Syndicated 2014-12-31 06:24:17 (Updated 2014-12-31 18:10:38) from DevLog
