Recent blog entries

31 Mar 2015 bagder   » (Master)

The state and rate of HTTP/2 adoption

The protocol HTTP/2, as defined in draft-17, was approved by the IESG and is being implemented and deployed widely on the Internet today, even before it has turned up as an actual RFC. Back in February, upwards of 5% or maybe even more of the web traffic was already using HTTP/2.

My prediction: We’ll see >10% usage by the end of the year, possibly as much as 20-30%, depending a little on how fast some of the major and most popular platforms switch (Facebook, Instagram, Tumblr, Yahoo and others). In 2016 we might see HTTP/2 serve a majority of all HTTP requests – done by browsers at least.

Counted how? Yeah, the second I mention a rate I know you guys will start throwing hard questions at me, like what exactly I mean. What is the Internet and how would I count this? Let me express it loosely: the share of HTTP requests (by volume of requests, not by bandwidth of data, and not just counting browsers). I don’t know how to measure it and we can debate the numbers in December – I guess we can all end up being right depending on what we think is the right way to count!

Who am I to tell? I’m just a person deeply interested in protocols and HTTP/2: I’ve been involved in the HTTP working group for years and I also work on several HTTP/2 implementations. You can guess as well as I can, but this just happens to be my blog!

The HTTP/2 Implementations wiki page currently lists 36 different implementations. Let’s take a closer look at the current situation and prospects in some areas.

Browsers

Firefox and Chrome have had solid support for a while now. Just use a recent version and you’re good.

Internet Explorer has been shown speaking HTTP/2 fine in a tech preview. So, run that or wait for it to ship in a public version soon.

There is no news from Apple about support in Safari. Give up on them and switch over to a browser that keeps up!

Other browsers? Ask them what they do, or replace them with a browser that supports HTTP/2 already.

My estimate: By the end of 2015 the leading browsers with a market share way over 50% combined will support HTTP/2.

Server software

Apache HTTPd is still the most popular web server software on the planet. mod_h2 is a recent module for it that can speak HTTP/2 – still in “alpha” state. Give it time and help out in other ways and it will pay off.

Nginx has told the world they’ll ship HTTP/2 support by the end of 2015.

IIS was showing off HTTP/2 in the Windows 10 tech preview.

H2O is a newcomer on the market with a focus on performance, and it has shipped with HTTP/2 support for a while already.

nghttp2 offers an HTTP/2 => HTTP/1.1 proxy (and lots more) to put in front of your old server, which can then help you deploy HTTP/2 at once.
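As a rough sketch of what that can look like, fronting an existing HTTP/1.1 server with nghttpx might be as simple as the following; the addresses, port numbers and certificate paths are placeholders, and option spellings can differ between nghttp2 versions, so check your local docs:

$ nghttpx --frontend='*,443' --backend='127.0.0.1,8080' \
    /path/to/server.key /path/to/server.crt

Clients that speak HTTP/2 then negotiate it with nghttpx over TLS, while the old server behind it keeps talking plain HTTP/1.1.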

Apache Traffic Server supports HTTP/2 fine. Will show up in a release soon.

Also, netty, jetty and others are already on board.

HTTPS initiatives like Let’s Encrypt help make it even easier to deploy and run HTTPS on your own sites, which will smooth the way for HTTP/2 deployments on smaller sites as well. Getting sites onto the TLS train will remain a hurdle and will perhaps be the single biggest obstacle to even wider adoption.

My estimate: By the end of 2015 the leading HTTP server products with a market share of more than 80% of the server market will support HTTP/2.

Proxies

Squid works on HTTP/2 support.

HAproxy? I haven’t gotten a straight answer from that team, but Willy Tarreau has been actively participating in the HTTP/2 work all the time so I expect them to have work in progress.

While very critical of the protocol, PHK of the Varnish project has said that Varnish will support it if it gets traction.

My estimate: By the end of 2015, the leading proxy software projects will either be shipping HTTP/2 support or have it well underway.

Services

Google (including YouTube and other sites in the Google family) and Twitter have run with HTTP/2 enabled for months already.

Lots of existing services offer SPDY today and I would imagine most of them are pondering how to switch to HTTP/2, as Chrome has already announced that it will drop SPDY during 2016 and Firefox will also abandon SPDY at some point.

My estimate: By the end of 2015 lots of the top sites of the world will be serving HTTP/2 or will be working on doing it.

Content Delivery Networks

Akamai plans to ship HTTP/2 by the end of the year. Cloudflare has previously stated that they will “support HTTP/2 just as soon as it is practical”.

Amazon has not publicly said anything that I can find about when they will support HTTP/2 on their services.

Not a totally bright situation, but I also believe (or hope) that as soon as one or two of the bigger CDN players start to offer HTTP/2, the others will feel more pressure to follow suit.

Non-browser clients

curl and libcurl have supported HTTP/2 for months now, and the HTTP/2 implementations page lists available implementations for just about all major languages: node-http2 for JavaScript, http2-perl, http2 for Go, Hyper for Python, OkHttp for Java, http-2 for Ruby and more. If you do HTTP today, you should be able to switch over to HTTP/2 relatively easily.
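For example, a curl built against nghttp2 can be told to attempt HTTP/2 explicitly (the URL here is just a placeholder):

$ curl --http2 -I https://example.com/

If the server agrees during the TLS negotiation, the response headers come back over HTTP/2; otherwise curl falls back to HTTP/1.1.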

More?

I’m sure I’ve forgotten a few obvious points but I might update this as we go as soon as my dear readers point out my faults and mistakes!

How long is HTTP/1.1 going to be around?

My estimate: HTTP 1.1 will be around for many years to come. A double-digit percentage share of the existing sites on the Internet (and who knows how many that aren’t even accessible from the Internet) will stay on it for the foreseeable future. For technical reasons, for philosophical reasons and for good old we’ll-never-touch-it-again reasons.

The survey

Finally, with the help of a little poll I asked friends on twitter, G+ and Facebook what they think the HTTP/2 share will be by the end of 2015. This does of course not produce any statistically sound number; it is just a collection of guesses from a random set of people. A quick poll to get a rough feel. This is how the 64 responses I received were distributed:

http2 share at end of 2015

Evidently, if you take the median of these results, the middle point falls between the 5-10 and 10-15 buckets. I’ll make it easy and say that the poll showed a group estimate of 10%. Ten percent of the total HTTP traffic to be HTTP/2 at the end of 2015.

I didn’t vote here, but I would’ve checked the 15-20 choice: a fair bit over the median but only slightly into the top quarter.

In plain numbers this was the distribution of the guesses:

0-5% 29.1% (19)
5-10% 21.8% (13)
10-15% 14.5% (10)
15-20% 10.9% (7)
20-25% 9.1% (6)
25-30% 3.6% (2)
30-40% 3.6% (3)
40-50% 3.6% (2)
more than 50% 3.6% (2)

Syndicated 2015-03-31 05:54:36 from daniel.haxx.se

30 Mar 2015 benad   » (Apprentice)

Electricity Savings: All Those Blinking Lights

As part of my "spring cleaning", and partly inspired by this "Earth Hour" thing, I did an inventory of all the connected electrical devices around my apartment.

I basically categorized them this way:

  1. Devices that are used all the time and must be connected: Lights, electrical heating, fridge, water heater and so on.
  2. Devices that are seldom used, but cannot be turned off completely or disconnected easily: Oven, washer, dryer, and so on.
  3. Devices that are on all the time, for some reason.
  4. Devices that are used enough to warrant leaving them in "low-power standby mode".
  5. Devices I should turn off completely or disconnect when not used.

While I can't do anything for the devices in categories 1 and 2, other than replacing them, my goal was to move as many devices to either standby or turned off as possible. For example, my "home server PC", a Mac mini, doesn't use much power, but do I really need to have it running all the time? So I programmed it to be in standby, and wake up only during the afternoons on weekdays.
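For reference, on OS X that kind of schedule can be set from the command line with pmset; a sketch with made-up times (adjust to taste):

$ sudo pmset repeat wakeorpoweron MTWRF 12:00:00 sleep MTWRF 18:00:00

That wakes the machine at noon and puts it back to sleep at 18:00 on weekdays, and "pmset -g sched" shows the current schedule.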

For devices already in standby mode, are they used enough to justify it? For example, my Panasonic Blu-Ray player kept being warm since it remained in standby mode. For what? About 10 seconds of boot time? Since my TV takes that much time to "boot up" anyway, I just need to power on both at the same time, and I'll save all the electricity of keeping the player in standby.

I am generally less worried about laptops, tablets and other battery-operated mobile devices when they sit in standby. They are already quite energy-efficient, running on batteries or not, especially when not actively used. Still, unplugging them from chargers reduces the risk if there's an electrical surge in the apartment's wiring.

Syndicated 2015-03-30 20:26:00 from Benad's Blog

30 Mar 2015 dmarti   » (Master)

It's not about freedom

Doc Searls writes:

We hold as self-evident that personal agency and independence matter utterly, that free customers are more valuable than captive ones, that personal data belongs more to persons themselves than to those gathering it, that conscious signaling of intent by individuals is more valuable than the inferential kind that can only be guessed at, that spying on people when they don’t know about it or like it is wrong, and so on.

I'm going to agree with Doc that these are all good and important principles.

But then I'm going to totally ignore them.

Yes, it is "self-evident" that it's important to behave as a decent human being in online interactions, and in marketing projects. (Complexity dilutes understanding of a system but not moral responsibility for participating in a system. Just because you don't understand how your marketing budget gets diverted to fraud does not mean that you aren't ultimately responsible when you end up funding malware and scams.) Thinking about user rights is important. 30 years ago, Richard Stallman released the GNU Manifesto, which got people thinking about the ethical aspects of software licensing, and we need that kind of work about information in markets, too.

But that's not what I'm on about here. Targeted Advertising Considered Harmful is just background reading for a marketing meeting. And I've been to enough marketing meetings to know that, no matter how rat-holed and digressed the discussion gets, Freedom is never on the agenda.

So I'm going to totally ignore the Freedom side of discussing the targeted ad problem. You don't have to worry about some marketing person clicking through to this site and saying, WTF is this freedom woo-woo? It's all pure, unadulterated, 100% marketing-meeting-compatible business material, with some impressive-looking citations to Economics papers to give it some class.

Big Data proponents like to talk about "co-creating value," so let's apply that expression to advertising. The advertiser offers signal, and the reader offers attention. The value is in the exchange. Here's the point that we need to pick up on, and the point that ad blocker stats are shoving in our face until we get it. When one side's ability to offer value goes away—when a targeted ad ceases to carry signal and becomes just a windshield flyer—there's no incentive for the other side to participate in the exchange. Freedom or no freedom. Homo economicus himself would run a spam filter, or hang up on a cold call, or block targeted ads.

The big problem for web sites now is to get users onto a publisher-friendly tracking protection tool that facilitates advertising's exchange of value for value, before web advertising turns into a mess of crappy targeted ads vs. general filters, the way email spam has.

Syndicated 2015-03-30 14:33:29 from Don Marti

30 Mar 2015 Skud   » (Master)

Visiting San Francisco, Montreal, and Ottawa

Just a quick note to say that I’ll be in North America starting next week, for about two weeks:

  • San Francisco April 6th-10th (meetings, coworking, jetlag recovery, tacos, etc)
  • Montreal April 10th-15th (AdaCamp Montreal — I’m fully booked up from the afternoon of the 12th onward, I’m afraid, but have some time before that)
  • Ottawa April 15th-19th (friends, maybe meetings, coworking, etc)
  • San Francisco, again April 19th-21st

If you’re in any of those places and you’d like to catch up, ping me! I’ve got a fair bit of flexibility so I’m up for coffee/meals/coworking/whatever.

I’m particularly interested in talking with people/groups/orgs about:

  • Open food data, open source for food growers, etc — especially interoperability and linked open data!
  • Sustainable (open source) tech for sustainable (green) communities — why do so many sustainability groups use Facebook and how can we choose tech that better reflects our values?
  • Community management beyond/outside the tech bubble (we didn’t invent this thing; how do we learn and level up from here?)
  • Diversity beyond 101 level — how can we keep pushing forward? What’s next?

I should probably also note that I’ve got some capacity for short-medium term contract work from May onward. For the last 6 months or so I’ve been doing a lot of diversity consulting: I organise/lead AdaCamps (feminist unconferences for women in open tech/culture) around the world, and more recently I’ve been working with the Wikimedia Foundation on their Inspire campaign to address the gender gap. I’m interested in doing more along the same lines, so if you need someone with heaps of expertise at the intersection of open stuff and diversity/inclusiveness, let’s talk!

Syndicated 2015-03-30 13:30:34 from Infotropism

29 Mar 2015 marnanel   » (Journeyer)

in which Final Fantasy is discovered to be a computer game

Today someone made a reference I didn't get to something called a chockoboo (I think). I looked confused, and they said, "Have you heard of Final Fantasy?" "Yes," I said, "but I'm not sure what it is. A film, maybe, or a computer game?" There followed a great deal of explanation which I have now forgotten because I have no context to attach it to, except that FF is a large series of complicated computer games and that chockoboos are important in some of them. I think they must have explained what a chockoboo actually *is*, but if they did I forgot it.

The main takeaway, however, was an alarming realisation that I do this too, to almost everyone I meet.

This entry was originally posted at http://marnanel.dreamwidth.org/332483.html. Please comment there using OpenID.

Syndicated 2015-03-29 17:23:34 from Monument

28 Mar 2015 mdz   » (Master)

What I think about thought

Only parts of us will ever
touch o̶n̶l̶y̶ parts of others –
one’s own truth is just that really — one’s own truth.
We can only share the part that is u̶n̶d̶e̶r̶s̶t̶o̶o̶d̶ ̶b̶y̶ within another’s knowing acceptable t̶o̶ ̶t̶h̶e̶ ̶o̶t̶h̶e̶r̶—̶t̶h̶e̶r̶e̶f̶o̶r̶e̶ so one
is for most part alone.
As it is meant to be in
evidently in nature — at best t̶h̶o̶u̶g̶h̶ ̶ perhaps it could make
our understanding seek
another’s loneliness out.

– unpublished poem by Marilyn Monroe, via berlin-artparasites

This poem inspired me to put some ideas into words this morning, an attempt to summarize my current working theory of consciousness.

Ideas travel through space and time. An idea that exists in my mind is filtered through my ability to express it somehow (words, art, body language, …), and is then interpreted by your mind and its models for understanding the world. This shifts your perspective in some way, some or all of which may be unconscious. When our minds encounter new ideas, they are accepted or rejected, reframed, and integrated with our existing mental models. This process forms a sort of living ecosystem, which maintains equilibrium within the realm of thought. Ideas are born, divide, mutate, and die in the process. Language, culture, education and so on are stable structures which form and support this ecosystem.

Consciousness also has analogues of the immune system, for example strongly held beliefs and models which tend to reject certain ideas. Here again these can be unconscious or conscious. I’ve seen it happen that if someone hears an idea they simply cannot integrate, they will behave as if they did not hear it at all. Some ideas can be identified as such a serious threat that ignoring them is not enough to feel safe: we feel compelled to eliminate the idea in the external world. The story of Christianity describes a scenario where an idea was so threatening to some people that they felt compelled to kill someone who expressed it.

A microcosm of this ecosystem also exists within each individual mind. There are mental structures which we can directly introspect and understand, and others which we can only infer by observing our thoughts and behaviors. These structures communicate with each other, and this communication is limited by their ability to “speak each other’s language”. A dream, for example, is the conveyance of an idea from an unconscious place to a conscious one. Sometimes we get the message, and sometimes we don’t. We can learn to interpret, but we can’t directly examine and confirm if we’re right. As in biology, each part of this process introduces uncountable “errors”, but the overall system is surprisingly robust and stable.

This whole system, with all its many minds interacting, can be thought of as an intelligence unto itself, a gestalt consciousness. This interpretation leads to some interesting further conclusions:

  • The notion that an individual person possesses a single, coherent point of view seems nonsensical
  • The separation between “my mind” and “your mind” seems arbitrary
  • The attribution of consciousness only to humans, or only to living beings, seems absurd

Syndicated 2015-03-28 16:50:22 from We'll see | Matt Zimmerman

27 Mar 2015 marnanel   » (Journeyer)

Image accessibility

I have an accessibility idea. I shall probably do it, unless it turns out to be fundamentally flawed. Your thoughts are appreciated!

1) A site that takes an uploaded JPEG, and a string, and returns the JPEG with the EXIF comment field set to that string.

2) Browser extensions for Firefox and Chrome which set the alt property of each JPEG on a page to its comment field, if it has one.

This means you can describe an image before you post it, and that description travels with the image. Thoughts?
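
For what it's worth, the uploading-site half of the idea can be approximated today from a shell with exiftool; the description and filename below are placeholders, and whether to use the EXIF UserComment tag or the plain JPEG comment is still an open question:

$ exiftool -UserComment="A tabby cat asleep on a red armchair" photo.jpg
$ exiftool -UserComment photo.jpg

The second command reads the field back out, which is roughly what the browser extension side would do before copying the text into the alt attribute.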

This entry was originally posted at http://marnanel.dreamwidth.org/332220.html. Please comment there using OpenID.

Syndicated 2015-03-27 15:27:19 (Updated 2015-03-27 15:27:29) from Monument

26 Mar 2015 caolan   » (Master)

gtk3 vclplug, some more gesture support

Now gtk3 long-press support to go with swipe

The demo: a long-press in presentation mode brings up the context menu for switching between using the pointer for draw-on-slide vs normal slide navigation.

Syndicated 2015-03-26 14:53:00 (Updated 2015-03-26 14:53:33) from Caolán McNamara

26 Mar 2015 caolan   » (Master)

gtk3 vclplug, basic gesture support

gtk3's gesture support is the functionality I'm actually interested in, so now that presentations work in full-screen mode, I've added basic GtkGestureSwipe support to LibreOffice (for gtk3 >= 3.14) and hooked it up to the slideshow, so swiping towards the left now advances to the next slide, and swiping to the right goes to the previous slide.

Syndicated 2015-03-26 09:35:00 (Updated 2015-03-26 09:35:24) from Caolán McNamara

24 Mar 2015 jas   » (Master)

Laptop indecision

I wrote last month about buying a new laptop and I still haven’t made a decision. One reason for this is because Dell doesn’t seem to be shipping the E7250. Some online shops claim to be able to deliver it, but aren’t clear on what configuration it has – and I really don’t want to end up with Dell Wifi.

Another issue has been the graphics problems with the Broadwell GPU (see the comment section of my last post). It seems unlikely that this will be fixed in time for Debian Jessie. I really want a stable OS on this machine, as it will be a work-horse and not a toy machine. I haven't made up my mind whether the graphics issue is a deal-breaker for me.

Meanwhile, a couple more sub-1.5kg (sub-3.3lbs) Broadwell i7's have hit the market. Some of these models were suggested in comments to my last post. I have decided that the 5500U CPU would also be acceptable to me, because some newer laptops don't come with the 5600U. The difference is that the 5500U is a bit slower (say 5-10%) and lacks vPro, which I have no need for and mostly consider a security risk. I'm not aware of any other feature differences.

Since the last round, I have tightened my weight requirement to sub-1.4kg (sub-3lbs), which excludes some recently introduced models, and actually excludes most of the models I looked at before (X250, X1 Carbon, HP 1040/810). Since I'm leaning towards the E7250, with the X250 as a “reliable” fallback option, I wanted to cut down on the number of further models to consider. Weight is a simple distinguisher. The 1.4-1.5kg (3-3.3lbs) models I am aware of that are excluded are the Asus Zenbook UX303LN, the HP Spectre X360, and the Acer TravelMate P645.

The Acer Aspire S7-393 (1.3kg) and Toshiba Kira-107 (1.26kg) would have been options if they had RJ45 ports. They may be interesting to consider for others.

The new models I am aware of are below. I'm including the E7250 and X250 for comparison, since they are my preferred choices from the first round. A column for maximum RAM is added too, since this may be a deciding factor for me. The higher weight applies to configurations with touch screens.

Model                  Weight       Max RAM  Screen  Resolution
Toshiba Z30-B          1.2-1.34kg   16GB     13.3″   1920×1080
Fujitsu Lifebook S935  1.24-1.36kg  12GB     13.3″   1920×1080
HP EliteBook 820 G2    1.34-1.52kg  16GB     12.5″   1920×1080
Dell Latitude E7250    1.25kg       8/16GB?  12.5″   1366×768
Lenovo X250            1.42kg       8GB      12.5″   1366×768

It appears unclear whether the E7250's memory is upgradeable; some sites say max 8GB, some say max 16GB. The X250 and 820 have DisplayPort, the S935 and Z30-B have HDMI, and the E7250 has both DisplayPort and HDMI. The E7250 does not have VGA, which the rest do. All of them have 3 USB 3.0 ports except the X250, which only has 2. The E7250 and 820 claim NFC support, but Debian support is not a given. Interestingly, all of them have a smartcard reader. All support SDXC memory cards.

The S935 has an interesting modular bay which can actually fit a CD reader or an additional battery. There is a detailed QuickSpec PDF for the HP 820 G2; I haven't found similarly detailed information for the other models. It mentions support for Ubuntu, which is nice.

Comparing these laptops is really just academic until I have decided what to think about the Broadwell GPU issues. It may be that I’ll go back to a fourth-gen i7 laptop, and then I’ll probably pick a cheap reliable machine such as the X240.

Syndicated 2015-03-24 22:11:30 from Simon Josefsson's blog

24 Mar 2015 amits   » (Journeyer)

Live Migrating QEMU-KVM Virtual Machines: Full Text

I’ve attempted to write down all I said while delivering my devconf.cz talk on Live Migrating QEMU-KVM Virtual Machines.  The full text is on the Red Hat Developer Blog:

http://developerblog.redhat.com/2015/03/24/live-migrating-qemu-kvm-virtual-machines/

Syndicated 2015-03-24 15:53:40 from Think. Debate. Innovate.

23 Mar 2015 mikal   » (Journeyer)

A quick walk through Curtin

What do you do when you've accidentally engaged a troll on twitter? You go for a walk, of course.

I didn't realize there had been a flash flood in Canberra in 1971 that killed seven people, probably because I wasn't born then. However, when I ask people who were around then, they don't remember without prompting either, which I think is sad. I only learnt about the flood because of the geocache I found hidden at the (not very well advertised) memorial today.

       

Interactive map for this route.

Tags for this post: blog pictures 20150323-curtin photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs


Syndicated 2015-03-23 13:41:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

23 Mar 2015 AlanHorkan   » (Master)

OpenRaster Python Plugin

Thanks to developers Martin Renold and Jon Nordby who generously agreed to relicense the OpenRaster plugin under the Internet Software Consortium (ISC) license (it is a permissive license, it is the license preferred by the OpenBSD project, and also the license used by brushlib from MyPaint). Hopefully other applications will be encouraged to take another look at implementing OpenRaster.

The code has been tidied to conform to the PEP8 style guide, with only 4 warnings remaining, and they are all concerning long lines of more than 80 characters (E501).

The OpenRaster files are also far tidier. For some bizarre reason the Python developers chose to make things ugly by default, and neglected to include any line breaks in the XML. Thanks to Fredrik Lundh and Effbot.org for the very helpful pretty-printing code. The code has also been changed so that many optional tags are included if and only if they are needed, so if you ever do need to read the raw XML it should be a lot easier.

There isn't much for normal users unfortunately. The currently selected layer is marked in the OpenRaster file, and so is whether a layer is edit locked. If you are sending files to MyPaint it will correctly select the active layer, and recognize which layers were locked. (No import back yet though.) Unfortunately edit locking (or "Lock pixels") does require version 2.8, so if there is anyone out there stuck on version 2.6 or earlier I'd be interested to learn more, and I will try to adjust the code if I get any feedback.
I've a few other changes that are almost ready but I'm concerned about compatibility and maintainability so I'm going to take a bit more time before releasing those changes.

The latest code is available from the OpenRaster plugin gitorious project page.

Syndicated 2015-03-23 18:35:16 from Alan Horkan

23 Mar 2015 caolan   » (Master)

gtk3 vclplug, full-screen presentation canvas mode

Newly added: simple "canvas" support in the gtk3 vclplug; the canvas is the thing we draw onto for presentations. This means the gtk3 vclplug now supports full screen presentations, which required a whole massive pile of reorganization of the existing canvas backends to move them from their own per-platform concept in canvas to the per-desktop concept in vcl.

So now, rather than having only one cairo canvas backend based on the xlib APIs which is for "Linux", we have a cairo canvas for each vcl plug. The old school xlib one has moved from inside its #ifdef LINUX in canvas to the shared base of the gtk2, kde, etc. backends in vcl, and there is now a new one for gtk3.

Presumably there are lots of performance gains to be made in the new canvas backend, seeing as I'm just invalidating the whole slide window when the canvas declares that it's flush time. But slides appear to appear instantaneously for me, and fly-ins and move-along-a-path effects are smooth even in -O0 debug mode, so I'll hold back on any optimization efforts for now.

Syndicated 2015-03-23 13:08:00 (Updated 2015-03-23 13:08:33) from Caolán McNamara

23 Mar 2015 bagder   » (Master)

Fixing the Func KB-460 ‘-key

I use a Func KB-460 keyboard with Nordic layout – that basically means it is a qwerty design with the Nordic keys for “åäö” on the right side, as shown in the picture above. (yeah yeah, Swedish has those letters fairly prominent in the language, don’t mock me now)

The most annoying part of this keyboard has been that the key repeat on the apostrophe key has been sort of broken. If you pressed it and then another key, it would immediately generate another (or more than one) apostrophe. I’ve sort of learned to work around it with some muscle memory and by treating the key with care, but it hasn’t been ideal.

Someone told me this problem apparently only happens on Linux (I’ve never used the keyboard on anything else), and what do you know? Here’s how to fix it on a recent Debian machine that happens to run and use systemd, so your mileage will vary if you have something else:

1. Edit the file “/lib/udev/hwdb.d/60-keyboard.hwdb”. It contains keyboard mappings of scan codes to key codes for various keyboards. We will add a special line for a single scan code and for this particular keyboard model only. The line includes the USB vendor and product IDs in uppercase and you can verify that it is correct with lsusb -v and check your own keyboard.

So, add something like this at the end of the file:

# func KB-460
keyboard:usb:v195Dp2030*
KEYBOARD_KEY_70031=reserved
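
To double-check that the vendor and product IDs in that match line are right for your own keyboard, something like this should list the device:

$ lsusb -d 195d:2030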

2. Now update the database:

$ udevadm hwdb --update

3. … and finally reload the tweaks:

$ udevadm trigger

4. Now you should have a better working key and life has improved!

With a slightly older Debian without systemd, here are the instructions I got. I have not tested them myself but I include them here for the world:

1. Find the relevant input for the device by “cat /proc/bus/input/devices”

2. Make a very simple keymap. Make a file with only a single line like this:

$ cat /lib/udev/keymaps/func
0x70031 reserved

3. Map the key with ‘keymap’:

$ sudo /lib/udev/keymap -i /dev/input/eventX /lib/udev/keymaps/func

where X is the event number you figured out in step 1.

The related kernel issue.

Syndicated 2015-03-23 12:54:55 from daniel.haxx.se

22 Mar 2015 bagder   » (Master)

Summing up the birthday festivities

I blogged about curl’s 17th birthday on March 20th 2015. I’ve done similar posts in the past and they normally pass by mostly undetected and hardly discussed. This time, something else happened.

Primarily, the blog post quickly became the single most viewed blog entry I’ve ever written – and I’ve been doing this for many, many years. In just the first day it was up, I counted more than 65,000 views.

The blog post got more comments than any other blog post I’ve ever done. They have probably stopped by now, but there are 56 of them, almost every one saying congratulations and/or thanks.

The post also got discussed on both hacker news and reddit, totaling more than 260 comments. Most of them in a positive spirit.

The initial tweet I made about my blog post is the most retweeted and starred tweet I’ve ever posted. At least 85 retweets and 48 favorites (it might even grow a bit more over time). Others subsequently tweeted the link hundreds of times. I got numerous replies and friendly call-outs on twitter saying “congrats” and “thanks” in many variations.

Spontaneously (ie not initiated or requested by me, but most probably because of a comment on hacker news), I also suddenly started to get donations via the curl web site’s donation page (to paypal). Within 24 hours of my post, I had received 35 donations from friendly fans who donated a total of 445 USD. A quick count revealed that the total number of donations ever through the history of curl’s lifetime was 43 before this day. In one day we basically got as many as we had gotten in the first 17 years.

Interesting data from this donation “race”: I got donations varying from 1 USD (yes one dollar) to 50 USD and the average donation was then 12.7 USD.

Let me end this summary by thanking everyone who in various ways made the curl birthday extra fun by being nice and friendly and some even donating some of their hard earned money. I am honestly touched by the attention and all the warmth and positiveness. Thank you for proving internet comments can be this good!

Syndicated 2015-03-22 22:28:21 from daniel.haxx.se

22 Mar 2015 dorward   » (Journeyer)

CCTV and Google Glass

Astro Teller is somewhat missing the point:

"I'm amazed by how sensitively people responded to some of the privacy issues," Teller explains, expressing frustration about the backlash against Glass in public, given the prevalence of mobile video. "When someone walks into a bar wearing Glass... there are video cameras all over that bar recording everything." If it were around a year ago "they'd be Meerkatting," Teller joked.

"Society's issues about privacy are completely legitimate," Teller said. "I'm not making an apology for Google Glass. Google Glass did not move the needle... it was literally a rounding error on the number of cameras in your life."

The problem (from my perspective at least) isn't the number of hard-to-notice cameras around. It is who is wielding them and what they might do with them. CCTV isn't really a problem:

Images of people are covered by the Data Protection Act, and so is information about people which is derived from images – for example, vehicle registration numbers. Most uses of CCTV by organisations or businesses will be covered by the Act, regardless of the number of cameras or how sophisticated the equipment is.

Syndicated 2015-03-22 11:30:16 from Dorward's Ramblings

21 Mar 2015 mikal   » (Journeyer)

Narrabundah trig and 16 geocaches

I walked to the Narrabundah trig yesterday, along the way collecting 15 of the 16 NRL themed caches in the area. It would have been all 16, except I can't find the last one for the life of me. I'm going to have to come back.

I really like this area. It's scenic, has nice trails, and you can't tell you're in Canberra unless you really look for it. It seemed lightly used to be honest; I think I saw three other people the entire time I was there. I encountered more dogs off lead than people.

 

Interactive map for this route.

Tags for this post: blog pictures 20150321-narrabundah photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Cooleman and Arawang Trigs; Point Hut Cross to Pine Island


Syndicated 2015-03-21 14:29:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

20 Mar 2015 crhodes   » (Master)

els2015 is nearly here

This year, I have had the dubious pleasure of being the Local Organizer for the European Lisp Symposium 2015, which is now exactly one month away; in 31 days, hordes of people will be descending on South East London New Cross Gate to listen to presentations, give lightning talks and engage in general discussions about all things Lisp – the programme isn’t quite finalized, but expect Racket, Clojure, elisp and Common Lisp to be represented, as well as more... minority interests, such as C++.

Registration is open! In fact, for the next nine days (until 29th March) the registration fee is set at the absurdly low price of €120 (€60 for students) for two days of talks, tutorials, demos, coffee, pastries, biscuits, convivial discussion and a conference dinner. I look forward to welcoming old and new friends alike to Goldsmiths.

Syndicated 2015-03-20 17:04:33 (Updated 2015-03-20 17:32:13) from notes

20 Mar 2015 bagder   » (Master)

curl, 17 years old today

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

Birthdaycake

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget at the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, so I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought that it was fun to partly make it a real English word “curl” but also that you could pronounce it “see URL” as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998 when the Open Source Initiative was started, just the month before curl was born and just a few days after the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to making curl into what it is now has grown at a steady pace all through the years, and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company, and today there’s nothing left in Sweden using that name as it was sold and most employees later fled to other places. After Frontec I joined Contactor for many years, until I started working for my own company, Haxx (which we had started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend a part of my time on curl and still get paid for it!

The Netscape code whose release was announced 2 months before curl was born later became Mozilla and the Firefox browser. Which is where I work now…

Future

I’m not one of those who spend time gazing toward the horizon, dreaming of future grandness and making up plans on how to get there. I work on stuff right now for it to work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them, or whether they will actually ship, or if they perhaps will be replaced by other things in that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.

1000 million users

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside, the ego-boost this level of success brings is beyond imagination. Thinking about how my code has ended up in so many places, and is driving so many little pieces of modern network technology, is truly mind-boggling. When I specifically sit down or get a reason to think about it, at least.

Most days, however, I tear my hair out when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whiny. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…

Syndicated 2015-03-20 07:04:57 from daniel.haxx.se

20 Mar 2015 mikal   » (Journeyer)

A quick trip to Namadgi

I thought I'd drop down to the Namadgi visitors centre to have a look during lunch because I hadn't been there since I was a teenager. I did a short walk to Gudgenby Hut, and on the way back discovered this original border blaze tree. It's stacked on pallets at the moment, but is apparently intended for display one day. This is how much of the ACT's border was marked originally -- blazes cut on trees.

 

Interactive map for this route.

Tags for this post: blog pictures 20150320-namadgi photo canberra bushwalk namadgi border
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Cooleman and Arawang Trigs; Point Hut Cross to Pine Island


Syndicated 2015-03-19 20:02:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

20 Mar 2015 marnanel   » (Journeyer)

Poetry and risk aversion

A while back a friend said something about risk aversion, and I asked them about it.

There's a setup where you get given two choices. One choice means you'll definitely get £"x". The other means you'll have a "y"% chance of getting £"z", and if you don't you'll get nothing.

This showed me I am very risk-averse. If you ask me to choose between a definite £5 and a 25% chance of £100, I'm still going to choose the £5 because that's my lunch, dammit. For most amounts of money I won't take the bet unless the odds are better than evens. I suppose everyone has a set of heuristics like that, and this is mine.
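(For scale: the expected value of that gamble is 0.25 × £100 = £25, five times the sure £5, so a pure expected-value maximiser would always take the bet; that's exactly what makes preferring the certain fiver risk-averse rather than just sensible.)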

There have been times when I've worked around these heuristics on purpose-- you may remember the business about Växjö. But that was merely a workaround; it didn't change the heuristics.

I was thinking yesterday that this explains a lot about why I usually don't enter poetry competitions or submit work to journals: the cost of entry is rarely worth the chance of payoff. "Cost of entry" here might include money, but always includes the manual and mental work needed to prepare and submit, the anxiety about not getting it right, and (if simultaneous submissions aren't allowed) losing the ability to use a particular poem for the next four months. And the payoff is small, and the chance of getting it isn't great. So mostly I don't bother.

See also: applying for jobs, asking people on dates, etc, etc.

This entry was originally posted at http://marnanel.dreamwidth.org/330990.html. Please comment there using OpenID.

Syndicated 2015-03-19 23:38:58 from Monument

18 Mar 2015 mikal   » (Journeyer)

Goodwin trig

I talk about urban trigs, but this one takes the cake. Concrete paths, street lighting, and a 400 meter walk. I bagged this one on the way home from picking something up in Belconnen. To be honest, I can't see myself coming here again.

   

Interactive map for this route.

Tags for this post: blog pictures 20150318-goodwin photo canberra bushwalk trig_point belconnen
Related posts: Harcourt and Rogers Trigs; Big Monks; Cooleman and Arawang Trigs; A walk around Mount Stranger; Forster trig; Two trigs and a first attempt at finding Westlake


Syndicated 2015-03-18 14:01:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

17 Mar 2015 dkg   » (Master)

Bootable grub USB stick (EFI and BIOS for Intel)

I'm using grub version 2.02~beta2-2.

I want to make a USB stick that's capable of booting Intel architecture EFI machines, both 64-bit (x86_64) and 32-bit (ia32). I'm starting from a USB stick which is attached to a running debian system as /dev/sdX. I have nothing that i care about on that USB stick, and all data on it will be destroyed by this process.

I'm also going to try to make it bootable for traditional Intel BIOS machines, since that seems handy.

I'm documenting what I did here, in case it's useful to other people.

Set up the USB stick's partition table:

parted /dev/sdX -- mktable gpt
parted /dev/sdX -- mkpart biosgrub fat32 1MiB 4MiB
parted /dev/sdX -- mkpart efi fat32 4MiB -1
parted /dev/sdX -- set 1 bios_grub on
parted /dev/sdX -- set 2 esp on
After this, my 1GiB USB stick looks like:
0 root@foo:~# parted /dev/sdX -- print
Model:  USB FLASH DRIVE (scsi)
Disk /dev/sdX: 1032MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name      Flags
 1      1049kB  4194kB  3146kB  fat32        biosgrub  bios_grub
 2      4194kB  1031MB  1027MB               efi       boot, esp

0 root@foo:~# 
Make a filesystem and mount it temporarily at /mnt:
mkfs -t vfat -n GRUB /dev/sdX2
mount /dev/sdX2 /mnt
Ensure we have the binaries needed, and add three grub targets for the different platforms:
apt install grub-efi-ia32-bin grub-efi-amd64-bin grub-pc-bin grub2-common

grub-install --removable --no-nvram --no-uefi-secure-boot \
     --efi-directory=/mnt --boot-directory=/mnt \
     --target=i386-efi

grub-install --removable --no-nvram --no-uefi-secure-boot \
     --efi-directory=/mnt --boot-directory=/mnt \
     --target=x86_64-efi

grub-install --removable --boot-directory=/mnt \
     --target=i386-pc /dev/sdX
At this point, you should add anything else you want to /mnt here: for example, a grub configuration so the stick actually boots something.
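A minimal sketch of /mnt/grub/grub.cfg could look like this; the kernel and initrd names are placeholders for files you would copy onto the stick yourself:
set timeout=5
menuentry "My kernel (example)" {
    linux /vmlinuz root=/dev/sdXn
    initrd /initrd.img
}
When you're done copying files over, don't forget to clean up: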
umount /mnt
sync

Tags: bios, efi, grub, tip

Syndicated 2015-03-16 23:12:00 from Weblogs for dkg

16 Mar 2015 chalst   » (Master)

Active users

Over a year since my last post? Well, I'm not active. Since most of my posts on Advogato in the past decade have been due to spam, one way or another, the suspension of new accounts rather reduced my working material.

But I see that prla, mbanck and teknopup are posting here through the old-fashioned, unsyndicated diary form. Good work.

16 Mar 2015 pixelbeat   » (Journeyer)

A plan for coreutils i18n support

Steps to complete multi-byte support in GNU coreutils

Syndicated 2015-03-16 12:11:13 from www.pixelbeat.org

16 Mar 2015 crhodes   » (Master)

tmus research programmer position

The AHRC-funded research project that I am a part of, Transforming Musicology, is recruiting a developer for a short-term contract, primarily to work with me on database systems for multimedia (primarily audio) content. The goal for that primary part of the contract is to take some existing work on audio feature extraction and probabilistic nearest-neighbour search indexing, and to develop a means for specialist users (e.g. musicologists, librarians, archivists, musicians) to access the functionality without needing to be experts in the database domain. This of course will involve thinking about user interfaces, but also about distributed computation, separation of data and query location, and so on.

The funding is for six months of programmer time. I would have no objection to someone working for six months in a concentrated block of time; I would also have no objection to stretching the funding out over a longer period of calendar time: it might well provide more time for reflection and a better outcome in the end. I would expect the development activities to be exploratory as well as derived from a specification; to span between the systems and the interface layer; to be somewhat polyglot (we have a C++ library, bindings in Python, Common Lisp and Haskell, and prototype Javascript and Emacs front-ends – no applicant is required to be fluent in all of these!)

There are some potentially fun opportunities during the course of the contract, not least working with the rest of the Transforming Musicology team. The post is based at Goldsmiths, and locally we have some people working on Systems for Early Music, on Musicology and Social Networking, and on Musical Memory; the Goldsmiths contribution is part of a wider effort, with partners based at Oxford working on Wagnerian Leitmotif and on a Semantic Infrastructure, at Queen Mary working on mid-level representations of music, and in Lancaster coordinating multiple smaller projects. As well as these opportunities for collaboration, there are a number of events coming up: firstly, the team would hope to have a significant presence at the conference of the International Society for Music Information Retrieval, which will be held in Málaga in October. We don’t yet know what we will be submitting there (let alone what will be accepted!) but there should be an opportunity to travel there, all being well. In July we’ll be participating in the Digital Humanities at Oxford Summer School, leading a week-long programme of workshops and lectures on digital methods for musicology. Also, in November of 2014, we participated in the AHRC’s “Being Human” festival, with a crazy effort to monitor participants’ physiological responses to Wagner opera; there’s every possibility that we will be invited to contribute to a similar event this year. And there are other, similar projects in London with whom we have friendly relations, not least the Digital Music Lab project at City University and the EU-funded PRAISE project at Goldsmiths.

Sound interesting? Want to apply? There’s a more formal job advert on the project site, while the “person specification” and other application-related materials are with Goldsmiths HR. The closing date for applications is the 27th March; we’d hope to interview shortly after that, and to have the developer working with us from some time in May or June. Please apply!

Syndicated 2015-03-16 11:47:14 from notes

16 Mar 2015 marnanel   » (Journeyer)

the care and feeding of marnanel

some things to know about me:

* I may be wrong and often am. If I am, I would like to know, and learn better. But...
* I hate conflict. If you are rude, aggressive, hostile, ridiculing, I'll probably not talk to you.
* I am aware that I am privileged in many ways; if I show unchecked privilege, I appreciate hearing about it and I promise to take it seriously. I expect the same from you.
* Autonomy is important. I would like to hear your stories rather than tell my own. But if your behaviour involves nonconsensual damage to others, especially children, I am unlikely to be sympathetic (to put it mildly). Anti-vaccination people are specifically included here as people who damage children.
* I love hugs and cuddles, but please don't touch me without asking.
* If I have a panic attack, please hang around. Afterwards I will probably go and hide somewhere for a bit, and then I probably won't cope too well with people talking to me.
* If I'm occupied with nothing but my phone in public, that's probably a way of hiding.
* I hate phone calls. I hate making them, and I hate receiving them. Text or email instead, unless it's urgent, or you've arranged it otherwise. (To my parents: yes, you count as having arranged otherwise. But I still prefer email.)
* My pronouns are they/them, though zie/zir is fine too, and other pronouns are all right where I'm not out as genderqueer. If you get it wrong, that's fine. But don't get it wrong on purpose.
* Do not shout at me. Ever.
* I like reconciliation. If we were friends in the past, I probably want to be friends again. There are a very few exceptions, but you know who you are.
* I like vegetarian food, but I'll eat some kinds of meat if that's all that's available. I'm allergic to uncooked egg (and this includes scrambled eggs, for some reason). Eggs in things like cake are fine. Actually, cake is lovely in general.
* I have a bad habit of avoiding dealing with things I don't know how to handle, especially emails I don't know how to answer. In particular, I love getting fanmail, but I'm rather bad at answering it. I'm really sorry: I'm working on it. I do read it all, and it does make me happy, and I love you all.
* Please don't assume I can pick up on hints, or flirting, or that I know any particular social conventions about conversations; please be explicit. If there's something you can't or don't want to talk about, I will pick it up and worry about it if you lie about the things round the edges in inconsistent ways. I really like it when people talk to me about how they want to talk to me and how I want to talk to them.
* I'll try to add trigger warnings to posts and pictures. Again, if I get it wrong, let me know.
* I have triggers of my own. I may have to leave a conversation because of them. It's a PTSD thing.
* Reciting poetry and singing and scripting/echolalia are coping habits.
* I apologise too much. I'm working on it.

Did I miss anything? Questions and comments and suggestions are welcome.

This entry was originally posted at http://marnanel.dreamwidth.org/330693.html. Please comment there using OpenID.

Syndicated 2015-03-15 23:26:48 (Updated 2015-03-15 23:40:44) from Monument

14 Mar 2015 Stevey   » (Master)

Moving to Newcastle

Although things are not 100% certain, it seems highly likely we'll be moving to Newcastle in five months' time.

If I seem distracted/absent/busy over the next month or two this will be a good excuse!

Syndicated 2015-03-14 00:00:00 from Steve Kemp's Blog

13 Mar 2015 joey   » (Master)

7drl 2015 day 7 scroll success

A frantic last day of work on Scroll.

Until 3 am last night, I was working on adding a new procedurally generated level.

This morning, I fixed two major bugs reported by playtesters overnight. Also fixed crashes on small screens and got the viewport to scroll. Added a victory animation in time for lunch.

After lunch, more level generation work. Wasted an entire hour tracking down a bug in level gen I introduced last night, when I was bad and didn't create a data type to express an idea. Added a third type of generated level, with its own feel.

Finished up with a level selection screen, which needed just 47 lines of code and features a playable character.

I have six hours until my 7drl is officially over, but I'm done! Success! You can download the code, play, etc, at Scroll's homepage

Syndicated 2015-03-13 22:27:53 from see shy jo

13 Mar 2015 bagder   » (Master)

Video: My curl talk from FOSDEM 2015

I mentioned the talk before, and now the video has been made available. About 25 minutes with me presenting curl.

cURL

Syndicated 2015-03-13 15:04:17 from daniel.haxx.se

12 Mar 2015 wainstead   » (Master)

Waverous finally moves to GitHub

What with the imminent demise of Google Code (has it been around that long already?), it was finally time to move Waverous over to GitHub. Henceforth:

https://github.com/wainstead/waverous

Syndicated 2015-03-12 22:26:00 (Updated 2015-03-12 22:26:38) from Wainstead

12 Mar 2015 joey   » (Master)

7drl 2015 day 6 must add more

Last night I put up a telnet server and web interface to play a demo of scroll and send me playtester feedback, and I've gotten that almost solid today. Try it!

Today was a scramble to add more features to Scroll and fix bugs. The game still needs some balancing, and generally seems a little too hard, so added a couple more spells, and a powerup feature to make it easier.

Added a way to learn new spells. Added a display of spell inventory on 'i'. For that, I had to write a quick windowing system (20 lines of code).

Added a system for ill effects from eating particular letters. Interestingly, since such a letter is immediately digested, it doesn't prevent the worm from moving forwards. So, the ill effects can be worth it in some situations. Up to the player to decide.

I'm spending a lot of time now looking at letter frequency histograms to decide which letter to use for a new feature. Since I've several times accidentally used the same letter for two different things (most amusingly, I assigned 'k' to a spell, forgetting it was movement), I refactored all the code to have a single charSet which defines every letter and what it's used for, be that movement, control, spell casting, or ill effects. I'd like to use that to further randomize which letters are used for spell components, out of a set that have around the same frequency. However, I doubt that I'll have time to do that.

In the final push tonight/tomorrow, I hope to add an additional kind of level or two, make the curses viewport scroll when necessary instead of crashing, and hopefully work on game balance/playtester feedback.

I've written ~2800 lines of code so far this week!

Syndicated 2015-03-12 23:02:38 from see shy jo

12 Mar 2015 caolan   » (Master)

gtk3 vclplug,

I've been hacking on the gtk3 vclplug for LibreOffice recently. Here's the before image, after scrolling up and down a few times: UI font not rendered the same as the rest of the desktop, bit droppings everywhere, text missing from the style listbox, mouse-wheel non-functional.

 Here's today's effort. Correct UI font, scrolling just works, mouse-wheel functional, no bit droppings.



After making it possible to render with cairo to our basebmp surface (initially for the purposes of rendering text), I tweaked things so that instead of re-rendering everything in the affected area on a "draw" signal, we do our initial render into the underlying basebmp surface on resize events. We then trust that our internally triggered paints will keep that basebmp up to date, call gtk_widget_queue_draw_area for those areas as they are modified in basebmp, and just blit that basebmp to the gtk3 cairo surface on the resulting gtk_widget_queue_draw_area-triggered "draw". This is pretty much what we do for the MacOSX backend.

The basebmp is now cairo-compatible, so the actual LibreOffice->Gtk3 draw becomes a trivial direct paint to the requested area in the gtk surface from our basebmp surface.

With our cairo-compatible basebmp surface the gtk3 native rendering stuff for drawing the buttons and menus etc can then render directly into that basebmp at the desired locations removing a pile of temporary surfaces, conversion code and bounds-checking hackery.

Further under the hood, however, the headless svp plug that the gtk3 one inherits from had a pair of major, ultra-frustrating bugs which meant that while it looked good in theory, it was still epically failing wrt bit dropping in practice. The two underlying clipping-related bugs are now solved: one where an optimization effort would create an overly clipped region, and another where attempts to copy from the surface were clipped out by the clip region.

Still got some glitches in the impress sidebar and of course the above theming engine is still missing a pile of stuff and slide-show/canvas mode needs implementing, but I'm heartened. It's not complete, but it's now less traffic accident and more building site.

Syndicated 2015-03-12 16:54:00 (Updated 2015-03-18 15:41:37) from Caolán McNamara

12 Mar 2015 mjg59   » (Master)

Vendors continue to break things

Getting on for seven years ago, I wrote an article on why the Linux kernel responds "False" to _OSI("Linux"). This week I discovered that vendors were making use of another behavioural difference between Linux and Windows to change the behaviour of their firmware and breaking things in the process.

The ACPI spec defines the _REV object as evaluating "to the revision of the ACPI Specification that the specified \_OS implements as a DWORD. Larger values are newer revisions of the ACPI specification", ie you reference _REV and you get back the version of the spec that the OS implements. Linux returns 5 for this, because Linux (broadly) implements ACPI 5.0, and Windows returns 2 because fuck you that's why[1].

(An aside: To be fair, Windows maybe has kind of an argument here because the spec explicitly says "The revision of the ACPI Specification that the specified \_OS implements" and all modern versions of Windows still claim to be Windows NT in \_OS and eh you can kind of make an argument that NT in the form of 2000 implemented ACPI 2.0 so handwave)

This would all be fine except firmware vendors appear to earnestly believe that they should ensure that their platforms work correctly with RHEL 5 even though there aren't any drivers for anything in their hardware and so are looking for ways to identify that they're on Linux so they can just randomly break various bits of functionality. I've now found two systems (an HP and a Dell) that check the value of _REV. The HP checks whether it's 3 or 5 and, if so, behaves like an old version of Windows and reports fewer backlight values and so on. The Dell checks whether it's 5 and, if so, leaves the sound hardware in a strange partially configured state.

And so, as a result, I've posted this patch which sets _REV to 2 on X86 systems because every single more subtle alternative leaves things in a state where vendors can just find another way to break things.

[1] Verified by hacking qemu's DSDT to make _REV calls at various points and dump the output to the debug console - I haven't found a single scenario where modern Windows returns something other than "2"


Syndicated 2015-03-12 10:03:52 from Matthew Garrett

11 Mar 2015 joey   » (Master)

7drl 2015 day 5 type directed spell system development

I want my 7drl game Scroll to have lots of interesting spells. So, as I'm designing its spell system, I've been looking at the types, and considering the whole universe of possible spells that fit within the constraints of the types.

My first thought was that a spell would be a function from World -> World. That allows any kind of spell that manipulates the game map. Like, for instance a "whiteout" that projects a stream of whitespace from the player's mouth.

Since Scroll has a state monad, I quickly generalized that; making spell actions a state monad M (), which lets spells reuse other monadic actions, and affect the whole game state, including the player. Now I could write a spell like "teleport", or "grow".

But it quickly became apparent this was too limiting: While spells could change the World map, the player, and even change the list of supported spells, they had no way of prompting for input.

I tried a few types of the Event -> M () variety, but they were all too limiting. Finally, I settled on this type for spell actions: M NextStep -> M NextStep.

And then I spent 3 hours exploring the universe of spells that type allows! To understand them, it helps to see what a NextStep is:

type Step = Event -> M NextStep
data NextStep = NextStep View (Maybe Step)

Since NextStep is a continuation, spells take the original continuation, and can not only modify the game state, but can return an altered continuation. Such as one that prompts for input before performing the spell, and then calls the original continuation to get on with the game.

That let me write "new", a most interesting spell, that lets the player add a new way to cast an existing spell. Spells are cast using ingredients, and so this prompts for a new ingredient to cast a spell. (I hope that "new farming" will be one mode of play to possibly win Scroll.)

And, it lets me write spells that fail in game-ending ways. (Ie, "genocide @"). A spell can cause the game to end by returning a continuation that has Nothing as its next step.
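
As a concrete illustration of the prompt-before-acting shape described above, here is a hypothetical spell sketch. It leans on the showMessage and nextStep helpers that appear in the day 1 and day 3 posts further down this page; the prompt text and what is done with the answer are made up, not Scroll's real code.

promptSpell :: M NextStep -> M NextStep
promptSpell orig = do
        showMessage "Cast it at which letter?"
        nextStep $ Just $ \_input -> do
                -- act on the player's answer here, then hand control back
                -- to the original continuation to get on with the game
                orig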

Even better, I could write spells that return a continuation that contains a forked background task, using the 66 line continuation based threading system I built in day 3. This allows writing lots of fun spells that have an effect that lasts for a while. Things like letting the player quickly digest letters they eat, or slowing down the speed of events.
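
A hypothetical sketch of one such lasting-effect spell, built on the fork, next and endThread helpers from the day 3 post below; speedUp and slowDown are made-up (S -> S) state changes, not Scroll's real ones.

hasteSpell :: M NextStep -> M NextStep
hasteSpell orig = do
        modify speedUp           -- made-up state change: speed the worm up
        fork wearOff orig        -- run the wear-off alongside the rest of the game
  where
        -- a toy effect that wears off after a single tick; a real spell
        -- would count down over several ticks before undoing itself
        wearOff = next $ \_input -> do
                modify slowDown  -- made-up inverse state change
                endThread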

And then I thought of "dream". This spell stores the input continuation and game state, and returns a modified continuation that lets the game continue until it ends, and then restores from the point it saved. So, the player dreams they're playing, and wakes back up where they cast the spell. A wild spell, which can have different variants, like precognitive dreams where the same random numbers are used as will be used upon awaking, or dreams where knowledge carries back over to the real world in different ways. (Supports Inception too..)

Look how easy it was to implement dreaming, in this game that didn't have any notion of "save" or "restore"!

runDream :: M NextStep -> M NextStep -> (S -> S) -> M NextStep
runDream sleepcont wakecont wakeupstate = go =<< sleepcont
  where
        go (NextStep v ms) = return $ NextStep v $ Just $
                maybe wake (go <=<) ms
        wake _evt = do
                modify wakeupstate
                wakecont

I imagine that, if I were not using Haskell, I'd have just made the spell be an action, that can do IO in arbitrary ways. Such a spell system can of course do everything I described above and more. But, I think that using a general IO action is so broad that it hides the interesting possibilities like "dream".

By starting with a limited type for spells, and exploring toward more featureful types, I was able to think about the range of possibilities of spells that each type allowed, be inspired with interesting ideas, and implement them quickly.

Just what I need when writing a roguelike in just 7 days!

Syndicated 2015-03-11 22:51:26 from see shy jo

11 Mar 2015 yeupou   » (Master)

Syndication is broken. More recent posts are available at https://yeupou.wordpress.com/

11 Mar 2015 amits   » (Journeyer)

FUDCon Pune 2015: CFP Closed, But…

The CFP for FUDCon Pune 2015 is now closed.  We have had an overwhelming response: 141 talks/workshops submitted.  This is more than twice the number of sessions we received for the 2011 edition.  Talk submissions were pouring in till the last minute when we flipped the switch for the CFP page.

However, there’s a twist in the tale, so people who couldn’t get their CFP in aren’t left out.

For FUDCon Pune 2011, we tried something new: instead of the regular barcamp style at FUDCons, we put out a CFP and accepted talks in advance.  This was done as we knew a FUDCon in India was going to be different than the usual FUDCons, where at most a hundred people turn up.  Barcamp style is manageable with those numbers.  We were proved right when the number of attendees exceeded 1000 then.  Setting up the Drupal + COD instance back in 2011 had helped us with getting talk proposals accepted, as well as opening voting for everyone, so we knew which talks were going to be popular and could schedule accordingly.  Also, the voting aspect of it had a barcamp feel to it.

Fast forward to the 2015 edition.  The conference is happening end of June, and our CFP closes start of March.  That means there’s almost 4 months between the CFP close and the conference start.  People who plan late, and have something to bring to the conference, are left out (well, there always are hallway tracks, but that restricts the audience).  Also, just having the CFP isn’t traditional FUDCon style.

So I proposed in the 3rd March weekly FUDCon planning meeting that we do some barcamp tracks in addition to the regularly-scheduled tracks.  We squabbled a bit over the details on how to actually make this work — have a new CFP open / have people submit talks via our COD instance, or do it on a whiteboard at the venue / do we recognise barcamp speakers as our scheduled speakers (matters for the swag!) / do we provide subsidy to barcamp speakers / etc.

One thing became clear from these discussions: no one objected to having the barcamp; we just had to flesh out the details.  So here’s some good news for people who couldn’t submit their session for the CFP:

We’re going to have barcamp tracks at the FUDCon, and sessions can be proposed till the day of the conference.

One of the reasons to have a FUDCon is to get people together and work on stuff.  The traditional FUDCon style (i.e. barcamps) was designed so that people active in Fedora just turned up, and things happened.  It didn’t matter who got to speak officially; everyone just mingled together and had their say.  “Hallway tracks” are always more interesting and productive than the scheduled tracks.  They’re also far more interactive and focussed.

We don’t want to take that away.  This obviously means that whoever turns up to the event gets to propose barcamp sessions.  So if people are thinking they’ll attend the event if their talk submission gets accepted, that’s the wrong way to go about it.  Just turn up, you’ll have a voice no matter what.

This also means people needing travel sponsorship need to put in their subsidy requests NOW.  We are soon going to start looking at requests already in the queue, so don’t be late for requesting subsidies.

Doing the barcamp in addition to formal scheduling is going to put a strain on the venue and logistics team, but we like challenges, and we’ll step up to them.

During the 10th March planning meeting, many people were of the view that we should not accept barcamp talk proposals on the COD instance.  Reasons were that’s not how barcamp submissions work, and a whiteboard at the venue can be used just as well.  That obviously means the barcamp tracks can only be held on the 2nd and 3rd days, since the first day people will have to come up, propose sessions, and vote on the proposed ones.

Since most of the people were of this view, we accepted this.  However, we may change this depending on how our scheduling goes for the main conference (we can’t really fit in 140 talks!) so we may just flip over the rest to the barcamp track, and have people vote beforehand.  I find there’s a certain appeal to using the COD instance to ease our jobs.  It also means we have a proper schedule to display on the web / print out, so people know where to head to, instead of looking at the schedule on the web for the formal schedule, and missing out on the barcamps entirely, or having to hunt for the barcamp schedule at the venue.  I’m sure we’ll have a lot more discussions around this topic as we approach the event.

The response to the CFP has been mind-boggling.  We can project that the event itself will witness more participation than the 2011 edition.  We did a fairly good job in 2011 handling 1000 people, 60 talks and 6 parallel sessions; this time we may have a much bigger scale, and we actually know what to expect; I’m confident we will execute it as well as the last time.

Syndicated 2015-03-11 00:40:46 from Think. Debate. Innovate.

10 Mar 2015 joey   » (Master)

7drl 2015 day 4 coding through exhaustion

Slow start today; I was pretty exhausted after yesterday and last night's work. Somehow though, I got past the burn and made major progress today.

All the complex movement of both the player and the scroll is finished now, and all that remains is to write interesting spells, and a system for learning spells, and to balance out the game difficulty.


I haven't quite said what Scroll is about yet, let's fix that:

In Scroll, you're a bookworm that's stuck on a scroll. You have to dodge between words and use spells to make your way down the page as the scroll is read. Go too slow and you'll get wound up in the scroll and crushed.

The character is multiple chars in size (size is the worm's only stat), and the worm interacts with the scroll in lots of ways, like swallowing letters, or diving through a hole to the other side of the scroll. While it can swallow some letters, if it gets too full, it can't move forward anymore, so letters are mostly consumed to be used as spell components.

I think that I will manage to get away without adding any kind of monsters to the game; the scroll (and whoever is reading it) is the antagonist.

As I'm writing this very post, I'm imagining the worm wending its way through my paragraphs. This dual experience of text, where you're both reading its content and hyper-aware of its form, is probably the main thing I wanted to explore in writing Scroll.

As to the text that fills the scroll, it's broadly procedurally generated, in what I hope are unusual and repeatedly surprising (and amusing) ways. I'm not showing any screenshots of the real text, because I don't want to give that surprise away. But, the other thing about Scroll is that it's scroll, a completely usable (if rather difficult..) Unix pager!

Syndicated 2015-03-10 23:07:45 from see shy jo

10 Mar 2015 olea   » (Master)

What is the procomún

I prepared the following text for the programme of complementary activities of the Almería Creative Commons Film Festival, the first festival, and nearly the first activity, in Almería dedicated exclusively to this world. I liked it so much that I wanted to publish it on my own blog. Here it is:

WHAT IS THE PROCOMÚN?

The DRAE (the dictionary of the Spanish Royal Academy) defines it as

 procomún.
        (From pro, "benefit", and común, "common").
        1. m. Public benefit.

but the philosopher Antonio Lafuente goes much further:

"that which belongs to everyone and to no one at the same time"

For Antonio, the expression procomún is the most accurate Spanish translation of the English term commons. But does the procomún have anything to do with our daily lives? Absolutely: the air, the future, feelings, DNA, all of them are everyday, almost personal, commons. Others are more distant but just as indispensable: fisheries, natural parks... is the list endless? And there are other commons now flourishing, fuelled by galloping technological development: free software, Wikipedia, and the Internet with the Web inside it. HackLab Almería itself is a modest commons that we are determined to build and put at your disposal.

Syndicated 2015-03-10 16:00:00 from Ismael Olea

10 Mar 2015 crhodes   » (Master)

ref2014 data update

Does anyone still care about REF2014? Apart from agonizing about what it will mean in the new assignments of quality-related funding for institutions, obviously.

Among the various surprising things I have had to do this term (as in “surprise! You have to do this”) was to participate in the Winter graduation ceremony: reading out the names of graduands. It was fun; the tiniest bit stressful, because I hadn’t ever managed to attend a ceremony – but since everyone was there to celebrate, most of the pressure was off; I think I managed to doff my hat at the required times and not to trip over my gown while processing. Part of the ceremony is a valedictory speech from the Warden (vice-Chancellor equivalent), and it was perhaps inevitable that part of that was a section congratulating our soon-to-be alumni on belonging to an institution with high-quality research, or possibly “research intensity”.

That reminded me to take another look at the contextual data published by the Higher Education Statistics Authority; part of the “research intensity” calculation involves an estimate of the number of staff who were eligible to participate in the REF. It is only an estimate, not for any fundamental reason but because the employment data and the research submissions were collected by two different agencies; the data quality is not great, resulting probably in about equal measure from database schema mismatches (working out which REF2014 “Unit of Assessment” a given member of departmental staff belongs to) and human error. The good news is that at least the human error can be corrected later; there are now four Universities who have submitted corrected employment numbers to HESA, subtracting off a large number of research assistants (fixed-term contract researchers) from their list of REF-eligible staff – which naturally tends to bump up their measured research intensity.

New corrections mean new spreadsheets, new slightly-different data layouts, and new ingestion code; I suffer so you don’t have to. I’ve made a new version of my ref2014 R package containing the REF2014 results and contextual data; I’ve also put the source of the package up on github in case anyone wants to criticize (or learn from, I guess) my packaging.

Syndicated 2015-03-10 13:36:30 from notes

10 Mar 2015 joey   » (Master)

7drl 2015 day 3 movement at last

Got the player moving in the map! And, got the map to be deadly in its own special way.

        HeadCrush -> do
                showMessage "You die."
                endThread

Even winning the game is implemented. The game has a beginning, a middle, and an end.

I left the player movement mostly unconstrained, today, while I was working on things to do with the end of the game, since that makes it easier to play through and test them. Tomorrow, I will turn on fully constrained movement (an easy change), implement inventory (which is very connected to movement constraints in Scroll), and hope to start on the spell system too.


At this point, Scroll is 622 lines of code, including content. Of which, I notice, fully 119 are types and type classes.

Only 4 days left! Eep! I'm very glad that scroll's central antagonist is already written. I don't plan to add other creatures, which will save some time.


Last night as I was drifting off to sleep, a way to implement my own threading system for my roguelike came to me. Since time in a roguelike happens in discrete ticks, as the player takes each action, normal OS threads are not suitable. And in my case, I'm doing everything in pure code anyway and certainly cannot fork off a thread for some background job.

But, since I'm using continuation passing style, I can just write my own fork, that takes two continuations and combines them, causing both to be run on each tick, and recursing to handle combining the resulting continuations.

It was really quite simple to implement. Typechecked on the first try even!

-- Runs two continuations as if they were threads: on each tick, both
-- are fed the input, and their results are combined again.
fork :: M NextStep -> M NextStep -> M NextStep
fork job rest = do
        jn <- job
        rn <- rest
        runthread jn rn
  where
        -- both threads still running: combine them for the next tick
        runthread (NextStep _ (Just contjob)) (NextStep v (Just contr)) =
                return $ NextStep v $ Just $ \i -> do
                        jn <- contjob i
                        rn <- contr i
                        runthread jn rn
        -- the background job finished: keep only the main thread
        runthread (NextStep _ Nothing) (NextStep v (Just contr)) =
                return $ NextStep v (Just contr)
        -- the main thread finished: stop (any remaining job is dropped)
        runthread _ (NextStep v Nothing) =
                return $ NextStep v Nothing

endThread :: M NextStep
endThread = nextStep Nothing

background :: M NextStep -> M NextStep
background job = fork job continue

demo :: M NextStep
demo = do
    showMessage "foo"
    background $ next $ const $
        clearMessage >> endThread

That has some warts, but it's good enough for my purposes, and pretty awesome for a threading system in 66 LOC.

Syndicated 2015-03-09 23:33:59 from see shy jo

9 Mar 2015 amits   » (Journeyer)

Easier Access to Random Numbers in KVM VMs

I’ve written previously about random numbers in virtual machines.  KVM still remains the only hypervisor to offer an RNG device to guests.

Quite a lot of exciting changes have landed in the upstream Linux kernel since that last post.  I have written an article in the RHEL blog about it: Red Hat Enterprise Linux Virtual Machines: Access to Random Numbers Made Easy.

That article talks about the improvements in the recent RHEL 7.1 release.  In upstream terms, all the changes written about have landed in kernel 3.17; so Fedora 21 out-of-the-box, and Fedora 20 after updates, have benefited from the additions.

All the benefits listed in the article apply to all Linux guest VMs running under KVM if they have the virtio-rng device enabled, and run kernel 3.17+ in the guest.

Syndicated 2015-03-09 12:35:13 from Think. Debate. Innovate.

9 Mar 2015 prla   » (Apprentice)

Here at HelloFresh (my new employer since Jan 5th) we sometimes need to change the system date in order to reproduce bugs. Doing so is easy with the date -s command and in order to keep the current time, one can re-use the date command itself to perform substitution:

sudo date -s "2014-12-25 $(date +%H:%M:%S)"

To set it back to local time:
sudo cp /usr/share/zoneinfo/Europe/Berlin /etc/localtime

9 Mar 2015 mikal   » (Journeyer)

Stromlo and Brown Trigs

So, the Facebook group set off for our biggest Trig walk yet today -- Stromlo and Brown Trigs. This ended up being a 15km walk with a little bit of accidental trespass (sorry!) and about 600 meters of vertical rise. I was expecting Stromlo to be prettier to be honest, but it wasn't very foresty. Brown was in nicer territory, but it's still not the nicest bit of Canberra I've been to. I really enjoyed this walk though, thanks to Simon, Tony and Jasmine for coming along!

                     

Interactive map for this route.

Tags for this post: blog pictures 20150309-stromlo_and_brown photo canberra bushwalk trig_point
Related posts: Big Monks; Cooleman and Arawang Trigs; A walk around Mount Stranger; Forster trig; Two trigs and a first attempt at finding Westlake; Taylor Trig


Syndicated 2015-03-08 22:07:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

9 Mar 2015 guylhem   » (Journeyer)

The Thinkpad X201 tablet under OSX 10.9

The X201 tablet is a sweet device that works quite well as a hackintosh - better than my genuine Macbook Air. However,  if you have the Intel core i7 620M CPU (the most powerful one available for the X201t) you are limited: if you don’t use a DSDT, there is no sleep support, and if you use a DSDT there is no sound. Regardless of which CPU you have, there is no tablet support.

The issue was recently discussed on http://forum.osxlatitude.com/index.php?/topic/2833-install-osx-on-lenovo-thinkpad-x201s-and-maybe-x201, with other users reporting similar problems.

After analyzing the problem, basically, the DSDT that is publicly available on http://forum.osxlatitude.com/index.php?/topic/5771-revision-749-adding-new-dsdtaml-for-lenovo-thinkpad-x201-and-x201s-that-fixes-sleep-and-scre/ does not work with the core i7 tablet.

Here’s a complete fix that makes sure everything works on the core i7 with MacOSX 10.9:

First, for the sound, the DSDT contains something weird:

            Device (HDEF)
            {
                Name (_ADR, 0x001B0000)  // _ADR: Address
                Name (_S3D, 0x03)  // _S3D: S3 Device State
                Name (RID, Zero)
                Name (_PRW, Package (0x02)  // _PRW: Power Resources for Wake
                {
                    0x0D,
                    0x04
                })
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Store (Package (0x06)
                        {
                            "hda-gfx",
                            Buffer (0x0A)
                            {
                                "onboard-1"
                            },

                            "layout-id",
                            Buffer (0x04)
                            {
                                 0x0C, 0x00, 0x00, 0x00    /* .... */
                            },

                            "PinConfigurations",
                            Buffer (Zero) {}
                        }, Local0)
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }

The final part does not apply to the i7 Conexant CX20585, so it should just be:

            Device (HDEF)
            {
                Name (_ADR, 0x001B0000)  // _ADR: Address
                Name (_S3D, 0x03)  // _S3D: S3 Device State
                Name (RID, 0x00)
                Name (_PRW, Package (0x02)  // _PRW: Power Resources for Wake
                {
                    0x0D,
                    0x04
                })
                Method (_PSW, 1, NotSerialized)  // _PSW: Power State Wake
                {
                    Noop
                }
            }
        }

For the tablet part, it's a bit harder. There are many things you have to fix.

First, edit  /System/Library/extensions/Apple16X50Serial.kext/Contents/PlugIns/Apple16X50ACPI.kext/Contents/Info.plist  and replace:


<key>IONameMatch</key>
<string>PNP0501</string>

by:

<key>IONameMatch</key>
<array>
<string>PNP0501</string>
<string>WACF004</string>
<string>WACF008</string>
</array>

But that won’t do much good because the WACF004 that shows in ioreg is disabled. Adding that to Apple16x50Serial does not help tabletmagic - you have to enable the peripheral.

I could find only one documented way, TabletEnabler. However the binaries I got do not work with 10.9, and since I don't have the source code they can't be fixed.

So I did it my way by editing the DSDT. For the device DTR, simply replace Name (LCFG, 0x00) with Name (LCFG, 0x01): the 1 will allow method _STA to report the device as enabled (0x0F) instead of disabled (0x0D):

                    Method (_STA, 0, NotSerialized)  // _STA: Status
                    {
                        If (LNotEqual (\PJID, 0x00))
                        {
                            Return (0x00)
                        }

                        Store (0x03, LDNS) /* \_SB_.PCI0.LPC_.LDNS */
                        If (LAnd (LDAS, LCFG))
                        {
                            Return (0x0F)
                        }
                        Else
                        {
                            Return (0x0D)
                        }
                    }

Then TabletMagic will recognize the serial port, but you're not done yet. You move the pen, the cursor moves, but it is way off the screen when you approach the right side.

For some reason, TabletMagic has a bug, so you must manually specify the dimensions of the tablet and the screen.

For the tablet, use 26312 x 16520, and for the screen 1280x800. In the mappings tab, make sure you set 00001 0001 26311 16519 on the left-hand side. You may have to do it twice, but after that the pen will work perfectly.


If you want to further fix your X201 Tablet or if you have unrelated problems, check this website: http://osxonthinkpads.wikidot.com/dsdt-edits

Syndicated 2015-03-09 01:34:07 from Guylhem's most recent funny hacks & thoughts

9 Mar 2015 joey   » (Master)

7drl 2015 day 2 level generation and game concept

Much as I want to get my @ displayed and moving around screen, last night and today have focused on level generation instead.

Scroll has a kinda strange level generation method, compared to how I suppose most roguelikes do it. There are only 3 calls to rand in all of Scroll. Just 3 random parameters, but that's enough to ensure a different level each time.

-- Random level generation function.
level :: Bool -> StdGen -> [String]
level randomize r = concat
        [ final (length tutorial + extra) $ concat $ rand mariner1body
        , concat $ rand mariner1end
        , concatMap rand kubla
        ]
  where
    -- here be spoilers

You could say there are two influences in Scroll's level generation method: Nick Montfort and Samuel Taylor Coleridge.


I have thought some about Scroll before starting the 7drl week, but my idea for the game was missing some key concepts. There was nothing to keep the player engaged in moving forward, an unclear victory condition, no clue how to generate appropriate random levels, a large potential for getting stuck, and no way to lose the game. This is all a problem for a roguelike.

But, I had an idea finally last night, about a single unified thing that all of that stuff falls out from. And it's right there in the name of the game!

Now that I understand Scroll better, I wrote the tutorial level. It's a very meta tutorial level that sets the scene well and serves more purposes than are at first apparent. I count a total of 6 things that this "tutorial level" will let the user do.

And interestingly, while the tutorial level is static, it interacts with the rest of the game in a way that will make it be experienced differently every time through.


The strangest line of code I wrote today is:

  import GPL

Somehow, I have never before, in all my time programming, written a line like that one.


Finally, after 7 hours of nonstop coding, I got ncurses to display the generated game world, scrolling around in the display viewport. No @ yet; that will need to wait for tonight or tomorrow!

Syndicated 2015-03-08 23:19:56 from see shy jo

8 Mar 2015 dmarti   » (Master)

QoTD: Julie Fleischer

Kraft is reinventing marketing around data, infrastructure and content to be more informed, addressable, personal and meaningful. We have invested significant resources in building a proprietary data platform that allows us to know, serve and engage our consumers uniquely and at scale. We have trained our marketers on data literacy and reshaped our agency relationships to capitalize on our infrastructure and the opportunities that exist in today's media landscape to act with agility and purpose. We're creating new capabilities in content creation so that we can tell personal stories and launch experiences that attract and delight our next generation of consumers.

Julie Fleischer

My macaroni and cheese has an awesome surveillance bunker, which fills me with delight.

—nobody, ever

Syndicated 2015-03-08 18:37:20 from Don Marti

7 Mar 2015 joey   » (Master)

7drl 2015 day 1 groundwork

Scroll is a roguelike, with a twist, which I won't reveal until I've finished building it. I'll just say: A playable roguelike pun, set in a filesystem near you.

I'm creating Scroll as part of the 7DRL Challenge. If all goes well, I'll have a usable roguelike game finished in 7 days.

This is my first time developing a roguelike, and my first time writing a game in Haskell, and my first time writing a game to a time limit. Wow!


First, some groundwork. I'm writing Scroll in Haskell, so let's get the core data types and monads and IO squared away. Then I can spend days 2-7 writing entirely pure functional code, in the Haskell happy place.

To represent the current level, I'm using a Vector of Vectors of Chars. Actually, MVectors, which can be mutated safely by pure code running inside the ST monad, so it's fast and easy to read or write any particular location on the level.

-- Writes a Char to a position in the world.
writeWorld :: Pos -> Char -> M ()
writeWorld (x, y) c = modWorld $ \yv -> do
    xv <- V.read yv y
    V.write xv x c

showPlayer :: M ()
showPlayer = writeWorld (5,8) '@'

(I wish these Vectors had their size as part of their types. There are vector libraries on hackage that do, but not the standard vector library, which has mutable vectors. As it is, if I try to access outside the bounds of the world, it'll crash at runtime.)

Since the game will need some other state, I'm using the state monad. The overall monad stack is type M = StateT S (ST RealWorld). (It could be forall s. StateT S (ST s), but I had some trouble getting that to type check, so I fixed s to RealWorld, which is ok since it'll be run using stToIO.)

Next, a concept of time, and the main event loop. I decided to use a continuation-passing style, so the main loop takes the current continuation, and runs it to get a snapshot of the state to display, and a new continuation. The advantage of using continuations this way is that all the game logic can be handled in the pure code.

I should probably be using the Cont monad in my monad stack, but I've not learned it and lack time. For now I'm handling the continuations by hand, which seems ok.

updateWorld :: Step
updateWorld (Just 'Q') = do
        addMessage "Are you sure you want to quit? [yn]"
        next $ \i -> case i of
                Just 'y' -> quit
                _ -> continue
updateWorld input = do
        addMessage ("pressed " ++ show input)
        continue
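
For what it's worth, here is a hypothetical sketch of the outer IO loop that could drive these continuations, written against the M and NextStep types from these posts; drawView, getEvent and the starting state are made-up names, not Scroll's actual code.

mainLoop :: S -> M NextStep -> IO ()
mainLoop s cont = do
        -- run one pure game step inside ST, threading the game state
        (NextStep view mnext, s') <- stToIO (runStateT cont s)
        drawView view                   -- made-up ncurses redraw
        case mnext of
                Nothing   -> return ()  -- no next step: the game is over
                Just step -> do
                        input <- getEvent          -- made-up key read
                        mainLoop s' (step input)

Each pass renders the snapshot, waits for input, and feeds it to the continuation to get the next one.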

Finally, I wrote some ncurses display code, which is almost working.


Start time: After midnight last night. Will end by midnight next Friday.

Lines of code written today: 368

Craziest type signature today: writeS :: forall a. ((Vec2 a -> ST RealWorld ()) -> M ()) -> Pos -> a -> M ()


By the way, there's a whole LambdaHack library for Haskell, targeted at just this kind of roguelike construction. It looks excellent. I'm not using it for two reasons:

  1. Scroll is going to be unusual in a lot of ways, and LambdaHack probably makes some assumptions that don't fit.
  2. mainSer :: (MonadAtomic m, MonadServerReadRequest m) => [String] -> COps -> (m () -> IO ()) -> (COps -> DebugModeCli -> ((FactionId -> ChanServer ResponseUI RequestUI -> IO ()) -> (FactionId -> ChanServer ResponseAI RequestAI -> IO ()) -> IO ()) -> IO ()) -> IO ()
    That's a lot of stuff to figure out! I only have a week, so it's probably easier to build my own framework, and this gives me an opportunity to learn more generally useful stuff, like how to use mutable Vectors.

Syndicated 2015-03-07 22:42:39 from see shy jo

6 Mar 2015 superuser   » (Journeyer)

Highlighting code in presentations

Before I started using this method, I always struggled with ways to highlight parts of code I wanted to talk about when giving presentations. This method, I've found, is at once the easiest to employ, and it provides context to the viewer. They can easily follow along, and where the code sits in relation to the other code you are talking about stays apparent.

Add two square shapes to your slide

A picture is worth a thousand words, and this one should show clearly how to highlight lines of code on a slide. You just add two shapes, and set the opacity to such a level that code can still be viewed, but it's dimmed out.

You can also generate nice transitions between highlights of the same code using Magic Move as your transition.


The result is a fairly simple transition as you highlight specific lines of code.


This works with code that cannot fit on your slide as well. Simply add the code to your slide, and allow it to go beyond the edge of your slide. When you want to highlight code that is partially hidden, simply move the text box up as appropriate.


While my examples use Keynote, the technique can be applied in other presentation software using their own appropriate features.

Syndicated 2015-02-27 22:11:34 (Updated 2015-02-27 22:14:05) from Jason Lotito

6 Mar 2015 Stevey   » (Master)

Free hosting, and key-signing

Over the past week I've mailed many of the people who had signed my previous GPG key and who had checked my ID as part of that process. My intention was to ask "Hey you trusted me before, would you sign my new key?".

So far no replies. I may have to be more dedicated and do the local-thing with people.

In other news Bytemark, who have previously donated a blade server, sponsored Debconf, and done other similar things, have now started offering free hosting to Debian-developers.

There is a list of such offers here:

I think that concludes this months blog-posting quota. Although who knows? I turn 39 in a couple of days, and that might allow me to make a new one.

Syndicated 2015-03-06 00:00:00 from Steve Kemp's Blog

6 Mar 2015 bagder   » (Master)

TLS in HTTP/2

I’ve written the http2 explained document and I’ve done several talks about HTTP/2. I’ve gotten a lot of questions about TLS in association with HTTP/2 due to this, and I want to address some of them here.

TLS is not mandatory

In the HTTP/2 specification that has been approved and that is about to become an official RFC any day now, there is no language that mandates the use of TLS for securing the protocol. On the contrary, the spec clearly explains how to use it both in clear text (over plain TCP) as well as over TLS. TLS is not mandatory for HTTP/2.

TLS mandatory in effect

While the spec doesn’t force anyone to implement HTTP/2 over TLS but allows you to do it over clear text TCP, representatives from both the Firefox and the Chrome development teams have expressed their intents to only implement HTTP/2 over TLS. This means HTTPS:// URLs are the only ones that will enable HTTP/2 for these browsers. Internet Explorer people have expressed that they intend to also support the new protocol without TLS, but when they shipped their first test version as part of the Windows 10 tech preview, that browser also only supported HTTP/2 over TLS. As of this writing, there has been no browser released to the public that speaks clear text HTTP/2. Most existing servers only speak HTTP/2 over TLS.

The difference between what the spec allows and what browsers will provide is the key here, and browsers and all other user-agents are all allowed and expected to each select their own chosen path forward.

If you’re implementing and deploying a server for HTTP/2, you pretty much have to do it for HTTPS to get users. And your clear text implementation will not be as tested…

A valid remark would be that browsers are not the only HTTP/2 user-agents and there are several such non-browser implementations that implement the non-TLS version of the protocol, but I still believe that the browsers’ impact on this will be notable.

Stricter TLS

When opting to speak HTTP/2 over TLS, the spec mandates stricter TLS requirements than what most clients ever have enforced for normal HTTP 1.1 over TLS.

It says TLS 1.2 or later is a MUST. It forbids compression and renegotiation. It specifies fairly detailed “worst acceptable” key sizes and cipher suites. HTTP/2 will, simply put, use safer TLS.

Another detail here is that HTTP/2 over TLS requires the use of ALPN which is a relatively new TLS extension, RFC 7301, which helps us negotiate the new HTTP version without losing valuable time or network packet round-trips.

TLS-only encourages more HTTPS

Since browsers only speak HTTP/2 over TLS (so far at least), sites that want HTTP/2 enabled must do it over HTTPS to get users. It provides a gentle pressure on sites to offer proper HTTPS. It pushes more people over to end-to-end TLS encrypted connections.

This (more HTTPS) is generally considered a good thing by me and us who are concerned about users and users’ right to privacy and right to avoid mass surveillance.

Why not mandatory TLS?

The fact that it didn’t get into the spec as mandatory was quite simply because there was never a consensus that it was a good idea for the protocol. A large enough part of the working group’s participants spoke up against the notion of mandatory TLS for HTTP/2. TLS was not mandatory before, so the starting point was without mandatory TLS, and we didn’t manage to get to another standpoint.

When I mention this in discussions with people the immediate follow-up question is…

No really, why not mandatory TLS?

The motivations why anyone would be against TLS for HTTP/2 are plentiful. Let me address the ones I hear most commonly, in an order that I think shows the importance of the arguments from those who argued them.

1. A desire to inspect HTTP traffic

There is a claimed “need” to inspect or intercept HTTP traffic for various reasons. Prisons, schools, anti-virus, IPR-protection, local law requirements, whatever are mentioned. The absolute requirement to cache things in a proxy is also often bundled with this, saying that you can never build a decent network on an airplane or with a satellite link etc without caching that has to be done with intercepts.

Of course, MITMing proxies that terminate SSL traffic are not even rare these days and HTTP/2 can’t do much about limiting the use of such mechanisms.

2. Think of the little ones

“Small devices cannot handle the extra TLS burden”. Either because of the extra CPU load that comes with TLS or because of the cert management in a billion printers/fridges/routers etc. Certificates also expire regularly and need to be updated in the field.

Of course there will be a least acceptable system performance required to do TLS decently and there will always be systems that fall below that threshold.

3. Certificates are too expensive

The price of certificates for servers is historically often brought up as an argument against TLS, even if it isn’t really HTTP/2 related, and I don’t think it was ever an argument that was particularly strong against TLS within HTTP/2. Several CAs now offer zero-cost or very close to zero-cost certificates these days and with the upcoming efforts like letsencrypt.com, chances are it’ll become even better in the not so distant future.

Recently someone even claimed that HTTPS limits the freedom of users since you need to give personal information away (he said) in order to get a certificate for your server. This was not a price he was willing to pay apparently. This is however simply not true for the simplest kinds of certificates. For Domain Validated (DV) certificates you usually only have to prove that you “control” the domain in question in some way. Usually by being able to receive email to a specific receiver within the domain.

4. The CA system is broken

TLS of today requires a PKI system where there are trusted certificate authorities that sign certificates and this leads to a situation where all modern browsers trust several hundred CAs to do this right. I don’t think a lot of people are happy with this and believe this is the ultimate security solution. There’s a portion of the Internet that advocates for DANE (DNSSEC) to address parts of the problem, while others work on gradual band-aids like Certificate Transparency and OCSP stapling to make it suck less.


My personal belief is that rejecting TLS on the grounds that it isn’t good enough or not perfect is a weak argument. TLS and HTTPS are the best way we currently have to secure web sites. I wouldn’t mind seeing it improved in all sorts of ways but I don’t believe running protocols clear text until we have designed and deployed the next generation secure protocol is a good idea – and I think it will take a long time (if ever) until we see a TLS replacement.

Who were against mandatory TLS?

Yeah, lots of people ask me this, but I will refrain from naming specific people or companies here since I have no plans on getting into debates with them about details and subtleties in the way I portrait their arguments. You can find them yourself if you just want to and you can most certainly make educated guesses without even doing so.

What about opportunistic security?

A text about TLS in HTTP/2 can’t be complete without mentioning this part. A lot of work in the IETF these days is going on around introducing and making sure opportunistic security is used for protocols. It was also included in the HTTP/2 draft for a while but was moved out from the core spec in the name of simplification and because it could be done anyway without being part of the spec. Also, far from everyone believes opportunistic security is a good idea. The opponents tend to say that it will hinder the adoption of “real” HTTPS for sites. I don’t believe that, but I respect that opinion because it is a guess as to how users will act, just as my guess is that they won’t act like that!

Opportunistic security for HTTP is now being pursued outside of the HTTP/2 spec and in fact it will also allow HTTP 1.1 clients to upgrade plain TCP connections to instead do “unauthenticated TLS” connections. And yes, it should always be emphasized: with opportunistic security, there should never be a “padlock” symbol or anything that would suggest that the connection is “secure”.

Firefox supports opportunistic security for HTTP and it will be enabled by default from Firefox 37.

Syndicated 2015-03-06 07:46:45 from daniel.haxx.se

5 Mar 2015 pixelbeat   » (Journeyer)

Don't fear SIGPIPE!

Effectively handling the SIGPIPE informational signal

Syndicated 2015-03-05 16:34:06 from www.pixelbeat.org

5 Mar 2015 amits   » (Journeyer)

Fedora 21 Release Party at MIT COE, Pune

Cake

We had an F21 release party at the MIT COE, the venue for FUDCon Pune.  We were expecting about 20 people to turn up; we did not want to make this a big event.  The students from MIT who attended were enthusiastic and already use Linux for their coursework.  They use Ubuntu, and they were curious about what the differences between various distros are, and what to expect at FUDCon.

Pravin has written his experiences here, which have more details.

I’ve uploaded a few photos on the wiki page.

 

Syndicated 2015-03-05 13:19:02 from Think. Debate. Innovate.

4 Mar 2015 hands   » (Master)

The future arrived, again!

I am reminded by Gunnar's wonderful news that I have been very remiss in publishing my own.

Mathilda Sophie Hands, our second daughter, was delivered on the 9th of January.

Her arrival was a little more interesting than we'd have preferred (with Gunde being suddenly diagnosed with HELLP Syndrome), but all has turned out well, with Gunde bouncing back to health surprisingly quickly, and Mathilda going from very skinny to positively chubby in a few short weeks, so no harm done.

Today Mathilda produced her first on-camera smile.

Mathilda, smiling on camera for the first time

It's lovely when they start smiling. It seems to signal that there's a proper little person beginning to take shape.

Syndicated 2015-03-04 22:04:17 from chezfil

4 Mar 2015 marnanel   » (Journeyer)

RPGs

TW suicide

I posted recently about why I had to give up HabitRPG-- a combination of playing on my anxiety, guilt trips, not being able to think of appropriate rewards, and so on. I said at the time that this is a problem I have with games in general. But Debbie mentioned a computer RPG earlier and it made me think about why Habit is one of the RPGs in particular I have great problems with.

I don't mind AD&D-type things where you're a collaborative part of a team and you can fade into the background as necessary-- it's not much different from roleplay irl. And I don't mind single-player games where they're a large directed puzzle to solve-- it's not far different from a crossword. But competitive roleplaying makes me want to cause my character's suicide early on to save trouble. Even worse are large open-ended games with no particular goal, the sort of thing where you can say, "Oh, lovely! A whole new universe for me to fail in!"

I think if Elite were released today I probably wouldn't enjoy it much.

This entry was originally posted at http://marnanel.dreamwidth.org/329517.html. Please comment there using OpenID.

Syndicated 2015-03-04 19:30:47 (Updated 2015-03-04 19:31:16) from Monument

4 Mar 2015 jas   » (Master)

EdDSA and Ed25519 goes to IETF

After meeting Niels Möller at FOSDEM and learning about his Ed25519 implementation in GNU Nettle, I started working on a simple-to-implement description of Ed25519. The goal is to help implementers of various IETF (and non-IETF) protocols add support for Ed25519. As many are aware, OpenSSH and GnuPG have support for Ed25519 in recent versions, and OpenBSD releases since v5.5 (May 2014) are signed with Ed25519. The paper describing EdDSA and Ed25519 is not aimed towards implementers, and does not include test vectors. I felt there was room for improvement to get wider and more accepted adoption.

Our work is published in the IETF as draft-josefsson-eddsa-ed25519 and we are soliciting feedback from implementers and others. Please help us iron out the mistakes in the document, and point out what is missing. For example, what could be done to help implementers avoid side-channel leakage? I don’t think the draft is the place for optimized and side-channel free implementations, and it is also not the place for a comprehensive tutorial on side-channel free programming. But maybe there is a middle ground where we can say something more than what we can do today. Ideas welcome!

Syndicated 2015-03-04 14:30:16 from Simon Josefsson's blog

4 Mar 2015 dmarti   » (Master)

Digital dimes in St. Louis

From Jason Kint at Digital Content Next, here's all the third-party web tracking that comes with browsing the St. Louis Post-Dispatch web site.

Read the whole thing. (via Darren Herman, on Twitter)

So, not much of a surprise, people don't trust web ads, because creepy tracking. Kint writes,

This problem is only getting worse and the consumer tools that counter it are getting less effective and more and more damaging to those who respect the consumer’s right to understand when and why their activities are being tracked. Transparency and providing the consumer with adequate control over their online privacy are vital—not harmful—to businesses that are built on a solid foundation of trust.

But he's only got part of the solution. Transparency is unworkable. How can regular people read every privacy policy for the third-party trackers they run into, when nobody at the St. Louis Post-Dispatch seems to be able to read the privacy policies for the trackers the paper uses on its own site? Here's what the Post-Dispatch site has to say about their third-party ads:

These companies may employ cookies and clear GIFs to measure advertising effectiveness. Any information that these third parties collect via cookies and clear GIFs is generally not personally identifiable.... We encourage you to read these businesses' privacy policies if you should have any concerns about how they will care for your personal information.

In other words, "third party tracking? That's a thing on the Internet now. We have no idea what's going on with it, so you're on your own." No wonder, as Kint points out, Online advertising is trusted less than any other form of advertising.

The result of all this tracking isn't just wigged-out users and ever-increasing ad blocker installs. The real problem for newspaper sites is data leakage. All those trackers that Kint points out are busily digesting the paper's audience like flies on potato salad, breaking the readership down into database records, and feeding the "print dollars to digital dimes" problem by breaking signaling.

When it comes to data leakage, publishers aren't bringing a knife to a gun fight, they're bringing a white paper about a knife to a gun fight. Terry Heaton, in “Local” is Losing to Outsiders: In 2015, [non-local] independent companies will account for nearly three-fourths of all digital advertising, elbowing out local-media competitors who have tried for two decades to use their existing sales forces to also sell digital advertising. Why is it that when a St. Louis business wants to advertise to a St. Louis newspaper reader, three-quarters of the money goes to intermediaries in New York and Palo Alto?

The problem, though, isn't so much that the adtech firms are taking 3/4 of the advertising pie, it's that they're making the pie smaller than it could be, by building the least trustworthy form of advertising since email spam.

So how do we keep the local papers, the people who are doing the hard nation-protecting work of Journalism, going? Kint says the "consumer tools" are getting worse, and if you're just looking at the best-known ad blocker, I'd have to agree. The "acceptable ads" racket doesn't address the tracking problems that matter. Meanwhile, it's not practical to browse the web with no protection at all, because who's going to read all those "transparent" explanations of exactly how some company you've never heard of sells some information you didn't know you were revealing?

Fortunately, though, we have publisher-friendly alternatives to ad blocking such as Tracking Protection on Firefox, the Disconnect extension, and Microsoft's Tracking Protection Lists. Instead of focusing on the two bad alternatives: unaccountable tracking or misdirected ad blocking, why not focus on the tracking protection that works?

Don't worry, interesting stuff remains to be done. To start with, hey, where are all the ads on stltoday.com? Just because I want to get protected from creepy tracking doesn't mean I'm against advertising in general. I like to look at the ads in local papers when I'm going there, because it gives me a sense of business in the town. (The New York Times is showing me Saks Fifth Avenue ads, and I have tracking protection on.) St. Louis, please, make your newspaper site work with tracking protection, and show me some ads.

Syndicated 2015-03-03 03:39:32 from Don Marti

3 Mar 2015 bagder   » (Master)

curl: embracing github more

Pull requests and issues filed on github are most welcome!

The curl project has been around for a long time by now and we’ve been through several different version control systems. The most recent switch was when we switched to git from CVS back in 2010. We were late switchers but then we’re conservative in several regards.

When we switched to git we also switched to github for the hosting, after having been self-hosted for many years before that. By using github we got a lot of services, goodies and reliable hosting at no cost. We’ve been enjoying that ever since.

However, as we have been a traditional mailing-list-driven project for a long time, I have previously not properly embraced and appreciated pull requests and issues filed at github, since they don’t really follow the old model very well.

Just very recently I decided to stop fighting those methods and instead go with them. A quick poll among my fellow team mates showed no strong opposition and we are now instead going full force ahead in a more github embracing style. I hope that this will lower the barrier and remove friction for newcomers and allow more people to contribute easier.

As an effect of this, I would also like to encourage each and every one who is interested in this project, as a user of libcurl or as a contributor to and hacker of libcurl, to skip over to the curl github home and press the ‘watch’ button to get notified of future requests and issues that appear.

We also offer this helpful guide on how to contribute to the curl project!

Syndicated 2015-03-03 06:49:27 from daniel.haxx.se

2 Mar 2015 caolan   » (Master)

gtk3 vclplug, text rendering via cairo

The LibreOffice gtk3 vclplug is currently basically rendering everything via the "svp" plugin code, which renders to basebmp surfaces and then blits the result of all this onto the cairo surface belonging to the toplevel gtk3 widget.

So the text is rendered with the svp freetype based text rendering and looks like this...






With some hacking I've unkinked a few places and allowed the basebmp backend to take the same stride and the same rgbx format as cairo, so we can now create a 24-bit cairo surface from the basebmp backing data, which allows us to avoid conversions on basebmp->cairo and lets us render onto a basebmp with cairo drawing routines, especially the text drawing ones. So with my in-gerrit-build-queue modifications it renders the same as the rest of the gtk3 desktop.

Syndicated 2015-03-02 15:16:00 (Updated 2015-03-02 15:16:43) from Caolán McNamara
