I keep seeing 1-star ratings for my writing. Funny thing is... I write for a living. Maybe they don't appreciate my perspective regarding the DMCA - overreaching, patent-like protection, etc.
Or maybe I just suck.
Draft on translating 'Ash Wednesday'
I once told a toddler the story of Plato's cave. She said, "Well, I'm going on holiday there soon."
When she got home, she told her mum, "I'm going on holiday to a cave where you can only see shadows on the wall."
Her mum said, "You've been talking to Marn, haven't you?"
This entry was originally posted at http://marnanel.dreamwidth.org/384070.html. Please comment there using OpenID.
This is how 2 years of work look...
...after printing it for submission in triplicate:
At least this was somewhat cheap (34.80 euros) because it's printed black and white. Printing in colour was prohibitive.
Next step: bind them in a dark blue hard-cover with golden letters :-)
The Mitsubishi P95D is the latest model in a line of Medical/Scientific monochromatic thermal printers that can often be found attached to the likes of Ultrasound stations.
As of December 20th, it now has first-class Linux support as part of Gutenprint, complete with status/error reporting, multiple copy support, custom page sizes, and every other feature the printer exports.
I may try to extend support to older models in the family (P93 and P91) or other MedSci thermal printers if there's any interest.
Oh, here's a shot of the P95 in action:
Let Glasgow Flourish
Almost exactly a year ago, I wrote my goodbye letter to Glasgow here. It had been a difficult day where my hitherto reliably steadfast dependence on the places I knew best had let me down. I'd found the city which had usually given me a rare sense of home, wanting. Over the past year I've thought a lot about that day - not least because it worked as a microcosm of the bigger changes my life has passed through these past few years: realising that things were changing outside my control, at a pace I couldn't dictate. I've had to...
John Masefield: On Reading "Bridge to Heaven" (1942)
The Old Father Thames
There was just a little hint of the old days - rising early and heading out in the dark to get to the beginning of a railtour used to be a fairly commonplace happening. But today it felt like something of a rarity - and I surprised myself by being pretty excited about the trip despite the early hour. After a quick walk from Hoxton to Liverpool Street station I boarded a No. 11 bus which soon set out across the dark, quiet City of London. As we snaked between the Bank of England and St. Paul's Cathedral, only the...
33C3 talk on dissecting cellular modems
This presentation covers some of our recent explorations into a specific type of 3G/4G cellular modems, which, in addition to the regular modem/baseband processor, also contain a Cortex-A5 core that (unexpectedly) runs Linux.
We want to use such modems for building self-contained M2M devices that run the entire application inside the modem itself, without any external needs except electrical power, SIM card and antenna.
Beyond that, they are also an ideal platform for testing the Osmocom network-side projects for running GSM, GPRS, EDGE, UMTS and HSPA cellular networks.
The results of our reverse engineering can be found in the wiki at http://osmocom.org/projects/quectel-modems/wiki together with links to the various git repositories containing related tools.
As with all the many projects that I happen to end up doing, it would be great to get more people contributing to them. If you're interested in cellular technology and want to help out, feel free to register at the osmocom.org site and start adding/updating/correcting information to the wiki.
You can e.g. help by
The Enormous Dating Fraud: Match.com, Plenty of Fish, Tinder and OkCupid
The top 4 dating sites out there - Match.com, Plenty of Fish, Tinder and OkCupid - are so completely overrun with fraud now, it’s appalling. (Note: Match.com, Plenty of Fish, Tinder and OkCupid are all owned by the same parent company, along with 40 other dating site properties.) I’ve been a free and paid member of these […]
Contribute to Osmocom 3.5G and receive a free femtocell
In 2016, Osmocom gained initial 3.5G support with osmo-iuh and the Iu interface extensions of our libmsc and OsmoSGSN code. This means you can run your own small open source 3.5G cellular network for SMS, voice and data services.
However, the project needs more contributors: Become an active member in the Osmocom development community and get your nano3G femtocell for free.
I'm happy to announce that my company sysmocom hereby issues a call for proposals to the general public. Please describe in a short proposal how you would help us improve the Osmocom project if you were to receive one of those free femtocells.
Details of this proposal can be found at https://sysmocom.de/downloads/accelerate_3g5_cfp.pdf
Please contact email@example.com in case of any questions.
Rachel Lichtenstein - Estuary
Over the past few years, as my explorations of the Thames have taken me further and further eastwards, I've begun to appreciate the estuary in a different way. It's fair to say that, until recently, the wide expanses of flat empty land almost terrified me. The broad sweep of silver sky broken only by marching ranks of pylons seemed endlessly and bleakly awesome. But it has also always drawn me - the edges of London blurring into the post-industrial wastelands of Essex and Kent are curiously intriguing to me. Haunted by Joseph Conrad and Bram Stoker, and never far from...
On the Usability of Strings
I’ve recently read an article about why programmers should favour Python 2 over Python 3 (”The Case Against Python 3”), and most of it is an incoherent rant that exposes the author’s deep misunderstanding of how bytecode is internally used in scripting languages and how “market forces” of backwards compatibility work against new languages. Somebody else has already rebutted those arguments better than I would, and unlike the original author, his later edits are clear and don’t involve “it was meant as a joke”. One interesting and valid technical argument remains: Python 3’s opaque support for Unicode strings can be unintuitive for those used to manipulating strings as transparent sequences of bytes.
Many programming languages came from an era where text representation was either for English, or for Western languages that would neatly fit all their possible characters in 8-bit values. Internationalization, then, meant at worst indicating what “code page” or character encoding the text was in. When I started programming on 90s Macintosh computers, the go-to string memory representation was the Pascal string, whose first byte indicated the string length. This meant that performing the wrong memory manipulation on the string, using the wrong encoding to display it, or even attempting to display corrupted memory would at worst display 255 random characters.
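The length-prefix layout is simple enough to see in a few lines. Here is a minimal sketch in Python (the original Pascal strings were of course a native Mac/Pascal type, not a Python one; the function names here are mine):

```python
def to_pascal_string(text: bytes) -> bytes:
    """Prefix the payload with a single length byte, as a classic
    Pascal string does. The one-byte prefix caps the length at 255."""
    if len(text) > 255:
        raise ValueError("Pascal strings hold at most 255 bytes")
    return bytes([len(text)]) + text

def from_pascal_string(buf: bytes) -> bytes:
    """Read the length byte, then take exactly that many bytes.
    Even on garbage input, at most 255 bytes come back."""
    length = buf[0]
    return buf[1:1 + length]

p = to_pascal_string(b"Macintosh")
assert p[0] == 9
assert from_pascal_string(p) == b"Macintosh"
```

This is exactly why corruption was bounded: whatever junk sits in memory, the length byte can never claim more than 255 characters.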
There is a strong argument that UTF-8 should be used everywhere, and while it takes the occasion to educate programmers about Unicode (for a more complete “Unicode for programmers”, see this article and this more recent guide), doing so seems to conflate two different design (and usability) issues: what encoding should be used to store human-readable text, and what abstractions (if any) programming languages should offer to represent strings of text.
The “UTF-8 Everywhere” document already has strong arguments for UTF-8 as the best storage format for text, and looking at the popularity of UTF-8 in web standards, all that remains is to move legacy systems to it.
For strings in programming languages, you could imagine one that has absolutely no support for any form of strings, though it’s difficult to sell the idea of a language that doesn’t even support string literals or a “Hello World” program. The approach of “UTF-8 Everywhere” is very close to that, and seems to indicate the authors’ bias towards the C and C++ languages: transparently use UTF-8 to store text, and shift the burden of not breaking multi-byte code points back to the programmer. The argument that counting characters, or “grapheme clusters”, is seldom needed is misleading: splitting a UTF-8 string in the middle of a code point will break the validity of the UTF-8 sequence.
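The mid-code-point hazard is easy to demonstrate in a few lines of Python 3:

```python
# Splitting a UTF-8 byte string at an arbitrary byte offset can land in
# the middle of a multi-byte code point and invalidate the sequence.
s = "héllo"
raw = s.encode("utf-8")      # b'h\xc3\xa9llo': 5 characters, 6 bytes
assert len(s) == 5 and len(raw) == 6

broken = raw[:2]             # cuts the two-byte 'é' (0xC3 0xA9) in half
try:
    broken.decode("utf-8")
    assert False, "should not decode"
except UnicodeDecodeError:
    pass                     # invalid UTF-8: we split mid code point
```

Any byte-oriented slicing, truncation, or buffering routine can hit this, which is why "just count bytes" is not as safe as it sounds.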
In fact, it can be argued that programming languages that offer native abstractions of text strings not only give greater protection against accidentally building invalid byte representations, but also get a chance to do a myriad of other worthwhile optimizations. Languages that present strings as immutable sequences of Unicode code points, or that transparently use copy-on-write when characters are changed, can optimize memory by de-duplicating identical strings. Even if de-duplication is done only for literals (as in Java), it can greatly help with memory reuse in programs that process large amounts of text. The internal memory representation of strings can even be optimized for size based on the biggest code point used in it, as Python 3.3 does.
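Both optimizations are observable from Python 3 itself (the specifics below are CPython implementation details, so the exact sizes are not guaranteed by the language):

```python
import sys

# CPython interns identical string literals in a module, so the two
# names below refer to one shared object (an implementation detail,
# not a language guarantee):
a = "deduplicated"
b = "deduplicated"
assert a is b

# Since Python 3.3 (PEP 393) the per-character width depends on the
# widest code point in the string: 1, 2 or 4 bytes.
latin = "a" * 100             # code points < 256: 1 byte each
bmp = "\u20ac" * 100          # EURO SIGN, in the BMP: 2 bytes each
astral = "\U0001F600" * 100   # emoji outside the BMP: 4 bytes each
assert sys.getsizeof(latin) < sys.getsizeof(bmp) < sys.getsizeof(astral)
```

None of this would be possible if the language handed out raw mutable byte buffers as its string type.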
Of course, the biggest usability issue with using abstracted Unicode strings is that it forces the programmer to explicitly say how to convert a byte sequence into a string and back. The article “The Case Against Python 3” mentioned above suggests that the language’s runtime should automatically detect the encoding, but that is highly error-prone and CPU intensive. “UTF-8 Everywhere” argues that since both sides use UTF-8, the conversion boils down to a memory copy, but breaking code points is still a risk, so you’ll need some kind of UTF-8 encoder and parser anyway.
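In Python 3, that explicitness looks like this:

```python
text = "naïve"
data = text.encode("utf-8")           # str -> bytes, encoding named explicitly
assert data == b"na\xc3\xafve"
assert data.decode("utf-8") == text   # bytes -> str round-trip

# Arbitrary bytes are not silently accepted as text: the boundary
# between bytes and strings is where invalid input gets caught.
try:
    b"\xff\xfe".decode("utf-8")
    assert False, "should not decode"
except UnicodeDecodeError:
    pass
```

The ceremony is real, but it is also the point: every place where bytes become text is visible in the code, and every invalid byte sequence fails there instead of propagating.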
A poem I wrote at Christmastime when I was 13
They will stand beside you
When all things are good.
And in the times when things are bad
Beside you they have stood.
They always tell the truth to you
As every good friend must
And they are reliable:
Friends you always trust.
They never will say nasty things
About the clothes you wear
They'll stand up for you against others
When you're not there.
You can always trust your friends
To hold your place in queues.
They'll always tell you "You played well",
Even if you lose.
Always keeping by your side:
Friendship never ends.
Yet, after all, we're only human:
Who has friends?
This entry was originally posted at http://marnanel.dreamwidth.org/383502.html. Please comment there using OpenID.
Creating a personal OpenVPN service
I've finally decided to create my own VPN service. The main reasons are to guarantee private browsing and to make sure I'm using a trusted service, 100% audited… by me.
The chosen setup is one created by Kyle Manna: https://github.com/kylemanna/docker-openvpn/ Thanks, Kyle!
In this case we're using CentOS 7, but since docker-compose isn't available for it I had to backport it, and you can find it in a dedicated repository.
cd /etc/yum.repos.d
wget https://copr.fedorainfracloud.org/coprs/olea/docker-compose/repo/epel-7/olea-docker-compose-epel-7.repo
yum install -y docker docker-compose
yum install -y docker-lvm-plugin.x86_64 docker-latest.x86_64
yum upgrade -y
groupadd docker
usermod -G docker -a USUARIO
echo "VG=sys" > /etc/sysconfig/docker-storage-setup
docker-storage-setup
systemctl enable docker
systemctl start docker
If docker managed to start, then it's probably ready to get to work.
Obviously you also need to configure the DNS record for VPN.MISERVIDOR.COM on the corresponding server.
Getting down to business:
mkdir servicio-VPN.MISERVIDOR.COM
cd servicio-VPN.MISERVIDOR.COM
git clone https://github.com/kylemanna/docker-openvpn
cat <<EOF > docker-compose.yml
version: '2'
services:
  openvpn:
    build:
      context: docker-openvpn/
    cap_add:
      - NET_ADMIN
    image: Mi-ID/openvpn
    ports:
      - "1194:1194/udp"
    restart: always
    volumes:
      - ./openvpn/conf:/etc/openvpn
EOF
And continuing with the indicated instructions:
docker-compose run --rm openvpn ovpn_genconfig -u udp://VPN.MISERVIDOR.COM
docker-compose run --rm openvpn ovpn_initpki
docker-compose up -d openvpn
docker-compose run --rm openvpn easyrsa build-client-full USUARIO nopass
docker-compose run --rm openvpn ovpn_getclient USUARIO > USUARIO.ovpn
Personally, I've run into the problem several times that the NetworkManager configuration GUI is unable to import the cryptographic certificates when setting up a VPN connection by importing .ovpn files. After investigating it several times, I've concluded that it's due to a documented bug which, in my case, is not fixed in NetworkManager-openvpn-gnome-1.0.8-2.fc23 but is in NetworkManager-openvpn-gnome-1.2.4-2.fc24.
If you still run into this problem, there are two alternatives: either upgrade to a recent version of NM, or connect manually from the CLI:
sudo /usr/sbin/openvpn --config USUARIO.ovpn
I released a new version of BDGui, a GUI for displaying information about block devices, RAID, LVM, etc.
It runs only on Linux.
I also discovered a free continuous integration site.
A nice and useful site; sadly the free version has only two Linux images (Ubuntu 12.04 and 14.04) and one OS X image.
Three simple points to change someone's attitude
[Content note: mention of road accidents, and death of children]
Now more than ever, we on the Left need to change people’s attitudes towards the poor and marginalised. Persuasion has three parts:
(Why should you listen to me about this? Because I’m a writer and I study the structure of stories. Also, because this pattern has stood the test of time: it was set out by Aristotle in 350BCE.)
Facts are vitally important, and they’re what we do best. We have fact-checkers and myth-busting websites coming out of our ears. But people don’t listen to facts alone.
Stories, worldviews, are the framework for facts. If someone’s been sold a lie (“immigrants are taking all the jobs and houses”), they’re sold a story to put it in (which starts with “there’s a shortage of jobs and houses”). Then when you point out the number of houses standing empty, it doesn’t fit the story. So it gets ignored, or twisted into something you didn’t say. The answer to false stories is to spread true stories.
Not convinced? Let me tell you a story.
Once upon a time in 1964, the road safety people ran adverts saying “Don’t drink and drive”. They gave statistics. But the adverts weren’t very effective. So they tried a new idea.
The existing story was “Driving drunk is difficult, so I’m more of a man if I can do it.” The new adverts gave them a better story: Here’s a kid who can’t sleep because her father killed someone. Kill your speed, not a child.
And why should we believe what we’re hearing? Because we’re hearing it from actual people who had been injured in road accidents. Even though the people were fictional characters, it still persuades. And now drinking and driving deaths are one-fifth of what they were 40 years ago.
Persuaded? Share it and persuade your friends.
When I wrote my previous post about Sourceforge, things were looking pretty grim for the site; I (rightly, I think) slammed them for some pretty atrocious security practices.
I invited the SourceForge ops team to get in touch about it, and, to their credit, they did. Even better, they didn't ask for me to take down the article, or post some excuse; they said that they knew there were problems and they were working on a long-term plan to address them.
This week I received an update from said ops, saying:
We have converted many of our mirrors over to HTTPS and are actively working on the rest + gathering new ones. The converted ones happen to be our larger mirrors and are prioritized.
We have added support for HTTPS on the project web. New projects will automatically start using it. Old projects can switch over at their convenience as some of them may need to adjust it to properly work. More info here:
Coincidentally, right after I received this email, I installed a macOS update, which means I needed to go back to Sourceforge to grab an update to my boot manager. This time, I didn't have to do any weird tricks to authenticate my download: the HTTPS project page took me to an HTTPS download page, which redirected me to an HTTPS mirror. Success!
(It sounds like there might still be some non-HTTPS mirrors in rotation right now, but I haven't seen one yet in my testing; for now, keep an eye out for that, just in case.)
If you host a project on Sourceforge, please go push that big "Switch to HTTPS" button. And thanks very much to the ops team at Sourceforge for taking these problems seriously and doing the hard work of upgrading their users' security.
A Walk in the Woods
I found this tale of Bill Bryson walking the Appalachian Trail (rather incompetently I must say) immensely entertaining. Well written, interesting, generally exaggerated, and leaving me with a desire to get out somewhere and walk some more. I'd strongly recommend this book to people who already care about bush walking, but have found other pursuits to occupy most of their spare time.
Tags for this post: book bill_bryson travel america bush walking
Related posts: Exploring for a navex; Where did SUVs come from?; In A Sunburned Country; Richistan; Why American tech companies seem to get new technology better than Australian ones...; I should try to make it to then 911 exhibit
Accessing 3GPP specs in PDF format
When you work with GSM/cellular systems, the definitive resource is the specifications. They were originally released by ETSI, and later by 3GPP.
The problem starts with the fact that there are separate numbering schemes. Everyone in the cellular industry I know always uses the GSM/3GPP TS numbering scheme, i.e. something like 3GPP TS 44.008. However, ETSI assigns its own numbers to the specs, like ETSI TS 144008. Now in most cases, it is as simple as removing the '.' and prefixing a '1' at the beginning. However, that's not always true and there are exceptions such as 3GPP TS 01.01 mapping to ETSI TS 101855. To make things harder, there doesn't seem to be a machine-readable translation table between the spec numbers, but there's a website for spec number conversion at http://webapp.etsi.org/key/queryform.asp
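The common-case mapping is trivial to script. A sketch in Python (the function name is mine), with the caveat from above that exceptions like TS 01.01 mean the result still has to be verified against the ETSI query form:

```python
def threegpp_to_etsi_guess(spec: str) -> str:
    """Naive mapping from a 3GPP TS number to an ETSI TS number:
    drop the dot and prefix a '1'. This covers most specs, but NOT
    exceptions such as 3GPP TS 01.01 -> ETSI TS 101855, so the
    result must still be checked against the ETSI query form."""
    return "1" + spec.replace(".", "")

assert threegpp_to_etsi_guess("44.008") == "144008"
assert threegpp_to_etsi_guess("04.08") == "10408"   # NOT the real ETSI number
```

Without a machine-readable translation table from 3GPP or ETSI, a heuristic like this is about the best a script can do.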
When I started to work on GSM related topics somewhere between my work at Openmoko and the start of the OpenBSC project, I manually downloaded the PDF files of GSM specifications from the ETSI website. This was a cumbersome process, as you had to enter the spec number (e.g. TS 04.08) in a search window, look for the latest version in the search results, click on that and then click again for accessing the PDF file (rather than a proprietary Microsoft Word file).
At some point a poor girlfriend of mine was kind enough to do this manual process for each and every 3GPP spec, and then create a corresponding symbolic link so that you could type something like evince /spae/openmoko/gsm-specs/by_chapter/44.008.pdf into your command line and get instant access to the respective spec.
However, of course, this gets out of date over time, and by now almost a decade has passed without a systematic update of that archive.
To the rescue, 3GPP started quite some time ago to not only provide the obnoxious M$ Word DOC files, but also deep links to ETSI. So you could go to http://www.3gpp.org/DynaReport/44-series.htm and then click on 44.008, and one further click later you had the desired PDF, served by ETSI (3GPP apparently never provided PDF files).
So since the usability of this 3GPP specification resource had been artificially crippled, I was sufficiently annoyed to come up with a solution:
It's such a waste of resources to have to download all those files and then write a script using pdfgrep+awk to re-gain the same usability that the 3GPP chose to remove from their website. Now we can wait for ETSI to disable indexing/recursion on their server, and easy and quick spec access would be gone forever :/
Why does nobody care about efficiency these days?
If you're also an avid 3GPP spec reader, I'm publishing the rather trivial scripts used at http://git.osmocom.org/3gpp-etsi-pdf-links
If you have contacts to the 3GPP webmaster, please try to motivate them to reinstate the direct PDF links.
Don’t Stop Tweeting On My Account
Shortly after my previous post, my good friend David Reid not-so-subtly subtweeted me for apparently yelling at everyone using a twitter thread to be quiet and stop expressing themselves. He pointed out:
Threads are being used to say things which might not otherwise be said.— dreid (@dreid) December 14, 2016
If you only see threads from blowhards stop following blowhards.
This is the truth. There are, indeed, important, substantial essays being written on Twitter, in the form of threads. If I could direct your attention to one that’s probably a better use of your time than what I have to say here, this is a great example:
Who radicalized Dylan Roof? And why aren't more people asking this question?— Christiana A Mbakwe (@Christiana1987) December 13, 2016
Moreover, although the twitter character limit can inhibit the expression of nuance, just having a blog is not a get-out-of-jail-free card for clumsy hot takes:
@glyph 🎶 isn't it ironic 🎶 that your blog post failed capture nuance 🎶— dreid (@dreid) December 14, 2016
I screwed this one up. I’m sorry.
The point I was trying to primarily focus on in that post is that a twitter thread demands a lot of attention, and that publishers exploiting that aspect of the medium in order to direct more attention to themselves1 are leveraging a limited resource2 and thereby externalizing their marketing costs3. Further, this idiom was invented by4, and has extensively been used by people who don’t really need any more attention than they already have.
If you’re an activist trying to draw attention to an important cause, or a writer trying to find your voice, and social media (or twitter threads specifically) has helped you do that, I am not trying to scold you for growing an audience on - and deriving creative energy from - your platform of choice. If you’re leveraging the focus-stealing power of twitter threads to draw attention to serious social issues, maybe you deserve that attention. Maybe in the face of such issues my convenience and comfort and focus are not paramount. And for people who really don’t want that distraction, the ‘unfollow’ button is, obviously, only a click away.
That’s not to say I think that relying on social media exclusively is a good idea for activists; far from it. I think recent political events have shown that a social media platform is often a knife that will turn in your hand. So I would encourage pretty much anyone trying to cultivate an audience to consider getting an independent web presence where you can host more durable and substantive collections of your thoughts, not because I don’t want you to annoy me, but because it gives you a measure of independence, and avoids a potentially destructive monoculture of social media. Given the mechanics of the technology, this is true even if you use a big hosted service for your long-form stuff, like Medium or Blogger; it’s not just about a big company having a hold on your stuff, but about how your work is presented based on the goals of the product presenting it.
However, the exact specifics of such a recommendation are an extremely complex set of topics, and not topics that I’m confident I’ve thought all the way through. There are dozens more problems with twitter threads for following long-form discussions and unintentionally misrepresenting complex points. Maybe they’re really serious, maybe not.
As far as where the long-form stuff should go, there are very good reasons to want to self-host things, and very good reasons why self-hosting is incredibly dangerous, especially for high-profile activists and intellectuals. There are really good reasons to engage with social media platforms and really good reasons to withdraw.
This is why I didn’t want to address this sort of usage of twitter threading; I didn’t want to dive into the sociopolitical implications of the social media ecosystem. At some point, you can expect a far longer post from me about the dynamics of social media, but it is going to take a serious effort to do it justice.
A final thought before I hopefully stop social-media-ing about social media for a while:
One of the criticisms that I received during this conversation, from David as well as others who contacted me privately, is that I’m criticizing Twitter from a level of remove; implying that since I’m not fully engaged with the medium I don’t really have the right (or perhaps the expertise) to be critical of it. I object to that.
In addition to my previously stated reasons for my reduced engagement - which mostly have to do with personal productivity and creative energy - I also have serious reservations about the political structure of social media. There’s a lot that’s good about it, but I think the incentive structures around it may mean that it is, ultimately, a fundamentally corrosive and corrupting force in society. At the very least, a social media platform is a tool which can be corrosive and corrupting and therefore needs to be used thoughtfully and intentionally to minimize the harm that it can do while retaining as many of its benefits as possible.
I don’t have time to fully explore the problems that I’m alluding to now5 but at this point if I wrote something like “social media platforms are slowly destroying liberal democracy”, I’m not even sure if I’d be exaggerating.
When I explain that I have these concerns, I’m often asked the obvious follow-up: if social media is so bad why don’t I just stop using it entirely?
The problem is, social media companies effectively control access to an enormous audience, which is now difficult to reach without their intermediation. I have friends, as we all probably do, that are hard for me to contact via other channels. An individual cannot effectively boycott a communication tool, and I am not even sure yet that “stop using it” is the right way to combat its problems.
So, I’m not going to stop communicating with my friends because I have concerns about the medium they prefer, and I’m also not going to stop thinking or writing about how to articulate and address those concerns. I think I have as much a right as anyone to do that.
... even if they’re not doing it on purpose ... ↩
the reader’s attention ↩
interrupting the reader repeatedly to get them to pay attention rather than posting stuff as a finished work, allowing the reader to make efficient use of their attention ↩
I’m aware that many people outside of the white male tech nerd demographic - particularly women of color and the LGBTQ community - have made extensive use of the twitter thread for discussing substantive issues. But, as far as my limited research has shown (although the difficulty of doing such research is one of the problems with Twitter), Marc Andreessen was by far the earliest pioneer of the technique and by far its most prominent advocate. I’d be happy for a correction on this point, however. ↩
The draft in progress, which I've been working on for a month, is already one of the longest posts I’ve ever written and it’s barely half done, if that. ↩
A Blowhard At Large
Blogs are free. Put your ideas on your blog.
As Eevee rightfully points out, however, if you’re a massive blowhard in your Tweetstorms, you’re likely a massive blowhard on your blog, too. So why care about the usage of Twitter threads vs. Medium posts vs. anything else for expressions of mediocre ideas?
Here’s the difference, and here’s why my problem with them does have something to do with the medium: if you put your dull, obvious thoughts in a blog2, it’s a single entity. I can skim the introduction and then skip it if it’s tedious, plodding, derivative nonsense.3
Tweetstorms™, as with all social media innovations, however, increase engagement. Affordances to read little bits of the storm abound. Ding. Ding. Ding. Little bits of an essay dribble in, interrupting me at suspiciously precisely calibrated 90-second intervals, reminding me that an Important Thought Leader has Something New To Say.
The conceit of a Tweetstorm™ is that they’re in this format because they’re spontaneous. The hottest of hot takes. The supposed reason that it’s valid to interrupt me at 30-second intervals to keep me up to date on tweet 84 of 216 of some irrelevant commentator’s opinion on the recent trend in chamfer widths on aluminum bezels is that they’re thinking those thoughts in real time! It’s an opportunity to engage with the conversation!
But of course, this is a pretense; transparently so, unless you imagine someone could divine the number after the slash without typing it out first.
The “storm” is scripted in advance, edited, and rehearsed like any other media release. It’s interrupting people repeatedly merely to increase their chances of clicking on it, or reading it. And no Tweetstorm author is meaningfully going to “engage” with their readers; they just want to maximize their view metrics.
Even if I cared a tremendous amount about the geopolitics of aluminum chamfer calibration, this is a terrible format to consume those thoughts in. Twitter’s UI is just atrocious for meaningful consideration of ideas. It’s great for pointers to things (like a link to this post!) but actively interferes with trying to follow a thought to its conclusion.
There’s a whole separate blog in there about just how gross pretty much all social-media UI is, and how much it goes out of its way to show you “what you might have missed”, or things that are “relevant to you” or “people you should follow”, instead of just giving you the actual content you requested from their platform. It’s dark patterns all the way down, betraying the user’s intent for those of the advertisers.
My tone here probably implies that I think everyone doing this is being cynically manipulative. That’s possibly the worst part - I don’t think they are. I think everyone involved is just being slightly thoughtless, trying to do the best that they can in their perceived role. Blowhards are blowing, social media is making you be more social and consume more media. All optimizing for our little niche in society. So unfortunately it’s up to us, as readers, to refuse to consume this disrespectful trash, and pipe up about the destructive aspects of communicating that way.
Personally I’m not much affected by this, because I follow hardly anyone4, I don’t have push enabled, and I would definitely unfollow (or possibly block) someone who managed to get retweeted at such great length into my feed. But a lot of people who are a lot worse than I am about managing the demands on their attention get sucked into the vortex that Tweetstorms™ (and related social-media communication habits) generate.
Attention is a precious resource; in many ways it is the only resource that matters for producing creative work.
But of course, there’s a delicate balance - we must use up that same resource to consume those same works. I don’t think anyone should stop talking. But they should mindfully speak in places and ways that are not abusive of their audience.
This post itself might be a waste of your time. Not everything I write is worth reading. Because I respect my readers, I want to give them the opportunity to ignore it.
And that’s why I don’t use Tweetstorms™5.
Deaf/HoH symbol in Unicode
I'm working on a proposal to add the [dD]eaf/HoH symbol to Unicode. Help, encouragement, and suggestions are very welcome.
The symbol I mean is in image 1 here:
We should probably also include the induction loop symbol (number 2 in the image).
This proposal is about encoding the symbol as an ordinary character: it isn't quite the same thing as an emoji. But some characters can alternatively display as emoji, and in this case I think it should be white on blue, as in 3 above.
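For existing dual-presentation characters, the text/emoji switch is made with the variation selectors U+FE0E and U+FE0F. A small Python illustration (U+263A is just a stand-in here, since the Deaf/HoH symbol has no code point yet):

```python
# U+FE0E requests text presentation, U+FE0F requests emoji presentation.
TEXT_VS = "\uFE0E"
EMOJI_VS = "\uFE0F"

base = "\u263A"              # WHITE SMILING FACE, a dual-presentation character
as_text = base + TEXT_VS     # renders as a monochrome text glyph
as_emoji = base + EMOJI_VS   # renders as a colour emoji glyph

# The variation selector is a separate code point appended to the base:
assert len(as_text) == 2 and len(as_emoji) == 2
print(as_text, as_emoji)
```

If the proposed character were encoded with an emoji variation sequence, the white-on-blue rendering mentioned above would be its U+FE0F form.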
At the moment, what we need most of all is examples of the symbols used in running text, as a symbol rather than a diagram off to one side. Here's the sort of thing I mean:
...except that I just made that up, and I'm looking for real examples. Manuals and so on might be good places to look. Can you help?
If you want to see a finished version of the sort of proposal I'm writing, take a look at the proposal to encode power symbols in Unicode. That proposal successfully included the power symbol characters about two years ago. The images in the section called "Evidence of Use in Running Text" are the sort of thing I'm asking for.
This entry was originally posted at http://marnanel.dreamwidth.org/383146.html. Please comment there using OpenID.
Devember 2016 – Day 10 and 11 – Router Builder Start
More cleanup today, but more importantly, I started working on the Router Builder. This is the magic part of it all. The part that makes it possible to create routers and merge together commands without actually having to write any code. So it’s getting close to being even more fun.
Happy to say as well that the build setup I put together yesterday on Devember 10 worked on my Windows machine without issue. So I can successfully run this on both my Windows and Mac machines. That’s a plus.
Note: spending an hour working on this might not seem like much, but what I’m able to accomplish is still encouraging.
I read this book based on the recommendation of Richard Jones, and it's really, really good. A little sci-fi, a little film noir, and very engaging. I also like that bad things happen to good people in the story -- it's gritty and unclean enough to be believable.
I don't want to ruin the book for anyone, but I really enjoyed this and have already ordered the sequels. Oh, and there's a Netflix series based off these books that I'll now have to watch too.
Tags for this post: book james_sa_corey colonization space_travel mystery aliens first_contact
Related posts: Marsbound; Downbelow Station; The Martian; The Moon Is A Harsh Mistress; Starbound; Rendezvous With Rama
Improvements for newer Canon SELPHY models
About a year or so ago I added support for the newer Canon SELPHY printers (CP820, CP910, CP1000, and CP1200) into Gutenprint. Despite using the same media kits as their older siblings, under their plastic bodies they sported a new print engine that worked fairly differently.
Slightly different print sizes, a Y'CbCr image format, and, surprisingly, they appeared to be sane USB Printer class models and not require a special backend to handle communications.
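The Y'CbCr detail is worth a sketch. Which exact matrix and range the SELPHY engine expects isn't stated here, so this assumes a full-range BT.601 conversion; the function name is mine, not Gutenprint's:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to full-range BT.601 Y'CbCr.

    Assumption: plain full-range BT.601 coefficients; the actual
    Gutenprint code may use a different matrix or range.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128

    def clamp(v):
        # Keep results in the 8-bit range the printer presumably wants.
        return max(0, min(255, round(v)))

    return clamp(y), clamp(cb), clamp(cr)
```

White maps to (255, 128, 128) and black to (0, 128, 128), which is a quick sanity check that the chroma channels are centered correctly.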
Fast forward to last week, and it turns out that was a premature assessment. While the printers didn't require any special handholding to print a single image, they would lock up if you sent two jobs back-to-back. Canon still can't implement proper flow control.
Time to break out the sniffer and capture some multi-page jobs! A quick flurry of hacking later, and the 'canonselphyneo' backend was born. It brings along sane flow control, status reporting, and error detection on par with what the older SELPHY models enjoy.
I also discovered the 'L' print size was incorrect. All of this will be in Gutenprint 5.2.12-pre5 or newer, but the current backend can always be grabbed from my selphy_print repository.
Oh, as I write this, I don't have the USB IDs for the CP820 or CP1000 models. I need those so they'll be recognized by the backend. Holler if you have one!
Devember 2016 – Day 8 and 9 – Chrome Caches AJAX Response
I didn’t blog yesterday here, but I did tweet, and I did write code. It was late, but I still got stuff done.
Today was an interesting day. I’m still coding, but I came across an interesting bug I’d like to share.
So, in the code I have now, depending on the request’s Accept header, it will return either HTML or JSON. It’s the same endpoint, but depending on the request made, it will return different results. The result is the same information, just presented differently. One way for humans, another for computers.
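The post doesn't show its server code, but the dispatch it describes can be sketched like this (a minimal stand-in; real content negotiation would parse q-values, and the HTML rendering here is purely hypothetical):

```python
import json

def negotiate(accept_header, data):
    """Return (content_type, body) for the same data, chosen by the
    request's Accept header -- HTML for humans, JSON for machines.

    Sketch only: checks for a substring rather than fully parsing
    the Accept header.
    """
    if "application/json" in accept_header:
        return "application/json", json.dumps(data)
    # Default to HTML for ordinary browser navigation.
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in data.items())
    return "text/html", f"<ul>{rows}</ul>"
```

The same endpoint, hit twice with different Accept headers, yields two different bodies -- which is exactly what sets up the caching confusion described below.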
Now, in Firefox and Safari, this works just fine. If you go to the page, everything loads up as you’d expect. If you go back in the browser, and then go forward again, the page that is displayed is the same page you’d expect to see. The HTML result.
But in Chrome, it doesn’t work like this. Here is what happens in my case.
First, you make a request to a page /foo/bar. This request has an Accept header that prefers HTML. Then, from that page, an AJAX request is made to the same URL, this time with an Accept header asking for JSON. In this case, it's the same URL, but different requests. This second request returns JSON as expected.
Now, you click the Back button, and go back a page. Then you click the Forward button, and instead of seeing the HTML page as you’d expect, you see the JSON result.
Now, even though Chrome has this bug, you can work around it. When you return the JSON response, you can just send back a response that includes the following headers.
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Expires: Thu, 18 Nov 1971 01:00:00 GMT
This will prevent the AJAX response from being cached, meaning the last cached item will be the HTML page.
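A sketch of applying that workaround to the JSON branch only (framework-neutral; the header values are exactly the ones above, the function name is mine):

```python
# The anti-caching headers from the workaround above.
NO_CACHE_HEADERS = {
    "Cache-Control": "no-store, no-cache, must-revalidate, post-check=0, pre-check=0",
    "Pragma": "no-cache",
    "Expires": "Thu, 18 Nov 1971 01:00:00 GMT",
}

def finalize_headers(content_type, headers):
    """Merge the anti-caching headers into JSON responses only, so the
    browser never keeps the AJAX body as the cached copy of the URL.

    HTML responses are left alone and stay cacheable as usual.
    """
    headers = dict(headers)  # don't mutate the caller's dict
    if content_type == "application/json":
        headers.update(NO_CACHE_HEADERS)
    return headers
```

Since only the JSON response carries the no-cache headers, the HTML page remains the last cacheable response for the URL, which is what Back/Forward should replay.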
So, in thinking about this, I was wondering how else I could use this. After all, if I make a request to the same URL behind the scenes and cache it, I can change what's cached in the browser. Is there ever a case where you'd want to change what was cached without showing the user right away? I don't know. But it's interesting. And it would only work on Chrome.
My Plover steno dictionary
Here are some interesting definitions from my personal Plover steno dictionary.
I have a habit of setting up proper nouns with -LZ on the right hand. (It's unlikely to clash with anything; there's no reason beyond that.) So for example:
Devember 2016 – Day 7
Busy night, I had to help on-call people in the middle of my coding, so it delayed things. Still, I put in my hour. Later than I wanted, but here it is. Started adding Router stuff. Learning about React in the
Open Hardware IEEE 802.15.4 adapter "ATUSB" available again
As a spin-off to that, the ATUSB device was designed: A general-purpose open hardware (and FOSS firmware + driver) IEEE 802.15.4 adapter that can be plugged into any USB port.
This adapter received a Linux kernel driver written by Werner Almesberger and Stefan Schmidt, which was eventually merged into mainline Linux in May 2015 (kernel v4.2 and later).
Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver maintainer) approached me about the situation that ATUSB hardware was frequently asked for, but currently unavailable in its physical/manufactured form. As we run a shop with smaller electronics items for the wider Osmocom community at sysmocom, and we also frequently deal with contract manufacturers for low-volume electronics like the SIMtrace device anyway, it was easy to say "yes, we'll do it".
As a result, ready-built, programmed and tested ATUSB devices are now finally available from the sysmocom webshop.
Note: I was never involved with the development of the ATUSB hardware, firmware or driver software at any point in time. All credits go to Werner, Stefan and other contributors around ATUSB.