Recent blog entries

27 May 2015 zanee   » (Journeyer)

Haven't posted in a little while because I've been busy being a father, a husband, handling contracts, and other things. Recently I've been doing work with AWS at a company that isn't directly free-software related. Lately I've been building an automated image build pipeline, where you can kick off and manage images in a consistent way, as well as exposing an endpoint that other internal applications can use. Pretty straightforward, non-exciting stuff.

Today I came across this tech paper over at Google on automated image building with Jenkins and Kubernetes. The process described there is overly convoluted; creating an automated image pipeline need not be.

I'll provide a short synopsis of a better way to approach this problem and hopefully follow up with something a little bit more concrete when I get the chance to get my blog back up:

Realistically, the problem with images is three-fold. One, it takes a very long time to provision a standard image, so you're dealing with time. Two, a provisioned image eventually becomes stale, meaning the software on it needs security patches, bugfixes, and so on. Three, once you have more than two or three images, you need a lifecycle for retiring, promoting, validating, and testing them.

So your build process has to revolve around the lifecycle of whatever you need an image for. The best way to achieve this is to completely decouple the build process itself, and the best way to do that is to use a message broker. You have a message broker; in front of it you build a web client that is primarily used to publish what you'd like your image to contain; and finally you have consumer processes sitting in the background, ready to chew on the workflow of building an image.
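
To make that concrete, here's a rough sketch of what the consumer side could look like, assuming RabbitMQ as the broker and the pika library; the queue name and message fields are just made up for illustration:

import json
import pika  # assumed RabbitMQ client library

def handle_build_request(channel, method, properties, body):
    # A published message describes what the image should look like,
    # e.g. which provisioning steps/playbook to run; field names are illustrative.
    request = json.loads(body)
    print("building image from spec:", request.get("playbook"))
    # ... kick off the actual build here, handling failures gracefully ...
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="image-builds", durable=True)
channel.basic_consume(queue="image-builds", on_message_callback=handle_build_request)
channel.start_consuming()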

There is obviously a lot more to it than this (what's in an image? how do we manage these images? retire them? archive them?), and I'll hopefully get some time to expound on all of this as I did above. However, the most anyone should have to care about are the steps involved in provisioning, meaning "this is what I want installed on my image" or "this is what I want my image to look like". In the above example that would be whatever the chef-solo steps involve. In my specific case I'm using Ansible (because it's better than Chef; yeahhhhh wanna fight?!). You also don't want to poll GitHub because, well.. why? Even if you wanted a new image whenever your repo changed, polling would be an inefficient way to drive a build pipeline. What happens if you publish a very trivial change, do you do a full rebuild just because of it? No, you don't want to do that, so just use git webhooks. I'm not sure, but it looks like Hashicorp's Atlas has a similar approach. Anyway, combining webhooks with publishing a simple message to a broker and letting a consumer process do the work is a better approach, especially because things will definitely fail in an image-building pipeline, often enough that you simply need a way to handle failure gracefully. All this combined with the fact that no one wants to sit around watching the build output of software installing makes polling for not a fun time.
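
And the publishing side can be tiny: a webhook receiver just turns a push event into a message on the broker and gets out of the way. This is only a sketch, using Flask and pika as stand-ins, with the route, queue name and payload fields being my own assumptions:

import json
import pika
from flask import Flask, request

app = Flask(__name__)

@app.route("/hooks/github", methods=["POST"])
def github_hook():
    event = request.get_json(force=True)
    # Publish a small build request instead of rebuilding directly on every push.
    spec = {"repo": event.get("repository", {}).get("full_name"),
            "ref": event.get("ref")}
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="image-builds", durable=True)
    channel.basic_publish(exchange="", routing_key="image-builds",
                          body=json.dumps(spec))
    conn.close()
    return "queued", 202

if __name__ == "__main__":
    app.run(port=8080)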

So yeah, let me get my shit together and post a simpler approach you can build with Packer, RabbitMQ, some Python publisher/consumer code, Ansible and GitHub webhooks (if you're using GitHub). I'll do it with AWS and GCE. I can't link to a repo because it's private, unfortunately, BUT the method itself can be disclosed.

27 May 2015 AlanHorkan   » (Master)

SK1 Print Design adding support for Palettes (Colour Swatches)

SK1 Print Design is an interesting project. They found the vector graphics program Sketch was useful to their business and maintained their own customized version, which eventually became a project all of its own. I'm not involved with SK1 Print Design myself but I do follow their newsfeed on Facebook, where they regularly post information about their work.

They have added import and export support for a variety of colour palettes, including SOC (StarOffice Colours, i.e. the OpenDocument standard used by OpenOffice.org and LibreOffice), CorelDraw XML palettes, and more. For users who already have CorelDraw this should allow them to reuse their existing Pantone palettes.

They are also continuing their work to merge their SK1 and PrintDesign branches. The next release seems very promising.

Syndicated 2015-05-27 14:01:55 from Alan Horkan

27 May 2015 mjg59   » (Master)

This is not the UEFI backdoor you are looking for

This is currently the top story on the Linux subreddit. It links to this Tweet which demonstrates using a System Management Mode backdoor to perform privilege escalation under Linux. This is not a story.

But first, some background. System Management Mode (SMM) is a feature in most x86 processors since the 386SL back in 1990. It allows for certain events to cause the CPU to stop executing the OS, jump to an area of hidden RAM and execute code there instead, and then hand off back to the OS without the OS knowing what just happened. This allows you to do things like hardware emulation (SMM is used to make USB keyboards look like PS/2 keyboards before the OS loads a USB driver), fan control (SMM will run even if the OS has crashed and lets you avoid the cost of an additional chip to turn the fan on and off) or even more complicated power management (some server vendors use SMM to read performance counters in the CPU and adjust the memory and CPU clocks without the OS interfering).

In summary, SMM is a way to run a bunch of non-free code that probably does a worse job than your OS does in most cases, but is occasionally helpful (it's how your laptop prevents random userspace from overwriting your firmware, for instance). And since the RAM that contains the SMM code is hidden from the OS, there's no way to audit what it does. Unsurprisingly, it's an interesting vector to insert malware into - you could configure it so that a process can trigger SMM and then have the resulting SMM code find that process's credentials structure and change it so it's running as root.

And that's what Dmytro has done - he's written code that sits in that hidden area of RAM and can be triggered to modify the state of the running OS. But he's modified his own firmware in order to do that, which isn't something that's possible without finding an existing vulnerability in either the OS or (more recently, and) the firmware. It's an excellent demonstration that what we knew to be theoretically possible is practically possible, but it's not evidence of such a backdoor being widely deployed.

What would that evidence look like? It's more difficult to analyse binary code than source, but it would still be possible to trace firmware to observe everything that's dropped into the SMM RAM area and pull it apart. Sufficiently subtle backdoors would still be hard to find, but enough effort would probably uncover them. A PC motherboard vendor managed to leave the source code to their firmware on an open FTP server and copies leaked into the wild - if there's a ubiquitous backdoor, we'd expect to see it there.

Still, the fact that system firmware is almost entirely closed remains a problem in engendering trust - the means to inspect large quantities of binary code for vulnerabilities are beyond the vast majority of skilled developers, let alone the average user. Free firmware such as Coreboot gets part of the way to solving this, but still doesn't solve the case of the pre-flashed firmware being backdoored and then installing that backdoor into any new firmware you flash.

This specific case may be based on a misunderstanding of Dmytro's work, but figuring out ways to make it easier for users to trust that their firmware is tamper free is going to be increasingly important over the next few years. I have some ideas in that area and I hope to have them working in the near future.


Syndicated 2015-05-27 06:38:17 from Matthew Garrett

26 May 2015 marnanel   » (Journeyer)

trans unicode erratum

trans tech folk, other trans folk, other tech folk:

I'm planning to submit an erratum to Unicode. U+26A5, U+26A6, and U+26A7 all have the informative alias "transgendered sexuality".

code chart
(from http://www.unicode.org/charts/PDF/U2600.pdf. Original encoding proposal: http://www.unicode.org/L2/L2003/03364-n2663-gender-rev.pdf)

I think the first word should be "transgender", and the second word should be something other than "sexuality", but I'm not sure what.

The questions I have are:
1) do you think asking for this change is sensible?
2) what do you think the informative alias "transgendered sexuality" should be changed to?
3) can you think of good sources I can cite when I'm explaining why the change should be made?

If you saw this on Facebook or Twitter or wherever, feel free to answer there and I'll copy the answer here onto Dreamwidth/LJ.

love and hugs.

This entry was originally posted at http://marnanel.dreamwidth.org/334879.html. Please comment there using OpenID.

Syndicated 2015-05-26 22:39:20 (Updated 2015-05-26 22:55:40) from Monument

26 May 2015 bagder   » (Master)

picturing curl’s future

development graph

There will be more stuff over time in the cURL project. Exactly which stuff, and how long everything takes, we don’t know. It depends largely on who works on what and how much time those persons can spend implementing the stuff they work on…

I suspect we might be able to do things slightly faster over time, which is why the red arrow isn’t just a straight line.

I drew this little picture, inspired by discussions with friends after a talk I did about curl and how development works in an open source project such as this. We know we will work on things that will improve the products, but we can’t see exactly what very far in advance. I tweeted this picture a few days ago, and it turned out to be very popular.

Syndicated 2015-05-26 20:43:49 from daniel.haxx.se

26 May 2015 mones   » (Journeyer)

Downgrading to stable

This weekend I had to downgrade my home desktop to stable thanks to a strange Xorg bug which I've been unable to identify among the current ones. Both testing and sid versions seem affected and all you can see after booting is this:


The system works fine otherwise and can be accessed via ssh, but restarting kdm doesn't fix it; it just changes the pattern. Anyway, since explaining to a toddler that he cannot watch his favourite YouTube cartoons because the computer screen has suddenly become an abstract artwork is not easy, I quickly decided to downgrade.

Downgrading went fine, using APT pinning to pin stable and apt-get update/upgrade/dist-upgrade after that, but today I noticed LibreOffice had stopped working with this message:

Warning: failed to launch javaldx - java may not function correctly
/usr/lib/libreoffice/program/soffice.bin: error while loading shared libraries: libreglo.so: cannot open shared object file: No such file or directory


All I found related to that was a forum post, which didn't help much (neither the original poster nor me). But it turned out the library was not missing; it was installed:

# locate libreglo.so
/usr/lib/ure/lib/libreglo.so


But that directory was not part of any ldconfig conf file, hence the fix was easy:

# echo '/usr/lib/ure/lib' > /etc/ld.so.conf.d/libreoffice-ure.conf
# ldconfig


And presto! libreoffice is working again :-)

Syndicated 2015-05-26 10:11:33 from Ricardo Mones

26 May 2015 bagder   » (Master)

2015 curl user poll analysis

My full 30 page document with all details and analyses of the curl user poll 2015 is now available. It shows details of all the questions, most of them with a comparison with last year’s survey. The write-ins are also full of good advice, wisdom and some signs of ignorance or unawareness.

I hope all curl hackers and others generally interested in the project can use my “report” to learn something about our users and our users’ view of the project and our products.

Let’s use this to guide us going forward.

keep-calm-and-improve-curl

Syndicated 2015-05-26 06:23:01 from daniel.haxx.se

25 May 2015 caolan   » (Master)

impress, right click, insert image





Added "insert image" to right click context menu in impress.

Syndicated 2015-05-25 14:11:00 (Updated 2015-05-25 14:11:12) from Caolán McNamara

24 May 2015 sye   » (Journeyer)

From Xi Xiaoxing's Temple Univ. profile:

Biographical Sketch

Xiaoxing Xi is the Department Chair and the Laura H. Carnell Professor of Physics at Temple University. Prior to joining Temple in 2009, he was a Professor of Physics and Materials Science and Engineering at the Pennsylvania State University. He received his PhD degree in physics from Peking University and Institute of Physics, Chinese Academy of Science, in 1987. After several years of research at the Karlsruhe Nuclear Research Center, Germany, Bell Communication Research/Rutgers University, and University of Maryland, he joined the Physics faculty at Penn State in 1995.

Research Interests

Xiaoxing Xi’s research focuses on the materials physics underlying the applications of oxide, boride, and transition metal dichalcogenide thin films, in particular epitaxial thin films and heterostructures at the nanoscale. Using various deposition techniques including Laser Molecular Beam Epitaxy and Hybrid Physical-Chemical Vapor Deposition, his group is currently working on the atomic layer-by-layer growth of artificial oxide heterostructures, magnesium diboride thin films for electronic and radio frequency cavity applications, iron pnictide superconductor thin films for phase sensitive measurements, and thin films of 2D layered materials transition metal dichalcogenides. He has published over 300 papers in refereed journals, book chapters, and conference proceedings, and holds three patents in the area of thin films of high-Tc superconductors and magnesium diboride.

Another arrest, as reported by The New York Times, that put an end to a scientist's public service. At whose expense, and at what cost?

23 May 2015 mentifex   » (Master)

It was fun but nevertheless sincere to post AI Has Been Solved on April Fool's Day ten years ago. Mentifex Strong AI always was and always will be an extremely serious AI Lab Project as described in December of 1998 by the Association for Computing Machinery. Mentifex AI is so extremely serious that it has meanwhile been ported into Russian and into German. The resulting Amazon Kindle e-book, Artificial Intelligence in German, has been reviewed with the maximum highest-possible five-star rating. Another e-book, InFerence, describes how the Mentifex AI Minds can think by automated reasoning with logical inference. The MindForth AI prior art program has been cited in a Google patent. Now finally at http://ai.neocities.org/AiSteps.html a third-generation (3G) Mentifex AI Mind is being created in Perl, and Netizens from all over the world are looking into the use of Unicode and Perl to create artificial intelligence in any programming language and in any natural human language. Ladies and gentlemen, start your AI engines.


23 May 2015 joolean   » (Journeyer)

gccgo and Autotools

As part of a personal project, I wrote a small Go program recently to marshal some data from a MySQL database as JSON over HTTP. I hadn't written a lot of (read: any) Go before this, and I found the process relatively painless and the implementation much more concise than the alternatives in Java, PHP, or Python. However, when I went to integrate my program with the rest of the Autotools build for my project, I ran into some obstacles, mostly related to the incomplete / broken support for Go in the current stable version of Autoconf (2.69). There are apparently fixes for the most obvious bugs coming in Autoconf 2.70, but that release has been in development for years; and even once released, to the best of my knowledge, it won't include important features like tests for available Go libraries. So I spent a few days working on a better Autotools Go integration, which I'm attaching below.

A few notes to make what follows a bit clearer:

First, Go already has a build system, more or less - it's called go. If you've got a library or a program called "mypackage/myproject/myprog," and you've put the source files in the right directory, you can run...

go build mypackage/myproject/myprog

...and wind up with a working, statically-linked executable. What is the right directory? It's the "src" directory under one of the directories in $GOPATH. The Go project has a good amount of documentation on the topic of code organization, but in brief, the layout of a directory that forms a component of your GOPATH should be:

  • pkg/[COMPILER_]$GOOS_$GOARCH/: Compiled library files go here
  • bin/: Compiled executable files go here
  • src/: Source code goes here, grouped by package

The go command is a front-end for the native go compilers (6g, 6l, 8g, 8l, etc.) as well as for gccgo (via the -compiler flag). It figures out where all the external dependencies are in your $GOPATH and passes flags and libraries to the compilers and linkers. If you run gccgo directly - that is, without using the go front-end - you have to assemble these arguments and paths yourself.

`go build' is the mainstream, nicely literate way of executing a build for .go files, and it's how most people familiar with the language will go about it. However, Autotools' existing support for Go unsurprisingly depends on direct interaction with gccgo, and I wouldn't expect that to change in any near-term releases. `go build' is convenient for fast, iterative builds; I find Autotools-based builds useful for packaging a source distribution for delivery to environments that need to be interrogated to locate dependencies. I wanted my project's build to work both for people doing `./configure && make' as well as for people running `go build'.

The files below provide:

  • A behind-the-scenes patch for the broken `AC_PROG_GO' in Autoconf 2.69
  • A new macro implementation - AC_CHECK_GOLIB - that finds and tests dependencies for Go programs and libraries, and which behaves similarly to pkg-config's `PKG_CHECK_MODULES'.
  • A working example of an Autotools build for a small project that depends on some common Go web service and database libraries.
m4/golib.m4

Provides the patch and macro implementation. Bundle this file with your project to apply the changes locally, or put it in `/usr/local/share/aclocal' to make it available system-wide.

# Undefine the broken _AC_LANG_IO_PROGRAM from autoconf/go.m4...

m4_ifdef([_AC_LANG_IO_PROGRAM(Go)], m4_undefine([_AC_LANG_IO_PROGRAM(Go)]))

# ...and redefine it to use a snippet of Go code that compiles properly.

m4_define([_AC_LANG_IO_PROGRAM(Go)],
[AC_LANG_PROGRAM([import ( "fmt"; "os" )],
[f, err := os.OpenFile("conftest.out", os.O_CREATE | os.O_WRONLY, 0777)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
if err = f.Close(); err != nil {
fmt.Println(err)
os.Exit(1)
}
os.Exit(0)
])])

#
# Support macro to check that a program that uses LIB can be linked.
#
# _AC_LINK_GOLIB(VARIABLE, LIB, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])

AC_DEFUN([_AC_LINK_GOLIB],[

# This little "embedded" shell function outputs a list of dependencies for a
# specified library beyond the set of standard imports.

AS_REQUIRE_SHELL_FN([ac_check_golib_nonstd_deps],
[AS_FUNCTION_DESCRIBE([ac_check_golib_nonstd_deps], [LINENO],
[Find the non-standard dependencies of the target library.])],
[for i in `$ac_check_golib_go list -f {{.Deps}} $[]1 | tr -d [[]]`; do
$ac_check_golib_go list -f {{.Standard}} $i | grep -q false
if [[ $? = 0 ]]; then
echo $i
fi
done])

ac_check_golib_paths=`$ac_check_golib_go list -compiler gccgo \
-f {{.Target}} $2`
ac_check_golib_seeds=`ac_check_golib_nonstd_deps $2`
ac_check_golib_oldseeds=""

# Compute the transitive closure of non-standard imports.

for ac_check_golib_seed in $ac_check_golib_seeds; do
ac_check_golib_oldseeds="$ac_check_golib_oldseeds $ac_check_golib_seed"
ac_check_golib_newseeds=`ac_check_golib_nonstd_deps $ac_check_golib_seed`

for ac_check_golib_newseed in $ac_check_golib_newseeds; do
if ! echo "$ac_check_golib_oldseeds" | grep -q "$ac_check_golib_newseed"
then
ac_check_golib_oldseeds="\
$ac_check_golib_oldseeds $ac_check_golib_newseed"
fi
done

ac_check_golib_seeds="$ac_check_golib_seeds $ac_check_golib_newseeds"
ac_check_golib_path=`$ac_check_golib_go list -compiler gccgo \
-f {{.Target}} $ac_check_golib_seed`
ac_check_golib_paths="$ac_check_golib_paths $ac_check_golib_path"
done

ac_check_golib_save_LIBS="$LIBS"
LIBS="-Wl,-( $LIBS $ac_check_golib_paths -Wl,-)"

AC_LINK_IFELSE([],
[$1[]_GOFLAGS="-I $ac_check_golib_root"
$1[]_LIBS="$ac_check_golib_paths"
AC_MSG_RESULT([yes])
$3
LIBS="$ac_check_golib_save_LIBS"
break],
[AC_MSG_RESULT([no])
m4_default([$4], AC_MSG_ERROR([Go library ($2) not found.]))
LIBS="$ac_check_golib_save_LIBS"])
])

#
# Attempts to locate a Go library LIB somewhere under $GOPATH that can be used
# to compile and link a program that uses it, optionally referencing SYMBOL.
# Calls ACTION-IF-FOUND if a usable library is found, in addition to setting
# VARIABLE_GOFLAGS and VARIABLE_LIBS to the requisite compiler and linker flags.
#
# AC_CHECK_GOLIB(VARIABLE, LIB, [SYMBOL], [ACTION-IF-FOUND],
# [ACTION-IF-NOT-FOUND], [GO])

AC_DEFUN([AC_CHECK_GOLIB],[
AC_ARG_VAR([$1][_GOFLAGS], [Go compiler flags for $2])
AC_ARG_VAR([$1][_LIBS], [linker flags for $2])

AC_MSG_CHECKING([for Go library $2])

ac_check_golib_go="$6"
if test -z "$ac_check_golib_go"; then
ac_check_golib_go="go"
fi

# The gccgo compiler needs the `pkg/gccgo_ARCH` part of the GOPATH entry that
# contains the target library, so use the `go' command to compute the full
# target install directory and then subtract out the library-specific suffix.
# E.g., /home/user/gocode/pkg/gccgo_linux_amd64/foo/bar/libbaz.a ->
# /home/user/gocode/pkg/gccgo_linux_amd64

ac_check_golib_root=`$ac_check_golib_go list -compiler gccgo \
-f {{.Target}} $2`
ac_check_golib_root=`dirname $ac_check_golib_root`
ac_check_golib_path=`dirname $2`

ac_check_golib_root="${ac_check_golib_root%$ac_check_golib_path}"

# Save the original GOFLAGS and add the computed root as an include path.

ac_check_golib_save_GOFLAGS=$GOFLAGS
GOFLAGS="$GOFLAGS -I $ac_check_golib_root"

AS_IF([test -n "$3"],
[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([import ("os"; "$2")],[
if $3 == nil {
os.Exit(1)
} else {
os.Exit(0)
}])],

# Did it compile? Then try to link it.

[_AC_LINK_GOLIB([$1], [$2], [$4], [$5])],

# Otherwise report an error.

[AC_MSG_RESULT([no])
m4_default([$5], AC_MSG_ERROR([Go library ($2) not found.]))])],

# If there was no SYMBOL argument provided to this macro, take that to mean
# this library needs to be imported but won't be referenced, so craft a test
# that exercises that kind of import clause (i.e., one with the `_'
# modifier).

[AC_COMPILE_IFELSE([AC_LANG_PROGRAM([import ("os"; _ "$2")],
[os.Exit(0)])],
[_AC_LINK_GOLIB([$1], [$2], [$4], [$5])],
[AC_MSG_RESULT([no])
m4_default([$5], AC_MSG_ERROR([Go library ($2) not found.]))])])

# Restore the original GOFLAGS.

GOFLAGS="$ac_check_golib_save_GOFLAGS"
])


configure.ac

Food for Autoconf. Note the call to `AC_CONFIG_MACRO_DIR' to make golib.m4 visible.

AC_INIT([My Cool Go Program], [0.1], [me@example.com], [myprog], [])
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_SRCDIR([src/mypackage/myproject/myprog.go])

AM_INIT_AUTOMAKE(1.6)

AC_LANG_GO
AC_PROG_GO

AC_CHECK_GOLIB([MARTINI], [github.com/go-martini/martini], [martini.Classic])
AC_CHECK_GOLIB([GORM], [github.com/jinzhu/gorm], [gorm.Open])
AC_CHECK_GOLIB([RENDER], [github.com/martini-contrib/render], [render.Renderer])
AC_CHECK_GOLIB([MYSQL], [github.com/go-sql-driver/mysql])

AC_CONFIG_FILES([Makefile])
AC_OUTPUT


Makefile.am

Food for Automake. From what I can tell, Automake has no explicit support for building Go programs, but it does include general support for defining build steps for arbitrary source languages. Note the use of the ".go.o" suffix declaration to specify the compilation steps for .go source files and the "LINK" variable definition to specify a custom link step. The `FOO_GOFLAGS' and `FOO_LIBS' variables are created by the expansion of `AC_CHECK_GOLIB([FOO]...)' in configure.ac.

bin_PROGRAMS = myprog

myprog_SOURCES = src/mypackage/myproject/myprog.go
myprog_GOFLAGS = $(GOFLAGS) @MARTINI_GOFLAGS@ @RENDER_GOFLAGS@ \
@GORM_GOFLAGS@ @MYSQL_GOFLAGS@

myprog_DEPENDENCIES = builddir
myprog_LDADD = @MARTINI_LIBS@ @RENDER_LIBS@ @GORM_LIBS@ @MYSQL_LIBS@
myprog_LINK = $(GOC) $(GOFLAGS) -o bin/$(@F)

builddir:
	if [ ! -d bin ]; then mkdir bin; fi

.go.o:
	$(GOC) -c $(myprog_GOFLAGS) -o $@ $<

CLEANFILES = bin/myprog

Here's a snippet from the output of configure when the integration is working:

checking for gccgo... gccgo
checking whether the Go compiler works... yes
checking for Go compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking for go... /usr/bin/go
checking for Go library github.com/go-martini/martini... yes
checking for Go library github.com/jinzhu/gorm... yes
checking for Go library github.com/martini-contrib/render... yes
checking for Go library github.com/go-sql-driver/mysql... yes

To apply this code to your own project, copy / adapt the snippets above along with your own Go code into the following directory layout.

configure.ac
Makefile.am
m4/golib.m4
src/[package]/[project]/*.go
bin/

If you add this project root to your GOPATH, you should be able to run `go build package/project' in addition to `./configure && make'.

Problems? Let me know.

23 May 2015 hypatia   » (Journeyer)

Saturday 23 May 2015

It’s been alternately sunny and cloudy in our last week in our current house. Dark clouds gathered and thunder rumbled as we heard that second hand furniture buyers are booked up into June, and can’t come and help us with our nice wardrobes which we’d be sad to trash. The sun shone and birds sang when the friends we had over for dinner on Thursday turned out to be moving in the same week we are, only to an apartment with absolutely no storage whatsoever, and they would take our furniture from us. Little rainclouds descend every time some unreliable jerk from Gumtree fails to pick up stuff from our front porch. And so on.

Overall, at the moment we are proving to be a cheap way for other people to furnish. Earlier today two weedy young removalists came and effortlessly hefted our sofa bed, bookcase and barbecue off to Julia’s place. (I got to assume the risk of transporting the gas bottle for the barbecue; that they don’t do.) Our older bikes are off to Bikes For Humanity. Our largesse is getting down to a cheap white cupboard and some plastic outdoor chairs. Thank goodness.

Tonight the up and down reached amusing proportions. Because we will now have a cross-suburb childcare run to do, we’re considering buying a car again after several delightful years car-free, and tonight Andrew did our first test drive for a car on sale by a private seller. All went well with the drive, fortunately, well enough that we took the vehicle identification in order to run the standard checks. And so we sat in a McDonalds running the history checks… to discover that it had a write-off history. I guess there are situations where I’d buy a repaired write-off, maybe (although for the last couple of years that hasn’t even been a thing that’s possible to do in NSW) but buying from a private seller who didn’t disclose it isn’t one of those times. Then on the way home, A had such a nasty cough that we had to stop the car so that Andrew could take her out and hold her up so she’d stop sounding like she was choking on a fully grown pig. She was overtired and frantic and he had to fight her back into her carseat. Then we made it another couple of kilometres before I shut V’s window using the driver controls… right onto his hand, which he’d stuck out the window.

V’s hand is fine. A can still inhale. We don’t have a car that’s an undisclosed repaired write-off. Sunny day.

Syndicated 2015-05-23 12:38:38 from puzzling.org

22 May 2015 tampe   » (Journeyer)

again forward chaining

Hi people!

I was playing with guile log and the Prolog therein to introduce forward chaining, building up databases and lookup tables. So let's go on to a nice example in graph theory.

Consider the problem of a huge graph, where the graph consists of clusters and doesn't have many arrows going between the clusters. Also, the number of clusters is not that large and the individual clusters are not that large. The task is to set up an effective system that calculates, globally, a mapping from one node to another if there is a chain linking them. So what you can do is calculate a lookup table for each individual cluster and also a relational mapping of the clusters themselves. We also need to map the individual interface nodes.

The interface of library(forward_chaining) is as follows. There is a directive set_trigger/1 that defines the name of the trigger function that will be calculated. This atom is then used in subsequent rules defining forward chaining, indicated with =f> as an operator similar to :- , --> etc. in Prolog. The mappings will be stored effectively in lookup tables in dynamic predicates, so one needs to declare those as well; the prelude is therefore,


:- use_module(library(forward_chaining)).

:- set_trigger(t).

:- dynamic(arrow/2).
:- dynamic(parent/2).
:- dynamic(parrow/2).
:- dynamic(interface/4).

Now for the rules,


arrow(X,Y),parent(X,XX),parent(Y,YY) =f>
{XX==YY} -> parrow(X,Y) ;
(parrow(XX,YY),interface(X,Y,XX,YY)).

This rule will maintain a database arrow/2 of arrows introduced and a database parent/2 of cluster relations. As a consequence, if the clusters are the same it makes a node-level parrow/2 relation; otherwise it makes a cluster-level parrow/2 relation and an interface/4 relation. The parrow relation is governed by the transitive law


parrow(X,Y),parrow(Y,Z) =f> parrow(X,Z).

parrow(X,Y) will tell whether Y can be reached from X inside the same cluster, and parrow(XX,YY) will tell whether cluster YY might be reachable from cluster XX, but not necessarily. (This is used to cut off branches later.)

That's the forward chaining part. We make some custom functions to add data to the database, e.g.


set_arrow(X,Y) :- fire(t,arrow(X,Y)).
set_parent(X,Y) :- fire(t,parent(X,Y)).

You issue these functions for each arrow relation and cluster relation in the system, and the databases will be set up just fine through the triggering system inherent in forward chaining.

The meat


'x->y?'(X,Y) :-
parent(X,XX),parent(Y,YY),
XX== YY -> parrow(X,Y) ; i(X,Y,XX,YY).

This is plain backward chaining, not defining any databases. We just dispatch depending on whether the clusters are the same or not. If they are the same, the answer is a microsecond away in the lookup table of parrow/2; otherwise we dispatch to i. i is interesting; here it is:


i(X,Y,XX,YY) :-
parrow(XX,YY),
(
(interface(Z,W,XX,YY),parrow(X,Z),parrow(W,Y)) ;
interface(Z,W,XX,ZZ),parrow(X,Z),i(W,Y,ZZ,YY)
).

Well, XX must relate to YY, i.e. parrow/2 must hold. But that is just a rough estimate, like a hash value; if it holds we must do more work. We first try to go directly from XX to YY via an interface node Z,W; for each of those we try to match a parrow/2 lookup, since it is defined within the same cluster. But that may fail, and then we try to jump via an intermediate cluster.

A lookup table for the whole global system is expensive memory-wise, and you easily blow guile-log's limit of 10000 elements in the database. But these lookup tables are heavily optimized for fast lookup. Building lookup tables only for the individual clusters makes this scale to larger systems than if these tricks were not used. I find this system a nice middle ground between creating gigantic lookup tables and doing everything with searches that can take quite some time.

have fun!!!

22 May 2015 marnanel   » (Journeyer)

On Josh Duggar and Mike Huckabee

TW child abuse, sexual assault

so, this is what i have to say about Josh Duggar.
Q: what's it called when you hush up your own children being raped to preserve your reputation?
A: it's called Omelas. and if you, like Mike Huckabee, care nothing about walking away from Omelas, i don't want to know you. that's all.

This entry was originally posted at http://marnanel.dreamwidth.org/334831.html. Please comment there using OpenID.

Syndicated 2015-05-22 18:44:48 from Monument

21 May 2015 hypatia   » (Journeyer)

Photo circle shots

I recently ran a “photo circle”, consisting of a small group of people sending prints of their own photographs to each other. It was a fun way to prod myself to take non-kid photos.

My four photos were:

Photo circle: sun in the eucalypts

I took Sun in the eucalypts in the late afternoon of Easter Sunday, as the sun was sinking behind the eucalypts at Centennial Park’s children’s bike track. I tried to take one with the sun shining through the trees but didn’t get the lens flare right. I like the contrast between the sunlit tree and the dark tree in this one. It feels springlike, for an autumn scene.

The other three are a very different type of weather shot, taken during Sydney’s extreme rainfall of late April and very early May:

Photo circle: rainstorm

This one has the most post-processing by far: it was originally shot in portrait and in colour. I was messing around with either fast or slow shutter speeds while it poured with rain at my house; I have a number of similar photos where spheres of water are suspended in the air. None of them quite work but I will continue to play with photographing rain with a fast shutter speed. In the meantime, the slow shutter speed here works well. I made the image monochrome in order to make the rain stand out more. In the original image the green tree and the rich brown fencing and brick rather detract from showing exactly how rainy it was.

Photo circle: Sydney rain storm

This was shot from Gunners’ Barracks in Mosman (a historical barracks, not an active one) as a sudden rainstorm rolled over Sydney Harbour. The view was good enough, but my lens not wide enough, to see it raining on parts of the harbour and not on other parts. All the obscurity of the city skyline in this shot is due to rain, not fog.

Photo circle: ferry in the rain

This is the same rainstorm as the above shot; they were taken very close together. It may not be immediately obvious, but the saturation on this shot is close to maximum in order to make the colours of the ferry come up at all. I was most worried about this shot on the camera; it was very dim. It comes up better in print than on screen, too. The obscurity is again entirely due to the rain, and results in the illusion that there is only one vessel on Sydney Harbour. Even in weather like this, that’s far from true. I felt very lucky to capture this just before the ferry vanished into the rain too.

Syndicated 2015-05-21 23:08:18 from puzzling.org

21 May 2015 sye   » (Journeyer)

Art for Aunty's sake

Art Appraising and its Lunacy

I find the below-the-fold dialogue on able2know.org, posted in 2010, interesting after reading "Chaotic Art Fraud Case Ends in Guilty Verdict" by Ross Todd of The Recorder, with the verdict/sentence coming down upon the mockery in court ...



Harris, who was wearing a business suit, said Brugnara was making a "mockery" of the court and making it impossible for the government to get a fair trial by talking over objections.

"If any attorney did what Mr. Brugnara did today, they'd be thrown in jail," she said.

Alsup agreed. The judge said he'd "never seen such an abusive performance" as Brugnara's cross-examination of Long, adding that she'd likely have some sort of "post-traumatic stress" as a result of the experience.

Read more: http://www.therecorder.com/id=1202725294902/Disruptive-Defendant-Keeps-Testing-Alsups-Patience#ixzz3ahIfsQGQ


BusterSiren (Tue 24 Aug 2010, 09:51 am):

I've found this conversation very helpful - even stuff that was posted in 2004, 6 years ago!

I too got caught up in a Picasso scam involving Yamet Arts. A woman named Rose Long who presented herself as an art dealer/appraiser from Memphis, TN was here in New York and she sold me a Picasso litho (don Quixote) for $4,500, which she said was an original numbered Picasso. However, when she delivered it, I noticed it wasn't numbered. I asked her about that and she said "oh, that doesn't mean anything"!

Not knowing much about art, I didn't think much of it until a few months later my girlfriend said, "hey, how come that's not numbered!? And where's the certificate of authenticity!"

So I called Rose and asked about the CoA and she said she'd send me one. Well, what she sent was a fax from 1971 from Yamet Arts simply stating that they had "the following Picassos in stock"! Then it listed several Picassos. That told me nothing! How was that a certificate of authenticty?

So now I started to get mad. I called her up and said "how is this a certificate of authenticity - it's merely a statement of inventory - from 1971!" She went through some long convoluted explanation and then said, "hey, if you don't feel comfortable with it or if you feel like I'm in some way lying to you or cheating you, then you can always return it."

Well, being the sucker that I am I decided that Mrs. Long (wife of Memphis attorney Mike Long), must be telling the truth so I said, "no, no problem, don't worry about it."

Well, a few years later my girlfriend said, "Why don't we take that to Sotheby's and get them to give you an appraisal." I had no idea you could even do that! But that's what we did. When the appraiser walked out and saw it she immediately began shaking her head as if to say "that's worthless"!

She walked over, took a look and said, "$300". Which is basically her way of saying, "worthless." I think the frame was probably worth $200 of that!

Anyway, I got scammed by Rose Long on this Picasso, using a 1971 fax from Yamet Arts. I don't think Yamet had anything to do with this, but a search on Yamet turns up a lot of stories of involvement in scams.

Oh, I keep meaning to finish a website I put up about Rose Long and this art scam, Edit [Moderator]: Link removed. I want to copy all the unbelievable emails I got from her husband Mike Long, as well as a copy of the Yamet supposed "certificate of authenticity."

URL: http://able2know.org/topic/21658-4

Poll --> Can you see the character of Mr. Brugnara in any of our beloved K5 superheroes?

21 May 2015 bagder   » (Master)

status update: http2 multiplexed uploads

I wrote a previous update about my work on multiplexing in curl. This is a follow-up to describe the status as of today.

I’ve successfully used the http2-upload.c code to upload 600 parallel streams to the test server and they were all sent off fine and the responses received were stored fine. MAX_CONCURRENT_STREAMS on the server was set to 100.

This is using curl git master as of right now (thus scheduled for inclusion in the pending curl 7.43.0 release).  I’m not celebrating just yet, but it is looking pretty good. I’ll continue testing.

Commit b0143a2a3 was crucial for this, as I realized we didn’t store and use the read callback in the easy handle but in the connection struct which is completely wrong when many easy handles are using the same connection! I don’t recall the exact reason why I put the data in that struct (I went back and read the commit messages etc) but I think this setup is correct conceptually and code-wise, so if this leads to some side-effects I think we need to just fix it.

Next up: more testing, and then taking on the concept of server push to make libcurl able to support it. It will certainly be a subject for future blog posts…

cURL

Syndicated 2015-05-21 07:34:44 from daniel.haxx.se

19 May 2015 jas   » (Master)

Scrypt in IETF

Colin Percival and I have worked on an internet-draft on scrypt for some time. I realize now that the -00 draft was published over two years ago, turning this effort today somewhat into archeology rather than rocket science. Still, having a published RFC that is easy to refer to from other Internet protocols will hopefully help to establish the point that PBKDF2 alone no longer provides state-of-the-art protection for password hashing.

I have written about password hashing before where I give a quick introduction to the basic concepts in the context of the well-known PBKDF2 algorithm. The novelty in scrypt is that it is designed to combat brute force and hardware accelerated attacks on hashed password databases. Briefly, scrypt expands the password and salt (using PBKDF2 as a component) and then uses that to create a large array (typically tens or hundreds of megabytes) using the Salsa20 core hash function and then de-references that large array in a random and sequential pattern. There are three parameters to the scrypt function: a CPU/Memory cost parameter N (varies, typical values are 16384 or 1048576), a blocksize parameter r (typically 8), and a parallelization parameter p (typically a low number like 1 or 16). The process is described in the draft, and there are further discussions in Colin’s original scrypt paper.
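
For a concrete feel for those parameters, here is how the typical values above plug into a scrypt call. This is just an illustration using Python's hashlib.scrypt (available since Python 3.6, backed by OpenSSL), not anything taken from the draft itself:

import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)

# N (CPU/memory cost), r (block size) and p (parallelization) as discussed above.
# Memory use is roughly 128 * r * N bytes, i.e. about 16 MiB for these values.
key = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1,
                     maxmem=64 * 1024 * 1024, dklen=64)
print(key.hex())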

The document has been stable for some time, and we are now asking for it to be published. Thus now is a good time to provide us with feedback on the document. The live document on gitlab is available if you want to send us a patch.

Syndicated 2015-05-19 12:55:18 from Simon Josefsson's blog

19 May 2015 johnnyb   » (Journeyer)

Just recently posted an article on whether doubt was the engine of science.

16 May 2015 marnanel   » (Journeyer)

Humans of Manchester

Three people I met today:

1) His granddaughter was a chorister at the cathedral, and she has to work very hard at Chetham's both on her schoolwork and on practicing. He'd done his national service in the army as a young man. It was a terrible two years, being ordered around by people who weren't fit to lick your boots. But he was glad of it, because he'd learned to play the system, and this knowledge comes in useful anywhere. If you were "a follower" you'd probably have got far more bored than he did. The other good thing about it was that having to learn discipline meant you got self-discipline thrown in, and that had been really useful for organising himself after he was demobbed.

2) She was in charge of all the cathedral volunteers: there were about seventy of them of all faiths and none. She herself was a Roman Catholic, which she said made very little difference in an Anglican cathedral. When she was a young girl living in Ireland, her grandfather was asked to send the kids to the local Church of Ireland school by the headmaster. The school's intake was too low to be sustainable that year otherwise. Her grandfather agreed. Soon he saw the RC priest walking down his front path to talk to him. He wouldn't go out, but he told someone to tell the priest that he was doing what was best for the community.

3) He was in the Arndale Centre, begging via psych manipulation techniques. If I hadn't been trying to get to the loo, I'd have had more fun with this.

He, walking up: "So, do YOU speak English?"
Me: "Yeeeessss...?"
He: "Ah, I like the way you say yeeesss. My name's Daniel. What's yours?" (puts out hand; I shake it automatically; he now has eye contact. He smiles warmly. I grow increasingly suspicious.)
Me: "I'm Thomas."
He: "Well, Thomas, I was..."
Me: "Look, what's this about?"
He: "I was just wondering whether you could spare me some money for a coffee."

I gave him £1 (which was more than I could really afford) for a good try, and for teaching me a beautiful opening line. "So, do YOU speak English?" breaks the ice, and indicates he's been trying to talk to a bunch of people so he's frustrated and you'll want to help him, and makes you want to do better than all the people so far. [Edit: It also has an unpleasant racist dogwhistle side that I'd missed entirely-- thanks to Abigail for pointing it out.]

This entry was originally posted at http://marnanel.dreamwidth.org/334388.html. Please comment there using OpenID.

Syndicated 2015-05-16 20:45:51 (Updated 2015-05-17 14:22:27) from Monument

16 May 2015 caolan   » (Master)

crash testing, 1 import failure

moggi described here our crash testing infrastructure. Basically we have a document horde mostly populated through get-bugzilla-attachments-by-mimetype which downloads all the attachments from our bugzilla (and a whole bunch of other bugzillas) that are in formats which LibreOffice can open. We then import the lot of them with the above testing harness looking for crashes and aborts. A new report tends to appear every 1-3 days.
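
As a toy illustration of the idea (not the actual harness moggi describes), a loop that imports a horde of documents headlessly and records non-zero exits or timeouts could look roughly like this in Python; the paths and the use of soffice --convert-to are my own assumptions:

import glob
import subprocess

docs = glob.glob("horde/*")
failures = []
for doc in docs:
    # Import each document headlessly; a crash or abort shows up as a
    # non-zero exit status, a hang as a timeout.
    try:
        result = subprocess.run(
            ["soffice", "--headless", "--convert-to", "pdf",
             "--outdir", "/tmp/out", doc],
            timeout=120)
        if result.returncode != 0:
            failures.append(doc)
    except subprocess.TimeoutExpired:
        failures.append(doc)

print("%d failures out of %d documents" % (len(failures), len(docs)))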

These documents are filed in bugzillas. In many cases they were filed specifically because they were causing some sort of trouble for someone, so there are a lot of hostile documents in there.

We currently have 76221 documents in the horde, the most recent run reports one, one single solitary failure (an assert in a .doc about invalid positioning of a cross-reference bookmark in a document with change-tracking enabled).

Here's a graph over time of our failure rate, where failure is either a straightforward crash or a triggered assert. The builds are dbgutil, extra-debugging, extra-checking, assert-enabled, exception-specification-enabled builds.


You get temporary local peaks every now and then when either a new assert is added or someone introduces a bug. We have two purposes here, immediate regression discovery and historic bug removal.

We also have export crash testing, where the numbers aren't as shiny yet, but are on an equivalent downward trend. More on that in a while when we figure out how to fix this final import stinker.

Syndicated 2015-05-16 20:01:00 (Updated 2015-05-16 20:01:34) from Caolán McNamara

15 May 2015 caolan   » (Master)

gtk3 native theming menubar

After something of a struggle I appear to have the right gtk3 menubar theming combination for the selected item now. After image...


before image...

Syndicated 2015-05-15 16:29:00 (Updated 2015-05-15 16:29:30) from Caolán McNamara

15 May 2015 AlanHorkan   » (Master)

How to open .pdn files? or: Things I wish I'd known earlier.

Paint.net is a graphics program that uses its own binary file format, .pdn, which almost no other program can open. Paint.net has a large community and many plugins are available, including a third-party plugin that adds support for OpenRaster. Paint.net is written in C# and requires the Microsoft .Net runtime, meaning current versions work only on Windows Vista or later.

If you need to open PDN files without using Paint.net there is an answer! Lazpaint can open .pdn files and also natively supports OpenRaster.

In hindsight using Lazpaint would have been easier than taking a flat image and editing it to recreate the layer information I wanted. Although I respect the work done by Paint.net it is yet another example of time wasted and hassle caused by proprietary file formats and vendor lock-in.

Syndicated 2015-05-15 15:51:24 from Alan Horkan

15 May 2015 bagder   » (Master)

RFC 7540 is HTTP/2

HTTP/2 is the new protocol for the web, as I trust everyone reading my blog is fully aware by now. (If you’re not, read http2 explained.)

Today RFC 7540 was published, the final outcome of the years of work put into this by the tireless heroes in the HTTPbis working group of the IETF. Closely related to the main RFC is the one detailing HPACK, which is the header compression algorithm used by HTTP/2 and that is now known as RFC 7541.

The IETF part of this journey started pretty much with Mike Belshe’s posting of draft-mbelshe-httpbis-spdy-00 in February 2012. Google’s SPDY effort had been going on for a while before it was taken to the httpbis working group in the IETF, where a few different proposals on how to kick off the HTTP/2 work were debated.

HTTP team working in London

The first “httpbis’ified” version of that document (draft-ietf-httpbis-http2-00) was then published on November 28 2012 and the standardization work began for real. HTTP/2 was of course discussed a lot on the mailing list from the start, at the IETF meetings, but also in interim meetings around the world.

In Zurich, in January 2014, there was one that I attended only remotely. We had the design team meeting in London immediately after IETF89 (March 2014) in the Mozilla offices just next to Piccadilly Circus (where I took the photos that are shown in this posting). We had our final in-person meetup with the HTTP team at Google’s offices in NYC in June 2014, where we ironed out most of the remaining issues.

In between those two last meetings I published my first version of http2 explained. My attempt at a lengthy and very detailed description of HTTP/2, including describing problems with HTTP/1.1 and motivations for HTTP/2. I’ve since published eleven updates.

HTTP team in London, debating protocol details

The last draft update of HTTP/2 that contained actual changes to the binary format was draft-14, published in July 2014. After that, the updates were to the language and clarifications on what to do when. There are some functional changes (added in -16, I believe), for instance regarding when which sorts of frames are accepted, that change what a state machine should do, but they don’t change how the protocol looks on the wire.

RFC 7540 was published on May 15th, 2015

I’ve truly enjoyed having had the chance to be a part of this. There are a bunch of good people who made this happen and while I am most certainly forgetting key persons, some of the peeps that have truly stood out are: Mark, Julian, Roberto, Will, Tatsuhiro, Patrick, Martin, Mike, Nicolas, Mike, Jeff, Hasan, Herve and Willy.

http2 logo

Syndicated 2015-05-14 23:18:05 from daniel.haxx.se

14 May 2015 caolan   » (Master)

more gtk3 theming

Continuing on the Gtk3 theming work. Now got the combobox and editbox rendering and sizes correct along with new gtk3-alike focus rectangles. Here's the after...

Here's the before of what the gtk3 effort looked like in 4-4
Here's the equivalent 4-4 gtk2 effort. Note that now in the above gtk3 theming we have a single focus rectangle for the full combobox rather than a focus rectangle around the non-button part of the widget and that, as in a normal gtk3 combobox, the background isn't set to blue when selected. I always hated that out of character blue listbox/combobox selection color. So certain elements of the gtk3 theming now slightly surpass the gtk2 one which is nice. Though clearly the spinbuttons are still effectively imaginary ones as they look nothing like the native gtk3 ones.

I also fixed (for both gtk2 and gtk3) that notorious checkbox issue where unchecking a checkbox would leave a portion of the check still drawn outside the checkbox rectangle.

Syndicated 2015-05-14 19:56:00 (Updated 2015-05-14 19:56:52) from Caolán McNamara

14 May 2015 marnanel   » (Journeyer)

Repeal of the Human Rights Act

Some politics:

There has been talk of repealing the Human Rights Act recently. This is the legislation which makes the European Convention on Human Rights binding on the UK. The Convention is nothing to do with the European Union-- it was created after WWII as a check on states becoming totalitarian in the future. So repealing it worries me.

I keep hearing people say, "How can we let the Human Rights Act apply to murderers? What about the human rights of the people they killed?" But if the Human Rights Act applied only to "nice" people, it wouldn't be necessary. It exists to provide a baseline for absolutely everyone, no matter how much the state or the public dislike them.

Amnesty is getting a petition together against the repeal of the Act. I've signed it, and if this worries you as much as it worries me, please sign it too. You can find it at http://keeptheact.uk/ .

Anyone reading this post to the end deserves a cup of coffee, so I've put some on.



This entry was originally posted at http://marnanel.dreamwidth.org/334313.html. Please comment there using OpenID.

Syndicated 2015-05-14 12:20:54 from Monument

14 May 2015 slef   » (Master)

Recorrecting Past Mistakes: Window Borders and Edges

A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.

One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice as the thing that you can locate most easily, the thing that is on the screen edge.

It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said “hc pad 0 26” (to keep enough space for the panel at the top edge) and changing that to “hc pad 0 -8 -7 26 -7” and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.

I wonder if this is a useful enough improvement that I should report it as an enhancement bug.

Syndicated 2015-05-14 04:58:02 from Software Cooperative News » mjr

12 May 2015 jas   » (Master)

Certificates for XMPP/Jabber

I am revamping my XMPP server and I’ve written down notes on how to set up certificates to enable TLS.

I will run Debian Jessie with JabberD 2.x, using the recent jabberd2 jessie-backport. The choice of server software is not significant for the rest of this post.

Running XMPP over TLS is a good idea. So I need an X.509 PKI for this purpose. I don’t want to use a third-party Certificate Authority, since that gives them the ability to man-in-the-middle my XMPP connection. Therefore I want to create my own CA. I prefer tightly scoped (per-purpose or per-application) CAs, so I will set up a CA purely to issue certificates for my XMPP server.

The current XMPP specification, RFC 6120, includes a long section 13.7 that discuss requirements on Certificates.

One complication is the requirement to include an AIA for OCSP/CRLs — fortunately, it is not a strict “MUST” requirement but a weaker “SHOULD”. I note that checking revocation using OCSP and CRL is a “MUST” requirement for certificate validation — some specification language impedance mismatch at work there.

The specification demand that the CA certificate MUST have a keyUsage extension with the digitalSignature bit set. This feels odd to me, and I’m wondering if keyCertSign was intended instead. Nothing in the XMPP document, nor in any PKIX document as far as I am aware of, will verify that the digitalSignature bit is asserted in a CA certificate. Below I will assert both bits, since a CA needs the keyCertSign bit and the digitalSignature bit seems unnecessary but mostly harmless.

My XMPP/Jabber server will be “chat.sjd.se” and my JID will be “simon@josefsson.org”. This means the server certificate need to include references to both these domains. The relevant DNS records for the “josefsson.org” zone is as follows, see section 3.2.1 of RFC 6120 for more background.

_xmpp-client._tcp.josefsson.org.	IN	SRV 5 0 5222 chat.sjd.se.
_xmpp-server._tcp.josefsson.org.	IN	SRV 5 0 5269 chat.sjd.se.

The DNS records for the “sjd.se” zone are as follows:

chat.sjd.se.	IN	A	...
chat.sjd.se.	IN	AAAA	...
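
Just to show how a connecting client ends up at chat.sjd.se from those records, here is a small lookup sketch using the dnspython library (my own choice for illustration; nothing in the jabberd2 setup requires it):

import dns.resolver  # dnspython

# Resolve the client-to-server SRV record published for josefsson.org.
answers = dns.resolver.resolve("_xmpp-client._tcp.josefsson.org", "SRV")
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print("connect to %s port %d" % (rr.target.to_text(), rr.port))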

The following commands will generate the private key and certificate for the CA. In a production environment, you would keep the CA private key in a protected offline environment. I’m asserting an expiration date ~30 years in the future. While I dislike arbitrary limits, I believe this will be many times longer than the anticipated lifetime of this setup.

openssl genrsa -out josefsson-org-xmpp-ca-key.pem 3744
cat > josefsson-org-xmpp-ca-crt.conf << EOF
[ req ]
x509_extensions = v3_ca
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP CA for josefsson.org
[ v3_ca ]
subjectKeyIdentifier=hash
basicConstraints = CA:true
keyUsage=critical, digitalSignature, keyCertSign
EOF
openssl req -x509 -set_serial 1 -new -days 11147 -sha256 -config josefsson-org-xmpp-ca-crt.conf -key josefsson-org-xmpp-ca-key.pem -out josefsson-org-xmpp-ca-crt.pem

Let’s generate the private key and server certificate for the XMPP server. The wiki page on XMPP certificates is outdated wrt PKIX extensions. I will embed a SRV-ID field, as discussed in RFC 6120 section 13.7.1.2.1 and RFC 4985. I chose to skip the XmppAddr identifier type, even though the specification is somewhat unclear about it: section 13.7.1.2.1 says that it “is no longer encouraged in certificates issued by certification authorities” while section 13.7.1.4 says “Use of the ‘id-on-xmppAddr’ format is RECOMMENDED in the generation of certificates”. The latter quote should probably have been qualified to say “client certificates” rather than “certificates”, since the latter can refer to both client and server certificates.

Note the use of a default expiration time of one month: I believe in frequent renewal of entity certificates, rather than use of revocation mechanisms.

openssl genrsa -out josefsson-org-xmpp-server-key.pem 3744
cat > josefsson-org-xmpp-server-csr.conf << EOF
[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
CN=XMPP server for josefsson.org
EOF
openssl req -sha256 -new -config josefsson-org-xmpp-server-csr.conf -key josefsson-org-xmpp-server-key.pem -nodes -out josefsson-org-xmpp-server-csr.pem
cat > josefsson-org-xmpp-server-crt.conf << EOF
subjectAltName=@san
[san]
DNS=chat.sjd.se
otherName.0=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-server.josefsson.org
otherName.1=1.3.6.1.5.5.7.8.7;UTF8:_xmpp-client.josefsson.org
EOF
openssl x509 -sha256 -CA josefsson-org-xmpp-ca-crt.pem -CAkey josefsson-org-xmpp-ca-key.pem -set_serial 2 -req -in josefsson-org-xmpp-server-csr.pem -out josefsson-org-xmpp-server-crt.pem -extfile josefsson-org-xmpp-server-crt.conf

With this setup, my XMPP server can be tested by the XMPP IM Observatory. You can see the c2s test results and the s2s test results. Of course, there are warnings regarding the trust anchor issue. It complains about a self-signed certificate in the chain. This is permitted but not recommended — however when the trust anchor is not widely known, I find it useful to include it. This allows people to have a mechanism of fetching the trust anchor certificate should they want to. Some weaker cipher suites trigger warnings, which is more of a jabberd2 configuration issue and/or a concern with jabberd2 defaults.

My jabberd2 configuration is simple — in c2s.xml I add an <id> entity with the “require-starttls”, “cachain”, and “pemfile” fields. In s2s.xml, I have the <pemfile>, <resolve-ipv6>, and <require-tls> entities.

Some final words are in order. While this setup will result in use of TLS for XMPP connections (c2s and s2s), other servers are unlikely to find my CA trust anchor, let alone be able to trust it for verifying my server certificate. I’m happy to read about Peter Saint-Andre’s recent SSL/TLS work, and in particular I will follow the POSH effort.

Syndicated 2015-05-12 13:43:08 from Simon Josefsson's blog

11 May 2015 jas   » (Master)

Laptop decision fatigue

I admit defeat. I have made some effort into researching recent laptop models (see first and second post). Last week I asked myself what the biggest problem with my current 4+ year old X201 is. I couldn’t articulate any significant concern. So I have bought another second-hand X201 for semi-permanent use at my second office. At ~225 USD/EUR, including another docking station, it is an amazing value. I considered the X220-X240 but they have a different docking station, and were roughly twice the price — making up the cost for a Samsung 850 PRO SSD for it. Thanks everyone for your advice, anyway!

Syndicated 2015-05-11 19:31:45 from Simon Josefsson's blog

11 May 2015 hypatia   » (Journeyer)

Monday 11 May 2015

When I left you, I was hiding out in my hotel room in San Francisco feeling sad. I did end up having a perfectly nice time, that’s always part of travel too. A highlight was walking through the Mission and running into someone we knew, and then dinner at Bar Tartine. Oh, and chicken and margaritas at Zuni Cafe the following day.

It’s possible that I live to eat rather than eat to live.

It’s also possible that I’d leave the house a lot more if I didn’t have kids. Travel is my visit into a childfree world.

I also saw some sweet toy poodle puppies. I didn’t eat them.

I had fantasies of spending the Saturday driving out of San Francisco, but ended up spending the entire day in my very dark hotel room as well. No surprises there. I’d like to be the sort of person who flies to Canada, works really hard, flies to the US, works really hard, and then on her day off goes driving on unknown roads in search of wine, redwoods, beaches, or something like that. It turns out that after all that work travel I am the kind of person who huddles in a hotel room with a laptop. I regret nothing.

On the Sunday I walked up, I think, Octavia Street, quite quickly, or at least by Val’s measure. That was painful, but it turns out that walking up hills slowly is even more painful. Either that, or I’ve just grown tired of cajoling children up hills after all this time. Just think, I walked up a whole hill without having an argument with anyone and without anyone wanting me to carry them while I was already carrying their bag, nappies, toys, and/or bike. And then I sat up in Lafayette Park having surreal thoughts about what I would need to get done the next day in Sydney. Intercontinental travel is very implausible.

I increasingly find flying odd too. I was in the middle of a group of four on the way back, so I basically had a slumber party with three strange men, all of whom studiously ignored me, albeit one time with difficulty when I dropped a shoe on one man who had been sleeping up until that point. Of all the things you’d think to do imprisoned in a flying metal tube, would sleeping sandwiched between strangers and watching Captain America: The First Avenger while shoes rain down rise to the top of your list?

I arrived back in the pouring rain. The pilots warned us coming in that the wind was approaching 100km/hr, but, fortunately (apparently) right behind the runway. It seemed a smooth enough landing.

I had heard it was raining in Sydney and I should have thought more carefully about the source. When the guy in the electronics shop in San Francisco has heard about rain in Sydney, there’s quite some rain in Sydney. Not as much, and not as tragically, as in the Hunter Valley, but enough that rain blew through the taxi rank at the airport as people wrestled with their luggage to extract any coats they had.

You should know that I am burying the lede in all of this. As I wrote the last entry, Andrew was preparing our side of the contracts to buy a house, and the exchange of contracts took place the following day. At the moment it’s very strange and hard to cope with, as we have to do a lot of work (finance, removalists, getting rid of furniture, figuring out schools and such) without any of the pay-off of hanging pictures or having built-ins at long last or being free of our current rental and its endless mysterious water problems. I have dark memories of the fog we walked around in for weeks after we moved to this suburb. Not to mention decidedly mixed feelings about leaving the first suburb in Sydney where we’ve ever been on chatting terms with other adults as we go about our daily business.

Good things will come of this, in the medium term, and if we work for them. Now to face into the wind.

Syndicated 2015-05-11 11:42:56 from puzzling.org

11 May 2015 dyork   » (Master)

Celebrating 15 Years of "Blogging", Courtesy of Advogato!

It was 15 years ago tonight, on May 10, 2000, that I created my account here on Advogato and posted my first entry.

Little did I know how much that action would ultimately change my life. I wrote about the journey since that time tonight.

Fun to see!

10 May 2015 dmarti   » (Master)

Bonus links and a point of order

Interested in the Targeted Advertising Considered Harmful material, and looking for next steps for making web ads work better for sites and brands? Sure you are.

New blog in progress: blog.aloodo.org. This is about small changes that sites and brands can do to get better web ads. A few simple rules...

  • No calls for collective action. That's what the adtech people are trying to do on fraud, and it's not working.

  • No long-term projects. The "backlog" never gets done. Web sites have to work at the speed of git push, not the speed of cheese tweets. Every to-do item on blog.aloodo.com will be as simple as adding a social button widget to a page template, or simpler.

  • No appeals to privacy. Privacy is an important philosophical concept, which reasonable people disagree on, and which we do not have time for. We can fix obvious bugs without discovering the meaning of a complicated word.

  • No assumptions that users are changing. We ignore surveillance marketing people when they say that “Consumers want to connect and share with their beloved brands,” and we need to ignore “Users are becoming concerned about PII and autonomy” just as much.

  • Work with norms and laws, don't change them. The niche for brogrammers doing creepy and/or illegal stuff in order to do a business is filled. More than filled.

Anyway, feed. Blog.

Bonus links

Timothy B Lee: How to be better at PR

Mark Duffy: Copyranter: Digital is destroying all creativity

BOB HOFFMAN: Agencies Cheating Clients, Says Former Mediacom CEO. No Shit, Says Me.

Tales of a Developer Advocate: Detecting injected content from third-parties on your site

Francis: The advert wars

Darren Herman: Mozilla’s mission in the context of digital advertising

jm: Epsilon Interactive breach the Fukushima of the Email Industry (CAUCE)

Warc: Brands still look to print

Kurt Wagner: Snapchat’s Discover Publishers Are Asking for Big Ad Rates — And They’re Getting Them

Sell! Sell!: Building Real Brands: The Difference Between Building A House, And Painting A Picture Of A House.

Monica Chew: How do I turn on Tracking Protection? Let me count the ways.

Evan Soltas: The Rent Hypothesis

Sell! Sell!: Advertising Is Losing Maverick Thinking - What's The Solution?

Alexandra Bruell: Media-Agency Kickbacks. Yes, They're Real. (via The Ad Contrarian)

Jeff Kagan: Google Glass Should Stay Gone

Samuel Gibbs: Facebook 'tracks all visitors, breaching EU law'

djbriane: Meerkat Vs Periscope: Tech journalist is a sickly mess | BGR

Bruce Schneier: Survey of Americans' Privacy Habits Post-Snowden

Monica Chew: Two Short Stories about Tracking Protection

Joseph Lichterman: The Economist’s Tom Standage on digital strategy and the limits of a model based on advertising

Mike Proulx: There Is No More Social Media -- Just Advertising

Maciej Zawadziński, ClearCode: How the U.N.’s new privacy move will shake up the adtech industry

BOB HOFFMAN: How Do You Untrain A Generation?

Todd Garland: Context is Everything: How to Counter AdBlock

Jason Kint: Debunked: Five Excuses for Dismissing Do Not Track

Adotas: Proximity Networking: Can You Buy Me Now?

Adotas: Celtra offers “Programmatic Creative” for brands and agencies to better target customers

Alex Kantrowitz: Brands Are Swiftly Taking Automated Digital Ad Buying Operations In-House

Digg Top Stories: How Click Farms Have Inflated Social Media Currency

Mona Patel: When Big Data Becomes More Valuable Than Your Products/Services

Ed: Whys and Hows of Suggested Tiles

JWZ: Wherein I ridicule Facebook some more, then collaborate with the Panopticon

TheMediaBriefing Analysis: Who are the fraudsters costing the ad industry billions? (via blog.aloodo.org)

Jordan Weissmann: One of Today's Pulitzer Prize Winners Left Journalism Because It Couldn't Pay His Rent. Now He's in PR. (via Digiday)

Freddie: the supervillain’s guide to saving the internet

Garett Sloane: Here's How Europe Is Stifling the Ad Business for Google, Facebook and Others (via Technology & Marketing Law Blog)

Gregory Raifman: How the Advertising Industry Can Get Rid of 'Bad Ads'

MediaPost | MediaDailyNews: Google Names Ad Networks Responsible For Ad Injectors

Google Security PR: New Research: The Ad Injection Economy

Don Marti: Why adtech fraud would make the worst heist movie ever (had to put one from the new blog in here, right?)

Syndicated 2015-05-10 15:07:31 from Don Marti

8 May 2015 ctrlsoft   » (Journeyer)

The Samba Buildfarm

Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation.

A few years later, when the project was being picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially regression testing on different platforms was done manually and ad-hoc.

Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process.

To make it easier to find regressions in the Samba codebase that were platform-specific, tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in Spring 2001, the build farm was born - this was a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001, it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HPUX, Tru64, Solaris and Linux so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple. It retrieves a shell script from the main build farm repository with instructions to run and after it has done so, it uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.
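
The script itself isn't reproduced here, but the fetch/run/upload cycle it performs is roughly this (a sketch in Python, purely illustrative; the hostname, rsync module names and secret file below are invented, and this is not the real Samba build farm script):

#!/usr/bin/env python
# Illustrative build-farm-style worker: fetch instructions, run them,
# upload the resulting log.  Hostnames, module names and paths are
# made up for this example.

import os
import subprocess

workdir = os.path.expanduser("~/build_farm")
logfile = os.path.join(workdir, "build.log")
os.chdir(workdir)

# 1. fetch the latest build instructions from the master repository
subprocess.check_call(["rsync", "-q",
                       "rsync://build.example.org/instructions/run_build.sh", "."])

# 2. run them, capturing all terminal output into a single log file
with open(logfile, "w") as log:
    subprocess.call(["sh", "./run_build.sh"], stdout=log, stderr=subprocess.STDOUT)

# 3. upload the log, authenticated with the per-machine secret
secret = open(os.path.expanduser("~/.buildfarm-password")).read().strip()
subprocess.check_call(["rsync", "-q", logfile,
                       "rsync://thishost@build.example.org/uploads/"],
                      env=dict(os.environ, RSYNC_PASSWORD=secret))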

Some build farm machines are dedicated, but there have also been a large number over the years that would just run as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers) but we've also had a number of community volunteers over the years that were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs.

Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin.

In 2005, tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform when; you'd just be notified as soon as it happened.

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single perl file that would take care of listing and displaying log files, but over the years the functionality has been extended to do much more than that.

Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space.

Over time, we also added more samba.org-projects to the build farm. At the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed.

In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011 all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary for us to be made aware of issues by the build farm.

The rewrite also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.

Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily, and hasn't changed much.

This summer one of our summer of code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins that wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers.

Non-Linux platforms have become less relevant in the last couple of years, though we still care about them.

The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)

Syndicated 2015-02-08 00:06:23 from Stationary Traveller

8 May 2015 dkg   » (Master)

Cheers to audacity!

When paultag recently announced a project to try to move debian infrastructure to python3, my first thought was how large that undertaking would likely be. It seems like a classic engineering task, full of work and nit-picky details to get right, useful/necessary in the long-term, painful in the short-term, and if you manage to pull it off successfully, the best you can usually hope for is that no one will notice that it was done at all.

I always find that kind of task a little off-putting and difficult to tackle, but I was happy to see someone driving the project, since it does need to get done. Debian is potentially also in a position to help the upstream python community, because we have a pretty good view of what things are being used, at least within our own ecosystem.

I'm happy to say that i also missed one of the other great benefits of paultag's audacious proposal, which is how it has engaged people who already knew about debian but who aren't yet involved. Evidence of this engagement is already visible on the py3porters-devel mailing list. But if that wasn't enough, I ran into a friend recently who told me, "Hey, I found a way to contribute to debian finally!" and pointed me to the py3-porters project. People want to contribute to the project, and are looking for ways in.

So cheers to the people who propose audacious projects and make them inviting to everyone, newcomers included. And cheers to the people who step up to potentially daunting work, stake out a task, roll up their sleeves, and pitch in. Even if the py3porters project doesn't move all of debian's python infrastructure to python3 as fast as paultag wants it to, i think it's already a win for the project as a whole. I am looking forward to seeing what comes out of it (and it's reminding me i need to port some of my own python work, too!)

The next time you stumble over something big that needs doing in debian, even something that might seem impossible, please make it inviting, and dive in. The rest of the project will grow and improve from the attempt.

Tags: py3-porters

Syndicated 2015-05-08 18:43:00 from Weblogs for dkg

7 May 2015 Stevey   » (Master)

On de-duplicating uploaded file-content.

This evening I've been mostly playing with removing duplicate content. I've had this idea for the past few days about object-storage, and obviously in that context if you can handle duplicate content cleanly that's a big win.

The naive implementation of object-storage involves splitting uploaded files into chunks, storing them separately, and writing database-entries such that you can reassemble the appropriate chunks when the object is retrieved.

If you store chunks on-disk, by the hash of their contents, then things are nice and simple.

The end result is that you might upload the file /etc/passwd, split that into four-byte chunks, and then hash each chunk using SHA256.

This leaves you with some database-entries, and a bunch of files on-disk:

/tmp/hashed/ef267892ee080862c96a8d2d05de62f48e20f0875f27379e7d58c73ea4455bf1
/tmp/hashed/a378977155fb42bb006496321cbe31f74cbda803c3f6ca590f30e76d1afad921
..
/tmp/hashed/3805b0245bc8375be7125ae228eef711552ac082ffb9bf8756e2964a2393a9de

In my toy-code I wrote out the data in 4-byte chunks, which is grossly inefficient. But the value of using such small pieces is that there are liable to be a lot of collisions, and that means we save space. It is a trade-off.
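
As a rough sketch of the store-by-hash idea (in Python here, purely illustrative; the chunk size and storage path are arbitrary and this is not the toy code itself):

import hashlib
import os

CHUNK_SIZE = 4096              # bytes per chunk - the trade-off discussed above
STORE = "/tmp/hashed"          # chunks live here, named by their SHA256

def store_file(path):
    """Split a file into chunks, write each chunk once, and return the
    ordered list of chunk hashes needed to reassemble it."""
    if not os.path.isdir(STORE):
        os.makedirs(STORE)
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            dest = os.path.join(STORE, digest)
            if not os.path.exists(dest):      # duplicate chunks stored only once
                with open(dest, "wb") as out:
                    out.write(chunk)
            hashes.append(digest)
    return hashes

def restore_file(hashes, path):
    """Reassemble a file from its ordered list of chunk hashes."""
    with open(path, "wb") as out:
        for digest in hashes:
            with open(os.path.join(STORE, digest), "rb") as chunk:
                out.write(chunk.read())

The list returned by store_file() is what would end up in the database entries mentioned above.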

So the main thing I was experimenting with was the size of the chunks. If you make them too small you lose I/O due to the overhead of writing out so many small files, but you gain because collisions are common.

The rough testing I did involved using chunks of 16, 32, 128, 255, 512, 1024, 2048, and 4096 bytes. As sizes went up the overhead shrank, but so did the collisions.

Unless you can count on users uploading a lot of files like /bin/ls, which are going to collide 100% of the time with prior uploads, using larger chunks just didn't win as much as I thought it would.

I wrote a toy server using Sinatra & Ruby, which handles the splitting/hashing/and stored block-IDs in SQLite. It's not so novel given that it took only an hour or so to write.

The downside of my approach is also immediately apparent. All the data must live on a single machine - so that reassembly works in the simple fashion. That's possible, even with lots of content, if you use GlusterFS or similar, but it's probably not a great approach in general. If you have large capacity storage available locally then this might work well enough for storing backups, etc, but .. yeah.

Syndicated 2015-05-07 00:00:00 from Steve Kemp's Blog

7 May 2015 bagder   » (Master)

HTTP/2 for TCP/IP Geeks

I attended a TCP/IP Geeks Stockholm meetup yesterday and did a talk about HTTP/2. Below is the slide set, but as usual it might not be entirely self explanatory…

HTTP/2 – for TCP/IP Geeks Stockholm from Daniel Stenberg

Syndicated 2015-05-07 06:22:44 from daniel.haxx.se

7 May 2015 mako   » (Master)

Books Room

Mika trying to open the books room. And failing.

Is the locked “books room” at McMahon Hall at UW a metaphor for DRM in the academy? Could it be, like so many things in Seattle, sponsored by Amazon?

Mika noticed the room several weeks ago but felt that today’s International Day Against DRM was an opportune time to raise the questions in front of a wider audience.

Syndicated 2015-05-07 04:11:19 (Updated 2015-05-07 04:21:06) from copyrighteous

6 May 2015 bagder   » (Master)

curl user poll 2015

Now is the time. If you use curl or libcurl from time to time, please consider helping us out with providing your feedback and opinions on a few things:

https://goo.gl/FyToBn

It’ll take you a couple of minutes and it’ll help us a lot when making decisions going forward.

The poll is hosted by Google and that short link above will take you to:

https://docs.google.com/forms/d/1uQNYfTmRwF9RX5-oq_HV4VyeT1j7cxXpuBIp8uy5nqQ/viewform

Syndicated 2015-05-06 12:44:57 from daniel.haxx.se

6 May 2015 louie   » (Master)

Come work with me – developer edition!

It has been a long time since I was able to say to developer friends “come work with me” in anything but the most abstract “come work under the same roof” kind of sense. But today I can say to developers “come work with me” and really mean it. Which is fun :)

Details: Wikimedia’s new community tech team is hiring for a community tech developer and a team lead. This will be extremely community-intensive work, so if you enjoy and get energy from working with a community and helping them achieve their goals, this could be a great role for you. This team will work intensely with my department to ensure that we’re correctly identifying and prioritizing the needs of our most active editors. If that sounds like fun, get in touch :)

[And I realize that I’ve been bad and not posted here, so here’s my new job announce: “my department” is the Foundation’s new Community Engagement department, where we work to support healthy contributor communities and help WMF-community collaboration. It is a detour from law, but I’ve always said law was just a way to help people do their thing — so in that sense is the same thing I’ve always been doing. It has been an intense roller coaster of a first two months, and I look forward to much more of the same.]

Syndicated 2015-05-06 05:51:20 from Luis Villa » Blog

6 May 2015 mikal   » (Journeyer)

Ancillary Justice




ISBN: 9780356502403
LibraryThing
I loved this book. The way the language works takes a little while to work out, but then blends into the background. The ideas here are new and interesting and I look forward to other work of Ann's. Very impressed with this book.

Tags for this post: book ann_leckie combat ai aliens
Related posts: Mona Lisa Overdrive; East of the Sun, West of the Moon; Count Zero; Emerald Sea; All The Weyrs of Pern; Against the Tide



Syndicated 2015-05-05 20:48:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

5 May 2015 caolan   » (Master)

new area fill toolbar dropdown

The GSOC 2014 Color Selector is in LibreOffice 4.4, but it's not used for the "area fill" dropdown in impress or draw. So I spent a little time today for LibreOffice 5.0 to hack things up so that instead of the old color drop-down list we now have the new color selector in the toolbar. It gives access to custom colors, multiple palettes, and recently used colors all in one place.

LibreOffice 5.0
And here's the old one for reference. I've backported the above change to Fedora 22's 4.4.X to address some in-house frustration vented at selecting colors in impress.
LibreOffice 4.4


Syndicated 2015-05-05 19:46:00 (Updated 2015-05-05 19:46:31) from Caolán McNamara

5 May 2015 mones   » (Journeyer)

Bye bye DebConf15

Yep, I had planned to go, but the last mail from registration suggests there's an overwhelming number of sponsorship requests, so I've decided to withdraw mine. There are lots of people doing much more important things for Debian than me who deserve that help. Having to complete my MSc project also helps with this decision, of course.

I guess the Debian MIA meeting will have to wait for the next planetary alignment ;-) Well, not really: any other member of the team can set it up, hint! hint!

See you in DebConf17 or a nearby local event!

Syndicated 2015-05-04 08:37:12 from Ricardo Mones

5 May 2015 pabs3   » (Master)

The #newinjessie game: developer & QA tools

Continuing the #newinjessie game:

There are a number of development and QA tools that are new in jessie:

  • autorevision: store VCS meta-data in your release tarballs and use it during build
  • git-remote-bzr: bidirectional interaction with Bzr repositories for git users
  • git-remote-hg: bidirectional interaction with Mercurial repositories for git users
  • corekeeper: dump core files when ELF programs crash and send you mail
  • adequate: check installed Debian packages for various issues
  • duck: check that the URLs in your Debian package are still alive
  • codespell: search your code for spelling errors and fix them
  • iwyu: include only the headers you use to reduce compilation time
  • clang-modernize: modernise your C++ code to use C++11
  • shellcheck: check shell scripts for potential bugs
  • bashate: check shell scripts for stylistic issues
  • libb-lint-perl: check Perl code for potential bugs and style issues
  • epubcheck: validate your ePub docs against the standard
  • i18nspector: check the work of translators for common issues

Syndicated 2015-05-05 05:10:06 from Advogato

4 May 2015 Stevey   » (Master)

A weekend of migrations

This weekend has been all about migrations:

Host Migrations

I've migrated several more systems to the Jessie release of Debian GNU/Linux. No major surprises, and now I'm in a good state.

I have 18 hosts, and now 16 of them are running Jessie. One of them I won't touch for a while, and the other is a KVM-host which runs about 8 guests - so I won't upgrade that for a while (because I want to schedule the shutdown of the guests for the host-reboot).

Password Migrations

I've started migrating my passwords to pass, which is a simple shell wrapper around GPG. I generated a new password-managing key, and started migrating the passwords.

I dislike that account-names are stored in plaintext, but that seems known and unlikely to be fixed.

I've "solved" the problem by dividing all my accounts into "Those that I wish to disclose post-death" (i.e. "banking", "amazon", "facebook", etc, etc), and those that are "never to be shared". The former are migrating, the latter are not.

(Yeah I'm thinking about estates at the moment, near-death things have that effect!)

Syndicated 2015-05-04 00:00:00 from Steve Kemp's Blog

4 May 2015 bagder   » (Master)

HTTP/2 in curl, status update

I’m right now working on adding proper multiplexing to libcurl’s HTTP/2 code. So far we’ve only done a single stream per connection and while that works fine and is HTTP/2, applications will still want more when switching to HTTP/2 as the multiplexing part is one of the key components and selling features of the new protocol version.

Pipelining means multiplexed

As a starting point, I’m using the “enable HTTP pipelining” switch to tell libcurl it should consider multiplexing. It makes libcurl work as before by default. If you use the multi interface and enable pipelining, libcurl will try to re-use established connections and just add streams over them rather than creating new connections. Yes this means that A) you need to use the multi interface to get the full HTTP/2 stuff and B) the curl tool won’t be able to take advantage of it since it doesn’t use the multi interface! (An old outstanding idea is to move the tool to use the multi interface and this would be yet another reason why this could be a good idea.)

We still have some decisions to make about how we want libcurl to act by default – especially when we can expect applications to use both HTTP/1.1 and HTTP/2 at the same time. Since we don’t know if the server supports HTTP/2 until after a certain point in the negotiation, we need to decide what to do when we issue N transfers at once to the same server that might speak HTTP/2… Right now, we get the best HTTP/2 behavior by telling libcurl we only want one connection per host, but that is probably not ideal for an application that might use a mix of HTTP/1.1 and HTTP/2 servers.

Downsides with abusing pipelining

There are some drawbacks with using that pipelining switch to allow multiplexing since users may very well want HTTP/2 multiplexing but not HTTP/1.1 pipelining since the latter is just riddled with interop problems.

Also, re-using the same options for limited connections to host names etc for both HTTP/1.1 and HTTP/2 may not at all be what real-world applications want or need.

One easy handle, one stream

libcurl API wise, each HTTP/2 stream is its own easy handle. It makes it simple and keeps the API paradigm very much in the same way it works for all the other protocols. It comes very natural for the libcurl application author. If you setup three easy handles, all identifying a resource on the same server and you tell libcurl to use HTTP/2, it makes perfect sense that all these three transfers are made using a single connection.

Multiplexed data means that when reading from the socket, there is data arriving that belongs to streams other than just a single one. So we need to feed the received data into the different “data buckets” for the involved streams. It gives us a little internal challenge: we get easy handles with no socket activity to trigger a read, but there is data to take care of in the incoming buffer. I’ve solved this so far with a special trigger that says there is data to take care of, so that it makes a read anyway and then gets the data from the buffer.

Server push

HTTP/2 supports server push. That’s a stream that gets initiated from the server side without the client specifically asking for it. A resource the server deems the client is likely to want, since it asked for a related resource, or similar. My idea is to support server push with the application setting up a transfer with an easy handle and associated options, but the URL would only identify the server so that it knows on which connection it would accept a push, and we will introduce a new option to libcurl that would tell it that this is an easy handle that should be used for the next server pushed stream on this connection.

Of course there are a few outstanding issues with this idea. Possibly we should allow an easy handle to get created when a new stream shows up so that we can better deal with a dynamic number of  new streams being pushed.

It’d be great to hear from users who have ideas on how to use server push in a real-world application and how you’d imagine it could be used with libcurl.

Work in progress code

My work in progress code for this drive can be found in two places.

First, I do the libcurl multiplexing development in the separate http2-multiplex branch in the regular curl repo:

https://github.com/bagder/curl/tree/http2-multiplex.

Then, I put all my test setup and test client work in a separate repository just in case you want to keep up and reproduce my testing and experiments:

https://github.com/bagder/curl-http2-dev

Feedback?

All comments, questions, praise or complaints you may have on this are best sent to the curl-library mailing list. If you are planning on doing an HTTP/2 capable application or otherwise have thoughts or ideas about the API for this, please join in and tell me what you think. It is much better to get the discussions going early and work on different design ideas now before anything is set in stone rather than waiting for us to ship something semi-stable, as the closer to an actual release we get, the harder it’ll be to change the API.

Not quite working yet

As I write this, I’m repeatedly doing 99 parallel HTTP/2 streams with no data corruption… But there’s a lot more to be done before I’ll call it a victory.

Syndicated 2015-05-04 08:18:56 from daniel.haxx.se

3 May 2015 AlanHorkan   » (Master)

Usability and Playability

I could be programming but instead today I am playing games and watching television and films. I have always been a fan of Tetris which is a classic, but I am continuing to play an annoyingly difficult game that, to be honest, I am not sure I even enjoy all that much, but it is strangely compelling. My interest in usability coincides with my interest in playability. Each area has its own jargon but they are very similar; the biggest difference is that games will intentionally make things difficult. Better games go to great lengths to make the difficulties challenging without being frustrating, gradually increasing the difficulty as they progress, and engaging the user without punishing them for mistakes. (Providing save points in a game is similar to providing an undo system in an application: both make the system more forgiving and allow users to recover from mistakes, rather than punishing them and forcing them to do things all over again.)

There is a great presentation about making games more juicy (short article including video) which I think most developers will find interesting. Essentially the presentation explains that a game can be improved significantly without adding any core features. The game functionality remains simple but the usability and playability is improved, providing a fuller more immersive experience. The animation added to the game is not merely about showing off, but provides a great level of feedback and interactivity. Theme music and sound effects also add to the experience, and again provide greater feedback to the user. The difference between the game at the start and at the end of the presentation is striking, stunning even.

I am not suggesting that flashy animation or theme music is a good idea for every application but (if the toolkit and infrastructure already provided is good enough) it is worth considering that a small bit of "juice" like animations or sound effects could be useful, not just in games but in any program. There are annoying bad examples too, but when done correctly it is all about providing more feedback for users, and helping make applications feel more interactive and responsive.
For a very simple example, I have seen many users accidentally switch from Insert to Overwrite mode and not know how to get out of it, and unfortunately many things must be learned by trial and error. Abiword changes the shape and colour of the cursor (from a vertical line to a red block) and it could potentially also provide a sound effect when switching modes. Food for thought (alternative video link at Youtube).

Syndicated 2015-05-03 22:38:18 from Alan Horkan

3 May 2015 benad   » (Apprentice)

The Mystery of Logitech Wireless Interferences

As I mentioned before, I got a new gaming PC a few months ago. Since it sits below my TV, I also bought with it a new wireless keyboard and mouse, the Logitech K360 and M510, respectively. I'm used to Bluetooth mice and keyboards, but it seems that in the PC world Bluetooth is not as commonplace as in Macs, so the standard is to use some dongle. Luckily, Logitech use a "Unifying Receiver" so that both the keyboard and mouse can share a single USB receiver, freeing an additional port. In addition, the Alienware Alpha has a hidden USB 2.0 port underneath it, which seems to be the ideal place for the dongle and freeing all the external ports.

My luck stopped there though. Playing some first-person shooters, I noticed that the mouse was quite imprecise, and from time to time the keyboard would lag for a second or so. Is that why "PC gaming purists" swear by wired mice and keyboards? I moved the dongle to the back or front USB ports, and the issue remained. As a test, I plugged in my wired Logitech G500 mouse with the help of a ridiculously long 3-meter USB cable, and it seems to have solved that problem. But I remained with this half-working wireless keyboard, and with that USB cable an annoying setup.

I couldn't figure out what was wrong, and was willing to absorb the costs, until I found this post on the Logitech forums. Essentially, it doesn't play well with USB 3.0. I'm not talking about issues when you plug the receiver into a USB 3.0 port, since that would have been a non-issue with the USB 2.0 port I was using underneath the Alpha. Nope. Just the mere presence of a USB 3.0 port in the proximity of the receiver creates "significant amount of RF noise in the 2.4GHz band" used by Logitech. To be fair (and they insist on mentioning it), this seems to be a systemic issue with all 2.4GHz devices, and not just Logitech.

So I did a test. I took this really long USB cable and connected the receiver to it, making the receiver sit right next to the mouse and keyboard at the opposite side of the room where the TV and Alpha are located. And that solved the issue. Of course, to avoid that new "USB cable across the room" issue, I used a combination of a short half-meter USB cable and a USB hub with another half-meter cable to place the receiver at the opposite side of the TV cabinet. Again, the interference was removed.

OK, I guess all is fine and my mouse and keyboard are fully functional, but what about those new laptops with USB 3.0 on each port? Oh well, next time I'll stick to Bluetooth.

Syndicated 2015-05-03 21:48:04 from Benad's Blog

3 May 2015 yosch   » (Master)

Microsoft releasing an open font!

So, after the pleasant but rather unexpected news of Adobe's Source * font families released openly and developed on a public git repo, now we have Microsoft starting to release fonts under the OFL for one of their many projects!

Who would have thought that this could actually happen, that such big font producers would even consider doing this?

But I guess cross-platform web technologies and the corresponding culture tend to carry with them the values of interoperability, consistency and flexibility... And it just makes sense to have unencumbered licensing for that. There must be some value in pursuing that approach, right?

The Selawik font (only Latin coverage at this point) is part of (bootstrap)-WinJS and is designed to be an open replacement for Segoe UI.

A quick look at the metadata reveals:

Full name: Selawik
Version: 1.01
Copyright: (c) 2015 Microsoft Corporation (www.microsoft.com), with Reserved Font Name Selawik. Selawik is a trademark of Microsoft Corporation in the United States and/or other countries.
License: This Font Software is licensed under the SIL Open Font License, Version 1.1.
License URL: http://opensource.org/licenses/OFL-1.1
Designer: Aaron Bell
Designer URL: http://www.microsoft.com/typography
Manufacturer: Microsoft Corporation
Vendor URL: http://www.microsoft.com/typography
Trademark: Selawik is a trademark of the Microsoft group of companies.


Quite a contrast from the very exclusive licenses attached to the fonts commissioned for Windows...

(Oh and the apparent toponym with an Inupiat name is a nice touch too).


1 May 2015 aleix   » (Journeyer)

Fixing your wife's Nexus 5

DISCLAIMER: I'm not responsible for what happens to your phone if you decide to proceed with the instructions below.

Are you experiencing:

  • Boot loop with Lollipop (Android 5.x).

  • Have downgraded to Kitkat (Android 4.x) and there's no service, camera crashes, google play store crashes, google earth tells you it needs SD internal storage and crashes.

At this point the phone seems practically unusable except wifi and only with Kitkat, Lollipop doesn't even boot.

It might be that the /persist partition is corrupted. So, don't despair, here's how I fixed it after looking around a bit:

  • Download adb and fastboot. On Ubuntu this is:

    $ sudo apt-get install android-tools-adb android-tools-fastboot
    
  • Power off your phone.

  • Connect your phone to your computer through USB.

  • Boot into the bootloader by pressing volume down and power buttons at the same time.

  • Unlock it:

    $ fastboot oem unlock
    
  • On the phone you must select the option to wipe everything. WARNING: This will wipe all contents on the device.

  • Download TWRP (an improved recovery mode).

  • Flash it:

    $ fastboot flash recovery openrecovery-twrp-2.8.5.2-hammerhead.img
    
  • Reboot again into the bootloader.

  • Once in the bootloader, choose the Recovery mode. It will then start TWRP.

  • On your computer you now type:

    $ adb shell
    

    If everything went well this should give you a root prompt.

  • Fix /persist partition.

    # e2fsck -y /dev/block/platform/msm_sdcc.1/by-name/persist
    
  • Re-create /persist file system.

    # make_ext4fs /dev/block/platform/msm_sdcc.1/by-name/persist
    
  • Exit the adb shell.

  • Download the latest Nexus 5 factory image and untar it.

  • Finally, inside the untarred directory run:

    $ ./flash-all.sh
    
  • Your phone should be fixed!

  • As a last step you might want to lock it again. So, go into the bootloader again and this time run:

    $ fastboot oem lock
    

Good luck!

These are the couple of websites I used. Thank you to the guys who wrote them!

http://www.droid-life.com/2013/11/04/how-to-root-the-nexus-5/
http://forum.xda-developers.com/google-nexus-5/general/guide-to-fix-persist-partition-t2821576

Syndicated 2015-05-01 18:32:52 from aleix's blog

1 May 2015 dkg   » (Master)

Preferred Packaging Practices

I just took a few minutes to write up my preferred Debian packaging practices.

The basic gist is that i like to use git-buildpackage (gbp) with the upstream source included in the repo, both as tarballs (with pristine-tar branches) and including upstream's native VCS history (Joey's arguments about syncing with upstream git are worth reading if you're not already convinced this is a good idea).

I also started using gbp-pq recently -- the patch-queue feature is really useful for at least three things:

  • rebasing your debian/patches/ files when a new version comes out upstream -- you can use all your normal git rebase habits! and
  • facilitating sending patches upstream, hopefully reducing the divergence, and
  • cherry-picking new as-yet-unreleased upstream bugfix patches into a debian release.

My preferred packaging practices document is a work in progress. I'd love to improve it. If you have suggestions, please let me know.

Also, if you've written up your own preferred packaging practices, send me a link! I'm hoping to share and learn tips and tricks around this kind of workflow at debconf 15 this year.

Syndicated 2015-05-01 19:41:00 from Weblogs for dkg

1 May 2015 bagder   » (Master)

talking curl on the changelog

The changelog is the name of a weekly podcast on which the hosts discuss open source and stuff.

Last Friday I was invited to participate and I joined hosts Adam and Jerod for an hour long episode about curl. It all started as a response to my post on curl 17 years, so we really got into how things started out and how curl has developed through the years, how much time I’ve spent on it and if I could mention a really great moment in time that stood out over the years?

The day before, they released the little separate teaser we made about the little-known --remote-name-all command line option that basically makes curl default to do -O on all given URLs.

The full length episode can be experienced in all its glory here: https://changelog.com/153/

Syndicated 2015-05-01 09:54:16 from daniel.haxx.se

30 Apr 2015 caolan   » (Master)

gtk3 notebook theming

Starting to work on the gtk3 theming now. Here's a before and after shot of today's notebook color and font theming improvements.

Before:

After:
And a random native gtk3 notebook for comparison


Syndicated 2015-04-30 15:19:00 (Updated 2015-04-30 15:20:14) from Caolán McNamara

30 Apr 2015 gary   » (Master)

Remote debugging with GDB

→ originally posted on developerblog.redhat.com

This past few weeks I’ve been working on making remote debugging in GDB easier to use. What’s remote debugging? It’s where you run GDB on one machine and the program being debugged on another. To do this you need something to allow GDB to control the program being debugged, and that something is called the remote stub. GDB ships with a remote stub called gdbserver, but other remote stubs exist. You can write them into your own program too, which is handy if you’re using minimal or unusual hardware that cannot run regular applications… cellphone masts, satellites, that kind of thing. I bet you didn’t know GDB could do that!

If you’ve used remote debugging in GDB you’ll know it requires a certain amount of setup. You need to tell GDB how to access your program’s binaries with a set sysroot command, you need to obtain a local copy of the main executable and supply that to GDB with a file command, and you need to tell GDB to commence remote debugging with a target remote command.

Until now. Now all you need is the target remote command.

This new code is really new. It’s not in any GDB release yet, let alone in RHEL or Fedora. It’s not even in the nightly GDB snapshot, it’s that fresh. So, with the caveat that none of these examples will work today unless you’re using a Git build, here’s some things you can do with gdbserver using the new code.

Here’s an example of a traditional remote debugging session, with the things you type in bold. In one window:

abc$ ssh xyz.example.com
xyz$ gdbserver :9999 --attach 5312
Attached; pid = 5312
Listening on port 9999

gdbserver attached to process 5312, stopped it, and is waiting for GDB to talk to it on TCP port 9999. Now, in another window:

abc$ gdb -q
(gdb) target remote xyz.example.com:9999
Remote debugging using xyz.example.com:9999
...lots of messages you can ignore...
(gdb) bt
#0 0x00000035b5edf098 in *__GI___poll (fds=0x27467a0, nfds=8,
timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:83
#1 0x00000035b76449f9 in ?? () from target:/lib64/libglib-2.0.so.0
#2 0x00000035b76451a5 in g_main_loop_run ()
from target:/lib64/libglib-2.0.so.0
#3 0x0000003dfd34dd17 in gtk_main ()
from target:/usr/lib64/libgtk-x11-2.0.so.0
#4 0x000000000040913d in main ()

Now you have GDB on one machine (abc) controlling process 5312 on another machine (xyz) via gdbserver. Here I did a backtrace, but you can do pretty much anything you can with regular, non-remote GDB.

I called that a “traditional” remote debugging session because that’s how a lot of people use this, but there’s a more flexible way of doing things if you’re using gdbserver as your stub. GDB and gdbserver can communicate over stdio pipes, so you can chain commands, and the new code to remove all the setup you used to need makes this really nice. Let’s do that first example again, with pipes this time:

abc$ gdb -q
(gdb) target remote | ssh -T xyz.example.com gdbserver - --attach 5312
Remote debugging using | ssh -T xyz.example.com gdbserver - --attach 5312
Attached; pid = 5312
Remote debugging using stdio
...lots of messages...
(gdb)

The “-” in gdbserver’s argument list replaces the “:9999” in the previous example. It tells gdbserver we’re using stdio pipes rather than TCP port 9999. As well as configuring everything with single command, this has the advantage that the communication is through ssh; there’s no security in GDB’s remote protocol, so it’s not the kind of thing you want to do over the open internet.

What else can you do with this? Anything you can do through stdio pipes! You can enter Docker containers:

(gdb) target remote | sudo docker exec -i e0c1afa81e1d gdbserver - --attach 58
Remote debugging using | sudo docker exec -i e0c1afa81e1d gdbserver - --attach 58
Attached; pid = 58
Remote debugging using stdio
...

Notice how I slipped sudo in there too. Anything you can do over stdio pipes, remember? If you’re using Kubernetes you can use kubectl exec, or with OpenShift osc exec.

gdbserver can do more than just attach, you can start programs with it too:

(gdb) target remote | sudo docker exec -i e0c1afa81e1d gdbserver - /bin/sh
Remote debugging using | sudo docker exec -i e0c1afa81e1d gdbserver - /bin/sh
Process /bin/sh created; pid = 89
stdin/stdout redirected
Remote debugging using stdio
...

Or you can start it without any specific program, and then tell it what to do from within GDB. This is by far the most flexible way to use gdbserver. You can control more than one process, for example:

(gdb) target extended-remote | ssh -T root@xyz.example.com gdbserver --multi -
Remote debugging using | gdbserver --multi -
Remote debugging using stdio
(gdb) attach 774
...messages...
(gdb) add-inferior
Added inferior 2
(gdb) inferior 2
[Switching to inferior 2 [<null>] (<noexec>)]
(gdb) attach 871
...messages...
(gdb) info inferiors
Num Description Executable
* 2 process 871 target:/usr/sbin/httpd
  1 process 774 target:/usr/libexec/mysqld

Ready to debug that connection issue between your webserver and database?

Syndicated 2015-04-30 13:14:25 from gbenson.net

30 Apr 2015 mikal   » (Journeyer)

Coding club day one: a simple number guessing game in python

I've recently become involved in a new computer programming club at my kids' school. The club runs on Friday afternoons after school and is still very new so we're still working through exactly what it will look like long term. These are my thoughts on the content from this first session. The point of this first lesson was to approach a programming problem where every child stood a reasonable chance of finishing in the allotted 90 minutes. Many of the children had never programmed before, so the program had to be kept deliberately small. Additionally, this was a chance to demonstrate how literal computers are about the instructions they're given -- there is no room for intuition on the part of the machine here, it does exactly what you ask of it.

The task: write a python program which picks a random number between zero and ten. Ask the user to guess the number the program has picked, with the program telling the user if they are high, low, or right.

We then brainstormed the things we'd need to know how to do to make this program work. We came up with:
  • How do we get a random number?
  • What is a variable?
  • What are data types?
  • What is an integer? Why does that matter?
  • How do we get user input?
  • How do we do comparisons? What is a conditional?
  • What are the possible states for the game?
  • What is an exception? Why did I get one? How do I read it?


With that done, we were ready to start programming. This was done with a series of steps that we walked through as a group -- let's all print hello world. Now let's generate a random number and print it. Ok, cool, now let's do input from a user. Now how do we compare that with the random number? Finally, how do we do a loop which keeps prompting until the user guesses the random number?

For each of these a code snippet was written on the whiteboard and explained. It was up to the students to put them together into a program which actually works.

Due to limitations in the school's operating environment (no local python installation and repl.it not working due to firewalling) we used codeskulptor.org for this exercise. The code that the kids ended up with looks like this:
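
Something along these lines (a minimal sketch only -- the variable names and prompts are illustrative rather than the students' exact code, and CodeSkulptor's Python 2 environment differs slightly from plain Python 3):

import random

secret = random.randint(0, 10)     # pick a random number between zero and ten
guess = None

while guess != secret:
    guess = int(input("Guess my number (0-10): "))
    if guess < secret:
        print("Too low, guess again.")
    elif guess > secret:
        print("Too high, guess again.")
    else:
        print("You got it!")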

29 Apr 2015 dorward   » (Journeyer)

A self-indulgent rant about software with a happy ending

Last night I volunteered to convert a couple of documents to PDF for a friend.

‘It'll be easy’, I thought, ‘it'll only take a few minutes’.

The phrase "Ha" comes to mind.

Adobe Acrobat can't import DOCX files. This wasn't a huge surprise and I was prepared.

One quick trip to Pages later and … one document came out blank while the other was so badly misaligned that it was unusable.

‘Never mind’, thought I, ‘there are other options’.

OpenOffice rendered both DOCX files as blank. This was not progress.

‘Fine, fine, let's see what MS Office is like these days’.

There was a free trial of the upcoming Office for Mac available. A 2.5GB download later and I had a file which would, when double clicked, make an icon appear in the dock for about two seconds before quitting.

At this point, I admit I was getting frustrated.

Off to Office 365 I went. I'd even have gone so far as to give Microsoft my £5.95 for a month of access to it, if they'd let me login. I was presented with a blank page after entering my Live credentials.

I got the same result after switching web browser to one that wasn't laden down with the features that make the WWW bearable.

Did Microsoft not want my money?

(The more I deal with DOCX, the less I like it).

By this point, it was past midnight, I was running out of options, and I didn't want to let my friend down.

Then I found the rather wonderful convertonelinefree.com (Gosh, this paragraph looks a bit spammy, it isn't though.) and I had the DOCX files converted a minute later.

So time to talk about Adobe software… in a blog post where I've been ranting about software. Brace yourselves…

I really like Acrobat CC. (Has the sky fallen? No? OK, then. Let us continue.)

I don't know what someone who has used earlier versions a lot will think of the dramatic UI changes, but as an occasional user, it is really rather nice.

It combined my two files without a hitch and did a near perfect job of identifying all the form fields I wanted to be editable.

The step-by-step UI is rather nice and makes it easy to find the various tools to edit the document.

Syndicated 2015-04-29 08:17:05 from Dorward's Ramblings

27 Apr 2015 Stevey   » (Master)

Validating puppet manifests via git hooks.

It looks like I'll be spending a lot of time working with puppet over the coming weeks.

I've set up some toy deployments on virtual machines, and have converted several of my own hosts to using it, rather than my own slaughter system.

When it comes to puppet some things are good, and some things are bad, as expected, and as with any similar tool (even my own). At the moment I'm just aiming for consistency and making sure I can control all the systems - BSD, Debian GNU/Linux, Ubuntu, Microsoft Windows, etc.

Little changes are making me happy though - rather than using a local git pre-commit hook to validate puppet manifests I'm now doing that checking on the server-side via a git pre-receive hook.

Doing it on the server-side means that I can never forget to add the local hook, and future colleagues can never make that mistake either and commit malformed puppetry.

It is almost a shame there isn't a decent collection of example git-hooks, for doing things like this puppet-validation. Maybe there is and I've missed it.

It only crossed my mind because I've had to write several of these recently - a hook to rebuild a static website when the repository has a new markdown file pushed to it, a hook to validate syntax when pushes are attempted, and another hook to deny updates if the C-code fails to compile.
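For the puppet case, something like the following rough sketch would do as a pre-receive hook -- Python rather than shell, and it assumes puppet (for "puppet parser validate") is on the PATH of the user the repository belongs to:

    #!/usr/bin/env python3
    # Reject pushes containing puppet manifests that fail "puppet parser validate".
    # For simplicity this validates every .pp file in each pushed revision.
    import os, subprocess, sys, tempfile

    ZERO = '0' * 40
    failed = False

    for line in sys.stdin:
        oldrev, newrev, refname = line.split()
        if newrev == ZERO:
            continue  # branch deletion, nothing to validate
        files = subprocess.check_output(
            ['git', 'ls-tree', '-r', '--name-only', newrev],
            universal_newlines=True).splitlines()
        for path in files:
            if not path.endswith('.pp'):
                continue
            # Pull the manifest out of the pushed revision and validate a copy of it.
            manifest = subprocess.check_output(['git', 'show', '%s:%s' % (newrev, path)])
            tmp = tempfile.NamedTemporaryFile(suffix='.pp', delete=False)
            tmp.write(manifest)
            tmp.close()
            if subprocess.call(['puppet', 'parser', 'validate', tmp.name]) != 0:
                print('rejected: %s does not validate' % path)
                failed = True
            os.unlink(tmp.name)

    sys.exit(1 if failed else 0)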

Syndicated 2015-04-27 00:00:00 from Steve Kemp's Blog

27 Apr 2015 mjg59   » (Master)

Reducing power consumption on Haswell and Broadwell systems

Haswell and Broadwell (Intel's previous and current generations of x86) both introduced a range of new power saving states that promised significant improvements in battery life. Unfortunately, the typical experience on Linux was an increase in power consumption. The reasons why are kind of complicated and distinctly unfortunate, and I'm at something of a loss as to why none of the companies who get paid to care about this kind of thing seemed to actually be caring until I got a Broadwell and looked unhappy, but here we are so let's make things better.

Recent Intel mobile parts have the Platform Controller Hub (Intel's term for the Southbridge, the chipset component responsible for most system i/o like SATA and USB) integrated onto the same package as the CPU. This makes it easier to implement aggressive power saving - the CPU package already has a bunch of hardware for turning various clock and power domains on and off, and these can be shared between the CPU, the GPU and the PCH. But that also introduces additional constraints, since if any component within a power management domain is active then the entire domain has to be enabled. We've pretty much been ignoring that.

The tldr is that Haswell and Broadwell are only able to get into deeper package power saving states if several different components are in their own power saving states. If the CPU is active, you'll stay in a higher-power state. If the GPU is active, you'll stay in a higher-power state. And if the PCH is active, you'll stay in a higher-power state. The last one is the killer here. Having a SATA link in a full-power state is sufficient to keep the PCH active, and that constrains the deepest package power savings state you can enter.

SATA power management on Linux is in a kind of odd state. We support it, but we don't enable it by default. In fact, right now we even remove any existing SATA power management configuration that the firmware has initialised. Distributions don't enable it by default because there are horror stories about some combinations of disk and controller and power management configuration resulting in corruption and data loss and apparently nobody had time to investigate the problem.
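For reference, the knob we already expose lives in sysfs, one per SATA host; a tiny sketch like this (illustrative only -- it needs root to write, and see the corruption caveat above before opting in to min_power) shows the current policy and how you would change it:

    import glob

    # The kernel's existing per-host policies: max_performance, medium_power, min_power.
    for knob in glob.glob('/sys/class/scsi_host/host*/link_power_management_policy'):
        with open(knob) as f:
            print(knob, f.read().strip())
        # To opt in on hardware you trust, write the new policy back:
        # with open(knob, 'w') as f:
        #     f.write('min_power')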

I did some digging and it turns out that our approach isn't entirely inconsistent with the industry. The default behaviour on Windows is pretty much the same as ours. But vendors don't tend to ship with the Windows AHCI driver, they replace it with the Intel Rapid Storage Technology driver - and it turns out that it has a default-on policy. But to make things even more awkward, the policy implemented by Intel doesn't match any of the policies that Linux provides.

In an attempt to address this, I've written some patches. The aim here is to provide two new policies. The first simply inherits whichever configuration the firmware has provided, on the assumption that the system vendor probably didn't configure their system to corrupt data out of the box[1]. The second implements the policy that Intel use in IRST. With luck we'll be able to use the firmware settings by default and switch to the IRST settings on Intel mobile devices.

This change alone drops my idle power consumption from around 8.5W to about 5W. One reason we'd pretty much ignored this in the past was that SATA power management simply wasn't that big a win. Even at its most aggressive, we'd struggle to see 0.5W of saving. But on these new parts, the SATA link state is the difference between going to PC2 and going to PC7, and the difference between those states is a large part of the CPU package being powered up.

But this isn't the full story. There's still work to be done on other components, especially the GPU. Keeping the link between the GPU and an internal display panel active is both a power suck and requires additional chipset components to be powered up. Embedded Displayport 1.3 introduced a new feature called Panel Self-Refresh that permits the GPU and the screen to negotiate dropping the link, leaving it up to the screen to maintain its contents. There's patches to enable this on Intel systems, but it's still not turned on by default. Doing so increases the amount of time spent in PC7 and brings corresponding improvements to battery life.

This trend is likely to continue. As systems become more integrated we're going to have to pay more attention to the interdependencies in order to obtain the best possible power consumption, and that means that distribution vendors are going to have to spend some time figuring out what these dependencies are and what the appropriate default policy is for their users. Intel's done the work to add kernel support for most of these features, but they're not the ones shipping it to end-users. Let's figure out how to make this right out of the box.

[1] This is not necessarily a good assumption, but hey, let's see


Syndicated 2015-04-27 18:33:44 from Matthew Garrett

25 Apr 2015 tampe   » (Journeyer)

The escape of the batch curse

Consider the following problem. Assume that we can generate two random sequences l1, l2 of numbers between 0 and 9. Take the transform that maps each number to the number of steps before it appears again, modulo 10; call this map M. Let Max be the transform of a sequence formed by taking the max of the current value and the next. Let Plus be the elementwise sum of two such sequences modulo 10. We also assume that we know that the second sequence, l2, has the property that, elementwise,


Max(M(l1)) .leq. Max(M(l2)),

how do we go about generating


M(Plus(Max(M(l1)),Max(M(l2)))).

The idea of the solution I would like to play with is to generate a special variable whose value is not known when you create it, but which you can place in the right order; then, when everything it depends on is available, the result is computed. I've played with these ideas a long time ago here on this blog, but now backtracking also comes into play, and we use guile-log and prolog. So what is the main trick that enables this?

Define two predicates, delay and force, that are used as follows:


plusz(Z,X,Y) :- delay(plusz(Z,X,Y),X,Y) ;
    (ZZ is X + Y, force(Z,ZZ)).

We want to take the addition of X and Y. If X and Y have both been forced, delay will fail and we fall through to the addition; otherwise it will delay the evaluation of plusz(Z,X,Y) and execute it once both have been forced. To put the value in Z we need special code that also forces Z in case Z itself has been blessed as a delayed value. That's it -- it's defined in about 50 lines of guile-log code, nothing huge.
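The delay/force trick itself is not tied to prolog; here is a rough Python analogue (ignoring backtracking entirely, and with names of my own rather than guile-log's) of a value that stays unknown until forced, with dependent computations queued until all of their inputs are in:

    # A value that is unknown until forced; callbacks queued on it run once
    # every input they depend on has been forced.
    class Delayed:
        UNSET = object()

        def __init__(self):
            self.value = Delayed.UNSET
            self.pending = []

        def force(self, value):
            self.value = value
            for thunk in self.pending:
                thunk()
            self.pending = []

    def when_forced(thunk, *deps):
        # Run thunk as soon as every dependency has a value.
        def ready():
            return all(d.value is not Delayed.UNSET for d in deps)
        def attempt():
            if ready():
                thunk()
        if ready():
            thunk()
        else:
            for d in deps:
                if d.value is Delayed.UNSET:
                    d.pending.append(attempt)

    def plusz(z, x, y):
        when_forced(lambda: z.force(x.value + y.value), x, y)

    x, y, z = Delayed(), Delayed(), Delayed()
    plusz(z, x, y)   # z is not known yet
    x.force(3)
    y.force(4)
    print(z.value)   # 7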

The setup for generating a sequence is to maintain state and to define transforms that initialise the state and update it; given such transforms one has enough to generate the sequence, so one needs to make sense of the following idioms


next(S,SS) :- ..
start(S) :- ..

Let's see how it can look for our example in prolog:


next_all2([Z1,Z2,Id,S1,S2,Z],[ZZ1,ZZ2,IId,SS1,SS2,ZZ]) :-
next_M(Z1,[R1,P1,C1|_],ZZ1),
next_M(Z2,[R2,P2,C2|_],ZZ2),
moving_op(2,maxz,0,U1,C1,S1,SS1),
moving_op(2,maxz,0,U2,C2,S2,SS2),
fail_if(P2,(U1 .leq. U2)),
plus10z(C, U1 ,U2),
next_M(Z,[_,_,CZ|_],ZZ),
plusz(IId,Id ,C),
writez(_,IId,C,R1,R2).



next_M(Z,X,ZZ)

next_M(Z,X,ZZ) will be the sequence M(l), e.g. it's a construct that generates state information Z->ZZ with the current value X=[l_i,M(l)_i,Redo ...], where l_i is the i'th generated random value, M(l)_i is the number of steps before l_i appears again in the sequence modulo 10, and Redo is the backtracking object so that everything restarts from the generation of the random value l_i.


moving_op(N,Op,InitSeed,MaxRes,ValIn,S,SS)

N is the length of the window, Op is the reducing operator op(Z,X,Y), and InitSeed is the initial value of the reduction. MaxRes is the current result of e.g. the max operation over the window, possibly delayed; ValIn is the incoming value of the sequence; S is the state in and SS is the state out.


fail_if(P,(U1 .leq. U2),U1,U2)

when U1 and U2 have both been forced and the condition U1 .leq. U2 does not hold, this fails and backtracking (via P) restarts the generation.


plus10z(C, U1 ,U2),


plus modulo 10 of Max(M(l1)) and Max(M(l2))


plusz(IId,Id ,C),

This is a convolution of the generation of solutions C; the result IId_i will be non-delayed if and only if all of the C_k up to and including C_i have been forced.


writez(_,IId,C,R1,R2).

Write out the result C and the generated random values R1 and R2 for l1 and l2.

As you can see, this approach makes sure the combinations of values stay correctly synchronised, and it lets you decompose the problem into reusable, more abstract components that are quite easy to sew together. That's the power of this idea: if you want to change the algorithm it's easy to do, and the number of bugs will be smaller thanks to the composability of the approach. Neat! Also, this approach is memory safe thanks to the neat gc that guile-log has for logical variables, so everything will keep working on sequences as long as you are prepared to wait for them.

Cheers!

25 Apr 2015 mikal   » (Journeyer)

Tuggeranong Trig (again)

The cubs at my local scout group are interested in walking to a trig, but have some interesting constraints around mobility for a couple of their members. I therefore offered to re-walk Tuggeranong Trig in Oxley with an eye out for the terrain. I think this walk would be very doable for cubs -- it's 650 meters with only about 25 meters of vertical change. The path is also ok for a wheelchair, I think.


Interactive map for this route.

Tags for this post: blog pictures 20150415-tuggeranong_trig photo canberra bushwalk trig_point
Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger

Comment

Syndicated 2015-04-24 18:04:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

24 Apr 2015 bagder   » (Master)

curl on the NASDAQ tower

Apigee posted this lovely picture over at twitter. A curl command line on the NASDAQ tower.


Syndicated 2015-04-24 16:54:47 from daniel.haxx.se
