Older blog entries for etbe (starting at number 936)

Standardising Android

Don Marti wrote an amusing post about the lack of standards for Android phones and the fact that the iPhone has a better accessory market as a result [1].

I’d like to see some Android phones get standardised in a similar manner to the PC. The big thing about the IBM PC compatible market was that they all booted the same way, ran the same OS and applications, and had the same expansion options, connectors, etc. The early PCs sucked in many ways (there were other desktop computers in the 80s that were better in various respects) but the larger market made the PC win.

The PC even killed the Mac! This is something we should remember now when discussing the iPhone.

I’d like to see different Android phones that can run the same OS with the same boot loader. Having HTC, LG, Samsung, and others all sell phones that can run the same version of CyanogenMod and have the same recovery options if a mistake is made when loading CyanogenMod shouldn’t be any more difficult than having IBM, Compaq, HP, DEC, Dell, and others sell PCs that ran the same versions of all the OSs of the day and had the same recovery options.

Then there should be options for common case sizes. From casual browsing in phone stores it seems that most phones on sale in Australia are of a tablet form without a hardware keyboard; they have a USB/charger socket, an audio socket, and hardware buttons for power, volume up/down, and “home” – with the “settings” and “back” buttons being on the touch-screen in the Galaxy S but hardware buttons in most other phones. A hardware button to take a picture is available on some phones.

The variation in phone case design doesn’t seem to be that great and there seems to be a good possibility of a few standards for common formats, EG large tablet, small tablet, and large tablet with hardware keyboard. The phone manufacturers are currently competing on stupid things like how thin a phone can be while ignoring real concerns of users, such as having a phone that can last for 24 hours without being charged! But they could just as easily compete on ways of filling a standard case size, with options for screen resolution, camera capabilities, CPU, GPU, RAM, storage, etc. There could also be ways of making a standard case with several options, EG an option for a camera that extends from the back of the case for a longer focal length – such an option wouldn’t require much design work for a second version of any accessory that might connect to the phone.

Also standards would need to apply for a reasonable period of time. One advantage that Apple has is that it has only released a few versions of the iPhone and each has been on sale for a reasonable amount of time (3 different sizes of case in 4 years). Some of the Android phones seem to only be on sale in mass quantities for a few months before being outdated, at which time many of the stores will stop getting stock of matching accessories.

Finally I’d be a lot happier if there was good support for running multiple Android phones with the same configuration. Then I could buy a cheap waterproof phone for use at the beach and synchronise all the configuration before leaving home. This is a feature that would be good for manufacturers as it would drive the average rate of phone ownership to something greater than 1 phone per person.

Syndicated 2012-01-03 14:14:00 from etbe - Russell Coker

Links December 2011

Barry Ritholtz wrote an insightful post quoting Federal Reserve Bank of Kansas City President Thomas Hoenig, who warns that the nation’s biggest banks are putting the U.S. capitalist society at risk [1]. Big banks oppose capitalism.

Glenn Greenwald has written an insightful article for Salon about the modern definition of American excellence being the killing of supposedly bad people without any due process [2].

Mazuma Mobile buys used mobile phones [3]. They can send a post-pack to ship your old mobile to them. This is good for the environment and also saves some money.

Sam Varghese has written an informative article about the Trans Pacific Partnership Agreement that will probably end up benefiting US corporations at the expense of Australian citizens [4].

Cory Doctorow has written an informative article for The Guardian about the BBC DRM plans [5]. He received information that was denied in a FOI request which shows how poor the BBC case is and how bad the Ofcom oversight is.

Sam Harris has written an insightful blog post about self-defense [6]. He also has many other posts that are worth reading.

Aparna Rao gave an interesting TED presentation about her robotic art [7].

Syndicated 2011-12-31 12:56:40 from etbe - Russell Coker

Sociological Images

I’ve recently been reading the Sociological Images blog [1]. That site has lots of pictures and videos that are relevant to the study of Sociology (most of which have a major WTF factor) and it’s run by people who have Ph.Ds in Sociology so the commentary is insightful. Since reading that I’ve started photographing relevant things.

woman in straight-jacket advertising energy prices

I can’t work out the logic behind the above advert for Energy Watch which was on a billboard near Ringwood Station in Melbourne, Australia. The only thing that is clear is that it spreads bad ideas about mental illness and psychiatric treatment. It doesn’t make me want to do business with them.

Antons full display

The above picture is a shop-front for the Antons clothing store (I’m not sure if they are a tailor or if they sell ready to wear). It was taken on Lonsdale St, Melbourne, where the store apparently used to be; they are now in Melbourne Central.

Antons left display, African and Southern European
Antons right display, Northern European and Japanese

The above pictures show more detail. Unfortunately the combination of lighting and my camera (Xperia X10 phone camera) wasn’t adequate to show the apparent ethnic differences between the two men. It seems that the most likely Australian interpretation of the ethnic groups represented is African (maybe Afro-American), Southern-European or maybe American Hispanic, North-Western European, and Japanese. It’s good to have mannequins representing the fact that not everyone in Australia is white, but different facial expressions for different races seems a strange choice (admittedly it might be a choice made by mannequin manufacturers). Also the Japanese woman with fan idea is rather outdated.

I’ve just started reading You May Ask Yourself: An Introduction to Thinking Like a Sociologist (Second Edition) by Dalton Conley. I’ve only read the first chapter, but it was good enough that I’m confident in recommending the entire book.

Syndicated 2011-12-31 02:15:36 from etbe - Russell Coker

My Blog Server was Cracked

On the 1st of August I noticed that the server which runs my blog among other things was having an occasional SEGV from an sshd process. Unfortunately I was busy and didn’t pay much attention to this, which turned out to be a big mistake.

On the 12th of September I started investigating this properly and noticed that when someone tried to connect to ssh with password authentication sshd would SEGV after being denied access to a shared memory region or a semaphore which had a SE Linux type of unconfined_t. I added some SE Linux auditallow rules and discovered that the memory region in question was created by the ssh client. Shortly after that I came to the conclusion that this wasn’t some strange feature of ssh (or one of the many shared objects it uses) but hostile activity. The ssh client appeared to be storing the passwords that it used in a shared memory region, and sshd was also collecting passwords in the same region, presumably offering them to an ssh client which uses some extension to the ssh protocol.
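
For reference, the following is a minimal sketch of the kind of auditallow policy module that suits this sort of investigation – the module name and the exact permission lists are invented for the example, and the types involved depend on the policy in use:

module shm_audit 1.0;

require {
        attribute domain;
        type unconfined_t;
        class shm { associate create read write };
        class sem { associate create read write };
}

# log every granted access to unconfined_t shared memory regions and
# semaphores so the processes creating and using them appear in the audit log
auditallow domain unconfined_t:shm { associate create read write };
auditallow domain unconfined_t:sem { associate create read write };

Such a module can be compiled and loaded with “checkmodule -M -m -o shm_audit.mod shm_audit.te”, “semodule_package -o shm_audit.pp -m shm_audit.mod”, and “semodule -i shm_audit.pp”.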

The sshd process was crashing because it couldn’t handle EPERM on access to shared memory or semaphores. Presumably if the system in question wasn’t running SE Linux then the exploit would have remained undetected for a lot longer.

At this stage we don’t know how the attacker got in. Presumably one of the people with root access ran an ssh client on a compromised system and had their password sniffed. One such client system was mysteriously reinstalled at about that time, and the sysadmin of the system in question claimed to have no backups, which made it impossible to determine whether that system had been compromised. I believe that the sysadmin of the client system knew that their system was compromised, kept that information secret, and allowed other systems to become and remain compromised.

The attacker made no serious effort to conceal their presence: they replaced ssh, sshd, and ssh-add and didn’t bother changing the Debian checksums, so the debsums program flagged the files as modified. Note that I have kept copies of the files in question and am willing to share them with anyone who wants to analyse them.
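
As an illustration, checking for this sort of modification is easy with debsums – a sketch of the commands rather than a record of exactly what I ran:

# report any files whose checksums differ from the Debian package database
debsums -c
# restrict the check to the OpenSSH packages that were replaced
debsums -c openssh-client openssh-server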

Steinar H. Gunderson has named this trojan Ebury [1].

Recovery

By the evening of the 13th of September I had the system mostly working again. Jabber still isn’t working because ejabberd is difficult to get working at the best of times. I am now investigating whether there is a better Jabber server to use, but as I don’t use Jabber often this hasn’t been a priority for me.

Some of the WordPress plugins I use and all of the WordPress themes that were installed were outside the Debian packaging system. As I couldn’t be sure that they hadn’t been altered (because the people who write WordPress plugins don’t keep old versions online) I had to upgrade to the newer versions. Of course the newer versions weren’t entirely compatible, so I had to use a different theme and I couldn’t get all plugins working. LinkWithin no longer works, not that it ever worked properly [2]. I wanted to try Outbrain again but their web site won’t let me log in (and they haven’t responded to my support request). Does anyone know of a good WordPress plugin to provide links to related content? Either related content on my blog or on the Internet in general would be OK.

Some people have asked me about the change in appearance of my blog. It was simply impossible (for someone with my PHP skills) to get my blog looking the same way as it did before the server was cracked. I think that the new look is OK and don’t mind if people think it looks like a VW advert – VW make great cars, and I was very satisfied with the VW Passat I used to drive.

Future Plans

I had bought some Yubikeys (USB devices that generate one-time passwords) [3] to control access to that server; if I had configured the software to use them then this might not have happened. The use of one-time password devices can prevent passive password-sniffing attacks. It would still allow active attacks, such as using the ControlPath/ControlMaster options of the ssh client (EG the -M, -S, and “-o ControlPersist” options) to allow a hostile party to connect later. It’s a pity that there doesn’t seem to be a direct way of configuring the ssh server to disable ControlMaster.
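
As an aside, the closest server-side option I know of is MaxSessions in sshd_config – according to the sshd_config man page, setting it to 1 effectively disables session multiplexing, which is the server-side half of the ControlMaster feature. A minimal sketch, assuming OpenSSH 5.1 or later:

# limit each network connection to a single session, which effectively
# disables session multiplexing (the server-side effect of ControlMaster)
MaxSessions 1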

Conclusion

It would be good to have some changes to sshd to allow more restrictions on what a client can request; as ControlMaster functionality isn’t needed by most users it should be possible to disable it.

SE Linux doesn’t protect against a compromised client system or any other way of stealing passwords. It did do a good job of stopping Ebury from doing all the things it wanted to do and thus making me aware of the problem. So I count this as a win for SE Linux.

Yubikeys are the cheapest and easiest way of managing one-time passwords. I had already bought some for use on the system in question but hadn’t got around to configuring them. I have to make that a priority.

Syndicated 2011-12-31 00:01:06 from etbe - Russell Coker

Secure Boot and Protecting Against Root

There has been a lot of discussion recently about Microsoft’s ideas regarding secure boot; in case you have missed it, Michael Casadevall has written a good summary of the issue [1].

Recently I’ve seen a couple of people advocate the concept of secure boot with the stated idea that “root” should be unable to damage the system. As Microsoft software is something that doesn’t matter to me I’ll restrict my comments to how this might work on Linux.

Restricting the “root” account is something that is technically possible; for much of the past 9 years I have been running SE Linux “Play Machines” which have UID 0 (root) restricted by SE Linux such that they can’t damage the system [2] – there are other ways of achieving similar goals. But having an account with UID 0 that can’t change anything on the system doesn’t really match what most people think of as “root”; I just do it as a way of demonstrating that SE Linux controls all access, such that cracking a daemon which runs as root won’t result in immediately controlling the entire system.

As an aside my Play Machine is not online at the moment, I hope to have it running again soon.

Root Can’t Damage the System

One specific claim was that “root” should be unable to damage the system. While a secure boot system can theoretically result in a boot to single user mode without any compromise, that doesn’t apply to fully operational systems. For a file owned by root to be replaced the system security has to be compromised in some way. The same compromise will usually work every time until the bug is fixed and the software is upgraded. So the process of cracking root that might be used to install hostile files can also be used at runtime to exploit running processes via ptrace and do other bad things.

Even if the attacker is forced to compromise the system at every boot this isn’t a great win for the case of servers with months of uptime or for the case of workstations that have confidential data that can be rapidly copied over the Internet. There are also many workstations that are live on the Internet for months nowadays.

Also the general claim doesn’t really make sense on its own. “root” usually means the account that is used for configuring the system. If a system can be configured then the account which is used to configure it will be able to do unwanted things. It is theoretically possible to run workstations without external root access (EG have them automatically update to the latest security fixes). Such a workstation configuration MIGHT be able to survive a compromise by having a reboot trigger an automatic update. But a workstation that is used in such a manner could just be re-imaged, as it would probably be used in an environment where data-less operation makes sense.

An Android phone could be considered as an example of a Linux system for which the “root” user can’t damage the system if you consider “root” to mean “person accessing the GUI configuration system”. But then it wouldn’t be difficult to create a configuration program for a regular Linux system that allows the user to change some parts of the system configuration while making others unavailable. Besides there are many ways in which the Android configuration GUI permits the user to make the system mostly unusable (EG by disabling data access) or extremely expensive to operate (EG by forcing data roaming). So I don’t think that Android is a good example of “root” being prevented from doing damage.

Signing All Files

Another idea that I saw advocated was to have the “secure boot” concept extended to all files. So you have a boot loader that loads a signed kernel which then loads only signed executables and then every interpreter (Perl, Python, etc) will also check for signatures on files that they run. This would be tricky with interpreters that are designed to run from standard input (most notably /bin/sh but also many other interpreters).

Doing this would require changing many programs, I guess you would even have to change mount to check the signature on /etc/fstab etc. This would be an unreasonably large amount of work.

Another possibility would be to change the kernel such that it checks file signatures and has restrictions on system calls such as open() and the exec() family of calls. In concept it would be possible to extend SE Linux or any other access control system to include access checks on which files need to be signed (some types such as etc_t and bin_t would need to be signed but others such as var_t wouldn’t).

Of course this would mean that no sysadmin work could be performed locally as all file changes would have to come from the signing system. I can imagine all sorts of theoretically interesting but practically useless ways of implementing this, such as having the signing system disconnected from the Internet with USB flash devices used for one-way file transfer – because you can’t have the signing system exposed to the same attacks as the host system.

The requirement to sign all files would reduce the use of such a system to a tiny fraction of the user-base. Which would then raise the question of why anyone would spend the effort on that task when there are so many other ways of improving security that involve less work and can be used by more people.

Encrypted Root Filesystem

One real benefit of a secure boot system is for systems using encrypted filesystems. It would be good to know that a hostile party hasn’t replaced the kernel and initrd when you are asked for the password to unlock the root filesystem. This would be good for the case where a laptop is left in a hotel room or other place where a hostile party could access it.

Another way of addressing the same problem is to boot from a USB device so that you can keep a small USB boot device with you when it’s inconvenient to carry a large laptop (which works for me). Of course it’s theoretically possible for the system BIOS to be replaced with something that trojans the boot process (EG runs the kernel in a virtual machine). But I expect that if someone who is capable of doing that gets access to my laptop then I’m going to lose anyway.
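
For anyone who wants to try the USB boot approach, below is a minimal sketch of creating such a device with GRUB 2, assuming the USB stick appears as /dev/sdb with a single partition and a GRUB version that supports the --boot-directory option; the copied grub.cfg may need its device references adjusted:

# create a filesystem on the USB stick and copy the boot files to it
mkfs.ext2 /dev/sdb1
mount /dev/sdb1 /mnt
cp -a /boot/* /mnt/
# install GRUB to the MBR of the USB stick
grub-install --boot-directory=/mnt /dev/sdb
umount /mnt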

Conclusion

The secure boot concept does seem to have some useful potential when the aim is to reboot the system and have it automatically apply security fixes in the early stages of the boot process. This could be used for Netbooks and phones. Of course such a process would have to reset some configuration settings to safe defaults; this means replacing files in /etc and some configuration files in the user’s home directory. So such a reboot and upgrade procedure would either leave the possibility that files in /etc were still compromised or remove some configuration work and thus give the user an incentive to avoid applying the patch.

Any system that tries to extend signature checks all the way would either be vulnerable to valid but hostile changes to system configuration (such as authenticating to a server run by a hostile party) or have extreme ease of use issues due to signing everything.

Also a secure boot will only protect a vulnerable system between the time it is rebooted and the time it returns to full operation after the reboot. If the security flaw hasn’t been fixed (which could be due to a 0-day exploit or an exploit for which the patch hasn’t been applied) then the system could be cracked again.

I don’t think that a secure boot process offers real benefits to many users.

Syndicated 2011-12-28 04:16:22 from etbe - Russell Coker

Some Notes on DRBD

DRBD is a system for replicating a block device across multiple systems. It’s most commonly used for having one system write to the DRBD block device such that all writes are written to a local disk and a remote disk. In the default configuration a write is not complete until it’s committed to disk locally and remotely. There is support for having multiple systems write to disk at the same time, but naturally that only works if the filesystem drivers are aware of this.

I’m installing DRBD on some Debian/Squeeze servers for the purpose of mirroring a mail store across multiple systems. For the virtual machines which run mail queues I’m not using DRBD because the failure conditions that I’m planning for don’t include two disks entirely failing. I’m planning for a system having an outage for a while, so it’s OK to have some inbound and outbound mail delayed but it’s not OK for the mail store to be unavailable.

Global changes I’ve made in /etc/drbd.d/global_common.conf

In the common section I changed the protocol from “C” to “B”, which means that a write() system call returns after data is committed locally and sent to the other node. This means that if the primary node goes permanently offline AND the secondary node has a transient power failure or kernel crash causing the buffer contents to be lost then writes can be lost. I don’t think that this scenario is likely enough to make it worth choosing protocol C and requiring that all writes go to disk on both nodes before they are considered complete.

In the net section I added the following:

sndbuf-size 512k;
data-integrity-alg sha1;

This uses a larger network sending buffer (apparently good for fast local networks – although I’d have expected that the low latency of a local Gig-E would give a low bandwidth-delay product) and sha1 hashes on all packets (why does it default to no data integrity checking?).
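
Putting those changes together, the relevant parts of the file look something like the following – a sketch of the DRBD 8.3 syntax shipped with Debian/Squeeze, not a copy of my actual file:

# /etc/drbd.d/global_common.conf
common {
        protocol B;

        net {
                sndbuf-size 512k;
                data-integrity-alg sha1;
        }
}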

Reserved Numbers

The default port number that is used is 7789. I think it’s best to use ports below 1024 for system services, so I’ve set up some systems starting with port 100 and going up from there. I use a different port for every DRBD instance, so if I have two clustered resources on a LAN then I’ll use different ports even if they aren’t configured to ever run on the same system. You never know when the cluster assignment will change, and DRBD port numbers seem like something that could cause real problems if there was a port conflict.
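
As an illustration of the port and device numbering scheme, a hypothetical resource file might look like the following – the host names, IP addresses, and LVM volume are invented for the example:

# /etc/drbd.d/db0-mysql.res
resource db0-mysql {
        device /dev/drbd0;
        disk /dev/vg0/db0-mysql;
        meta-disk internal;

        on server-a {
                address 192.168.0.1:100;
        }
        on server-b {
                address 192.168.0.2:100;
        }
}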

Most of the documentation assumes that the DRBD device nodes on a system will start at /dev/drbd0 and increment, but this is not a requirement. I am configuring things such that there will only ever be one /dev/drbd0 on a network. This means that there is no possibility of a cut/paste error in a /etc/fstab file or a Xen configuration file causing data loss. As an aside, I recently discovered that a Xen Dom0 can do a read-write mount of a block device that is being used read-write by a Xen DomU; there is some degree of protection against a DomU using a block device that is already being used in the Dom0, but no protection against the Dom0 messing with the DomU’s resources.

It would be nice if there was an option of using some device name other than /dev/drbdX where X is a number. Using meaningful names would reduce the incidence of doing things to the wrong device.

As an aside it would be nice if there was some sort of mount helper for determining which devices shouldn’t be mounted locally and which mount options are permitted – it MIGHT be OK to do a read-only mount of a DomU’s filesystem in the Dom0 but probably all mounting should be prevented. Also a mount helper for such things would ideally be able to change the default mount options, for example it could make the defaults be nosuid,nodev (or even noexec,nodev) when mounting filesystems from removable devices.

Initial Synchronisation

After a few trials it seems to me that things generally work if you create DRBD on two nodes at the same time and then immediately make one of them primary. If you don’t then it will probably refuse to accept one copy of the data as primary as it can’t seem to realise that both are inconsistent. I can’t understand why it does this in the case where there are two nodes with inconsistent data; you know for sure that there is no good data so there should be an operation to zero both devices and make them equal. Instead you seem to be forced to declare the data on one node to be authoritative, as described below.

The solution sometimes seems to be to run “drbdsetup /dev/drbd0 primary -” (where drbd0 is replaced with the appropriate device). This seems to work well and allowed me to create a DRBD installation before I had installed the second server. If the servers have been connected in Inconsistent/Inconsistent state then the solution seems to involve running “drbdadm -- --overwrite-data-of-peer primary db0-mysql” (for the case of a resource named db0-mysql defined in /etc/drbd.d/db0-mysql.res).

Also it seems that some commands can only be run from one node. So if you have a primary node that’s in service and another node in Secondary/Unknown state (IE disconnected) with data state Inconsistent/DUnknown, then while you would expect to be able to connect from the secondary node it appears that nothing other than a “drbdadm connect” command run from the primary node will get things going.
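
When experimenting with this it helps to watch the connection and disk states. A sketch of the usual commands, using the db0-mysql resource name from above:

# overall status of all DRBD devices
cat /proc/drbd
# connection state (EG Connected or StandAlone) of one resource
drbdadm cstate db0-mysql
# role (EG Primary/Secondary) and disk state (EG UpToDate/Inconsistent)
drbdadm state db0-mysql
drbdadm dstate db0-mysql
# reconnect a disconnected resource (run from the primary node as noted above)
drbdadm connect db0-mysql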

Syndicated 2011-12-17 08:59:30 from etbe - Russell Coker

Hetzner Failover Configuration

The Wiki documenting how to configure IP failover for Hetzner servers [1] is closely tied to the Linux HA project [2]. This is OK if you want a Heartbeat cluster, but if you want manual failover or an automatic failover from some other form of script then it’s not useful. So I’ll provide the simplest possible documentation.

Below is a sample of shell code to get the current failover settings and change them to point the IP address to a different server. In my tests this takes between 19 and 20 seconds to complete; when the command completes the new server will be active and no IP packets will be lost – but TCP connections will be broken if the servers don’t support shared TCP state.

# username and password for the Hetzner robot
USERPASS=USER:PASS
# public IP
IP=10.1.2.3
# new active server
ACTIVE=10.2.3.4
# get current values
curl -s -u $USERPASS https://robot-ws.your-server.de/failover.yaml/$IP
# change active server
curl -s -u $USERPASS https://robot-ws.your-server.de/failover.yaml/$IP -d active_server_ip=$ACTIVE

Below is the output of the above commands showing the old state and the new state.

failover:
ip: 10.1.2.3
netmask: 255.255.255.255
server_ip: 10.2.3.3
active_server_ip: 10.2.3.4
failover:
ip: 10.1.2.3
netmask: 255.255.255.255
server_ip: 10.2.3.4
active_server_ip: 10.2.3.4
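
If this is run from a failover script then it’s worth verifying that the change took effect. Below is a minimal sketch that re-queries the record and parses the YAML output shown above – the sed expression is mine and assumes that the output format stays stable:

# extract the active server IP and compare it to the requested one
NEW=$(curl -s -u $USERPASS https://robot-ws.your-server.de/failover.yaml/$IP | sed -n 's/ *active_server_ip: //p')
if [ "$NEW" != "$ACTIVE" ]; then
  echo "failover to $ACTIVE failed, active server is still $NEW" >&2
  exit 1
fi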

Syndicated 2011-12-14 22:44:05 from etbe - Russell Coker

Cocolo Chocolate

Cocolo Overview

I recently wrote about buying a fridge for storing chocolate [1].

Jason Lewis (the co-founder of Organic Trader [2]) read that post and sent me some free samples of Cocolo chocolate [3] (Cocolo is an Organic Trader product that is made in Switzerland).

It’s interesting to note that Cocolo seem very focussed on a net presence [3]; their URL is printed on the back of the packet in a font of equal size to the main label on the front (although the front label is in upper case). The main web page has a prominent link to their Twitter page, which appears to be updated a couple of times a month.

Picture of Cocolo chocolate packaging

Cocolo makes only organic fair-trade chocolate. Every pack lists the percentage of ingredients that are Fairtrade (presumably milk and some other ingredients are sourced locally in Switzerland and Fairtrade doesn’t apply to them). Their chocolate packages have the URL www.fairtrade.com.au printed on them and their web site links to an international Fairtrade organisation. The packages also list the organic and Fairtrade certification details and state that they are GMO free. The final piece of geek data on the package is the advice to store the chocolate at a temperature between 16C and 18C (I have now set my fridge thermostat to 17C). The above picture shows the front of a pack of Dark Orange chocolate and the back of a pack of Milk chocolate.

Reviews

One thing that is different about Cocolo is that they use only unrefined evaporated organic cane sugar juice to sweeten their chocolate. This gives it a hint of molasses in the flavor. Children who like white sugar with brown coloring might not appreciate this, but I think that the use of natural cane sugar juice will be welcomed by most people who appreciate products with complex and subtle flavors.

The Milk chocolate contains a minimum of 32% cocoa solids, this compares to the EU standard of a minimum of 25% for milk chocolate and the UK standard of a minimum of 20% for “Family Milk Chocolate”. The EU standard for dark chocolate specifies a minimum of 35% cocoa solids, so it seems that Cocolo milk chocolate is almost as strong as dark chocolate. If you are used to eating dark and bittersweet chocolate then the Cocolo milk chocolate is obviously not that strong, but it is also significantly more concentrated than most milk chocolate that is on the market. The high chocolate content combined with the evaporated cane sugar extract gives a much stronger flavor than any of the milk chocolates that I have eaten in recent times.

The Dark Mint Crisp chocolate has a minimum of 61% cocoa mass. The mint crisp is in very small pieces that give a good texture to the chocolate with a faint crunch when you bite it. It has a good balance of mint and chocolate flavors.

The Dark Orange chocolate contains 58% cocoa solids and has a subtle orange flavor.

The white chocolate tastes quite different from most white chocolate. While most white chocolate is marketed to children the Cocolo white chocolate will probably appeal more to adults than children. This is one of the few white chocolates that I’ve wanted to eat since the age of about 14.

They also have many other flavors, most common types of chocolate (such as with almonds or hazelnuts) are available.

I highly recommend Cocolo products!

Syndicated 2011-12-09 05:20:14 from etbe - Russell Coker
