Older blog entries for etbe (starting at number 976)

Another USB Flash Failure

I previously wrote about a failure of a USB flash device in my Internet gateway [1]. I have since had another failure in the same system, so both of the original 4G devices are now dead. That’s two dead devices in 10 weeks. It could be that the USB devices I got for free at an exhibition were just really cheap; I’m sure they weren’t expected to be used in that way. The devices from the same batch which are used for their intended purpose (sneaker-net file sharing) are still working well. In any case I’m not going to resume this experiment until warmer weather. At this time of year some extra heat dissipation from computer gear in my home is more like a feature and less like a bug.

The second USB device to fail appeared to have its failure in the Ext4 journal (the errors were reported at around sector 2000). I didn’t keep a record of the problem with the first device, but from memory it was much the same.

Rumor has it that cheap flash storage devices don’t implement wear-levelling to avoid patent infringement. If that rumor is correct then any filesystem that uses a fixed journal in the same way as Ext3/4 is probably unsuitable for any serious use on such devices, while a filesystem based on Copy On Write will probably perform better. In Spring I’ll try using BTRFS on cheap USB flash devices and see if that works better. I have another spare device from the same batch to test so I can eliminate hardware differences. I can’t do enough tests to be a good statistical sample, but if a device lasts from Spring to Autumn using BTRFS with the same use that caused failures with Ext4 in a few weeks then I will consider it a strong indication that BTRFS is better than Ext3/4 for such uses.
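The intuition behind that prediction can be sketched with a toy model (all the numbers and names here are illustrative assumptions, not measurements of any real device): without wear-levelling, a fixed-location journal hammers the same few flash blocks while a Copy On Write filesystem spreads writes over free space.

```python
from collections import Counter

# Toy model (illustrative numbers only): without wear-levelling each
# logical block maps to one physical flash block, so a fixed-location
# journal concentrates wear while a Copy On Write filesystem rotates
# writes through free space.
def max_wear(writes, journal_blocks=256, free_blocks=10000, cow=False):
    wear = Counter()
    for i in range(writes):
        # COW: writes cycle through a large pool of free blocks.
        # Journal: writes cycle through the same few fixed blocks.
        target = i % (free_blocks if cow else journal_blocks)
        wear[target] += 1
    return max(wear.values())
```

With 100,000 metadata writes the journalled case wears its hottest block hundreds of times while the COW case stays in single digits, which is why a COW filesystem might survive where Ext4 didn’t.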

For the next 5 months or so I’ll be using a hard drive in my Internet gateway system again.

Related posts:

  1. Flash Storage Update Last month I wrote about using USB flash storage devices...
  2. flash for main storage I was in a discussion about flash on a closed...
  3. USB Flash Storage For some years I have had my Internet gateway/firewall system...

Syndicated 2012-05-22 06:04:01 from etbe - Russell Coker

What I REALLY Want from the NBN

Generally I haven’t had a positive attitude towards the NBN. It doesn’t seem likely to fulfill the claims of commercial success and would be a really bad thing to privatise anyway. Also it hasn’t seemed to offer any great benefits. The claim that it will enable new technical developments which we can’t even imagine yet, which aren’t possible with 25Mb/s ADSL, but which also don’t require more than the 100Mb/s speed of the NBN, never convinced me.

But one thing it could really do well is to give better Internet access in remote areas, ideally with static or near-static IPv6 addresses (because we have already run out of IPv4 addresses). Currently 3G networks do all sorts of nasty NAT things to deal with the lack of IPv4 addresses, which causes a lot of needless pain if you have a server connected via 3G. One of the NBN plans is for wireless net access to remote homes; with some sanity among the people designing the network, such NBN connections would all have static IPv6 subnets for as long as the customer doesn’t move.

I’m currently working on a project that involves servers on 3G links. I don’t have a lot of options on implementation due to hardware and software constraints. So if the ISPs using the NBN and the NBN itself (for the wireless part) could just give us all IPv6 static ranges then lots of problems would be solved.
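To illustrate what static IPv6 ranges would mean in practice (the prefix below is the IPv6 documentation range, used purely for illustration, not a real allocation), an ISP with a single /48 could hand every customer a permanent /64 subnet:

```python
import ipaddress
from itertools import islice

# Hypothetical example: an ISP with a /48 allocation can give each
# customer a permanent /64 subnet. 2001:db8::/48 is the documentation
# prefix, used here only for illustration.
allocation = ipaddress.ip_network("2001:db8::/48")

def customer_subnet(n):
    # The nth /64 within the /48; a /48 holds 2**16 = 65536 of them,
    # which is plenty of static subnets for a wireless service area.
    return next(islice(allocation.subnets(new_prefix=64), n, None))
```

Every customer keeps the same routable subnet, so servers behind the connection need no NAT tricks at all.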

Of course I don’t have high hopes for this. One of the many ways that the NBN has been messed up is in allowing the provision of lower speed connections. As an ADSL2+ speed NBN connection is the cheapest option a lot of people will choose it. Therefore the organisations providing services will have to do so with the expectation that most NBN customers have ADSL2+ speed, and thus they won’t provide services that take advantage of higher speeds.

Related posts:

  1. RPC and SE Linux One ongoing problem with TCP networking is the combination of...
  2. A New Strategy for Xen MAC Allocation When installing Xen servers one issue that arises is how...
  3. New Net Connections On Thursday my new InterNode ADSL2+ service was connected [1]....

Syndicated 2012-05-13 08:13:22 from etbe - Russell Coker

A Quick Review of the Mac Mini with OS/X Lion compared to Linux

A client just lent me a new Mac Mini with OS/X Lion to play with. I think it’s interesting to compare it with regular PCs running Linux.

Hardware

The Mac Mini is tiny; its volume is comparable to that of a laptop. The entire outside apart from the base is made from aluminium, which helps dissipate heat; it’s not as effective as copper but a lot better than plastic. The ports on the system are sound input/output, 4*USB, Ethernet, Firewire, Thunderbolt (the replacement for Firewire), SDXC, and HDMI. It ships with an HDMI to DVI-D adapter, which is convenient if you have an older monitor (or if, like me, you have a recent monitor but no HDMI cable).

To open the case you unscrew the bottom, much like opening a watch. Also like a watch, it’s not particularly easy to screw the base back on tightly; I will probably return the Mac Mini without managing to screw it in completely.

The hardware is very stylish and intricately designed, which is what we expect from Apple. It’s also quiet. In every way it’s a much better system than the workstation I’m using to write this blog post. The difference of course is that this workstation was free and the Mac Mini cost just over $1000 including the RAM upgrade. A Mac Mini could be a decent Linux workstation and if I see one about to be recycled I’ll be sure to grab it!

Installation

The Mac OS comes pre-installed so I didn’t get to do a full installation. When I first booted it up it asked me if I wanted to migrate the configuration from an existing system. I don’t know how well this works as I don’t have a second Mac, but the concept is a good one. Maybe having full support for such a migration process would be a good release goal for a Linux distribution.

After determining that the installation was a fresh one it asked me for a mac.com email address or other form of registration. I skipped this step as I don’t have such an email address, but it could be useful. Red Hat has “Kickstart” to allow configuration of an OS install based on a file from a server (via NFS or HTTP). Debian supports “preseeding” to take OS configuration options from a file at install time [1], and the same option can be used for later stages of OS autoconfiguration.
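For comparison, a Debian preseed file is just a list of question/value pairs fetched at install time; a minimal hypothetical fragment (the values are examples, not taken from any real install) looks like this:

```
# Hypothetical preseed fragment; fetched at boot with a kernel
# parameter such as preseed/url=http://example.com/preseed.cfg
d-i debian-installer/locale string en_AU.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i passwd/user-fullname string Example User
```

The same mechanism answers any debconf question, which is why it can also drive later stages of autoconfiguration.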

One thing that would be really useful is to allow the user to enter a URL for configuration data for an individual account or for all accounts, so someone with an account on one workstation could upload the configuration (which would be either encrypted or sanitised to not have secret data) and then download it when first logging in to a new system. I can easily take a tar archive of my home directory to a new system, but people like my parents don’t have the skill to do that.

One of the final stages of system configuration was to identify the keyboard. The system asked me to press the key to the right of the left shift key and then the key to the left of the right shift key and then offered me three choices of keyboard. That was an interesting way of reducing the list of possible keyboards offered to the user and thus preventing the user from selecting one that is grossly incorrect.
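The probe described above maps neatly onto the three common physical keyboard layouts (ANSI, ISO, and JIS); a hedged sketch of the logic, with illustrative key values rather than anything from Apple’s actual code:

```python
# Sketch of the two-key probe: ISO keyboards have an extra key between
# left shift and Z, while JIS keyboards have an extra key (the "ro"
# underscore key) to the left of right shift. Key values illustrative.
def guess_layout(right_of_left_shift, left_of_right_shift):
    if right_of_left_shift != "z":
        return "ISO"     # extra key before Z: European-style board
    if left_of_right_shift == "_":
        return "JIS"     # underscore key: Japanese-style board
    return "ANSI"        # Z next to shift, no extra key: US-style board
```

Two keystrokes narrow thousands of possible layouts to one family, after which only a short list needs to be shown to the user.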

Cloud Storage

When first logging in I was asked for an iCloud [2] login. iCloud doesn’t seem like a service that should be trusted; it’s based in the US and has been designed to facilitate access by government agencies. Ubuntu One [3] is a similar service that is run by a more reputable organisation, but the data is still stored by Amazon (a US corporation), which seems like a security risk. Ubuntu One isn’t in Debian (which is strange as Ubuntu is based on Debian) so it was too much effort for me to determine whether it encrypts data in a way that protects the users against US surveillance.

The cost of Ubuntu One storage is $4 per month with music streaming. A better option is to use a self-hosted OwnCloud installation for a private or semi-private cloud [4]. A cheap server from someone like Hetzner (€49 per month for 3TB of RAID-1 storage) [5] is a good option for OwnCloud hosting. A cheap Hetzner server is about $US64 per month (at current conversion rates), which is equivalent to about 16 users of Ubuntu One music streaming. So if 20 people shared a Hetzner server they could save money when compared to Ubuntu One while also getting a lot more storage. I’ve got about 300G of unused disk space on the Hetzner server that hosts my blog, and when the system is migrated to a newer Hetzner server with 3TB disks it will have 2.5TB of unused space; I could store a lot of cloud data in that!

The main features of iCloud and Ubuntu One seem to be distribution of random data files (anything you wish), streaming music to various playing systems, and copying pictures from phones as soon as they are taken. These are all great features but it’s a pity that they don’t appear to support distributed document storage. Apple Pages apparently allows documents to be immediately saved to the cloud. I’d like to be able to save a file with Libre Office at home and then access it from my netbook using the cloud, of course that would require encryption for secret files but that’s not so hard to do. One advantage with such distributed storage is that when combined with offline-IMAP for email it would almost entirely remove the need for backups of the desktop systems I maintain for my relatives. I could have all their pictures and documents go to the cloud and all their email stay on the server so if their desktop PC dies I could just give them a new PC and get it all back from the cloud! OwnCloud supports replication, so if I got two servers I would be covered against a server failure. But I think that for a small server with less than a dozen users it’s probably better to just take some down-time when things go wrong and do regular backups to an array of cheap SATA disks.

App Store

Apple has an “App Store” in the OS. The use of such a store on a desktop OS is a new thing for me. It’s basically the same as the Android Market (Google Play) but on the desktop. I think that there is a real scope for an organisation such as Canonical to provide such a market service for Linux. I think that there is a lot of potential for apps to be sold for less than $10 to a reasonable number of Linux users. A small payment would be inconvenient for the seller if they have to interact with the customer in any way and also inconvenient for the buyer if they are entering all their credit card details into a web site for the sale. But for repeat sales with one company being an intermediary it would be convenient for everyone. A market program for a desktop Linux system could provide a friendly interface to selecting free apps from repositories (for Debian, Ubuntu, Fedora, or other distributions) and also have the same interface used for selecting paid applications.

Conclusion

This isn’t much of a review of Apple OS/X or the Mac Mini. Thinking about ways of implementing the best features of Lion on Linux is a lot more interesting. I admire Apple in the same way that I admire sharks, they are really good at what they do but they don’t care about my best interests any more than a hungry shark cares about me.

Update

I got the currency conversion wrong in the first version of this article. It seems that to save money via a shared Hetzner server instead of Ubuntu One about 20 users would be needed instead of 10. But that’s still not too many and would still give a lot more storage. It would be a little more difficult to arrange though, probably anyone who is seriously into computers knows 10 people who would want to share such a service (including people like their parents who want things to just work and don’t understand what’s happening). But getting 20 people would be more difficult.

Related posts:

  1. Xen and SE Linux – EWeek review of RHEL5 The online magazine EWeek has done a review of RHEL5....
  2. Servers vs Phones Hetzner have recently updated their offerings to include servers with...
  3. Modern Laptops Suck One of the reasons why I’m moving from a laptop...

Syndicated 2012-05-06 15:50:50 from etbe - Russell Coker

Liberty and Mobile Phones

I own two mobile phones at the moment: a Samsung Galaxy S running CyanogenMod [1] (Android 2.3.7) that I use for most things, and a Sony Ericsson Xperia X10 running Android 2.1 that I use for taking photos, occasional Wifi web browsing, and a few applications.

Comparing Android Hardware

The hardware for the Xperia X10 is better than that of the Galaxy S in many ways. It has a slightly higher resolution (480*854 vs 480*800), a significantly better camera (8.1MP with a “flash” vs 5MP without), and a status LED which I find really handy (although some people don’t care about it).

The main benefits of the Galaxy S hardware are that it has 16G of internal storage (of which about 2G can be used for applications) and 512M of RAM, while the Xperia X10 has 1G of internal storage and 384M of RAM. These are significant issues: I have had applications run out of RAM on the Xperia X10 and I have been forced to uninstall applications to make space.

Overall I consider the Xperia X10 to be a significantly better piece of hardware as I am willing to trade off some RAM and internal storage to get a better resolution screen and a better camera. The problem is that Sony Ericsson have locked down their phones as much as possible and they don’t even give users the option of making a useful backup – they inspired my post about 5 principles of backups [2].

The fact that the Galaxy S allows installing CyanogenMod which then gives me the liberty to do whatever I want with my phone is a massive feature. It outweighs the hardware benefits of the Sony Ericsson phones over Samsung phones prior to the Galaxy Nexus and Galaxy Note.

For an individual user the ability to control their own hardware is a feature. Such an ability wouldn’t be much use if there wasn’t a community of software developers, so if you buy an Android phone that isn’t supported by CyanogenMod or another free Android distribution then whether it is locked probably won’t matter to you. But for any popular Android phone that’s sold on the mass market it seems that if it’s not locked then it will get a binary distribution of Android in a reasonable amount of time.

Comparing with Apple

It seems that Apple is the benchmark for non-free computing at the moment. The iPhone is locked down and Apple takes steps to re-lock phones that can be rooted – as opposed to the Android vendors who ship phones and then don’t bother to update the firmware for any reason. The Apple app market is more expensive and difficult to enter and if an app isn’t in the market then you have to pay if you want to install it on a small number of development/test phones. This compares to Android where the Google market is cheaper and easier to enter and anyone can distribute an app outside the market and have people use it.

But for an individual this doesn’t necessarily cause any problems. I have friends and clients who use iPhones and are very happy with them. In terms of software development it’s a real benefit to have a large number of systems running the same software. As Apple seems to have higher margins and larger volume than any other phone vendor, as well as shipping only one phone at any time (compared to every other phone vendor, which seems to ship at least 3 different products for different use cases), they are in a much better economic position to get the software development right. As far as I can tell the hardware and software of the iPhone is of very high quality. The iPad (which has a similar market position) is also a quality product. The fact that the Apple app market is more difficult to enter (both in terms of Apple liking the application and the cost of entry) also has its advantages; I get the impression that the general quality of iPhone apps is quite high, as opposed to Android where there are a lot of low quality apps and many more fraudulent apps than there should be.

The lack of choice in Apple hardware (one phone and one tablet) is a disadvantage for the user. There is no option for a phone with a slide-out keyboard, a large screen (for the elderly and people with fat fingers), or any of the other features that some Android phones have. The lack of a range of sizes for the iPad is also a disadvantage. But it seems that Apple has produced hardware that is good enough for most users so there aren’t many complaints about a lack of choice.

It seems to me that the biggest disadvantage of the closed Apple ecosystem is for society in general. Anyone who wants to write a mobile app to do something which might be considered controversial would probably think twice about whether to develop for the iPhone/iPad as Apple could remove the app at a whim which would waste all the software development work that was invested in writing the app. Google seem to have much less interest in removing apps from their store and if they do remove an app then with some inconvenience it can be distributed on the web without involving them – so the work won’t be wasted.

How Much Freedom Should a Vendor Provide?

The Apple approach of locking everything down is clearly working for them at the moment. The Samsung approach of taking the Google prescribed code and allowing users to replace it is good for the users and works well. The Sony Ericsson approach of taking the Google code, adding some proprietary code, and then locking the phone down is bad for the users and I think it will be bad for Sony Ericsson. People are more likely to tell others about negative experiences and negative reviews are more likely to be noticed than positive reviews. So while many people are reasonably happy with Sony Ericsson products (until they find themselves unable to restore from a backup) it’s still not a good situation for Sony Ericsson marketing.

It seems that there are benefits to hardware vendors for being really open and for locking their users in properly. But being somewhat open isn’t a good choice, particularly for a vendor that ships poor quality proprietary apps such as the Sony Ericsson ones.

In terms of application distribution Google isn’t as nice as they appear. The Skyhook case revealed that Google will do whatever it takes to prevent apps that compete with Google apps from being installed by default [3]. Google is also trying to make money from DRM sales via YouTube, which it denies to rooted phones [4]. Again it seems to me that the best options here are being more open than Google is and being as closed as Apple. Google might gain some useful benefits from applying DRM (even though everyone with technical knowledge knows that it doesn’t work) but the Skyhook shenanigans have got to be costing Google more than it’s worth.

How to make Android devices more Free

The F-Droid market is an alternative to the Google App market which only has free software [5]. On its web site there are links to download the source for the applications, including the source and binaries for old versions. In the Google App market if an upgrade breaks your system then you just lose; with F-Droid you can revert to the old version.

A self-hosted OwnCloud installation for a private or semi-private cloud [6] can be used as an alternative to the Google Music store as well as for hosting any other data that you want to store online.

The Open Street Map for Android (Osmand) project provides an alternative to the Google Maps service [7]. Osmand allows you to download all the vector data for the regions you will ever visit so it can run without Internet access. But it doesn’t have the ability to search for businesses, and its address search is clunky and doesn’t accept plain text, which among other things precludes pasting data copied from email or SMS. While Osmand provides some important features that Google Maps will probably never provide, it doesn’t provide some of the most used features of Google Maps, so uninstalling Google Maps isn’t a good option at the moment.

The K9mail project provides a nice IMAP client for Android [8]. Use K9 with a mail server that you run and you won’t need to use Gmail.

There are alternatives to all the Google applications. It seems that, apart from Osmand’s lack of commercial data and search ability, an Android device that is used for many serious purposes wouldn’t lack much if it had no Google apps.

Google seems to be going too far in controlling Android. Escaping from their control and helping others to do the same seems to be good for society and good for the users who don’t need apps which are only available in proprietary form.

Related posts:

  1. Dual SIM Phones vs Amaysim vs Contract for Mobile Phones Currently Dick Smith is offering two dual-SIM mobile phones for...
  2. My Ideal Mobile Phone Based on my experience testing the IBM Seer software on...
  3. Old Mobile Phones as Toys In the past I have had parents ask for advice...

Syndicated 2012-05-05 14:31:09 from etbe - Russell Coker

Acoustiblok/Thermablok

Acoustiblok is an interesting product for blocking sound; it works by dissipating sound energy through friction within the sound barrier material [1]. They sell it in varieties that are designed for use within walls and for use as fences. As it isn’t solid it won’t reflect sound, so it can be used to line walls to stop sound being reflected back at you. Its design is based on NASA research.

The web site claims that a 3mm sheet of Acoustiblok gives a greater noise reduction than 12 inches (30.5cm) of poured concrete. I am a little dubious about that claim as I’ve read a report of someone using three layers of Acoustiblok to make a quiet room for recording music (and to be used as a play-room for an Autistic child). I find it difficult to imagine someone needing a meter of concrete to stop any sort of noise they might encounter in a residential area, so the fact that someone needed three layers of Acoustiblok is an indication that it might not be quite as good as claimed (although there is the possibility that the Acoustiblok was badly installed). I wonder whether the claims about concrete concern particular frequencies. The technical specifications and product comparisons page [2] shows that Acoustiblok is least effective at 130Hz, where it only reduces noise by 12dB, and that its effectiveness increases to 38dB at 5KHz. So perhaps a concrete wall to stop low frequencies and Acoustiblok to stop high frequencies would be the best solution.
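To put those figures in perspective, decibel reductions convert to power ratios exponentially, so the gap between 12dB and 38dB is much larger than it looks:

```python
# A reduction of N dB cuts sound power by a factor of 10**(N/10),
# so the quoted attenuation figures span a huge range.
def power_ratio(db):
    return 10 ** (db / 10)

low_freq = power_ratio(12)    # at 130Hz: roughly 16x power reduction
high_freq = power_ratio(38)   # at 5KHz: roughly 6300x power reduction
```

A 12dB cut at low frequencies is a factor of about 16, while 38dB at high frequencies is a factor of about 6300, which supports the idea of combining materials that are strong in different frequency ranges.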

The Australian distributor for Acoustiblok is based in Brisbane [3].

The same company also sells Thermablok [4] which is the first aerogel based insulation that I’ve seen being advertised for commercial sale. I guess that it must be rather expensive as they are mostly advertising it for use as thin strips to cover stud faces (steel studs conduct heat well and can cause a lot of heat loss). A note in their FAQ says that it’s available in rolls for insulating entire walls or floors. The FAQ also indicates that they sell samples suitable for science classes. They are also apparently looking for retailers, it would be nice if someone wanted to sell this in Australia.

Related posts:

  1. Noise Canceling Headphones My patience with the noise of airlines has run out,...
  2. Testing Noise Canceling Headphones This evening I tested some Noise Canceling Headphones (as described...
  3. I Bought the Bose QC-15 I bought the Bose QC15 noise canceling headphones for my...

Syndicated 2012-05-03 05:44:23 from etbe - Russell Coker

The Royal Caribbean Official Android app

I’ve just played with the official Android app [1] for the Royal Caribbean cruise line [2]. The cruise line is apparently great (I’ve never been on one of their ships but the reviews are good) but the Android app isn’t.

Net Access

The most obvious and significant problem with this app is that it’s entirely useless without net access. All data of note comes from the Internet which means that the program is useless in any location where Internet access is unavailable (or unreasonably expensive). They wrote an app about cruising that can’t be used on a cruise ship! Did they even think about what they were doing?

The correct thing to do when writing such an application is to have all basic data about all ships included in the app. This means that when they change the deck plan of a ship they need to release a new version of the app and have people download it. Having done a lot of software development I understand that forcing software updates (even updates to included data files) involves some effort and expense. But when they spend $20,000,000 to update a ship (which is about the minimum that is spent for a major ship in dry-dock according to TV documentaries I’ve watched) it seems quite reasonable to budget $10,000 to release new software. Also one benefit of updating the software is that it can promote the changes, after spending tens of millions of dollars improving a ship they probably want to promote that to customers and pushing a new app update with adverts for the improved ship seems like a good way of doing that.

There is some data that can’t reasonably be included in the app due to size constraints, with photos of ships being the most notable example. The solution to this is to provide an option for the user to cache the data that interests them. For example, if I was meeting some people to discuss the possibility of a group cruise on an RCL ship then I could download all the pictures of that ship on my home Wifi network and then have them all available with no delay or 3G costs.
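
The cache-first pattern described here is easy to implement; the following is a minimal sketch (the cache directory and the `fetch_url` function are hypothetical stand-ins for whatever storage path and HTTP client the app would use):

```python
import os, hashlib, tempfile

# Cache-first fetch: return a local copy if the picture was already
# downloaded, otherwise fetch it once (e.g. over home Wifi) and keep it
# so later views on the ship cost no 3G data. fetch_url() is a stand-in
# for a real HTTP client.

CACHE_DIR = tempfile.mkdtemp()

def cached_fetch(url, fetch_url):
    name = hashlib.sha1(url.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):          # only hit the network once
        with open(path, "wb") as f:
            f.write(fetch_url(url))
    with open(path, "rb") as f:
        return f.read()

calls = []
def fake_fetch(url):
    calls.append(url)
    return b"jpeg bytes"

assert cached_fetch("http://example.com/ship.jpg", fake_fetch) == b"jpeg bytes"
assert cached_fetch("http://example.com/ship.jpg", fake_fetch) == b"jpeg bytes"
assert len(calls) == 1  # the second view came from the cache
```

The same idea also addresses the hang-on-interruption problem: once data is cached, a network failure only affects content that was never fetched.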

Also the app seems to hang if net access is temporarily interrupted. As phones are expected to have unreliable net access this is also a significant flaw.

Maps

The maps of the ships consist of a series of pictures which each show one deck. In addition to being downloaded (not cached or included in the app) they aren’t scalable (they should be SVG or at least allow zooming into high resolution pictures) and they don’t allow a 3D view. The paper maps used to promote cruises (including RCI cruises) and which are given to all passengers on Princess cruises (I’m not sure about other lines) show a side cutaway view of the ship which is handy for working out which things are near where you are. It seems that an ideal cruise ship mapping program would have some sort of 3D component, maybe X3D.

My experience is that a two night cruise isn’t long enough to become familiar with one of the smaller cruise ships. Using a map is essential and a smart phone is a good way of managing such a map as typical 2D paper maps just aren’t good enough for such a large and complex structure.

Photos

One of the significant things that is wrong with the app is a lack of care in displaying the photos of ships. They display three pictures of the Allure of the Seas (one of the two newest, biggest, and most luxurious ships in their fleet), but one of those three photos is actually of the Oasis of the Seas. The fact that the two ships are almost identical is no excuse, there is a principle at stake! Also having only three pictures is pretty poor, there is no way that fewer than 50 pictures could do justice to such a big ship!

A Google search for the words cruise and photos turns up many sites with pictures of cruise ships and it’s not particularly difficult to find pictures of any particular ship. Photographs by customers are often of high quality as some of the better DSLR cameras are in the same price range as some of the cheaper tickets for cruise ships. Probably the best thing that RCI could do is to run a contest and allow their customers to enter photos and vote towards the winning entries. That would get them photos that aren’t as sterile as the official photos and which include the things that are of most interest to customers.

Finally in terms of caching, pictures are the most easily cached data, and as phone screens get higher resolution the pictures keep getting bigger. The storage space of a modern phone is equivalent to about a year of the 3G download quota on an affordable Australian plan. When dealing with photos downloaded from the net the default should be to cache everything.

Navigation

Navigating a smart-phone app is a lot more difficult than navigating the same data on a desktop system (which would be in a web browser). Users can compensate for some deficiencies with web site organisation by using a large monitor and having several web browser windows with multiple tabs. But with a phone it should be possible to switch between things quickly.

The main menu has a “View Our Ships” option (which allows viewing deck plans and pictures) and a “View Our Staterooms” option which offers a list of ships and then describes the state rooms available for each one. This means that you can’t see all the information about a ship in one place and even worse you can’t easily compare ships. As it seems likely that people will want to use this app for selecting a cruise it should be possible to select a few ships that are of interest and then quickly flip between them. For example the Rhapsody of the Seas and the Voyager of the Seas are cruising in my part of the world so it would be nice if I could tell the app to compare those ships and then allow me to view a page about one ship and then flick to the equivalent page about the other ship.

Another notable problem is that the ships are listed in alphabetical order. The sensible thing to do is to list them by class going from biggest to smallest.

Lessons to be Learned

These problems aren’t specific to the RCI app, many other Android apps have the same flaws. For example the Google Play market app doesn’t cache the icons of the installed apps so every time I want to see a list of installed apps it goes slow and wastes some of my bandwidth. Doing something wrong in the same way as Google isn’t necessarily a great mistake, although using the Google Play market on a cruise ship is probably very uncommon.

Probably the biggest problem is a lack of testing. They should have sent the developers on a cruise as a live test. Every cruise ship has a sales desk for booking future cruises so it wouldn’t be difficult to have a dozen Android phones at the sales desk to see how real customers who really want to book a cruise find the app. I presume that even if net access were available such a test would fail dismally. If a 3D display of a ship combined with all the data management capabilities of a modern smart phone (which is a lot more powerful than the desktop systems I used prior to 2000) can’t at least be a useful supplement to a stack of paper brochures then it’s probably a failure.

I think that the RCI app is an example of how to make an Android app which avoids the more common failings (such as being a quick and dirty port from iOS) but still isn’t useful to customers. I recommend that people who develop apps with the objective of imparting information to users try it out as an example of what not to do. Try a few basic tasks like comparing the three biggest classes of RCI ships in terms of features; after failing to do that with the app you can use Wikipedia to get the result. But don’t use the Wikipedia client apps, use a tabbed browser such as Opera Mini.

Related posts:

  1. An Introduction to Android I gave a brief introductory talk about Android at this...
  2. Choosing an Android Phone My phone contract ends in a few months, so I’m...
  3. Galaxy S vs Xperia X10 and Android Network Access Galaxy S Review I’ve just been given an indefinite loan...

Syndicated 2012-04-30 05:40:12 from etbe - Russell Coker

Nando’s Voucher Interpretation

Every year my parents buy a book of vouchers for various businesses in Victoria. It’s one of those deals where businesses (mostly restaurants) pay for advertising space to have their tear-off vouchers in the book (which typically allow a discount of between $5 and $30) and the customers buy the book for something like $40 (I’m not really sure as I don’t pay). Every year I take my pick of the vouchers that don’t suit my parents, the Nando’s chain of chicken and chips restaurants that specialises in Peri-Peri spicy sauce [1] is one that doesn’t suit my parents (they prefer the traditional English-Australian food).

The Nando’s vouchers say “Enjoy one complimentary 1/4 flame-grilled peri-peri chicken item when another 1/4 flame-grilled peri-peri chicken item is purchased” with no explanation of exactly what an “item” is. Every Nando’s store that I’ve been to in the past has interpreted “item” as chicken and chips, usually they include the drink that comes with the “quarter chicken meal” in the “item” that is free. I can do without a second soda as it’s really cheap from the supermarket and I’m not going to drink two at one meal anyway so I’m not bothered when someone interprets the voucher as not involving a free drink. But the lack of chips is annoying.

At the Nando’s store on Swanston St between La Trobe St and Little Lonsdale St they interpret “item” as being just the 1/4 chicken. I think that most people would regard this as an unusual interpretation. If the intent was to only offer a 1/4 chicken then the voucher could have stated that a free 1/4 chicken was offered and removed all doubt. The fact that the voucher offers a free “1/4 flame-grilled peri-peri chicken item” instead of a free “Quarter Chicken” (which is the description for chicken on its own on the Nando’s menus) seems to be a reasonable indication that more than just the free chicken is offered.

I won’t be attending the Nando’s store on Swanston St again and recommend that others avoid it too. Failing to offer the full value of the voucher is annoying, it decreases the value for money (which is a problem given how expensive Nando’s is), and it makes me wonder what other cost-saving measures might be used at that store. I’ve got a stack of vouchers (many of which will expire before being used) and the Melbourne CBD has many places to eat.

No related posts.

Syndicated 2012-04-28 09:01:14 from etbe - Russell Coker

BTRFS and ZFS as Layering Violations

LWN has an interesting article comparing recent developments in the Linux world to the “Unix Wars” that essentially killed every proprietary Unix system [1]. The article is really interesting and I recommend reading it, it’s probably only available to subscribers at the moment but should be generally available in a week or so (I used my Debian access sponsored by HP to read it).

A comment on that article cites my previous post about the reliability of RAID [2] and then goes on to disagree with my conclusion that using the filesystem for everything is the right thing to do.

The Benefits of Layers

I don’t believe in the BTRFS/ZFS design as strongly as the commentator probably thinks. The current way my servers (and a huge number of other Linux systems) work, using software RAID to form a reliable array from a set of cheap disks (for reliability and often capacity or performance), is a good thing. I have storage on top of the RAID array and can fix the RAID without bothering about the filesystem(s) – and have done so in the past. I can also test the RAID array without involving any filesystem specific code. Then I have LVM running on top of the RAID array in exactly the same way that it runs on top of a single hard drive or SSD in the case of a laptop or netbook. So Linux on a laptop is much the same as Linux on a server in terms of storage once we get past the issue of whether a single disk or a RAID array is used for the LVM PV. Among other things this means that the same code paths are used and I’m less likely to encounter a bug when I install a new system.

LVM provides multiple LVs which can be used for filesystems, swap, or anything else that uses storage. So if a filesystem gets badly corrupted I can umount it, create an LVM snapshot, and then take appropriate measures to try and fix it – without interfering with other filesystems.

When using layered storage I can easily add or change layers when it’s appropriate. For example I have encryption on only some LVs on my laptop and netbook systems (there is no point encrypting the filesystem used for .iso files of Linux distributions) and on some servers I use RAID-0 for cached data.

When using a filesystem like BTRFS or ZFS which includes subvolumes (similar in result to LVM in some cases) and internal RAID you can’t separate the layers. So if something gets corrupted then you have to deal with all the complexity of BTRFS or ZFS instead of just fixing the one layer that has a problem.

Update: One thing I forgot to mention when I first published this is the benefits of layering for some uncommon cases such as network devices. I can run an Ext4 filesystem over a RAID-1 array which has one device on NBD on another system. That’s a bit unusual but it is apparently working well for some people. The internal RAID on ZFS and BTRFS doesn’t support such things and using software RAID underneath ZFS or BTRFS loses some features.

When using DRBD you might have two servers with local RAID arrays, DRBD on top of that, and then an Ext4 filesystem. As any form of RAID other than internal RAID loses reliability features for ZFS and BTRFS, no matter how you might implement those filesystems with DRBD it seems that you will lose somehow. It seems that neither BTRFS nor ZFS supports a disconnected RAID mode (like a Linux software RAID with a write-intent bitmap so it can resync only the parts that changed) so it’s not possible to use BTRFS or ZFS RAID-1 with an NBD device.
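
The write-intent bitmap idea can be sketched as a toy model, assuming fixed-size chunks and in-memory "devices" purely for illustration (the real md driver tracks dirty regions on disk):

```python
# Toy write-intent bitmap: after a mirror reconnects, only the chunks
# written while it was disconnected need to be copied, not the whole
# device. CHUNK is tiny here just to make the example readable.

CHUNK = 4  # bytes per bitmap chunk

class Mirror:
    def __init__(self, size):
        self.primary = bytearray(size)
        self.secondary = bytearray(size)
        self.connected = True
        self.dirty = set()  # chunk numbers written while disconnected

    def write(self, offset, data):
        self.primary[offset:offset + len(data)] = data
        if self.connected:
            self.secondary[offset:offset + len(data)] = data
        else:
            first = offset // CHUNK
            last = (offset + len(data) - 1) // CHUNK
            self.dirty.update(range(first, last + 1))

    def reconnect(self):
        # Resync copies only the dirty chunks recorded in the bitmap.
        for chunk in sorted(self.dirty):
            start = chunk * CHUNK
            self.secondary[start:start + CHUNK] = self.primary[start:start + CHUNK]
        self.dirty.clear()
        self.connected = True

m = Mirror(16)
m.write(0, b"aaaa")
m.connected = False
m.write(4, b"bb")   # only chunk 1 becomes dirty
m.reconnect()       # copies 4 bytes, not the whole 16
assert m.primary == m.secondary
```

This is why a bitmap makes RAID-1 over a flaky transport like NBD practical: a brief disconnection costs a partial resync instead of a full one.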

The only viable way of combining ZFS data integrity features with DRBD replication seems to be using a zvol for DRBD and then running Ext4 on top of that.

The Benefits of Integration

When RAID and the filesystem are separate things (with some added abstraction from LVM) it’s difficult to optimise the filesystem for RAID performance at the best of times and impossible in many cases. When the filesystem manages RAID it can optimise its operation to match the details of the RAID layout. I believe that in some situations ZFS will use mirroring instead of RAID-Z for small writes to reduce the load and that ZFS will combine writes into a single RAID-Z stripe (or set of contiguous RAID-Z stripes) to improve write performance.

It would be possible to have a RAID driver that includes checksums for all blocks, it could then read from another device when a checksum fails and give some of the reliability features that ZFS and BTRFS offer. Then to provide all the reliability benefits of ZFS you would at least need a filesystem that stores multiple copies of the data which would of course need checksums (because the filesystem could be used on a less reliable block device) and therefore you would end up with two checksums on the same data. Note that if you want to have a RAID array with checksums on all blocks then ZFS has a volume management feature (which is well described by Mark Round) [3]. Such a zvol could be used for a block device in a virtual machine and in an ideal world it would be possible to use one as swap space. But the zvol is apparently managed with all the regular ZFS mechanisms so it’s not a direct list of blocks on disk and thus can’t be extracted if there is a problem with ZFS.
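
A checksumming RAID-1 read path could work roughly like this; this is a toy sketch of the idea, not real block-layer code, and the per-block CRC32 layout is my assumption:

```python
import zlib

# Toy checksumming mirror: each block stores a CRC32 alongside the data.
# A read verifies the checksum and falls back to the other copy when
# verification fails, which is what lets the layer return good data from
# a silently-corrupted disk.

def make_block(data):
    return {"data": data, "crc": zlib.crc32(data)}

def read_block(copies):
    for copy in copies:  # try each mirror in turn
        if zlib.crc32(copy["data"]) == copy["crc"]:
            return copy["data"]
    raise IOError("all copies failed checksum verification")

good = make_block(b"important data")
bad = dict(good, data=b"important dat\x00")  # silent corruption on disk 0
assert read_block([bad, good]) == b"important data"
```

Plain RAID-1 would have returned the corrupt copy without noticing; the checksum is what turns "two copies" into "two copies plus the knowledge of which one is right".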

Snapshots are an essential feature by today’s standards. The ability to create lots of snapshots with low overhead is a significant feature of filesystems like BTRFS and ZFS. Now it is possible to run BTRFS or ZFS on top of a volume manager like LVM which does snapshots to cover the case of the filesystem getting corrupted. But again that would end up with two sets of overhead.
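
The reason filesystem-level snapshots are cheap is copy-on-write: a snapshot initially shares every block with the live filesystem and only diverges as blocks are rewritten. A toy in-memory model, purely illustrative:

```python
# Toy copy-on-write volume: taking a snapshot copies only the block map
# (cheap), never the data. A later write allocates a new block, so the
# snapshot keeps seeing the old contents.

class CowVolume:
    def __init__(self):
        self.blocks = {}   # block id -> data (never overwritten in place)
        self.next_id = 0
        self.map = {}      # logical block number -> block id

    def write(self, logical, data):
        self.blocks[self.next_id] = data
        self.map[logical] = self.next_id
        self.next_id += 1

    def read(self, logical, blockmap=None):
        return self.blocks[(blockmap or self.map)[logical]]

    def snapshot(self):
        return dict(self.map)  # O(mapped blocks), no data copied

vol = CowVolume()
vol.write(0, b"old")
snap = vol.snapshot()
vol.write(0, b"new")
assert vol.read(0) == b"new"
assert vol.read(0, snap) == b"old"  # snapshot still sees original data
```

An LVM snapshot has to do the copying at the block layer below the filesystem, which is why stacking LVM snapshots under BTRFS or ZFS pays the overhead twice.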

The way that ZFS supports snapshots which inherit encryption keys is also interesting.

Conclusion

It’s technically possible to implement some of the ZFS features as separate layers, such as a software RAID implementation that puts checksums on all blocks. But it appears that there isn’t much interest in developing such things. So while people would use it (and people are using ZFS zvols as block devices for other filesystems, as described in a comment on Mark Round’s blog) it’s probably not going to be implemented.

Therefore we have a choice of all the complexity and features of BTRFS or ZFS, or the current RAID+LVM+Ext4 option. While the complexity of BTRFS and ZFS is a concern for me (particularly as BTRFS is new and ZFS is really complex and not well supported on Linux) it seems that there is no other option for certain types of large storage at the moment.

ZFS on Linux isn’t a great option for me, but for some of my clients it seems to be the only option. ZFS on Solaris would be a better option in some ways, but that’s not possible when you have important Linux software that needs fast access to the storage.

Related posts:

  1. Starting with BTRFS Based on my investigation of RAID reliability [1] I have...
  2. ZFS vs BTRFS on Cheap Dell Servers I previously wrote about my first experiences with BTRFS [1]....
  3. Reliability of RAID ZDNet has an insightful article by Robin Harris predicting the...

Syndicated 2012-04-27 08:40:05 from etbe - Russell Coker
