Older blog entries for etbe (starting at number 999)

Woolworths Maths Fail

picture of discount from $3.99 to $3.00 advertised as 20% off

The above is a picture of the chocolate display at Woolworths, an Australian supermarket that was formerly known as Safeway – it had the same logo as the US Safeway so there’s probably a connection. A reduction from $3.99 to $3.00 is actually a 24.81% discount. It’s possible that some people might consider it a legal issue to advertise something as a 25% discount when it’s 1 cent short of that (even though we haven’t had a coin smaller than 5 cents in Australia since 1991). But if they wanted to advertise a discount percentage that’s a multiple of 5% they could have made the discount price $2.99; presumably whatever factors made them set the original price at $3.99 instead of $4.00 would also apply when choosing a discount price.

So the question is, do Woolworths have a strict policy of rounding down discount rates to the nearest 5% or do they just employ people who failed maths in high school?

Sometimes when discussing education people ask rhetorical questions such as “when would someone use calculus in real life”. I think that the best answer is “people who have studied calculus probably won’t write such stupid signs”. Sure, the claimed discount is technically correct – they don’t say “no more than 20% off” – and it’s not misleading in a legal sense (it’s OK to claim less than you provide), but it’s annoyingly wrong. Well-educated people don’t do that sort of thing.

As an aside, the chocolate in question is Green and Black, that’s a premium chocolate line that is Fair Trade, Organic, and very tasty. If you are in Australia then I recommend buying some because $3.00 is a good price.

Related posts:

  1. fair trade is the Linux way I have recently purchased a large quantity of fair trade...
  2. LUG Meetings etc Recently I was talking to an employee at Safeway (an...
  3. The Sad State of Shopping in Australia Paul Wayper has written a blog post criticising the main...

Syndicated 2012-08-29 09:52:11 from etbe - Russell Coker

SSD for a Workstation

SSDs have been dropping in price recently so I just bought four Intel 120G devices for $115 each. I installed the first one for my mother-in-law who had been complaining about system performance. Her system boot time went from 90 seconds to 20 seconds and a KDE login went from about 35 seconds to about 10 seconds. The real problem that she had reported was occasional excessive application delay; while it wasn’t possible to diagnose that properly I think it was a combination of her MUA doing synchronous writes while other programs such as Chromium were also doing disk IO. To avoid the possibility of a CPU performance problem I replaced her 1.8GHz E4300 system with a 2.66GHz E7300 that I got from a junk pile (it’s amazing what’s discarded nowadays).

I also installed an SSD in my own workstation (a 2.4GHz E4600). The boot time went down from 45s on Ext4 without an encrypted root to 27s with root on BTRFS including the time taken to enter the encryption password (maybe about 23s excluding my typing time). The improvement wasn’t as great, but that’s because my workstation does some things on bootup that aren’t dependent on disk IO such as enabling a bridge with STP (making every workstation a bridge is quieter than using switches). KDE login went from about 27s to about 12s and the time taken to start Chromium and have it be usable (rather than blocking on disk IO) went from 30 seconds to an almost instant response (maybe a few seconds)! Tests on another system indicate that Chromium startup could be improved a lot by purging history, but I don’t want to do that. It’s unfortunate that Chromium only supports deleting recent history (to remove incriminating entries) but doesn’t support deleting ancient history that just isn’t useful.

I didn’t try to seriously benchmark the SSD (changing from Ext4 to BTRFS on my system would significantly reduce the accuracy of the results); I have plans for doing that on more important workloads in the near future. For the moment even the most casual tests have shown a significant performance benefit, so it’s clear that an SSD is the correct storage option for any new workstation which doesn’t need more than 120G of storage space. $115 for an SSD vs $35 for a HDD is a fairly easy choice for a new system. For larger storage the price of hard drives increases more slowly than that of SSDs.

In spite of the performance benefits I doubt that I will gain a real net benefit from this in the next year. The time taken to install the SSD equates to dozens of boot cycles, and with a typical workstation uptime in excess of a month that many boots won’t happen soon. One minor benefit is that deleting messages in Kmail is now an instant operation which saves a little annoyance, and there will be other occasional benefits.

One significant extra benefit is that an SSD is quiet and dissipates less heat which might allow the system cooling fans to run more slowly. As noisy computers annoy me an SSD is a luxury feature. Also it’s good to test new technologies that my clients may need.

The next thing on my todo list is to do some tests of ZFS with SSD for L2ARC and ZIL.
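
For reference, adding SSD to an existing pool for those roles is one command per device; this is just a sketch where the pool name “tank” and the device names are hypothetical:

  # add an SSD partition as a dedicated ZIL (synchronous write log)
  zpool add tank log /dev/sde1
  # add another SSD partition as L2ARC (second level read cache)
  zpool add tank cache /dev/sde2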

Related posts:

  1. How I Partition Disks Having had a number of hard drives fail over the...
  2. Xen and Swap The way Xen works is that the RAM used by...
  3. big and cheap USB flash devices It’s often the case with technology that serious changes occur...

Syndicated 2012-08-28 12:40:05 from etbe - Russell Coker

Mirror Displays

Image of a Macbook Pro with a Retina display showing how badly it reflects

When I previously wrote about the Retina display in the new Macbook Pro I was so excited that I forgot to even check whether the display reflects light [1]. A TFT display with a mirrored surface apparently permits more intense color which is generally a good thing. It also makes it easier to clean the surface which is really important for phones and tablets. The down-side of a mirrored surface on a display is that it can reflect whatever else is in the area.

This generally isn’t a problem in an office as you can usually adjust the angle of the monitor and the background lighting to avoid the worst problems. It’s also not a serious problem for a hand-held device as it’s usually easy to hold it at an angle such that you don’t have light from anything particularly bright reflecting.

But my experience of laptop use includes using them anywhere at any time. I’ve done a lot of coding on all forms of public transport in all weather conditions. Doing that with a Thinkpad which has a matte surface on its screen is often difficult but almost always possible. Doing that on a system with a mirrored display really isn’t possible. The above photo of a 15″ Macbook Pro model MD103X/A was taken at a Myer store, under lighting that was specifically designed to make the computers look their best. The overall lighting wasn’t particularly bright so that the background didn’t reflect too much and the individual lights were diffuse to avoid dazzling point reflections. But even so the lights can be clearly seen. Note that the photo was taken with a Samsung Galaxy S, far from the best possible camera.

If I was buying a laptop that would only ever be used in the more northern parts of Europe or if I was buying a laptop to use only at home and at the office then I might consider a mirror display. But as I mostly use my laptop in mainland Australia including trips to tropical parts of Australia and I use it in all manner of locations a mirror display isn’t going to work for me.

This isn’t necessarily a bad decision by Apple designers. My observation of Macbook use includes lots of people using them only in offices and homes. Of the serious geeks who describe their laptop as My Precious hardly anyone has a Macbook while Thinkpads seem quite popular in that market segment. I don’t think that it’s just the matte screen that attracts serious geeks to the Thinkpad, but it does seem like part of a series of design decisions (which include the past tradition of supporting hard drive removal without tools and the option of a second hard drive for RAID-1) that make Thinkpads more suitable for geeks than Macbooks. The new Apple tradition of gluing things together so they can never be repaired, recycled, or even have their battery replaced seems part of a pattern that goes against geek use. Even when Apple products are technically superior in some ways, their catering to less technical buyers makes them unsuitable for people like me.

Maybe the ability to use a Macbook as a shaving mirror could be handy, but I’d rather grow a beard and use a Thinkpad.

Related posts:

  1. The Retina Display Last night I played with an Apple Macbook Pro with...
  2. are Thinkpads meant to run 24*7? My Thinkpad has started to run hot recently. If I...
  3. presentations and background color In response to my last post about using laptops for...

Syndicated 2012-08-21 03:54:35 from etbe - Russell Coker

Hard Drives for Backup

The general trend seems to be that cheap hard drives are increasing in capacity faster than much of the data that is commonly stored. Back in 1998 I had a 3G disk in my laptop and about 800M was used for my home directory. Now I have 6.2G used for my home directory (and another 2G in ~/src) out of the 100G capacity in my laptop. So my space usage for my home directory has increased by a factor of about 8 while my space available has increased by a factor of about 30. When I had 800M for my home directory I saved space by cropping pictures for my web site and deleting the originals (thus losing some data I would rather have today), but now I just keep everything and it still doesn’t take up much of my hard drive. Similar trends apply to most systems that I use and that I run for my clients.

Due to the availability of storage people are gratuitously using a lot of disk space. A relative recently took 10G of pictures on a holiday; her phone has 12G of internal storage so there was nothing stopping her. If she had to save space she might decide that half the pictures aren’t that great, but that space is essentially free (she couldn’t buy a cheaper phone with less storage) so there’s no reason to delete any pictures.

When considering backup methods one important factor is the ability to store all of one type of data on one backup device. Having a single backup span multiple disks, tapes, etc makes recovery much harder and increases the potential for data loss. Currently 3TB SATA disks are really cheap and 4TB disks are available but rather expensive. Only one of my clients has more than 4TB of data used for one purpose (IE a single filesystem), so apart from that client a single SATA disk can back up anything that I run.

Benefits of Hard Drive Backup

When using a hard drive there is an option to make it a bootable disk in the same format as the live disk. I haven’t done this, but if you want the option of a quick recovery from a hardware failure then having a bootable disk with all the data on it is a good option. For example a server with software RAID-1 could have a backup disk that is configured as a degraded RAID-1 array.
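
As a rough sketch of the degraded RAID-1 approach (the device and mount point names are hypothetical), mdadm accepts “missing” as a placeholder for the absent second member:

  # create a RAID-1 array with only one member present
  mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdc1 missing
  mkfs.ext3 /dev/md9
  mount /dev/md9 /mnt/backup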

The biggest benefit is the ability to read a disk anywhere. I’ve read many reports of tape drives being discovered to be defective at the least convenient time. With a SATA disk you can install it in any PC or put it in a USB bay if you have USB 3.0 or the performance penalty of USB 2.0 is bearable – a USB 2.0 bay is great if you want to recover a single file, but if you want terabytes in a hurry then it won’t do.

A backup on a hard drive will typically use a common filesystem. For backing up Linux servers I generally use Ext3; at some future time I will move to BTRFS, as having checksums on all data is a good feature for a backup. Using a regular filesystem means that I can access the data anywhere without needing any special software, I can run programs like diff on the backup, and I can export the backup via NFS or Samba if necessary. You never know how you will need to access your backup so it’s best to keep your options open.
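
For example, verifying or sharing a backup that lives on a plain Ext3 filesystem needs nothing more than standard tools (the paths and host name here are made up):

  mount -o ro /dev/sdc1 /mnt/backup
  # compare the backup against the live data
  diff -r /srv/data /mnt/backup/data
  # export it read-only over NFS if another machine needs access
  exportfs -o ro,no_subtree_check client.example.com:/mnt/backup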

Hard drive backups are the best solution for files that are accidentally deleted. You can have the first line of backups on a local server (or through a filesystem like BTRFS or ZFS that supports snapshots) and files can be recovered quickly. Even a SATA disk in a USB bay is very fast for recovering a single file.
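
As an illustration, recovering a single file from a snapshot is just a copy; the snapshot and file names below are invented:

  # ZFS: snapshots are visible under the hidden .zfs directory
  cp /home/.zfs/snapshot/daily-2012-08-07/user/report.txt /home/user/
  # BTRFS: copy the file from a read-only snapshot subvolume
  cp /snapshots/home-2012-08-07/user/report.txt /home/user/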

LTO tapes have a maximum capacity of 1.5TB at the moment and tape size has been increasing more slowly than disk size. Also LTO tapes have an expected lifetime of only 200 reads/writes of the entire tape. It seems to me that tapes don’t provide a great benefit unless you are backing up enough data to need a tape robot.

Problems with a Hard Drive Backup

Hard drives tend not to survive being dropped so posting a hard drive for remote storage probably isn’t a good option. This can be solved by transferring data over the Internet if the data isn’t particularly big or doesn’t change too much (I have a 400G data set backed up via rsync to another country because most of the data doesn’t change over the course of a year). Also if the data is particularly small then solid state storage (which costs about $1 per GB) is a viable option; I run more than a few servers which could be entirely backed up to a 200G SSD. $200 for a single backup of 200G of data is a bit expensive, but the potential for saving time and money on the restore means that it can be financially viable.
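
The transfer in question is nothing more than rsync over ssh, roughly as follows with a hypothetical host name:

  # incremental copy of the data set to a server in another country
  rsync -aH --delete /srv/data/ backup.example.com:/backup/data/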

Some people claim that tape storage will better survive a Carrington Event than hard drives. I’m fairly dubious about the benefits of this: if conditions are bad enough to destroy a hard drive in a Faraday cage (such as a regular safe that is earthed) then you will probably be worrying about the security of the food supply rather than your data. Maybe I should just add a disclaimer “this backup system won’t survive a zombie apocalypse”. ;)

It’s widely regarded that tape storage lasts longer than hard drives. I doubt that this provides a real benefit as some of my personal servers are running on 20G hard drives from back when 20G was big. The fact that drives tend to last for more than 10 years combined with the fact that newer bigger drives are always being released means that important backups can be moved to bigger drives. As a general rule you should assume that anything which isn’t regularly tested doesn’t work. So whatever your backup method you should test it regularly and have multiple copies of the data to deal with the case when one copy becomes corrupt. The process of testing a backup can involve moving it to newer media.

I’ve seen it claimed that a benefit of tape storage is that part of the data can be recovered from a damaged tape. One problem with this is that part of a database often isn’t particularly useful. Another issue is that in my experience hard drives usually don’t fail entirely unless you drop them, drives usually fail a few sectors at a time.

How to Implement Hard Drive Backup

The most common need for backups is when someone deletes the wrong file. It’s usually a small restore and you want it to be an easy process. The best solution to this is to have a filesystem with snapshots such as BTRFS or ZFS. In theory it shouldn’t be too difficult to have a cron job manage snapshots, but as I’ve only just started putting BTRFS and ZFS on servers I haven’t got around to changing my backups. Snapshots won’t cover more serious problems such as hardware, software, or user errors that wipe all the disks in a server. For example the only time I lost a significant amount of data from a hosted server was when the data center staff wiped it, so obviously good off-site backups are needed.
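
A cron job for snapshots can be very simple; this sketch has no rotation or error handling and the dataset and subvolume names are examples only:

  #!/bin/sh
  # /etc/cron.daily/snapshot
  zfs snapshot tank/home@daily-$(date +%Y-%m-%d)
  # the BTRFS equivalent is a read-only subvolume snapshot:
  # btrfs subvolume snapshot -r /home /snapshots/home-$(date +%Y-%m-%d)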

The easiest way to deal with problems that wipe a server is to have data copied to another system. For remote backups you can rsync to a local system and then use “cp -rl” or your favorite snapshot system to make a hard linked copy of the tree. A really neat feature is the ZFS ability to “send” a filesystem snapshot (or the diff between two snapshots) to a remote system [1]. Once you have regular backups on local storage you can then copy them to removable disks as often as you wish; I think I’ll have to install ZFS on some of my servers for the sole purpose of getting the “send” feature! There are NAS devices that provide similar functionality to ZFS send/receive (maybe implemented with ZFS), but I’m not a fan of cheap NAS devices [2].
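
The two approaches look roughly like this; the host, pool, and snapshot names are only examples:

  # rsync plus hard-linked copies for simple rotation
  rsync -aH --delete server:/srv/data/ /backup/current/
  cp -rl /backup/current /backup/$(date +%Y-%m-%d)

  # ZFS: send the difference between two snapshots to another machine
  zfs snapshot tank/data@today
  zfs send -i tank/data@yesterday tank/data@today | ssh backuphost zfs receive tank/data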

It seems that the best way to address the first two needs of backup (fast local restore and resilience in the face of site failure) is to use ZFS snapshots on the server and ZFS send/receive to copy the data to another site. The next issue is that the backup server probably won’t be big enough for all the archives and you want to be able to recover from a failure on the backup server. This requires some removable storage.

The simplest removable backup is to use a SATA drive bay with eSATA and USB connectors. You use a regular filesystem like Ext3 and just copy the files on. It’s easy, cheap, and requires no special skill or software. Requiring no special skill is important, you never know who will be called on to recover from backups.

When a server is backing up another server by rsync (whether it’s in the same rack or another country) you want the backup server to be reliable. However there is no requirement for a single reliable server and sometimes having multiple backup servers will be cheaper. At current purchase prices you can buy two cheap tower systems with 4*3TB disks for less money than a single server that has redundant PSUs and other high end server features. Having two cheap servers die at once seems quite unlikely so getting two backup servers would be the better choice.

For filesystems that are bigger than 4TB a disk-based backup would require backup software that handles multi-part archives. One would hope that any software that is designed for tape backup would work well for this (consider a hard drive as a tape with a very fast seek), but often things don’t work as desired. If anyone knows of a good Linux backup program that supports multiple 4TB SATA disks in eSATA or USB bays then please let me know.
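
For reference, GNU tar has a multi-volume mode that is at least designed for this kind of thing, whether or not it copes well with 4TB volumes; the size and paths below are only illustrative:

  # write one archive across several volumes, prompting for a disk change
  # when each volume is full (--tape-length is in units of 1024 bytes,
  # so 4000000000 is roughly 4TB)
  tar --create --multi-volume --tape-length=4000000000 \
      --file=/mnt/backupdisk/archive.tar /srv/bigdata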

Conclusion

BTRFS or ZFS snapshots are the best way of recovering from simple mistakes.

ZFS send/receive seems to be the best way of synchronising updates to filesystems to other systems or sites.

ZFS should be used for all servers. Even if you don’t currently need send/receive you never know what the future requirements may be. Apart from needing huge amounts of RAM (one of my servers had OOM failures when it had a mere 4G of RAM) there doesn’t seem to be any down-side to ZFS.

I’m unsure of whether to use BTRFS for removable backup disks. The immediate up-sides are checksums on all data and meta-data and the possibility of using built-in RAID-1 so that a random bad sector is unlikely to lose data. There is also the possibility of using snapshots on a removable backup disk (if the disk contains separate files instead of an archive). The down-sides are lack of support on older systems and the fact that BTRFS is fairly new.

Have I missed anything?

Related posts:

  1. New Storage Developments Eweek has an article on a new 1TB Seagate drive....
  2. IDE hard drives I just lent two 80G IDE drives to a friend,...
  3. Hot-swap Storage I recently had to decommission an old Linux server and...

Syndicated 2012-08-08 09:27:22 from etbe - Russell Coker

Love of Technology at First Sight

After seeing the Retina display I’ve been thinking about the computer products that I’ve immediately desired. Here is the list of the ones I can still remember:

  1. My first computer which was the TEC-1 [1], in 1982 or 1983.
  2. A computer with a full keyboard and a monitor (Microbee), in about 1984. A hex-only keypad is very limiting.
  3. Unix, initially SunOS 4.0 in 1991. Primarily the benefits of this were TCP/IP networking, fast email (no multi-day delay for Fidonet mail), IRC, and file transfer from anywhere in the world. Not inherent benefits to Unix, but at the time only Unix systems did TCP/IP at all well.
  4. OS/2 2.0 in 1992. At the time OS/2 had the best GUI of any system available (IMHO) and clearly the best multitasking of DOS and Windows programs.
  5. Linux in 1992. I started with the TAMU and “MCC Interim” distributions and then moved to SLS when it was released. The first kernel I compiled was about 0.52. At the time the main use of Linux for almost everyone was to learn about Unix and compile kernels. In 1993 I started running a public access Linux server.
  6. Trinitron monitors in 1996. I first saw an IBM Trinitron monitor when working on an IBM project and had to buy one for home use, at the time a 17″ Trinitron monitor beat the hell out of any other display device that one could reasonably afford. A bigger screen allowed me to display more code at once which allowed easier debugging.
  7. Thinkpad laptops from 1998 until now. They just keep working well and seem to be better than other products every time I compare them. I also like the TrackPoint. 1998 was when a Thinkpad dropped to a mere $3,800 for a system that could run with 96M of RAM, enough compute power for the biggest compiles and it cost less than most cars!
  8. The KDE desktop environment in 1998. In 1998 I switched my primary workstation from a PC running OS/2 to a Thinkpad running Linux because of KDE. Prior to KDE nothing on Linux was user-friendly enough.
  9. The iPaQ hand-held PC. I got one in 2002 and ran the Familiar distribution of Linux on it. I had it running SE Linux and used it for writing an article for Linux Journal. Being able to get a computer out on public transport to do some work really saved some time. In some ways the iPaQ hardware and the Familiar OS beat modern Android systems.
  10. The EeePC 701 which I bought in 2008 [2]. In the last 4 years someone has probably released a system that’s no larger or heavier and has the same amount of compute power (enough for web browsing, email, and ssh). But most Netbooks that I’ve seen don’t compete. The EeePC allowed me to take laptops to places where it previously wasn’t convenient.
  11. Android. Before using Android I never had a smart phone that I used for anything other than taking photos; the other smart phone OSs are either locked down or don’t have the app support that Android has. I listed lots of problems with my first phone, the Sony Ericsson Xperia X10, but I still really enjoyed using it [3]. Since getting an Android phone I’ve read a lot of email while on the go, which means I can respond faster when necessary and use time that might otherwise be wasted. The ssh client means that I don’t need to carry a laptop with me when there’s a risk that emergency sysadmin work may be required.
  12. Cheap rented servers. Amazon defined cloud computing with EC2, Linode offers great deals for small virtual servers, and Hetzner offers amazing deals on renting entire servers. Getting your own Internet connection or running your own physical server in someone’s data-center is a lot of effort and expense; being able to just rent servers is so much easier and allows so many new projects. I can’t remember when I first started using such services, maybe 5 years ago.
  13. The Apple Retina Display [4] a few days ago.

For the period between 1998 and 2008 I can’t think of anything that really excited me apart from the iPaQ. Computers became a lot smaller, faster, cheaper, etc. But it was never a big exciting change. The AMD64 architecture wasn’t particularly exciting as most systems didn’t need more than 4G of RAM and the ones that did could use PAE.

What are the most exciting computer products you have seen?

Related posts:

  1. Old PDA vs New Mobile Phone for PDA use Since about 2002 I have been using a iPaQ [1]...
  2. CyanogenMod and the Galaxy S Thanks to some advice from Philipp Kern I have now...
  3. Moving from a Laptop to a Cloud Lifestyle My Laptop History In 1998 I bought my first laptop,...

Syndicated 2012-08-04 12:34:10 from etbe - Russell Coker

Sam Harris on Lying

The neuroscientist and atheism advocate Sam Harris has written a short blog post about a journalist named Jonah Lehrer who destroyed his career through false quotes and lies about them [1]. The main point of the article seems to be to promote his new eBook about Lying. The book is available for free until the end of the week (not sure whether that means Friday, Saturday, or Sunday, or in what time zone – get it quick if you want it).

The book is very short, 58 pages with a single column of large font text. If written as densely as a typical research paper it would probably be about 12 pages. But it has some good points to make. He makes a good moral case against most forms of lying, even answering questions such as “do I look fat?”.

It seems that anyone who followed his advice would be unusually honest even by Aspie standards.

Related posts:

  1. Lies and Online Dating Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in...
  2. Links January 2012 Cops in Tennessee routinely steal cash from citizens [1]. They...
  3. Desks Lindsay Holmwood has written about the benefits of a standing...

Syndicated 2012-08-02 11:52:23 from etbe - Russell Coker

Hetzner now Offers SSD

Hetzner is offering new servers with SSD, which is good news for people who want to run ZFS (for ZIL and/or L2ARC). See the EX server configuration list for more information [1]. Unfortunately they don’t specify what brand of SSD; this is a concern for me as some of the reports about SSDs haven’t been that positive, and getting whichever SSD is cheapest isn’t appealing. A cheap SSD might be OK for L2ARC (read cache), but for ZIL (write cache) reliability is fairly important. If anyone has access to a Hetzner server with SSD then please paste the relevant output of lsscsi into a comment.
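
For anyone who wants to check, something like the following will show the brand and model of the installed SSD (the device name is assumed):

  # list SCSI/SATA devices with their vendor and model strings
  lsscsi
  # more detail including the exact model and firmware revision
  smartctl -i /dev/sda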

The next issue is that they only officially offer it on the new “EX 8S” server. SSD will be of most interest to people who also want lots of RAM (the zfsonlinux.org code has given me kernel panics when running with a mere 4G of RAM – even when I did the recommended tuning to reduce ARC size). Also people who want more capable storage options will tend to want more RAM if only for disk caching.
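
The ARC tuning in question is typically done by capping zfs_arc_max via a module option; the 1GB value below is only an example:

  # /etc/modprobe.d/zfs.conf - limit the ARC to 1GB of RAM (example value)
  options zfs zfs_arc_max=1073741824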

But I’m sure that there are plenty of people who would be happy to have SSD on a smaller and cheaper server. The biggest SSD offering of 240G is bigger than a lot of servers. I run a Hetzner server that has only 183G of disk space in use (and another 200G of backups). If the backups were on another site then the server in question could have just a RAID-1 of SSDs for all its storage. In this case it wouldn’t be worth doing as the server doesn’t have much disk IO load, but it would be nice to have the option – the exact same server plus some more IO load would make SSD the ideal choice.

The biggest problem is that the EX 8S server is really expensive. Hard drives which are included in the base price for cheaper options are now expensive additions. A server with 2*3TB disks and 2*240G SSD is E167 per month! That’s more expensive than three smaller servers that have 2*3TB disks! The good news for someone who wants SSD is that the Hetzner server “auction” has some better deals [2]. As is always the case with auction sites the exact offers will change by the moment, but currently they offer a server with 2*120G SSD and 24G of RAM for E88 per month and a server with 2*120G SSD, 2*1.5T HDD, and 24G of RAM for E118. E88 is a great deal if your storage fits in 240G and E118 could be pretty good if you only have 1.5T of data that needs ZFS features.

The main SSD offering is still a good option for some cases. A project that I did a couple of years ago would probably have worked really well on a E167/month server with 2*3TB and 2*240G SSD. It was designed around multiple database servers sharding the load which was largely writes, so SSD would have allowed a significant reduction in the number of servers.

They also don’t offer SSD on their “storage servers” which is a significant omission. I presume that they will fix that soon enough. 13 disks and 2 SSD will often be more useful than 15 disks. That’s assuming the SSD doesn’t suck of course.

The reason this is newsworthy is that most hosted server offerings have very poor disk IO and no good options for expanding it. For servers that you host yourself it’s not too difficult to buy extra trays of disks or even a single rack-mount server that has any number of internal disks in the range 2 to 24 and any choice as to how you populate them. But with rented servers it’s typically 2 disks with no options to add SSD or other performance enhancements and no possibility of connecting a SAN. As an aside it would still be nice if someone ran a data center that supported NetApp devices and gave the option of connecting an arbitrary number of servers to a NetApp Filer (or a redundant pair of Filers). If anyone knows of a hosting company that provides options for good disk IO which are better than just providing SSD or cheaper than E167 per month then please provide the URL in a comment.

Related posts:

  1. Servers vs Phones Hetzner have recently updated their offerings to include servers with...
  2. Hetzner Failover Konfiguration The Wiki documenting how to configure IP failover for Hetzner...
  3. Dedicated vs Virtual Servers A common question about hosting is whether to use a...

Syndicated 2012-08-01 09:55:18 from etbe - Russell Coker

Cheap SATA Disks in a Dell PowerEdge T410

A non-profit organisation I support has just bought a Dell PowerEdge T410 server to be used mainly as a file server. We need a reasonable amount of space and really good reliability features because the system may have periods without being actively monitored; it also has to be relatively cheap.

Dell servers are quite cheap, but disks are not cheap at all when Dell sells them. I get the impression that disks and RAM are major profit centers for Dell and that the profit margins on the basic servers are quite small. So naturally we decided to buy some SATA disks from a local store, one advantage of this is that Dell sells nothing bigger than 2TB while 3TB disks are available cheaply everywhere else.

So we bought 4 cheap 3TB (2.7TiB) SATA disks, connected them to the server, and found that only 2TiB was accessible. The Dell Tech Center says that some of the RAID controllers don’t support anything larger than 2TiB [1]. Obviously we have one of the older models. There are lots of SATA sockets on the motherboard that could be used; however there is one problem.

View of the open side of a PowerEdge T410

The above picture is the side view of the T410; it was taken with a Samsung Galaxy S so the quality is a little poor (click for the original picture). The server is quite neat, with not many cables for a system with 6 disks, rather than the 12 separate cables you would get in a typical white-box system.

Disks in a PowerEdge T410

The above picture shows the disk enclosure. You can see that each disk has a single connector for power and data. Also the disks aren’t cabled separately: multiple disks share the same power wires and the data cables are paired.

SAS controller in a PowerEdge T410

Above you can see the SAS controller. It has two large connectors that can each handle the data cables for 4 disks, nothing like the standard cables.

It’s easy to buy SATA data cables and connect them, but there are no spare power cables in the box. The connector that supplies power to all the disks appears to be something proprietary to Dell which goes straight to the double-connectors on each disk that supply power and data. This setup makes cabling very neat but also provides no good option for cabling regular disks. I’m sure I could make my own cables and if I hunted around the net enough I could probably buy some matching power cables, but it would be a hassle and the result wouldn’t be neat.

So the question was then whether to go to more effort, expense, and possibly risk the warranty to get the full 3TB or to just use the SAS controller and get 2TiB (2.2TB) per disk. One factor we considered is the fact that the higher sector numbers typically give much slower access times due to being on shorter tracks (see my ZCAV results page for test results from past disks [2]). We decided that 2.2TB (2TiB) out of 3TB (2.7TiB) was adequate capacity and that losing some of the slow parts of the disk wasn’t a big deal.

I’ve now set up a RAID-Z2 array on the disks, and ZFS reports 3.78TiB of available capacity, which isn’t a lot considering that we have 4*3TB disks in the array. But the old server had only 200G of storage, so it’s a good improvement in capacity and performance, and RAID-Z2 should beat the hell out of RAID-6 for reliability.
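
Creating the array was a single command along these lines; the pool name and device names here are just examples, and ashift=12 is the usual setting for 4K sector disks rather than necessarily what was used on this machine:

  # double-parity RAID-Z2 pool across the four disks
  zpool create -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
  zpool status tank
  zfs list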

Related posts:

  1. Dell PowerEdge T105 Today I received a Dell PowerEDGE T105 for use by...
  2. ZFS vs BTRFS on Cheap Dell Servers I previously wrote about my first experiences with BTRFS [1]....
  3. How I Partition Disks Having had a number of hard drives fail over the...

Syndicated 2012-07-31 23:55:56 from etbe - Russell Coker

ZFS on Debian/Wheezy

As storage capacities increase the probability of data corruption increases as does the amount of time required for a fsck on a traditional filesystem. Also the capacity of disks is increasing a lot faster than the contiguous IO speed which means that the RAID rebuild time is increasing, for example my first hard disk was 70M and had a transfer rate of 500K/s which meant that the entire contents could be read in a mere 140 seconds! The last time I did a test on a more recent disk a 1TB SATA disk gave contiguous transfer rates ranging from 112MB/s to 52MB/s which meant that reading the entire contents took 3 hours and 10 minutes, and that problem is worse with newer bigger disks. The long rebuild times make greater redundancy more desirable.

BTRFS vs ZFS

Both BTRFS and ZFS checksum all data to cover the case where a disk returns corrupt data, they don’t need a fsck program, and the combination of checksums and built-in RAID means that they should have less risk of data loss due to a second failure during rebuild. ZFS supports RAID-Z which is essentially a RAID-5 with checksums on all blocks to handle the case of corrupt data, as well as RAID-Z2 which is the equivalent of RAID-6. RAID-Z is quite important if you don’t want to have half your disk space taken up by redundancy or if you want to have your data survive the loss of more than one disk, so until BTRFS has an equivalent feature ZFS offers significant benefits. Also BTRFS is still rather new which is a concern for software that is critical to data integrity.

I am about to install a system to be a file server and Xen server which probably isn’t going to be upgraded a lot over the next few years. It will have 4 disks so ZFS with RAID-Z offers a significant benefit over BTRFS for capacity and RAID-Z2 offers a significant benefit for redundancy. As it won’t be upgraded a lot I’ll start with Debian/Wheezy even though it isn’t released yet because the system will be in use without much change well after Squeeze security updates end.

ZFS on Wheezy

Getting ZFS to basically work isn’t particularly hard; the ZFSonLinux.org site has the code and reasonable instructions for doing it [1]. The zfsonlinux code doesn’t compile out of the box on Wheezy although it works well on Squeeze. I found it easier to get the latest Ubuntu working with ZFS first; then I rebuilt the Ubuntu packages for Debian/Wheezy and they worked. This wasn’t particularly difficult but it’s a pity that the zfsonlinux site didn’t support recent kernels.
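
The rebuild itself is just the standard Debian source-package routine, roughly as follows (the source package name and version are examples, not necessarily the exact ones):

  # unpack the Ubuntu source package and rebuild it on Wheezy
  dpkg-source -x zfs-linux_0.6.0-1.dsc
  cd zfs-linux-0.6.0
  dpkg-buildpackage -us -uc -b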

Root on ZFS

The complication with root on ZFS is that the ZFS FAQ recommends using whole disks for best performance so you can avoid alignment problems on 4K sector disks (which is an issue for any disk large enough that you want to use it with ZFS) [2]. This means you have to either use /boot on ZFS (which seems a little too experimental for me) or have a separate boot device.

Currently I have one server running with 4*3TB disks in a RAID-Z array and a single smaller disk for the root filesystem. Having a fifth disk attached by duct-tape to a system that is only designed for four disks isn’t ideal, but when you have an OS image that is backed up (and not so important) and a data store that’s business critical (but not needed every day) then a failure on the root device can be fixed the next day without serious problems. But I want to fix this and avoid creating more systems like it.

There is some good documentation on using Ubuntu with root on ZFS [3]. I considered using Ubuntu LTS for the server in question, but as I prefer Debian and I can recompile Ubuntu packages for Debian it seems that Debian is the best choice for me. I compiled those packages for Wheezy, did the install and DKMS build, and got ZFS basically working without much effort.

The problem then became getting ZFS to work for the root filesystem. The Ubuntu packages didn’t work with the Debian initramfs for some reason and modules failed to load. This wasn’t necessarily a show-stopper as I can modify such things myself, but it’s another painful thing to manage and another way that the system can potentially break on upgrade.

The next issue is the unusual way that ZFS mounts filesystems. Instead of having block devices to mount and entries in /etc/fstab the ZFS system does things for you. So if you want a ZFS volume to be mounted as root you configure the mountpoint via the “zfs set mountpoint” command. This of course means that it doesn’t get mounted if you boot with a different root filesystem and adds some needless pain to the process. When I encountered this I decided that root on ZFS isn’t a good option. So for this new server I’ll install it with an Ext4 filesystem on a RAID-1 device for root and /boot and use ZFS for everything else.
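
For reference the mountpoint handling looks like this; the dataset name is hypothetical:

  # tell ZFS to mount this dataset as the root filesystem
  zfs set mountpoint=/ rpool/ROOT/debian-1
  # or hand control back to /etc/fstab with the legacy setting
  zfs set mountpoint=legacy rpool/ROOT/debian-1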

Correct Alignment

After setting up the system with a 4 disk RAID-1 (or mirror for the pedants who insist that true RAID-1 has only two disks) for root and boot I then created partitions for ZFS. According to fdisk output the partitions /dev/sda2, /dev/sdb2 etc had their first sector address as a multiple of 2048 which I presume addresses the alignment requirement for a disk that has 4K sectors.
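
Checking the alignment is easy with fdisk or parted (the device name here is just an example); start sectors should be a multiple of 2048, i.e. 1MiB aligned:

  # show partition boundaries in sectors
  fdisk -l -u /dev/sda
  parted /dev/sda unit s print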

Installing ZFS

deb http://www.coker.com.au wheezy zfs

I created the above APT repository (only AMD64) for ZFS packages based on Darik Horn’s Ubuntu packages (thanks for the good work Darik). Installing zfs-dkms, spl-dkms, and zfsutils gave a working ZFS system. I could probably have used Darik’s binary packages but I think it’s best to rebuild Ubuntu packages to use on Debian.
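
With that repository in sources.list the install is short; the kernel headers package named below is an assumption based on a standard AMD64 install:

  apt-get update
  apt-get install linux-headers-amd64 zfs-dkms spl-dkms zfsutils
  # DKMS builds the spl and zfs modules for the running kernel
  modprobe zfs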

The server in question hasn’t gone live in production yet (it turns out that we don’t have agreement on what the server will do). But so far it seems to be working OK.

Related posts:

  1. Discovering OS Bugs and Using Snapshots I’m running Debian/Unstable on an EeePC 701, I’ve got an...
  2. Starting with BTRFS Based on my investigation of RAID reliability [1] I have...
  3. ZFS vs BTRFS on Cheap Dell Servers I previously wrote about my first experiences with BTRFS [1]....

Syndicated 2012-07-31 04:03:59 from etbe - Russell Coker

The Retina Display

Last night I played with an Apple Macbook Pro with the new Retina Display (Wikipedia link). Wikipedia cites some controversy about whether the display actually has higher resolution than the human eye can perceive. When wearing glasses my vision is considerably better than average (I have average vision without glasses) and while kneeling in front of the Macbook I couldn’t easily distinguish pixels. So Apple’s marketing claims seem technically correct to me.

When I tested the Macbook Pro I found that the quality of the text display was very high; even now the 1680*1050 display on my Thinkpad T61 looks completely crap when compared to the 2880*1800 display on the Macbook. The Macbook was really great for text and for a JPEG that was installed on the system for testing. But unfortunately pictures on web sites didn’t look particularly good. Pictures on my blog looked quite poor and pictures returned by a Google search for “art” didn’t look that great either. I wonder if Safari (the Apple web browser) isn’t properly optimised for the display or if there is something that we should do when preparing pictures for web sites to make them look better on Safari.

The Retina display has 71% greater DPI, which means 2.93 times the total number of pixels of my Thinkpad. The overall quality of the experience for me (apart from web pictures) seems more like a factor of 2.93 when compared with my Thinkpad than a factor of 1.71. This has to be one of the most desirable products I’ve seen from a company that’s opposed to freedom for its users. I’m not about to buy one though: $2,300 is a lot of money for a system that can’t be upgraded, repaired, or recycled, and doesn’t even have an Ethernet port. I’m sure that if I bought one I would discover that some of the hardware features don’t work properly with Linux.

The new Apple design trend of making it impossible to repair anything works reasonably well for phones and tablets which are cheap enough that they are hardly worth repairing when they have been used for a while. Lots of people can afford to spend about $600 on something that may be discarded after a year or two, but very few people can afford to spend more than $2,000 on such a disposable product.

Why is Apple the only company producing systems with such displays? If someone produced regular PCs that have the expected features (including an Ethernet port) with such a display at a lower price then I’m sure that there would be a great demand.

Related posts:

  1. Modern Laptops Suck One of the reasons why I’m moving from a laptop...
  2. A First Digital Camera I’ve just been asked for advice on buying a digital...
  3. RPC and SE Linux One ongoing problem with TCP networking is the combination of...

Syndicated 2012-07-31 03:27:31 from etbe - Russell Coker
