Older blog entries for fraggle (starting at number 42)

Macbook Air

The minimum price for a Macbook Air is £1199. For this, you get a slow processor; 2 gig of RAM with no option to upgrade ever; mono speakers (although I guess it doesn't need decent speakers, since there is no DVD drive to watch movies on anyway); a tiny (and slow) hard drive, just in case you thought you could download movies to watch instead; no Ethernet port; and a single USB port, just to fuck you over in case you thought you could plug in a USB Ethernet dongle and external USB hard drives and DVD drives to work around the above inadequacies.


The best part of all is that if you pay £2000, you can get the higher spec model, which has a slightly faster processor and even less storage.

Syndicated 2008-01-24 10:41:54 from fragglet

EU trolling

One of the features of the EU treaty being signed today is that it gives the Charter of Fundamental Rights of the European Union legal force. I noticed this in Article 41:

4. Every person may write to the institutions of the Union in one of the languages of the Treaties and must have an answer in the same language.

Some things to consider:
  • Institutions of the European Union are obliged to respond to questions in any of the languages of the EU with replies in the same language.
  • The EU parliament is an "Institution of the European Union".
  • Therefore, MEPs are members of an "institution of the EU".
  • Does this mean that I can write to random MEPs (Robert Kilroy-Silk, for example) in random EU languages (Hungarian, for example), and that they are obliged to reply to me in the same language?

I see great potential here for foreign language-based trolling.

Syndicated 2007-12-13 15:26:36 from fragglet

Offensive scrabble words

Ubisoft recently sparked some outrage over including the word "Lesbo" in their Nintendo DS version of Scrabble, which some people found offensive.


I decided to do some minor research; here is a list of several more words present in Scrabble DS:

Cursing: Asshole, Cunt, Fuck, Jism, Mofo, Shit, Wank

Homophobic: Fag, Ponce, Poof, Poon

Racist: Cracker, Dago, Gook, Jew (as a verb, meaning to haggle), Jigaboo, Kike, Raghead, Spic, Wog, Yid

There were many more racist terms, but some of them seemed to be obscure words particular to a single dialect that I've never even heard before. Ubisoft certainly used a comprehensive dictionary!

Syndicated 2007-10-05 23:19:08 from fragglet

Psychic debugging

< AlexMax_> Oh fuck yes
< AlexMax_> my bash kung fu is still strong
< AlexMax_> heh this is getting messy, windows svn doesnt like being
            called from a shell script so now I'm using the batch file to
            update and shell script for everything else
< AlexMax_> heaven forbid anyone else try to replicate what I'm doing
< AlexMax_> OK this is really weird
< AlexMax_> If I put in a command at the bash command line, it runs fine
< AlexMax_> but if i put in that same command into a shell script, the
            command acts like it doesnt recognize the paramitors
<@fraggle> sh != bash
< AlexMax_> I'm using winbash
< AlexMax_> sh is winbash
< AlexMax_> wait a minute
<@fraggle> do you have #!/bin/sh at the top of your file?
< AlexMax_> what?
< AlexMax_> No, but why should i have to, I involke it using sh
            autobuild.sh
< AlexMax_> actually fuck
<@fraggle> try bash autobuild.sh
< AlexMax_> yeah, i could have sworn bash and sh were the same on this
            system
<@fraggle> i think it can behave differently depending on whether you
           invoke it as sh or bash
< AlexMax_> i know that sh and bash are usually distinct on linux
< AlexMax_> but i just remembered that sh is the msys sh and bash is
            winbash
<@fraggle> your bash kung foo may be strong but my psychic debugging
           powers are stronger

Syndicated 2007-10-02 21:44:34 from fragglet

CCTV cameras and Big Brother

I saw, linked from Slashdot, that "This is London" is reporting that "despite tens of thousands of CCTV cameras, 80% of crime remains unsolved".

First of all, the article analyses "crime clearup rate", which is not a measure of the amount of crime, but of how much crime is solved. So what it is really claiming is that "CCTV cameras do not help police to solve crimes". It's important to make this distinction, because it's easy to misinterpret this as meaning "CCTV cameras do not deter criminals", which, indeed, is what the submitter to Slashdot thought.

Secondly, the figures themselves are used in a way that is practically meaningless. "Police in [District X] only have a clearup rate of 20%, despite [N] cameras!". Now, I'm not discounting that there may be a relationship between CCTV cameras and crime clearup rate, but I'm sure there are plenty of other factors that are likely to be much more significant when comparing clearup rates between districts - the number of police officers, their competence, and the actual crime rates in those districts, for example. We're also given no indication of what a "good" crime clearup rate is supposed to be, or how those rates have changed over time since the introduction of CCTV.

I'm always skeptical about stories about CCTV cameras (especially ones where they are described as a "publicly funded spy network"), because a lot of people seem to have an irrational fear of them. Whenever CCTV is mentioned, cries of "Big Brother" and "invasion of privacy" abound. Big Brother and George Orwell form an interesting parallel to Godwin's Law: Any discussion regarding CCTV cameras will inevitably descend into comparisons with Big Brother. "Big Brother" has become a reason unto itself to bash CCTV: a book exists, depicting a dictatorial world, and it features CCTV, therefore CCTV is bad.

Similarly, I'm not quite sure how filming a public place constitutes an invasion of privacy. Nobody I've talked to has yet been able to answer this. If there were a policeman standing on the street in place of the camera, would that also constitute an "invasion of privacy"? The funniest answer I've had so far is that people would no longer be able to get away with the minor crimes that they previously could.

Of course, I don't believe that there are no potential issues whatsoever surrounding the use of CCTV cameras, but I really detest the sensationalism and irrational paranoia that surrounds them.

Syndicated 2007-09-21 10:34:24 from fragglet

It's the bandwidth, stupid: part 2

Bill Dougherty has posted Part 2 of his "It's the latency, stupid" article. Sadly, it is filled with as many factual errors as the previous one.

Where do I start? First of all, HTTP: "HTTP 1.1 signals the web server to use gzip compression for file transfers". This is purely and simply wrong. Go and read the HTTP/1.1 specification. Although gzip is mentioned, there's no requirement that an HTTP/1.1 server use gzip compression. I'd say that no browser has shipped for at least five years that uses HTTP/1.0, so this is a totally irrelevant suggestion to make. Even then, switching to HTTP/1.1 will not magically add gzip compression: it's up to the web server to optionally send you compressed data instead of the normal uncompressed data, and 99+% of servers will not do this.
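To illustrate the point (these headers are just a sketch, not captured from any real server): compression has to be negotiated per request. The client advertises that it can accept gzip, and the server may - entirely at its own discretion - respond with compressed data, or just as legitimately ignore the hint:

    GET /index.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip
    Content-Length: 3402

Nothing in HTTP/1.1 "signals the web server to use gzip"; a server that hasn't been configured for compression will simply send the response uncompressed.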

Using HTTP/1.1 CAN provide an advantage, but for reasons entirely unrelated to compression. The major difference between HTTP/1.0 and 1.1 is that HTTP/1.1 can reuse an existing connection to retrieve more files, whereas HTTP/1.0 immediately closes the connection when a download has completed. Reusing the connection helps because of the way the congestion control algorithms work: they start off with a small TCP window size that is increased in order to determine the available bandwidth of the channel. With HTTP/1.0, this process is restarted for each file downloaded. HTTP/1.1 allows you to reuse an existing connection that has already settled to a reasonable TCP window size. This is important for modern websites that have lots of images and other embedded content. As I mentioned before, though, the advice is utterly irrelevant because all modern browsers already use HTTP/1.1 by default.
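Again purely as a sketch (hypothetical resources, responses abbreviated): with HTTP/1.1, the second request goes down the same TCP connection, whose window has already grown, whereas HTTP/1.0 would open a fresh connection and start the slow-start process again from scratch.

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    [... response body ...]

    GET /images/logo.png HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    [... response body ...]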

Then Bill comes up with this gem: "One effective method is to change the protocol. Latency is a problem because TCP waits for an acknowledgement". This is also wrong. He seems to be under the mistaken impression that TCP is a stop-and-wait protocol: that each packet is sent, an acknowledgement waited for, and then the next packet sent. What actually happens is that TCP sends a bunch of packets across the channel, and as the acknowledgement for each packet is received, the next packet is sent. To use the trucks analogy again, imagine twenty trucks, equally spaced, driving in a circle between two depots, carrying goods from one depot to the other. Latency is not a problem, just as distance between the depots is not a problem: provided that you have enough trucks, the transfer rate is maintained. The TCP congestion control algorithms automatically determine "how many trucks to use".

It is true that TCP will restrict the rate at which you can send data. Suppose, for example, you're writing a sockets program and sending a file across a TCP connection: you cannot send the entire file at once. After you have written a certain amount of data into the pipe, you cannot write any more until the receiving end has read the data. This is a good thing! What is happening here is called flow control. You physically can't send data faster than the bandwidth of the channel you're using can support: if you're using a 10KB/sec channel, you can't send 50KB/sec of data across it. All that TCP is doing is limiting you to sending data at the physical limit of the channel.
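As a rough sketch of what this looks like from a programmer's point of view (a fragment, not a complete program; the descriptor names are made up):

/* Send the contents of file_fd over the connected TCP socket 'sock'. */
#include <unistd.h>

int send_file(int sock, int file_fd)
{
    char buf[4096];
    ssize_t nread;

    while ((nread = read(file_fd, buf, sizeof(buf))) > 0) {
        ssize_t total = 0;

        while (total < nread) {
            /* write() blocks here whenever the socket's send buffer is
               full, ie. when the data already in flight hasn't yet been
               acknowledged and read by the receiver.  That blocking is
               flow control in action. */
            ssize_t nwritten = write(sock, buf + total, nread - total);

            if (nwritten < 0) {
                return -1;    /* error; a real program would check errno */
            }

            total += nwritten;
        }
    }

    return nread < 0 ? -1 : 0;
}

The loop never needs to know the bandwidth of the channel: it just writes as fast as write() will let it, and flow control takes care of the rest.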

"If you control the code, and can deal with lost or mis-ordered packets, UDP may be the way to go". While this is true, it's misleading and potentially really bad advice, certainly to any programmers writing networked applications. If your application mainly involves transfer of files, the best thing to do is stick with TCP. The reason is that TCP already takes care of these problems: they've been thoroughly researched and there are many tweaks and optimisations that have been applied to the protocol over the years. One important feature is the congestion control algorithms, that automatically determine the available bandwidth. If you don't use these kind of algorithms, you can end up the kind of collapse that Jacobson describes in his original paper on network congestion. If you use UDP, you're forced to reinvent this and every other feature of TCP from scratch. As a general rule of thumb, it's best to stick with TCP unless there is some specific need to use UDP.

Finally, I'd like to examine his list of "tricks that network accelerators use":

"1. Local TCP acknowledgment. The accelerator sends an ack back to the sending host immediately. This ensures that the sender keeps putting packets on the wire, instead waiting for the ack from the actual recipient". This is nonsense. TCP keeps putting packets onto the wire in normal operation. It doesn't stop and wait for an acknowledgement. TCP acknowledgements should already be being transmitted correctly If you're interfering with the normal transmission of acknowledgements, all you're doing is breaking the fundamental nature of how the protocol and the sliding window algorithm work.

"2. UDP Conversion. The accelerators change the TCP stream to UDP to cross the WAN. When the packet reaches the accelerator on the far end, it is switched back to TCP. You can think of this a tunneling TCP inside of UDP, although unlike a VPN the UDP tunnel does not add any overhead to the stream." I fail to see what possible advantage this could bring.

"3. Caching. The accelerators notice data patterns and cache repeating information. When a sender transmits data that is already in the cache, the accelerators only push the cache ID across the WAN. An example of this would be several users accessing the same file from a CIFS share across your WAN. The accelerators would cache the file after the first user retrieves it, and use a token to transfer the subsequent requests." This is useful in the very specific case of CIFS, because SMB has known performance issues when running over high latency connections - it was designed for use on LANs, and the protocol suffers because of some assumptions that were made in its design. This doesn't apply, however, to the majority of other network protocols.

"4. Compression. In addition to caching, network accelerators are able to compress some of the data being transmitted. The accelerator on the other end of the WAN decompresses the data before sending it to its destination. Compressed data can be sent in fewer packets, thus reducing the apparent time to send." Amusingly, what this actually does is decrease the bandwidth used, and has nothing to do with latency.

Syndicated 2007-06-06 02:55:11 from fragglet

It's the bandwidth, stupid.

I saw this blog entry linked on Digg (it currently has over 2000 diggs), and felt that I should respond to it.

The author claims that poor latency is causing problems with TCP congestion control algorithms. Basically, this entire article is based on a flawed understanding of how TCP works.

TCP has built-in congestion control algorithms that attempt to determine the amount of available bandwidth between two hosts on a network, and hence the rate at which to transmit information. If you transmit data faster than the link can handle, you end up with lost packets, whereas if you transmit data too slowly, you aren't using the full capacity of your network, so it's important to try to find the optimum point. These algorithms aren't based on latency: they can be affected by latency in some ways, but their ability to determine the available bandwidth is, in general, not limited by it.

The author uses the analogy of passing sand scoops over a wall to explain his point. Unfortunately, it's a false analogy. A better analogy would be trucks driving between cities. Imagine that you have two warehouses, one in Southampton and one in Manchester. You want to transport things from Southampton to Manchester, so you put the things on a truck, the truck drives to Manchester and then drives back again.

Suppose you move the Manchester depot to Edinburgh instead. Now the trucks have to drive a lot further. If you only have one truck, doubling the latency halves the transfer rate. However, the point to realise is that with TCP, there is more than one truck. The author says, "As distance increases, the TCP window shrinks". This is the exact opposite of what happens in TCP. To use the trucks analogy again, if you increase the distance between depots, the logical thing to do is to increase the number of trucks to sustain the same throughput. This is exactly what TCP does. TCP window size = number of trucks. Latency increase leads to window size increase.
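To put some (made-up but realistic) numbers on it, the amount of data that has to be "on the road" at any moment is the bandwidth-delay product:

    data in flight = bandwidth x round-trip time

    1 MB/s link, 20 ms round trip:   1 MB/s x 0.02 s = 20 KB in flight
    1 MB/s link, 200 ms round trip:  1 MB/s x 0.2 s  = 200 KB in flight

A round trip ten times longer needs a window ten times larger to sustain the same 1 MB/s, and growing the window towards that figure is exactly what the congestion control algorithms do.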

There are flaws in the existing congestion control algorithms. For example, there is a problem that people are experiencing on very high bandwidth connections where TCP window size does not scale up fast enough. However, this only affects very high bandwidth networks: 10 gigabits or more. This isn't something that will affect users on a home DSL line.

Finally, yes, latency is important for certain applications. Gaming and video conferencing are two examples of applications where latency is incredibly important, because they are interactive: what matters is how quickly each individual update gets through, not how much data you can move per second. Arguably, the popularity of Web 2.0 applications where users need fast updates from web servers also means that latency has increased in importance. However, when speaking about download speeds, latency is irrelevant. Here, bandwidth is all that matters.

Syndicated 2007-06-04 21:17:47 from fragglet

Voice authentication

WorldPay has launched VoicePay, a voice-authenticated system for making secure payments.


The problem with new technologies like this one is that they seem deceptively secure simply because they look "hi-tech". We're used to seeing such systems appear in James Bond movies or in Star Trek, and that gives them a false veneer of security. We need to stop and think about whether such a system is actually a good idea in the real world. It's important that the actual security issues with such a system are properly examined.


Consider how a voice authentication system must inevitably work. The system takes a sample of the user's voice and extracts certain characteristic features from it (vocal tract properties, for example). In effect, the combination of those particular features is being used as the user's password. When the user comes to authenticate, they speak to the system, and those same features are extracted again and compared with the user's profile.
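A crude sketch of the comparison step (the feature count, names and threshold here are entirely hypothetical; real systems use far more sophisticated acoustic models, but the principle is the same):

#include <math.h>

#define NUM_FEATURES 32    /* hypothetical size of the feature vector */

/* Returns nonzero if a freshly extracted feature vector is "close
   enough" to the stored profile.  The profile is, in effect, the
   user's password. */
int voice_matches(const double profile[NUM_FEATURES],
                  const double sample[NUM_FEATURES],
                  double threshold)
{
    double dist = 0.0;

    for (int i = 0; i < NUM_FEATURES; ++i) {
        double d = sample[i] - profile[i];
        dist += d * d;
    }

    return sqrt(dist) < threshold;
}

Anyone who can produce a vector that falls within the threshold is, as far as the system is concerned, the genuine speaker.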


The problem here is that this is basically no better than a password-based system. In fact, it's worse. It's vulnerable to the same attacks that a password-based system is vulnerable to (phishing/spoofing, keyloggers can be replaced by voiceloggers, etc). Now take into account that in effect, whenever you speak, you're broadcasting your password to anyone in the vicinity. If someone knows the voice features used by the authentication system, it's not very difficult to get a recording of someone's voice, extract those same features and feed them into a voice synthesiser.


I'm just imagining a mugging of the future, where a thief holds up a man in an alleyway, takes his credit cards, then produces a gun and a dictaphone and says, "now, beg for your life!"

Syndicated 2007-04-30 08:46:56 from fragglet

Nintendo Wireless Friend Codes

Here are my Nintendo Wireless Friend Codes. Reply with yours and I'll add you!



Tetris DS: 852472, 170981
Mario Kart DS: 309330, 357187

More coming soon...

Syndicated 2007-02-19 21:00:26 from fragglet

API design

There are lots of good books on graphical user interface design. I recently received Designing Interfaces as a Christmas present, which gives a very good overview of different patterns in the design of GUIs. However, discussion of user interface design in general seems limited to graphical user interfaces. What about API design?


As I see it, an API is just another form of user interface. Almost all programming projects result in something that is directly used by a human user. In the case of a GUI program, this is something that can be manipulated with the keyboard or mouse. However, many projects take the form of libraries for other programmers to use. In this case, there is still a user - the programmer that must use the library. Why should the design of these types of interface be any less important than a graphical user interface? If anything, it seems to me that the design of a good API is something even more important, as it is something that can influence the correctness of many other programs that use it.


It's fairly straightforward to see examples of good and bad APIs. The Gtk+ API is something I've found fairly nice, for example. In comparison, something like the Windows Registry API seems a complicated mess.


Of course, when using anything beyond the simplest of APIs, an API reference is insufficient as the sole form of documentation. Tutorials and examples are incredibly useful as a guide to how to use the interface - even though I like Gtk+, there's no way that I could ever have written a Gtk+ application without seeing example code. However, what I do dislike are APIs that almost require example code just to be usable at all. Taking RegCreateKeyEx from the Registry API mentioned above as an example, the function requires nine arguments to be specified in order to use it. This seems a little overcomplicated, to say the least.
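For illustration, here is roughly what a typical call looks like (the key name is hypothetical, and most of the arguments end up as zero, NULL or boilerplate defaults):

#include <windows.h>

void create_example_key(void)
{
    HKEY key;
    DWORD disposition;

    LONG result = RegCreateKeyEx(HKEY_CURRENT_USER,
                                 TEXT("Software\\ExampleApp"),
                                 0,                        /* Reserved: must be zero */
                                 NULL,                     /* lpClass */
                                 REG_OPTION_NON_VOLATILE,  /* dwOptions */
                                 KEY_WRITE,                /* samDesired */
                                 NULL,                     /* default security attributes */
                                 &key,
                                 &disposition);

    if (result == ERROR_SUCCESS) {
        RegCloseKey(key);
    }
}

Most of the arguments are noise for the common case, which is precisely the sort of thing a well-designed API should avoid.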


Is this a subject that warrants further study?

Syndicated 2007-02-19 10:28:35 from fragglet

