19 May 2005

New Netscape Browser
The new Netscape browser is out. I actually got a job offer to work on this, because of all my XUL experience and knowledge. (And probably also because the development was being done in Victoria, BC, and I live pretty close to that, in Surrey, BC.) It would have been a really interesting project to work on. Unfortunately, other responsibilities prevented me from being able to accept the offer. (Not to mention that, at this point in my life, I really don't like the idea of having to move to Victoria. It's too far from the Surrey area, where all my friends and family are.)



We Need an Alternative to URLs
I've been thinking lately that we need an alternative to URLs.

URLs are great in that they let you "reference" or "point to" things. (Such as webpages, files, objects, people, procedures, e-mail boxes, etc.)

(Really URLs are the computer equivalent of a "name" in human language.)

However, I think URLs have some failings.

#1: It's integrated with domain names. "Why is that a problem?", you might be asking. ("Domain names are a lot easier to remember than IP addresses, after all.") Well, first off, domain names cost money. (Which isn't really the bad part.) The bad part is that people often stop paying for their domain names, and they lose them, and "great" websites, or webpages with little "gems" on them, disappear. Things like Archive.org help, but I think a better solution is needed. (Also, Archive.org doesn't get everything.) Perhaps if URLs, or a URL alternative, didn't use domain names or IP addresses, then we wouldn't have websites and webpages disappearing. (Maybe if we used a "free" identifier that anyone could generate. Maybe something like a UUID. And have that mapped onto IP addresses, "hashes", or something else.)
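To make that idea a bit more concrete, here's a toy sketch in Python. (The "urn:uuid:" form is a real URN namespace, but the directory and resolve() function here are completely made up; a real system would need some kind of distributed mapping service in their place.)

    import uuid

    # Anyone can mint one of these for free: no registrar, no renewal fees.
    # (uuid4() is random; a content "hash" would be another option.)
    identifier = uuid.uuid4()
    print("urn:uuid:%s" % identifier)

    # Hypothetical mapping layer: some directory service would map the
    # identifier onto the resource's current location(s). A plain dict
    # stands in for that directory here.
    directory = {
        identifier: ["203.0.113.7:80", "198.51.100.22:80"],
    }

    def resolve(ident):
        """Return the current known locations for an identifier."""
        return directory.get(ident, [])

    print(resolve(identifier))

The point is that the identifier itself never expires or changes hands; only the mapping underneath it would need updating when a resource moves.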

#2: The protocols that a URL can use must (as far as I know) technically be built on TCP. That's a problem. I'll give you an example.

I want to create a desktop application system that uses UNIX/POSIX named sockets, and I want to "point to" the server for the application using a URL. How do I do that? How do I use named sockets in a URL? A named socket is similar to a TCP port. However, I can't just stick the "path" to a named socket where the port in the URL would go, since it would include slashes. I guess I could use some method of "escaping" the slashes. However, in using "named sockets" I'm no longer using TCP, and thus it's not really a URL.
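For what it's worth, "escaping" the slashes does work mechanically. Here's a sketch using percent-encoding and a made-up "local:" scheme (not any registered standard), just to show the round trip:

    from urllib.parse import quote, unquote

    socket_path = "/var/run/myapp/server.sock"  # hypothetical named socket

    # Percent-escape the slashes so the path can sit where host:port
    # normally goes. "local:" is a made-up scheme, not a standard.
    url = "local://%s/some/resource" % quote(socket_path, safe="")
    print(url)
    # -> local://%2Fvar%2Frun%2Fmyapp%2Fserver.sock/some/resource

    # The receiving end can recover the socket path:
    escaped = url.split("//", 1)[1].split("/", 1)[0]
    print(unquote(escaped))  # -> /var/run/myapp/server.sock

But as I said, that only solves the syntax; the URL spec's assumptions about the transport underneath are the deeper problem.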



Re: The Web, TCP Connections, Flash Communications Server, and Rich Internet Applications
Got a reply from James Andrewartha (trs80), via e-mail, to my previous post, in regards to my wanting all browsers to have an open, standard, cross-browser, and cross-platform way of creating TCP connections.

James Andrewartha (trs80) replied:

xmlhttprequest is now the de facto standard for this sort of thing. Now with the buzzword name of AJAX http://www.adaptivepath.com/publications/essays/archives/000385.php it's a big feature of so-called "Web 2.0" applications. Also, if you read the recentlog, two entries below yours is a post talking about JSON which is a highly useful encapsulation to get data from the server to the client.

James Andrewartha (trs80), thanks for the reply. I am already aware of XmlHttpRequest, JSON, and AJAX. I've been aware of them for a while now, actually, since before the terms JSON and AJAX were even coined.

There was a time when XmlHttpRequest was one of the best kept secrets of web development. I used to tell people, "a lot of the really cool stuff on the web is done with XmlHttpRequest". (When you make XUL applications, you typically make heavy use of XmlHttpRequest.)

However, I don't think XmlHttpRequest is good enough. Sure, it lets your webpage do things without having to "reload" the page. But that was only one of the problems we faced.

The problem with XmlHttpRequest is that it uses HTTP. And the problem with HTTP is that it uses the request-response paradigm. And this results in applications constantly having to poll the server. Which is bad. And it also has the client creating and tearing down TCP connections all the time. Which is also bad.
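Just to make the waste visible, here's the polling loop in the abstract. (It's sketched in Python rather than browser JavaScript, and the /updates endpoint is made up, but the shape is the same either way.)

    import http.client
    import time

    # The request-response pattern means the client has to keep asking
    # "anything new yet?" -- and each poll also sets up and tears down
    # a whole TCP connection.
    while True:
        conn = http.client.HTTPConnection("example.com")  # new TCP connection
        conn.request("GET", "/updates")                   # ask the server
        body = conn.getresponse().read()                  # usually "nothing new"
        conn.close()                                      # tear it all down again
        time.sleep(2)                                     # wait, then repeat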

We need true asynchronous communication. And XmlHttpRequest only gives the illusion of asynchronous communication (because the webpage is not reloading).

I just can't get the performance I need from XmlHttpRequest. (I can get the performance I need from a TCP connection using a custom protocol, though.)
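Compare the polling loop above with one long-lived TCP connection and a custom protocol: the server pushes data the moment it has some, and the client just blocks waiting for it. No polling, and no connection churn. A minimal sketch (the newline-delimited framing and the host and port are just assumptions for the example):

    import socket

    # One long-lived TCP connection. The client never polls; it just
    # blocks in recv() until the server pushes another message.
    sock = socket.create_connection(("example.com", 9000))  # hypothetical server
    buf = b""
    while True:
        chunk = sock.recv(4096)  # blocks until the server has something to say
        if not chunk:
            break                # server closed the connection
        buf += chunk
        while b"\n" in buf:      # assumed framing: one message per line
            message, buf = buf.split(b"\n", 1)
            print("server pushed:", message.decode())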



Complaints about MySQL Replication
It's great that MySQL supports replication. However, more needs to be done to make it work properly. (Or maybe I should say work "better".)

MySQL replication works by essentially sending SQL commands from the master to the slaves whenever a command changes something on the master. Which seems like a good idea. And if nothing ever got screwed up and nothing ever went wrong, this would be fine.

However, things do get screwed up and things do go wrong, so you need to write your software assuming this. Your software should either be able to detect when things screw up or go wrong, or check for it on a regular basis. And when your software finds that something did happen, it should either try to fix it, or notify someone about it.

Here are some things that MySQL Replication should do:

#1 Check that all the tables have the same structure. (This should be pretty easy to do.)

#2 Check that the data in each table is the same. (At first this might seem like it would be difficult to do efficiently, but it really isn't. You don't have to check every field in every row. If you were to keep a running "check sum" or "hash" for each table, then you could just compare those to detect a problem. See the sketch below.)
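MySQL does actually have a CHECKSUM TABLE statement, so a rough external version of both checks can be scripted today. Here's a sketch; the host names, credentials, table names, and the pymysql driver are all assumptions, and note that CHECKSUM TABLE scans the whole table, so a true running checksum would have to be maintained some other way:

    import pymysql  # assumed client library; any MySQL driver would do

    def fetch_state(host, tables):
        """Grab each table's structure and checksum from one server."""
        conn = pymysql.connect(host=host, user="checker",
                               password="secret", db="mydb")
        state = {}
        with conn.cursor() as cur:
            for table in tables:
                cur.execute("SHOW CREATE TABLE " + table)  # check #1: structure
                create_sql = cur.fetchone()[1]
                cur.execute("CHECKSUM TABLE " + table)     # check #2: data
                checksum = cur.fetchone()[1]
                state[table] = (create_sql, checksum)
        conn.close()
        return state

    tables = ["users", "orders"]  # hypothetical table names
    master = fetch_state("master.example.com", tables)
    slave = fetch_state("slave1.example.com", tables)

    for table in tables:
        if master[table][0] != slave[table][0]:
            print("structure mismatch on", table)  # notify someone!
        elif master[table][1] != slave[table][1]:
            print("data mismatch on", table)       # notify someone!

My point is that MySQL itself should be running checks like these, automatically, rather than leaving it to every user to script them.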

"Why would any of these problems ever happen", you ask. Sometimes it's from human error. And sometimes if is for reasons beyond the MySQL database server's control. (Like a power failure, or hardware problems.) For example, people sometimes write to slaves when they shouldn't. Backups are sometimes out of sync. (So when you restore things from a backup, and you think everything is OK, it really isn't.) Hard drives have problems. Etc. I've even seen weird problems with replication synchronization that we just can't explain.
