A Distributed alternative to the Domain Name System
Posted 28 Feb 2006 at 19:26 UTC by lkcl
For some time, it has been floated that China may end up creating its own alternate DNS root - and from March 1st that will become a reality.
China has the clout to do that - simply by putting in place country-wide NAT that redirects DNS root queries to the new servers. All of the Root Servers, after all, are outside of China...
We all know that the current United States Government is paranoid, stupid and willing to treat anyone and everyone as a threat. The trouble is that their actions have a massive, world-wide detrimental impact.
In this case, by being uncooperative and considering the internet to be "theirs", they have forced another Sovereign Country - one "known to be disrespectful of human rights" - to basically conclude "sod you, we're doing this ourselves".
I applaud this decision.
America has absolutely no right to restrict the languages in which domain names may be registered, nor to treat ".com" as purely American.
The Fly In The Ointment: neither does anyone else.
To that end, I invite Free Software Developers to design and implement a peer-to-peer alternative to the Domain Name System, and to provide gateway and proxying services to "standard" DNS (a first implementation would probably be simple read-only proxying, rather than dynamic DNS or microsoft-style dynamic DNS).
True Peer-to-peer Domain Naming has some unique problems, such as people taking over someone else's domain name. To that end, I believe that new entries should only be added if a certain number of OpenCA-registered individuals Digitally Sign and certify the domain name as truly belonging to that registrant.
For very short domain names, that should be well over 50 or even 100 individuals.
The responsibility taken on by someone who Digitally Signs a domain name is very high: if the domain name is a trademark, and someone is endeavouring to infringe that trademark, the signer will be complicit in the infringement.
It is therefore absolutely necessary that someone, who is able to carry out Digital Signatures on domain name registrations in the proposed peer-to-peer alternative to DNS, be absolutely paranoid in their "due diligence".
Technical Question (which is probably on the minds of the readers) - how the hell do I enforce the Digital Signatures on the names in PPDNS (let's call it that - a peer-to-peer DNS system)?
Answer: With something akin to "Keynote", aka RFC 2704.
KeyNote basically allows you to distribute some "digitally-signed rules", and the KeyNote framework allows you to evaluate those rules - which are formally expressed in a simple language - to check whether someone's actions are "compliant". For example, you can specify a "Rule" that says "any domain name with only one dot in it must require 100 digital signatures". Each node in the Peer-to-Peer domain name system can then evaluate "how many dots does this domain name have; how many digital signatures does it have", and if the answers are one and fewer-than-100 respectively, KeyNote will return "invalid" and the server knows NOT to give out the IP address associated with the invalid domain name.
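To make that rule-evaluation concrete, here is a toy sketch in Python - not real KeyNote syntax; the function names and the 100/10 thresholds are illustrative assumptions:

```python
# Toy rule evaluator in the spirit of KeyNote (RFC 2704): the fewer
# dots a name has, the more digital signatures it must carry.
# Thresholds and names here are assumptions for illustration only.

def required_signatures(name: str) -> int:
    """Rule: names with one dot or fewer need 100 signatures."""
    if name.count(".") <= 1:
        return 100
    return 10  # deeper names are cheaper to certify

def evaluate(name: str, signature_count: int) -> str:
    """Return "valid"/"invalid", as a KeyNote evaluation would."""
    if signature_count >= required_signatures(name):
        return "valid"
    return "invalid"

print(evaluate("foo.com", 42))      # invalid - one dot needs 100 sigs
print(evaluate("www.foo.com", 12))  # valid - two dots only need 10
```

A real deployment would express the rule in KeyNote's assertion language and verify the cryptography as well; this only shows the threshold logic.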
Question: in a distributed peer-to-peer domain name system, do you need a hierarchical domain structure?
Answer: absolutely not!
Why? because data stored in peer-to-peer services is typically distributed, so what the heck would you need a hierarchy behind THAT???
Okay - that answer needs a little more background. I'm anticipating that the kind of registration that could be carried out would be: "the 'old-style' DNS server at 192.0.2.1 is responsible for *.foo.com". When a browser looks at www.foo.com, a PPDNS query would then send out a data search for the distributed/hashed content behind "*.foo.com" as a first priority, and simultaneously a query for "www.foo.com". In this case, "www.foo.com" would come back with zero answers, but the query for "*.foo.com" would succeed; the PPDNS client would then know that it must contact 192.0.2.1 with a standard port-53 DNS query to look up www.foo.com.
Oh - of course - _after_ verifying that the Digital Signatures are valid.
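The lookup flow just described might be sketched like this - the DHT and legacy-DNS calls are stubbed out, dht_lookup and legacy_dns_query are hypothetical names, and 192.0.2.1 is a documentation-range address:

```python
# Sketch of a PPDNS client: try the exact name in the DHT, fall back
# to the registered wildcard, then "jump out" to old-style DNS.

def dht_lookup(key, table):
    """Stand-in for a distributed hash-table query."""
    return table.get(key)

def legacy_dns_query(server_ip, name):
    """Stand-in for a standard port-53 query to an old-style server."""
    return ("A", name, server_ip)  # pretend the server answered

def ppdns_resolve(name, dht):
    exact = dht_lookup(name, dht)           # query for "www.foo.com"
    if exact is not None:
        return exact
    labels = name.split(".")
    wildcard = "*." + ".".join(labels[1:])  # query for "*.foo.com"
    delegated = dht_lookup(wildcard, dht)
    if delegated is not None:
        # (after verifying the record's digital signatures!)
        return legacy_dns_query(delegated, name)
    return None  # no answer anywhere

dht = {"*.foo.com": "192.0.2.1"}  # old-style server registered for the wildcard
print(ppdns_resolve("www.foo.com", dht))
```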
Question: how do you "revoke" a Trademark that someone else has registered?
Answer: in several ways. Firstly, you contact the "Digital Signers" and request that they issue a "revocation certificate". If they don't comply, you get a Court Order - and ultimately, you sue them. Secondly, you repeatedly make queries into the PPDNS network, to check the IP addresses being returned (yes, this is where PPDNS is slightly different: the IP addresses of the people who are caching and distributing the content _need_ to be known), and you formally request that they "expire" that content and replace it with "updated" content.
If they fail to comply with that request (to expire and update the content) then, again, you take them to court, because they are aiding and abetting Trademark Infringement.
I wanted to get this out there ASAP, for people to review. I realise that a lot of thought needs to go into the "rules" - to ensure that people can't register the letters a to z as 26 possible top-level domain names and then claim ownership of the entire domain name system, for example.
Also, it's absolutely necessary for people to register "*.foo.com" as the means of claiming ownership of everything that matches that wildcard - simply because otherwise one person could register *.foo.com and another person register www.foo.com, pointing at their own site!
So, whilst the underlying infrastructure itself should be peer-to-peer distributed, the _appearance_ of still being a hierarchical domain name system must be agreed upon as a "convention".
Other things, while I think of it: addition of new "rules" to PPDNS should require an astronomically large number of individuals - over a thousand Digital Signatures. Do people think this would be enough?
one man, one vote?, posted 28 Feb 2006 at 21:02 UTC by gobry »
If taking over amazon.com requires only 100 certificates, I'm sure this won't go very far. As we are on Advogato, introducing a trust metric might be a good idea... but who would provide the seed accounts of this highly critical metric? :)
Basically, we need pet names for web servers.
In this, everybody would be their own seed account. They would assign trust levels to their friends and use their friends names for domains.
But, IMHO, this requires something like CAKE to be widely deployed so that the ability to have a secure pseudonymous identity is ubiquitous.
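A minimal sketch of that idea, assuming each person keeps their own petname table plus a list of trusted friends whose tables they consult (all structures here are illustrative, not CAKE itself):

```python
# Toy petname resolution: look in my own table first, then ask the
# friends I trust, avoiding cycles in the friendship graph.

class User:
    def __init__(self):
        self.petnames = {}  # my own name -> public-key bindings
        self.friends = []   # users I trust, in order of preference

    def resolve(self, name, seen=None):
        seen = set() if seen is None else seen
        if id(self) in seen:
            return None     # already asked this person
        seen.add(id(self))
        if name in self.petnames:
            return self.petnames[name]
        for friend in self.friends:
            hit = friend.resolve(name, seen)
            if hit is not None:
                return hit
        return None

alice, bob = User(), User()
bob.petnames["bobs-blog"] = "KEY-1234"  # placeholder key material
alice.friends.append(bob)
print(alice.resolve("bobs-blog"))  # found through Bob's table
```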
Just tunnel it, posted 1 Mar 2006 at 09:47 UTC by metaur »
"simply by putting in place Country-wide NAT that will redirect DNS root queries to the new servers"
That doesn't sound very effective. People could just tunnel their DNS traffic over encrypted HTTP sessions.
"In this, everybody would be their own seed account. They would assign trust levels to their friends and use their friends names for domains."
But then you cannot pass a URL to someone: you need to translate it into their own naming first. And when a service changes its IP, every such local group will need to update its definitions.
Having a person declare themselves to be in charge of a domain name, and providing a way to trust them (and to find them, BTW), is certainly appealing, though the details look tricky.
You're rambling!, posted 1 Mar 2006 at 16:07 UTC by jcm »
It's a nice idea, lkcl, but there are some things which do need to be centralized and naming is one of them.
DNS is already distributed by design, but there does have to be someone to set some kind of standards at the top level and thus far I haven't seen the current system turn out to be so bad that it needs a complete replacement at this stage. Why would it get any better?
The US doesn't trust China with DNS for very good reasons, but it /is/ misguided in not handing overall control to the UN or some other international body in which China is free to partake. Sometimes "peer to peer" isn't the panacea for everything.
The problem here is the IP address, not the pet name concept. IP addresses are tied to the topological layout of the network, and that's why they change. The solution is to have network node identifiers (I think they ought to be service identifiers) be something more secure and lasting than an IP address.
the dns root doesn't _need_ to be centralised - it's just that that's the way it's currently done.
centralisation is exactly what the US wants - and centralisation (of the key "root" data) is exactly where the weakness lies.
there has to be trust - and china, rather sensibly, doesn't trust the US.
you watch: the rest of the asian countries will soon follow suit, and ask china if they can help advise them on how to set up, and peer-distribute, their own domain names.
which, ultimately, will end up with china probably leading the way to do some modifications to DNS to support UTF-8 or some other international character set format.
read and write, posted 2 Mar 2006 at 10:28 UTC by lkcl »
many thanks for your comments and insights. as usual with articles that i write off the top of my head, i remember, some days later, things that i missed out, which you kindly remind me of.
DNS is peer-to-peer, yes - but only from a read perspective.
a true peer-to-peer domain name system would also allow distributed write.
"jumping out" to existing (legacy?) DNS servers - found via a distributed hash-table lookup - in order to replace the existing world-wide domain name "registration" system with a peer-to-peer, OpenCA-double-checked ... mmmm .... "thing" - would:
* do away with the mad insanity that the US imposes.
* make it possible to render networksolutions and other registration companies irrelevant, so don't expect the PPDNS idea to be popular.
the only down-side that i haven't thought about is that companies like dyndns.com, whose services are incredibly useful, might be adversely affected. or maybe not.... if you PPDNS-register your domain as being managed by dyndns.org/com's existing dynamic DNS server.... yeh, that'd do the trick. panic over :)
regarding the "100 people could register amazon.com and hijack it" comment - well.... remember, you have to find 100 OpenCA-certified people who have 150 "points" in the OpenCA system, which means that each of them has gone through a laborious process of proving their identity and then getting their OpenCA Certificate signed by 100 other people.
so there are actually something like 10,000 people involved in the "hijacking", 100 of whom would have their reputations utterly destroyed; they'd be sued for being complicit in intellectual-property and trademark theft; but worse, their OpenCA certificates would be revoked.
but, if you believe that 100 people is not enough, then the bar needs to be raised.
Tor, posted 2 Mar 2006 at 18:18 UTC by realblades »
Tor already has a hash-based name service, besides all the other nice features.
Hmm, now that's pretty interesting. That's about the kind of thing I was looking to create when I thought up CAKE. Though I didn't really care about strong anonymity.
OpenCA, posted 3 Mar 2006 at 07:02 UTC by gobry »
I don't know OpenCA, but it looks more like a framework to me, rather than a place to certify people. So, how would the certification process work? How do you avoid moving centralization there?
OpenCA, posted 3 Mar 2006 at 20:59 UTC by lkcl »
OpenCA is, as far as i can make out, a way for Certificates to be "signed" - just like you trust "thawte" and "verisign" to hand out SSL certificates, and everyone "trusts" them not to have their private keys stolen. so they put the server that holds the private keys in a friggin _vault_, with the most paranoid security that they can possibly dream up.
OpenCA, from what i can fuzzily gather, does it the "old fashioned" way. in order to become an OpenCA "signer" (equivalent to thawte & verisign's paranoia) you must 1) have your identity verified by at least two bank managers or other notaries 2) go round ONE HUNDRED people or something ridiculous and have them _digitally sign_ your key - all of them!! - once they have verified who you are (you show them your passport).
that's probably not exactly how it's done, but it's pretty paranoid.
once you have done these things, you are then entitled, for a fixed and expirable number of years, to "digitally sign" other people's certificates.
so yes, it's definitely not a "place" where you can go to have your certificates "signed".
you have to PHYSICALLY meet someone who has gone through the OpenCA framework in order to get them to sign your certificate.
... now imagine if you were to need ONE HUNDRED such OpenCA-verified people to register a domain name.
those one hundred people are going to have met two hundred bank managers; those one hundred people are going to have shown their passport to 10,000 people. those one hundred people's livelihoods are on the line if they don't do "due diligence" in verifying that you aren't trying to rip off "amazon.com"'s domain name.
That process seems really cumbersome. I don't really think there's much need for most names to be global. And I think the globalness of a name can be a side-effect of a global reputation system that links public keys with names. For really popular names that deserve to be global, it's quite likely that practically everybody will use a name that's very similar for their public key.
In addition, it would help with the phishing problem. Right now the bar for figuring out that a site is phishing is actually quite high. It's really easy to socially engineer many folks into entering their information in the wrong place. But if names were simply considered shorthand for the associated public key, a phisher wouldn't be able to convincingly hijack the name of a business someone was already doing business with, and I think that would raise the bar significantly.
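A toy sketch of "names as shorthand for keys", assuming the browser simply remembers the first key it saw for each name (a trust-on-first-use policy; the key strings and domain names are placeholders):

```python
# If a familiar name suddenly presents a different public key, warn:
# a phisher can copy a name's look, but not the key behind it.

known = {}  # name -> public key remembered from the first visit

def check_site(name, presented_key):
    if name not in known:
        known[name] = presented_key
        return "new site - no prior relationship"
    if known[name] == presented_key:
        return "ok"
    return "WARNING: key changed - possible phishing"

print(check_site("paypal.example", "KEY-REAL"))  # first contact
print(check_site("paypal.example", "KEY-REAL"))  # ok
print(check_site("paypal.example", "KEY-FAKE"))  # mismatch -> warn
```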
omnifarious - that sounds like an excellent plan.
most names _are_ global, already - every domain name under the TLDs is global.
and on a distributed peer-to-peer network, with distributed hash tables, it's not such a heavy burden (especially if you "jump out" to existing DNS infrastructure after the first lookup).
how _would_ phishing be stopped? what information is in, say, the HTML page, that gives you a clue?
well.... here's the rub: phishing is done by putting an IP address or a fake domain name in an href, and the link text says "paypal.com" or "barclays.co.uk", leading the dumbest-of-the-dumb to believe that everything's hunky-dory.
(side-note: i used outlook for the first time in ten years, recently, and i am not in the SLIGHTEST bit surprised as to why most people get fooled by phishing. in email messages, only the name is displayed, not the address... and who the XXXX thought that html content for email was a good idea???)
the upshot is this: nothing that you can realistically do is going to protect the gullible.
about the only really secure way to prevent phishing is to have a "secondary" system for authentication, which is, apparently, used in switzerland.
what you do is you go to a bank's web page, put in your ID number and you get presented with a "code". you must then call up a TELEPHONE NUMBER of the bank, and you must dial-in that code AND you must also do your PIN number (over the phone).
you DO NOT type in the PIN number into the bank's web page.
this simple handshake achieves the following things:
1) the bank knows that you are you.
2) you know that it's definitely your bank at the end of the web page.
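a rough sketch of the bank's side of that handshake - the function names and toy PIN table are my own assumptions, not any real bank's system:

```python
# Two-channel authentication: the web login hands out a one-time code,
# and the PIN travels only over the phone, never through the web page.

import secrets

sessions = {}                 # one-time code -> customer id
pins = {"cust-42": "9876"}    # the bank's PIN records (toy data)

def web_login(customer_id):
    """Step 1: customer enters their ID on the web page, gets a code."""
    code = secrets.token_hex(4)
    sessions[code] = customer_id
    return code

def phone_verify(code, pin):
    """Step 2: customer phones the bank and keys in code + PIN."""
    customer_id = sessions.pop(code, None)  # single use: pop, not get
    if customer_id is None:
        return False                        # unknown or replayed code
    return pins.get(customer_id) == pin

code = web_login("cust-42")
print(phone_verify(code, "9876"))  # True - both channels check out
print(phone_verify(code, "9876"))  # False - the code was single-use
```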
ultimately, this sort of thing _could_ also be achieved with a "passport" service.
oh, just like that microsoft "passport" service, which nobody trusted because, errr... it's microsoft. and it's proprietary. so why would we bother with it.
but whatever you use, it definitely needs to be a "separate" thing - a third party which cannot be faked - and, crucially, it requires modification of the web services.
in short, whilst the infrastructure can be offered as you suggest, omnifarious, you've _still_ got to get people to use it.
Community names, posted 8 Mar 2006 at 16:38 UTC by aminorex »
I agree that truly global names are rarely required. What might be more readily achievable is the development and deployment of a system for establishing shared nomenclature within a community of discourse - a system which facilitates the emergence of consensus naming. Most names of this sort are content-based, and the ideal name is semantically defined rather than syntactically. Rather than restricting such a system to naming network endpoints, it makes sense to generalize it to arbitrary networked resources, in the style of XML.
Hash-based names are specialized because they are very specific and deviate substantially from naive practices of naming. More general purpose naming systems must conform to the requirements of naive practices in order to achieve the goal of general purpose usefulness.
The goal of naming is to make reference, but the mean utility of a reference is less if it is too specific, or too general. In a given community, "Bob's blog" is a well-defined concept. But there are too many Bobs about for this to refer well globally, so additional qualifiers are required. One form of qualifier is relative to a community ("Bob from Advogato's blog"); another is global ("Bob ITIN 515-55-1515's blog"). When someone manages to emerge as the established global holder of a piece of namespace, only community-based references with higher relevance can unambiguously override their possession. ITIN might be relatively unambiguous and global, but its reliance on a central authority (overlooking its privacy implications) makes it a poor example. Bob's name or public affiliation is a better specifier for purposes of naivete, but such uses create practical implementation problems which a useful scheme needs to resolve with some degree of elegance.
In order that names should be semantic, they must necessarily be approximate. This increases rather than decreases their utility, because they are less specific, but specific enough for naive uses of names. "Kubrick's last movie" and "AI" should refer equally to all of the various representations of that work, although each with a different form of ambiguity. Names are always disambiguated by their context, to some degree. A naming framework or system provides one form of context, but a useful general-purpose naming scheme must accept environmental factors as part of the disambiguation scheme, rather than relying exclusively on the framework for disambiguation.
Naming systems which employ social or semantic networks to resolve ambiguity are relatively achievable to implement, but their utility depends on a critical mass of data and pervasive availability. Data can be cobbled together from existing sources, and pervasive availability can be supplied by exploiting pervasive platforms, such as web browsers, but identifying useful applications which can effectively integrate these conditions is an area of research requiring significant labour.
But, I think they provide a nice way of linking the human world of naming with the computer world of naming. It's up to the computers to make this link. But in the underlying guts of things it is nice to have unambiguous unique names for things. I think it's up to the computers to translate this world into one that makes sense for people and works with human concepts of naming.
The thing a UI can do with hash based names is tell people that they've had no prior relationship with this site. It could use color coding on the borders for this, or it could use a dialog, or both. It can scan for images and words in the site for ones that make it look like it IS a site the user has used before and give them an even sterner warning in blinking red if it notices this. This scanning need not be that complex. Keywords or directly comparing images for close similarity would probably be enough.
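That scan could be as simple as the sketch below - the 0.5 threshold and the word sets are arbitrary assumptions:

```python
# Flag a never-before-seen site whose page shares too many keywords
# with a site the user already knows - grounds for the red warning.

def looks_like_known_site(page_words, known_sites, threshold=0.5):
    """known_sites maps a site's petname -> set of characteristic words."""
    page = set(page_words)
    for petname, words in known_sites.items():
        overlap = len(page & words) / max(len(words), 1)
        if overlap >= threshold:
            return petname  # close enough to deserve a stern warning
    return None

known_sites = {"mybank": {"mybank", "account", "login", "transfer"}}
print(looks_like_known_site(["mybank", "login", "verify", "account"],
                            known_sites))  # overlap 3/4 -> "mybank"
```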
here's where things start to go slightly wonky for google, and other search engines.
they must have some distributed hashing scheme which maps URLs to a set of machines (or a NAT'd set behind one IP address ... long story) on which the indexed content sits (that's how i'd do it).
in some respects, google already _has_ what you describe, aminorex - it's just that we don't realise it.
you type in "bob's blog" and up comes the content.
people are learning to use "google search terms" as a way to remember how to get to pages.
here's where it all goes horribly wrong: when DNS comes crashing down.
HOPEFULLY google have a way round this - by mapping the entire IP address range, which, ultimately, makes DNS pretty much irrelevant.
BUT, of course, links to external web sites don't go via IP addresses - they go via domain names.
perhaps... perhaps... the solution is simply to embed very specific "google redirects" into your web site - to the "i'm feeling lucky" page, making sure that there are unique terms in your page!!!
hmmm... time for me to write an email to google, i think.
Having google associate search terms with IP addresses is not a complete fix to the problem. It makes IP addresses, and the ability to spoof them, even more valuable. Additionally, it makes google too important.
I think the only reasons we haven't seen more major attacks on DNS or the routing infrastructure is that there's much easier pickings elsewhere. But, if those dry up, you can be sure major resources will be put behind figuring out how to attack those systems. I think they will fall. IP addresses are an inherently flawed identifier.
Google, despite their motto, will make mistakes, and/or become evil for some value of evil. This is especially true if they're given the reins of centralized control. So, relying on them as a long-term solution isn't a good idea.
CACert, posted 18 Mar 2006 at 18:25 UTC by chexum »
lkcl, maybe you confused OpenCA with CACert.org at the last moment? OpenCA is a bit different, while CACert matches perfectly :)
Interestingly, there are a few apparently political objections to it that I can't follow. Literally, because some links in the discussions on including its CA root in browsers (mainly the mozilla bug) are dead...