Tech Fusion Outline: Organising the World's Knowledge.
Posted 7 Apr 2008 at 03:32 UTC (updated 18 Apr 2008 at 17:42 UTC) by lkcl
With the introduction of the Internet, vast amounts of information became available - and, rather than helping the people of the planet become useful in a globalised world, it has deluged them. Peeking through the morass of software and hardware is the occasional light (hopefully not an oncoming train). This article outlines those technologies briefly, for later expansion.
The "Executive Summary" is that for computer technology
to be
useful, we
need modular portable hardware with wireless mesh networking as well as
standard internet access, and for the software applications to sit on
top of distributed and peer-to-peer technology.
None of the technology outlined here is new (in fact, some
of it has
existed for many decades): it's just not being brought together. It
should be pretty clear that in the current world climate, there is some
degree of urgency to making this "Tech Fusion" happen.
As I aim to describe these technologies in more detail later, my first priority is to outline them and give hints as to their relevance. The goal is simple: *actually* provide people with a means to articulate their thoughts, needs and desires, and to be able to communicate those thoughts, needs and desires to whoever can fulfil them, world-wide.
When this happens, we will have brought about - literally - a "World Age of Enlightenment". Literally, because the literal definition of an "Enlightened Society" is one in which everyone in that society is "useful".
Databases: improvements for distributed - and modern - uses
The design and principles behind Databases were thrown together nearly forty years ago, and have not been updated since. As the article "The Vietnam of Computer Science" recounts, the forty years since have been spent endeavouring to make a two-dimensional concept (rows, columns) fit the much more useful and generic "tree" or "free-form" structure concept of Objects and their inter-relationships (Object Relational Mappers - ORMs).
It doesn't fit.
One company spent twelve years dragging an ORM back and forth across a hybrid of C++, SQL and Stored Procedures, changing the percentage of code in each programming category over the years depending on the whim of the ever-changing management. At one point, they had nearly 100% of the code - generating Enterprise-grade forms and reporting for data management - in Oracle Stored Procedures, and twice in the company's development history they had nearly 100% of the code in C++.
A great deal of time is wasted on writing code in as many languages as exist on the planet, when in fact the addition of OO features to a database would make that almost entirely unnecessary.
Additionally, significant amounts of time are spent developing "database
replication" technology, which is the old "client-server" model gone badly
wrong. What is actually needed is for applications to take into account a
distributed and peer-to-peer architecture, and for the databases to actually
help them do that.
Also, Databases were designed when procedural programming languages were the
norm: we've since moved on a bit, but Databases haven't. Whilst most modern
programming languages are Object-Orientated, so-called "4th Generation"
Database technology is either unheard of or prohibitively expensive.
Here is what is required to update Free Software databases to be "useful" in a distributed context; all of these requirements focus on being able to store "object relationships":
- Addition of Primary Indexing on 16 bytes of (un)signed chars, known to most as UUIDs, but that's just a uchar[16]. Those people with a knowledge of cryptographic hashing functions and peer-to-peer DHTs will immediately recognise why this would be useful in a global environment (a sketch follows).
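Here's a minimal sketch (in Python; the hash choice and names are purely illustrative) of why a 16-byte primary key is so useful: the same uchar[16] that indexes a row locally doubles as a content-derived DHT address globally.

    import hashlib
    import uuid

    def record_key(content: bytes) -> bytes:
        """Derive a 16-byte primary key (a uchar[16]) from record content,
        here by truncating a SHA-1 digest - so the key that indexes the row
        is also its address in a global DHT."""
        return hashlib.sha1(content).digest()[:16]

    key = record_key(b"some row data")
    print(uuid.UUID(bytes=key))  # the same 16 bytes, printed as a familiar UUID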
- Writeable views are absolutely essential. I've outlined before why, and many of the algorithms I came up with have been reduplicated in sqlalchemy.org. They are essential because they make it possible for people to think about and access the database in 2nd Normalised form while the underlying structure is in an entirely different 2nd, 3rd or even 4th Normalised form. The executive summary: we need a "Meta-SQL" compiler or interpreter which takes in any type of SQL syntax and any queries, "maps" them to a different form, then makes the queries in any *other* type of SQL syntax. A small sketch of the underlying idea follows.
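As a minimal illustration of a writeable view (the schema here is invented, and SQLite's INSTEAD OF triggers stand in for the full "Meta-SQL" idea): the user INSERTs into the denormalised view, and the mapping writes the normalised tables underneath.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE names (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emails (person_id INTEGER, email TEXT);

    CREATE VIEW person AS
      SELECT names.id, names.name, emails.email
      FROM names JOIN emails ON emails.person_id = names.id;

    -- The "writeable view": an INSTEAD OF trigger maps one logical INSERT
    -- onto the underlying normalised tables.
    CREATE TRIGGER person_insert INSTEAD OF INSERT ON person
    BEGIN
      INSERT INTO names (id, name) VALUES (new.id, new.name);
      INSERT INTO emails (person_id, email) VALUES (new.id, new.email);
    END;
    """)
    db.execute("INSERT INTO person (id, name, email) VALUES (1, 'Ada', 'ada@example.org')")
    print(db.execute("SELECT * FROM person").fetchall())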
- Inclusion of vtable and inheritance concepts in SQL databases. This isn't new: it's a well-understood, essential and integral part of every single modern compiler and interpreter. Each "object" has a set of "function pointers", and each object which inherits from another extends those "function pointers". In SQL terms, that means that each TABLE needs to be identified by a UUID (which can be stored in a special control table), and an "INHERITS FROM tablename" concept added (oh look, another special control table). Then, queries are not just a matter of looking in one table for the data: it becomes necessary to continuously refer to the "inheritance control" table to merge in additional results from the inheritance stack, down to the "base" object. I mean, table. The issue of differing "objects" being returned, with varying order and size of row results, can once again be solved by providing the row names and orderings in yet another special control table (a sketch follows). Yes, this may look complex, but if you think it's complex, I'm sorry to have to point this out, but you *really* don't know how absolutely awful it's been for SQL programmers over the past 40 years: do read that article, "The Vietnam of Computer Science".
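A toy sketch of the control-table approach (table and column names are mine, and a real implementation would cache the inheritance walk): a query against the "base" table merges in rows from every table that inherits from it.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE table_registry (table_uuid TEXT PRIMARY KEY, table_name TEXT);
    CREATE TABLE inherits_from (child_uuid TEXT, parent_uuid TEXT);

    CREATE TABLE shape (id INTEGER, kind TEXT);
    CREATE TABLE circle (id INTEGER, kind TEXT, radius REAL);

    INSERT INTO table_registry VALUES ('u-shape', 'shape'), ('u-circle', 'circle');
    INSERT INTO inherits_from VALUES ('u-circle', 'u-shape');
    INSERT INTO shape VALUES (1, 'square');
    INSERT INTO circle VALUES (2, 'circle', 3.0);
    """)

    def select_with_inheritance(base_uuid):
        """Merge results from the base table and every table inheriting from it."""
        rows, todo = [], [base_uuid]
        while todo:
            u = todo.pop()
            (name,) = db.execute(
                "SELECT table_name FROM table_registry WHERE table_uuid=?", (u,)).fetchone()
            # string interpolation of the table name is fine for a sketch;
            # a real system would validate against the registry
            rows += db.execute(f"SELECT id, kind FROM {name}").fetchall()
            todo += [c for (c,) in db.execute(
                "SELECT child_uuid FROM inherits_from WHERE parent_uuid=?", (u,))]
        return rows

    print(select_with_inheritance('u-shape'))  # rows from shape and circle merged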
Peer-to-peer Distributed Technology
The "client-server" model is fine for centralised control: it's not
fine in an environment where there aren't any servers. Or, where the
technology is expected to operate stand-alone as well as integrated as
part of a larger group.
- Distributed Hash Tables are becoming increasingly common: the underlying principle is to create a 128-dimensional hypercube, with the 128-bit (16-byte) "hash" as the "address" in the hypercube. Nodes with only one bit of difference between "addresses" (hashes) are considered to be immediate neighbours in the hypercube. Search algorithms make repeated queries to increasingly-distant nodes, asking for keys which have progressively fewer "bits" different from the hash actually being sought. This technology - DHTs - must become a standard component of every single modern programming language and O.S. A sketch of the distance metric follows.
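A minimal sketch of that distance metric (node names invented): a lookup converges on the node whose 128-bit address has the fewest differing bits from the key being sought.

    import hashlib

    def node_id(name: str) -> int:
        """128-bit DHT address derived from a node name (truncated SHA-1)."""
        return int.from_bytes(hashlib.sha1(name.encode()).digest()[:16], "big")

    def distance(a: int, b: int) -> int:
        """Number of differing bits between two addresses: neighbours in the
        hypercube differ by exactly one bit."""
        return bin(a ^ b).count("1")

    nodes = [node_id(f"node{i}") for i in range(100)]
    target = node_id("the key being sought")
    # Each hop in a real DHT query moves to a known node with fewer bits of
    # difference; here we just show the node a lookup would converge on.
    print(min(nodes, key=lambda n: distance(n, target)))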
- UPnP, STUN and STUNT: firewall-busting and NAT traversal. The thorny traversal problem caused by running out of IPv4 space isn't unique to IPv4: pure IPv6 Gateways are still going to present exactly the same problem. So a series of services - a networking layer - is required that provides these "Traversal" technologies "as standard".
- An ISO Layer 3 sockaddr scheme is required that is a standard for peer-to-peer services. It would probably suffice to make it 16 bytes (plus a 16-bit port, maybe?): struct sockaddr_p2p or similar (a sketch follows). Behind this simple sockaddr would sit the technologies listed above: DHTs, STUN etc. Implementations of services which provide the technology already exist - for example, cspace, gnunetd and i2p - but they don't have the level of "acceptance" required to make it easy to port any application into their framework. Also, after reviewing many p2p applications and frameworks, it would help to have a SOCK_DGSFW - a "Datagram Service which Stores and Forwards". ioctls specifying the priority and the time-to-live will at least be required.
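A sketch of what such a sockaddr might look like, expressed as a ctypes structure (the field names and the AF_P2P idea are illustrative, not an existing API):

    import ctypes

    class sockaddr_p2p(ctypes.Structure):
        """Hypothetical struct sockaddr_p2p: a 16-byte peer address plus a
        16-bit port, as suggested above."""
        _fields_ = [
            ("sp2p_family", ctypes.c_uint16),      # would be a new AF_P2P constant
            ("sp2p_port",   ctypes.c_uint16),      # service port
            ("sp2p_addr",   ctypes.c_uint8 * 16),  # 128-bit DHT address of the peer
        ]

    addr = sockaddr_p2p()
    addr.sp2p_port = 8080
    addr.sp2p_addr[:] = list(range(16))
    print(bytes(addr.sp2p_addr).hex())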
- A peer-to-peer "Naming Service" (PPNS) which will replace the need for DNS. DNS itself should be included in the PPNS as one of the sub-services offered: it would in fact be pretty easy to put zones and sub-zones etc. into a PPNS. The concept of PPNS is borrowed from DCE/RPC's "name service", which, funnily enough, provides exactly what is needed (see FreeDCE, and also the release from The Open Group of DCE 1.2.2 under the LGPL). It even has a central (global) naming service, with the information stored in an X500 directory - search for isode [which would have to go, replaced by a peer-to-peer directory, which would need a distributed database... ooh look, are we getting the picture here? :) ]
Also, it's important to recognise that good "Naming Services" have, as an absolutely essential feature, the means to "group" providers of a service together. This simple feature means that, under a particular service, many "providers" can register - and be found. DNS itself has this very concept, where you can register many nameservers or mailservers - but typically you have central "control" over how the DNS zone is managed.
There are many examples of "grouping" Name Services - NetBIOS (rfc1001.txt and rfc1002.txt) is one of the most widely deployed (and most widely misunderstood) proper peer-to-peer naming services containing this strategically important feature of "group registration". nmbd, in samba, implements NBNS. Unfortunately, NBNS implementations are restricted to "WAN" deployment, and do not scale to millions of systems (without a little bit of tweaking - and I understand what's needed). A toy sketch of this "group registration" idea follows.
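A toy, in-memory illustration of group registration (the API is invented): many providers register under one service name, and resolution returns the whole group rather than a single host.

    from collections import defaultdict

    class GroupNameService:
        """Toy 'grouping' name service: any number of providers may
        register under the same service name."""
        def __init__(self):
            self._groups = defaultdict(set)

        def register(self, service: str, provider: str):
            self._groups[service].add(provider)

        def resolve(self, service: str) -> set:
            """Return the whole group of providers for a service."""
            return self._groups[service]

    ns = GroupNameService()
    ns.register("printing", "host-a")
    ns.register("printing", "host-b")
    print(ns.resolve("printing"))   # {'host-a', 'host-b'}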
A level of security in the registration of who can provide a particular
group service is required, which again brings us on to requiring a
distributed peer-to-peer secure authentication service.
- A Peer-to-peer secure authentication service is required which is based around the concept of "who your friends are". Whom do you trust, and to do what? (e.g. "I trust this person to talk to me, but their computer contains viruses and they keep sending me spam, so I don't "fully" trust them.") Instant Messaging has had this concept for decades: it keeps all the people selling sex away from your children.
The IM "buddies" concept is a bit like Advogato's very own Trust
Metrics. A more formal version of Trust Metrics, which has PKI Digital
Signatures on top of it, is "keynote". Keynote implements the
(missing) part of the equation that Raph Levien outlines in his paper
on Trust Metrics, and advogato itself implements the missing part of
the equation that keynote achieves [actual real-world use :) ]
- RPC technology needs to be much better used, developed and understood. DCOP is a good example of how *not* to write an RPC mechanism (I heard it took 20 minutes). DCE/RPC was written by legendary visionaries whom I have the utmost respect for - yet DCE/RPC was written as "Enterprise" middleware, not as "World-wide" middleware. There is a plethora of RPC technology out there, yet people still make the mistake of thinking that it is okay to ignore communications errors in their RPC applications. Correction: Object-Orientated RPC technology is needed. We're in the 21st Century, now.
- Mesh Networking needs to become "the norm". There are several implementations: Zeroconf, IEEE 802.15.4 (ZigBee!), TETRA, gnunetd's VPN service; in i2p, mesh networking is even built in, to make it attack-resistant. Note: ZigBee "as is" isn't so useful, due to the limited range of the standard, but the peer-to-peer networking which is described in great detail in the standard *is* useful.
Skype is the perfect model which every "useful" application should strive for (except that it's not free software). Skype is simple, useful, easy to install, easy to use, does the job, doesn't go wrong even when being attacked by the Telecoms and ISP Industries, and provides seamless voice, messaging and video communication while all other VoIP options are effectively living in the stone age. Why? Because Skype solved the problems by using peer-to-peer technology.
GoogleTalk *almost* does the same job - except that, if you've examined the source code, you will notice that it is designed to integrate into Google's infrastructure, not to provide completely independent peer-to-peer communications. Notes indicate that if you cannot contact the network, you must go via a "proxy". No such "proxy" service is provided in the libjinglep2p code: the design is incomplete.
Virtually every free software peer-to-peer application that you encounter contains its own RPC mechanism of some sort, its own DHT algorithm, its own naming service, its own search capability, its own networking capability. All of this technology needs to become ubiquitous.
Hardware: Modular Design, Mesh Networking, Hybrid Fusion of Purpose
The hardware I describe here exists or has existed - in commercial
and volume production today; in research labs; in spy networks operating
since the late 1950s - and yet, and yet, it's not yet provided in a
single device. The reason: competition. There's no money in it, or
it's too powerful, or cannot be controlled. Well, with the planet
falling apart around our ears, it's time to get this hardware brought
up-to-speed - *fast* - before it's too late.
- Small, compact, portable, interchangeable. Those are the design requirements. A hybrid machine which is sufficiently powerful to be a computer, a communicator (instant messaging, SMS, voice and video), a multimedia platform (video and audio) - just absolutely everything.
- Touchscreens. Essential. Mice aren't really portable.
- Laser keyboards are "in". Keyboards are the number one
World Health Organisation health hazard.
- Interchangeable screen/case. With a compact snap-together
design, the modules should be easy to fit into whatever "package" that
the user desires. For example, the same module could fit into a
form-factor case which is a standard mobile phone: 1.5in screen plus
buttons. The same module could fit into a smartphone case, with a 4in
VGA or a 3in QVGA touchscreen. The same module could fit into a
Tablet PC case, with a 1024x768 or 1200x800 touchscreen, USB
extensions etc. and a big battery. The list goes on.
- Micro-projectors from companies like Light Blue Optics. A 1024x768 or 1200x800 projection is perfectly feasible, readable indoors at a projection distance of 15in. That is good enough to be a replacement for a desktop TFT screen.
Also, a recent invention is a 140-lumen lightbulb that uses plasma, can fit into a space approximately 7mm cubed, and uses only 1 watt. That's as powerful as a streetlamp: 6,000 degrees centigrade inside, and only one watt. This little lightbulb makes micro-projectors actually viable: the Mitsubishi PK20 mini-projector, which uses the world's only *commercially* available projector integrated circuit (from Texas Instruments), is only 25 lumens.
- The 2009 Intel 45nm CPU - somewhere around 1.5GHz, somewhere around 0.1 watts, somewhere around $USD 6, because a single wafer will have over 2,000 processors on it. Integrated Video. Integrated NorthBridge. Almost perfect. Makes you wonder why Intel sold their PXA processor design, because that would be a far better CPU for the target market of UMPCs and Smartphones; as the entire mobile phone industry revolves around the ARM CPU, switching to x86 is utterly painful. Perhaps that's a good thing. Alternatives are hard to find (or a compromise): Samsung's S3C2440 series; the AMD Geode LX-800. Intel, as the monopoly, really is the only one offering technology powerful enough. Mr Intel, you have an enormous responsibility!
- Modular Communications
The hardware needs to have tiny snap-on communications modules, so that a traveller going from one place to another can simply obtain what they need and immediately be able to access any wireless networks (point-to-point or mesh) in their area.
GSM/GPRS/EDGE. WCDMA. HSDPA. WiMAX. TETRA. ZigBee. 802.11abgn. Even Low-Earth Orbit Satellite (for the future, that one...)
Ultra-Wide-Band is the one that *really* needs attention, however, as the hardware is simple, low-power, resistant to disruption, and can easily transmit even on a GHz carrier over distances of several kilometres.
Ultimately, however, a module is needed which can be reprogrammed (see the GNU Radio project). A powerful SIMD engine on an ARM core would be perfect, to directly process and create the raw R.F. signal.
- 8-way phased ceramic antennas provide direct, accurate and near-impossible-to-disrupt communications. The array of ceramic antennae does beam-steering, which must be done at the R.F. level, not at baseband. Short of a nuclear strike, it's damn difficult to disrupt. And, because the beam is directed, much less power is needed.
- Aluminium Batteries. Europositron.com have a design for an aluminium-based, sealed 1.5-volt rechargeable cell that is FIVE times more powerful than a NiMH cell. Unlike former aluminium batteries, the aluminium compound is on the anode, not the cathode, so the battery does not turn to slush when discharged: instead, the liquid compound crystallises. Nanotech materials are needed in the manufacturing.
- Voice-based authentication - voice "fingerprinting". 140 metrics can be taken from spoken sentences - sufficient to provide 100% accurate, 99.9% reliable identification, even over a single bar of GSM signal. A two-way authentication mechanism is also part of the invention, which is what "secure" bank systems often forget about. There's no point providing authentication mechanisms that can be "Phished"!
- Voice recognition - a chip has been invented which, when combined with some Content-Addressable Memory, can instantly recognise phonemes and look them up into words - in any language. Currently, the technology can only do about 1,500 words - but that is more than enough to run most people's computing devices.
- An Environmental Sensor Array should be included as standard, with sensors detecting temperature; humidity; Carbon Dioxide levels; Carbon Monoxide levels; artificially manufactured sapphires that can be used to detect poisons in water supplies; even a miniature spectrum analyser (anyone know if this is feasible?). The purpose of this miniature equipment: collecting environmental information to protect people from hazards and, more importantly, the planet we live on. For example: a world-wide, uncensored, incontrovertible distributed database of the levels of Carbon Dioxide in populated areas would finally open people's eyes.
Knowledge, Ontologies, Parser Auto-generation (Reverse-Engineering)
Organising the world's information is a trivial task of applying simple, well-understood distributed algorithms to increasingly-large amounts of hardware. "Well-understood" does not necessarily mean "well-liked". For example, I heard somewhere that Google told people that it uses brute-force search instead of database indexing (which didn't go down too well on Slashdot).
Organising the world's knowledge involves contextual inference of
meaning. Pattern-matching. Classification (known to the Web 2.0
community as "tagging") - but automated. Classification of
classifications involves levels of recursion that quickly damage most
people's brains.
It's *not* trivial. There are a handful of people in the world who understand how to organise knowledge; they mostly work for Intelligence Agencies, and they are absolutely committed to their work. There do exist commercial applications: their price tag usually starts at five figures, and comes with contractors whose daily rate is in the four-figure range, because you simply won't "grok" their tools.
The key to understanding "Knowledge" is in the Vedic scriptures, which are *not* religious texts: they are an expansion of quantum mechanics functions. Many readers will have difficulty accepting this simple statement. The only thing that I can say to you is: it's taken me my entire life so far to understand enough to be able to state what I have, and it's only in recent weeks that I've begun to fully comprehend the significance of Vedic knowledge, its relevance, and the parallels with computing technology. You therefore have two choices: trust my words, or work it out for yourself. In the meantime, the planet goes down the toilet, and we don't have a replacement planet.
The goal of "Organising the World's Knowledge" is simple: to make it
possible to search for any topic, and to immediately be connected to
the world's leading authoritative individuals in that area, and their
work.
To achieve that, tools are needed which can pattern-match
similar-looking information, whether that information is in text,
images, voice, video: anything. Some formats are going to need
more computation than others, but it is not an insurmountable
issue. Many of the formats will need accompanying text written
by humans to provide context - in fact, *all* of the formats will
need to have their context taken into consideration.
Ironically, all of the components needed already exist - even as Free
Software: they just haven't been integrated, because their
significance hasn't been recognised. Here are the components which,
when integrated, will provide the framework to organise the world's
knowledge. I'd like them to be implemented in a peer-to-peer
framework.
- Vedic Science and Modern Science are beginning to merge. Quantum Mechanics. Dempster-Shafer Theory. Theories of Consciousness. "The Secret". "Down The Rabbit Hole (Quantum Edition)". Ironically, organising the world's knowledge is itself, undeniably, a way to connect the dots and truly demonstrate the parallels between Vedic and Modern science.
- Ontology Classification (aka "Tagging" in Web 2.0 terminology) is a key requirement. The AMOS project (part of the E.U.-funded 5th Framework) was completed in 2005. AMOS implements a pattern-matcher which can match similar pieces of text, and it was used to match up source code of free software projects (like... all of them). Whilst the AMOS code doesn't actually tell you *what* it has found, only that it matches something in another text file, it can at least *find* similarities. You then have a front-end tool which presents the "finds" to a human, who then gives "names" to the matches, along with descriptions. Ultimately, however, it's possible to perform matches on the classifications, and on the descriptions of those classifications, along with the context of the many "finds". This is where it gets horribly recursive and awful, and I don't want to go into details.
- The GPLv3 "annotator" which was developed to show
comments made during the license's revision stage is a good example of
a knowledge-based front-end.
- Auto-parser-generation - yes, it's possible. The authors of python-hachoir have written a reverse-engineering-assistance tool which I believe implements Dempster-Shafer's algorithm (sketched below). Amazingly, it's 250 lines of code. The code as-is doesn't create recursive parsers - actually, you don't want it to. Instead, you perform recursive pattern matching on the subsections that were sub-divided at the level above.
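For those unfamiliar with Dempster-Shafer theory, here is a minimal sketch of Dempster's rule of combination - my own illustration, not the hachoir authors' code: two sources assign "belief mass" to sets of hypotheses, agreeing masses multiply, and conflicting mass is renormalised away.

    from itertools import product

    def combine(m1: dict, m2: dict) -> dict:
        """Dempster's rule: combine two mass functions whose keys are
        frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y   # mass assigned to contradictory hypotheses
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Two weak clues that a byte field is a "length" rather than an "offset".
    m1 = {frozenset(["length"]): 0.6, frozenset(["length", "offset"]): 0.4}
    m2 = {frozenset(["length"]): 0.5, frozenset(["offset"]): 0.2,
          frozenset(["length", "offset"]): 0.3}
    print(combine(m1, m2))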
- Parser development assistance and Reverse-engineering tools. GoldParser. OllyDbg. IDAPro. SoftICE. All of these tools assist in producing formal definitions, and understandings, of data.
- Context-based tools: LEO and FreeMind. LEO is the most promising, but FreeMind is mentioned for completeness.
- Genetic Algorithms have always been useful: learning from nature's example. Even over ten years ago, I remember an article which described how random variations in algorithms, run thousands of times and tested for correctness, could result in speed-improvement techniques being "rediscovered", such as loop unrolling and loop-invariant code motion. It doesn't have to stop there: you just have to trust that it works (for a given level of paranoid-testing "trust"). For any requirement, you just have to define the tests correctly and "trust" that the process will end up with the "best-fit" results, even if you don't understand what happened.
- Parsers which accept tree-structures as input, where a special class of input is a tree of left-associative trees with one "character" in each node. Retrieving data from trees is pretty easy: you create a class which has a "GetNextToken" function. Ordinarily, in a "normal" parser, the next character in the stream would be provided (the solution for LALR parsers is left as an exercise for the reader). However, if "GetNextToken" is a recursive-depth tree walker on a tree node... then you could walk an XML tree as input - or even the results of an XPath query - or even the output from a computer program could be "streamed" to a Parser (wow!). This is the principle behind LEO: LEO provides a framework in which this kind of incredibly powerful parser concept actually works. A sketch follows the notes below.
Note to people who are familiar with LALR parsers such as flex and bison: I'm acutely aware that the technique of "GetNextToken" and "lookaheadtoken" is already supported - it's just not recognised as being "useful". As part of a knowledge system, I assure you it most definitely is.
Note also: an LR parser which supports BNF form can be written in FORTH in under fifteen lines of code (including comments). It's so fast that it is completely unnecessary to expand it to have LALR capabilities.
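Here's a minimal sketch of the "GetNextToken as tree walker" idea (in Python, names invented): the same parser loop consumes either a flat string or a depth-first walk of a tree, because all it ever calls is get_next_token().

    class StringTokens:
        """The 'normal' tokeniser: hands back one character of a stream at a time."""
        def __init__(self, text):
            self._it = iter(text)
        def get_next_token(self):
            return next(self._it, None)

    class TreeTokens:
        """The same interface, but walking a nested-list tree depth-first."""
        def __init__(self, tree):
            self._stack = [tree]
        def get_next_token(self):
            while self._stack:
                node = self._stack.pop()
                if isinstance(node, list):       # inner node: push children
                    self._stack.extend(reversed(node))
                else:
                    return node                  # leaf: one "character"
            return None

    def parse(tokens):
        """A stand-in for any parser: it only ever sees get_next_token()."""
        out = []
        while (t := tokens.get_next_token()) is not None:
            out.append(t)
        return "".join(out)

    print(parse(StringTokens("abc")))                 # abc
    print(parse(TreeTokens(["a", ["b", "c"], "d"])))  # abcd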
Laws and Licenses
This is perhaps the most thorny of all the issues. Corporations have
bought so many laws, where possible, that in some ways it is easier to
forget about the countries where this has been done. However, the
point of "World Knowledge" is that the entire... world is involved.
So - here is a list of things that need to be resolved:
- Dissolve or beef up the United Nations. The jury's out on this one. Read "Last and First Men" by Olaf Stapledon. Stapledon points out, in the very first few pages of Chapter 1, the ineffectiveness of the League of Nations, which was replaced by the "United Nations". Well, the same thing is happening to the United Nations: it's becoming irrelevant. Why? Because of the "veto" voting system, and also because of the fear of upsetting those Nations primarily funding the U.N., no really important decisions can ever actually be made. Belonging to the U.N. should be like jury service. Read Orson Scott Card's "Ender" series - look up how Graff has to be ever so careful.
(BTW - a note to American readers: ignore Baxter's introduction, in which he agrees with everything that Stapledon says apart from how Stapledon must have been wrong in his assessment of the U.S.)
- Reestablish Sovereignty. As Sovereign Nations look to the UN to act, it's quite clear that each Nation really does need to get its individual act together for matters that are its National Responsibility. That will include standing up to bullying by other Nations - for example, by not handing over individuals who carry out reverse-engineering or security breaches. (Note to people who are bristling at this: you shouldn't allow access to your systems across National boundaries, should you! Why are your systems accessible across borders? What are you up to in another Sovereign State such that some of your citizens require access to your "secured" servers across National borders??)
- Dissolving WIPO. The World Intellectual Property Organisation needs to be shut down - or its job transformed into one which *protects* sovereign nations from the interference of intellectual "property" hoarding. I consider the concept of "Intellectual Property" to be "slavery". Even the name says so! Intellectual. Property. Intelligence. Owned. Information. Enslaved. It's got to stop. We're not a bunch of savages. Any more.
- Dissolution of Patents. See above. Also: see the documentary "The Corporation". Even the reason why the Patent system was created is flawed: hoarding information so that the inventor can benefit from it is so completely against the grain of an enlightened world that it hardly needs mentioning - but there will be people reading this who genuinely believe - just as the Victorians did about their life-long human captives - that Slavery is perfectly acceptable because "everyone does it".
- Banning of Articles of Incorporation with "profit" as the main focus. This is absolutely essential. There are plenty of alternative Articles of Incorporation under which profits are made, but not at the expense of world resources. Companies House has a boilerplate designed for social clubs.
- FCC and other Spectrum Licensing. The process needs to be much more open: cooperation is the key, not competition. I'm not entirely sure how this should go, but it's mentioned here for completeness. Ultra-Wide-Band, with the means to operate under the noise threshold, in combination with ceramic phased-array antennae, pretty much makes licensing of spectrum pointless anyway, anywhere in the world.
- Other. I know I've missed a few things, here - I just
don't know what they are.
Projects for a World-class Cooperative Economy
Here's a hint at the kind of projects that need to happen, which will give you an idea of why the above enormous list of hardware and software requirements is actually relevant. The list itself is also pretty big, yet the total resources of the Free Software community, as measured in 2006 - two years ago - exceeded those of the world's largest corporate software house by 50%.
Bottom line: it's perfectly feasible. No - overachievable.
- Wikipedia
Wikipedia needs to be turned into a distributed peer-to-peer application. This would actually be a fairly simple task, if the databases provided the object-orientated and distributed functionality described above.
The data can easily be entrusted to an azureus-localhost, gnunetd file share, or freenet, with the servers that are currently being used to store the database ensuring that all of the data is always stored on the azureus-localhost, gnunetd or freenet distributed file share.
Or, a gnunetd plugin could be written which provides a distributed front-end to a SQL database. A distributed SQL database.
Ironically, this would give the Wikipedia Foundation exactly the kind of kick up the backside that they really need, because the distribution of the data would mean that they could be made entirely redundant. Or replaced.
- Free Software replacement for Skype
Skype is the only working Internet communications system, and it's not available for Linux-based smartphones, for example. Smartphones are incredibly complex bits of kit, as any embedded designer will tell you. Linux smartphones are extremely rare: before OpenMoko's "FreeRunner", you had very few choices. The HTC Universal reverse-engineering effort has been going on for over three years, and full hardware support (e.g. for things like switching the five speakers or the three microphones) is still not complete. There are also a couple of IPAQs (ironically also manufactured by HTC), such as the hw6915.
Googletalk doesn't entirely cut it, but it's a good starting point.
- Language Translation Technology.
Both computer languages and natural languages are contextual. There exists a plug-in for Visual Studio which performs translation of source code into *any* supported programming language. It's done by compiling down to CLR and then de-compiling (pretty-printing). This isn't rocket science. Computer Language translation is a perfect example of how the process of parsing, which we as software developers take for granted, needs to be automated.
- Automatic generation of Maps. Openstreetmap is a good start; however, it's being carried out mostly by geeks who go war-driving, and they like to provide the locations of drinking establishments as a priority over-and-above things like speed cameras and hospitals. Uploading the routes taken by conscientious users who like to provide information other than drinking establishments would be extremely useful - and would avoid exorbitant licensing fees from government-sanctioned organisations like the U.K. Ordnance Survey.
In addition, the same conscientious users might also like to add descriptive information about any other sights that they see along a route, such as pretty flowers, majestic views, cow dung and fly-tipping incidents.
- Replacing the proprietary A-GPS system. As people walk around with their spangly-new linux-based hybrid device, it would be great to be able to record the signal strength of the cell-towers, triangulate their position, match that against the GPS coordinates, and upload the derived information into a distributed database. Then, when someone else switches on their GPS device, they can get the ID numbers of all the cell-towers in their area, obtain the approximate GPS coordinates from the distributed database, and feed those to the built-in GPS chipset so that it can get a quicker cold-start lock. A sketch follows.
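A minimal sketch of the crowd-sourced half of this (cell IDs and coordinates invented): devices upload (cell-tower ID, GPS fix) pairs, and a cold-starting device averages the known positions of the towers it can currently see.

    # Hypothetical distributed database of cell-tower sightings:
    # tower id -> list of GPS fixes reported by passers-by.
    sightings = {
        "234-15-1042": [(51.501, -0.142), (51.503, -0.139)],
        "234-15-1043": [(51.507, -0.128)],
    }

    def approximate_position(visible_towers):
        """Average every reported fix for the towers currently in view,
        giving a rough position to feed the GPS chipset for a fast lock."""
        fixes = [f for t in visible_towers for f in sightings.get(t, [])]
        if not fixes:
            return None
        lat = sum(f[0] for f in fixes) / len(fixes)
        lon = sum(f[1] for f in fixes) / len(fixes)
        return (lat, lon)

    print(approximate_position(["234-15-1042", "234-15-1043"]))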
- Rsync with a VFS plugin layer would be an incredibly powerful system. Rsync itself is incredible, yet it is increasingly being focussed on hierarchical filesystems - Unix filesystems, at that. Rsync is actually good for synchronising data across any hierarchical tree structure, yet it has been restricted to just Unix. A VFS plugin would allow people to back-end hierarchical data into TAR compressed archives; XML data structures; IMAP mailstores; and that's just the beginning (a sketch of such an interface follows).
Note: both FUSE and Samba's smbfs command-line utility demonstrate that it's not rocket science to design a VFS layer, even in userspace. About 25 functions are required. smbfs basically overrode the libc standard functions for __open, __close etc. at runtime (via LD_PRELOAD). Red Hat 5 shipped a version of libc in 1998 which removed the required functions, effectively killing the project, but the principle still stands.
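A sketch of what such a plugin interface might look like (the function set is abridged and the names are mine; as noted above, about 25 entry points would be needed in practice):

    from abc import ABC, abstractmethod

    class RsyncVFS(ABC):
        """Hypothetical rsync back-end plugin: anything that can enumerate a
        hierarchy and read/write byte ranges can be synchronised - a tar
        archive, an XML document, an IMAP mailstore..."""

        @abstractmethod
        def listdir(self, path: str) -> list: ...
        @abstractmethod
        def stat(self, path: str) -> dict: ...     # size, mtime, mode...
        @abstractmethod
        def open(self, path: str, mode: str): ...
        @abstractmethod
        def read(self, handle, offset: int, size: int) -> bytes: ...
        @abstractmethod
        def write(self, handle, offset: int, data: bytes) -> int: ...
        @abstractmethod
        def close(self, handle): ...

    # rsync's delta algorithm would then call only these methods, never
    # open(2) or readdir(3) directly, so the "filesystem" being synchronised
    # need not be a filesystem at all.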
- IMAP with Rsync is an incredibly powerful combination,
and in a peer-to-peer environment which does distributed
store-and-forward messaging, with an SMTP front-end thrown in on top,
even more so. The SPAM issues we face with email would entirely
disappear overnight with such a powerful combination. And we would
be able to sync email communications across many devices, by having
the rsync-enabled IMAP server running on our own PDA, or Desktop,
or anything. It's Blackberry on steroids [by the way, are people
aware that the entire worldwide Blackberry infrastructure runs off of
two servers - one in Canada and the other in the United Kingdom?]
- GIT needs more recognition, and for its back-end
to be made a VFS plugin layer. GIT is a type of knowledge store,
and it goes to a lot of trouble to "compress" the data and to
optimise use of network traffic. GIT combined with rsync is
an incredibly powerful combination: synchronisation of portions
of a GIT repository, where the data doesn't have to be unpacked
into a unix filesystem because rsync has a GIT-aware VFS plugin.
- l4linux.org needs to be mainstream. Only a year ago, I heard of an HDTV embedded product being developed. It was a failure. The reason was that, in an embedded environment, the latency of the Linux Kernel killed the whole system. Whilst, overall, the hardware spec looked "good enough", with the AVERAGE response time looking adequate, the worst-case response time (for important things like pressing buttons on the front of the unit) was so dreadful that the whole project had to be canned. This is one area where Linux kernel developers, whose focus is primarily on "Desktops" and "Servers", simply don't understand why worst-case latency matters. The solution is to adopt the l4linux.org source code as a compile-time option in the mainstream Linux kernel source tree, and all that implies. OSKit 1.0 had "issues" - the l4linux team solved them.
- "N of M" crypto. In 1999 I heard
about a cryptographic algorithm which could be used for "cooperative"
encryption and decryption. In some voodoo-magic way, a key could be
shared across M parties, and with some even more obscure voodoo, only
N of those M parties were needed to perform encryption, and, incredibly,
a totally different set of N out of M parties could do decrypts.
In combination with Trust Metrics and the Voice-Fingerprinting, it
becomes possible to provide, rather than a "Public" Key Infrastructure,
a "Distributed Grouping" Key Infrastructure. The usefulness of this
combination cannot be underestimated: it is possible for two or more
parties to sign binding contracts with their voices alone. The implications
of that are quite startling: Banking, Justice and Arbitration services can
be set up at will, on demand! These kinds of things are regularly part
of the Science-Fiction I read, yet they are only possible when you have
the combination of technology outlined here.
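To make the voodoo slightly more concrete, here is a minimal sketch of Shamir secret-sharing, a well-known "N of M" scheme (the scheme I heard about, with different encrypt/decrypt sets, is fancier, but this shows the flavour): the secret is the constant term of a random polynomial over a prime field, and any N shares reconstruct it while fewer reveal nothing.

    import random

    P = 2**127 - 1   # a prime field large enough for a 16-byte secret

    def split(secret: int, n_needed: int, m_total: int):
        """Evaluate a random degree-(n_needed-1) polynomial at x = 1..m_total."""
        coeffs = [secret] + [random.randrange(P) for _ in range(n_needed - 1)]
        poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, poly(x)) for x in range(1, m_total + 1)]

    def combine(shares):
        """Lagrange interpolation at x = 0 recovers the secret from any N shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(secret=123456789, n_needed=3, m_total=5)
    print(combine(random.sample(shares, 3)))   # any 3 of the 5 shares suffice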
- Javascript PyPy "front-end" (e.g. for use in Firefox). This one's a bit obscure, but it's an illustration of the type of improvement necessary - bear with me. Javascript is a dog. Everyone keeps writing their own javascript interpreter: there is one for every single browser out there, and they are all incompatible and all too slow. The PyPy project is an on-demand compiler which has front-end and back-end technology: CLR (Common Language Runtime) is one of the back-ends. On the front-ends, python is the main focus, although I had heard of someone adding a "B" front-end, just as an experiment. So, on the face of it, pypy looks like a python .NET compiler - it most definitely is *not* just a python .NET compiler.
The point is: by providing a Javascript front-end, it should be possible to plug pypy into browsers - including lynx and dillo (!) via simple pre-processing. The end result is that you would have an independent "javascript" engine for browsers, which could be used by any free software project. Also, Google's search engine would come in from the cold of the Web 2.0 phenomenon, because servers could be dedicated to running the AJAX code, creating the HTML and then indexing that. In this way, Web 2.0 AJAX sites end up being brought into the fold. (BTW - it doesn't have to be Google that does that; it could be anyone.)
- Pyjamas and other technology like it needs a much higher priority. Pyjamas is a port of the Google Web Toolkit (GWT) to python. The principle behind these technologies is to turn Browsers into Desktop Applications. The use of this strategy actually has an interesting side-effect: it gets people to partition their applications correctly. The exception to that rule *has* to be the GoogleMail and YahooMail Web 2 apps, as it is crazy to expect people's little PDAs and embedded smart devices to have 512MB of RAM (hence the reason why the javascript engine idea above is so vital, because gmail and yahoomail are just the beginning of the Web 2.0 revolution). Ultimately, it should be possible to have the same source code run under GTK, KDE, Athena Widgets, Curses and Web Browsers, because the front-end is written in Pyjamas or GWT, the communication between the front-end and the back-end is done using JSON-RPC, and the back-end server doesn't necessarily run over the Internet: it can run on the local user's device (a sketch follows). The user then gets to choose the level of interaction they want - depending on their resources - yet they still have the choice of running the application locally. In combination with peer-to-peer distributed infrastructure built in to the service, the goal of redundancy, fault tolerance and usefulness is over-achieved.
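A minimal sketch of that partitioning (method name and port invented): the front-end, whether a browser or a desktop toolkit, only ever speaks JSON-RPC to a back-end which might be across the Internet or on the local device - it neither knows nor cares which.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The back-end: one JSON-RPC method, runnable on localhost or a remote
    # host - the front-end neither knows nor cares which.
    def todo_list(user):
        return ["write article", "build mesh network"]

    class RpcHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            req = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            result = {"todo.list": todo_list}[req["method"]](*req["params"])
            body = json.dumps({"id": req["id"], "result": result}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # A Pyjamas/GWT front-end would POST {"method": "todo.list", ...} here.
        HTTPServer(("127.0.0.1", 8000), RpcHandler).serve_forever()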
- Distributed Distributions such as Debian. Debtorrent was created to take the load off the Debian Mirrors, many of which are having to resort to partial mirroring and are creaking under the load. The security updates are simply not adequately mirrored at all. The entire Debian distribution system needs to become properly distributed, from end to end. Debian's prevalent use of PGP key-signing puts them in a unique position: they could develop and publish the entire Debian Distribution using peer-to-peer distributed technology. Utilising the "old" methods - email and web technology - is clearly not working and clearly not scaling.
Also, the neat thing about the Debian Distribution system - dpkg - is that it need not be "programs" that are distributed with it: it could be Video archives, or music, or DNS zone files (all digitally signed). The application front-ends to "install" software also already exist. The Debian Distribution system could easily be turned into a multimedia broadcasting system, where anyone in the world has the right to broadcast media (or anything else, for that matter). "Democracy" Player would immediately become "useful" if it adopted peer-to-peer-enhanced dpkg infrastructure, providing exactly the technology that the BBC's iPlayer should have been - instead of the £150 million of taxpayers' money spent over four years to placate us with DRM-crippled, single-platform rubbish.
- 3D virtual office Desktop. After looking forward enormously to KDE 4, to see what exciting new technology would be brought out, I was shocked to see something that looks like Vista and XP. This caused me to think about what would actually be "useful" to people. After having seen Compiz Fusion, it occurred to me that a 3D virtual office environment would be easily achievable. Analogies exist between every single concept in the computing world and a real office: a "filing cabinet"; a "telephone"; a "desk". Why are computer interfaces only modelling the desk "top"?? If someone wandered into a real office and found that their wall calendar, their 5ft-high filing cabinet, their potted plants and their wall clock had all been shoved onto their desk, they'd hand in their notice immediately.
Once you have moved "Beyond the Desktop Metaphor" (look up the book
of the same name) you have enormous flexibility and power to work as
you see fit, rather than to have your life dictated to by a desk "top".
For example, a virtual reality "filing cabinet" could expand out in
3D into a "dungeon" of enormous cavernous proportions, with the entire
Library of Congress a small dot on the lantern-lit horizon.
If you believe that this is Science Fiction and that the technology does not exist today, run Compiz Fusion on an Intel Celeron M Ultra-low-voltage 600MHz CPU with an Intel Extreme 3D 855 Graphics chipset, and check the CPU usage. You will be stunned to find that the main CPU runs at only 400MHz, even on a 1280x1024 screen.
Of course, tie-in with gaming systems such as Second Life and Worldforge is the next logical step, and it's in gaming environments where the need for peer-to-peer technology is most definitely felt. Then you have a proper collaborative working environment with which people feel much more comfortable.
- Collaborative Working tools are needed in a collaborative working environment. Web 2.0 technology such as Google Docs just doesn't cut it - you need technology like Abi-Collab (abiword with collaborative editing). Imagine being in your 3D virtual office, talking to someone on a VoIP phone (cue bakelite 3D rendition of a phone). You then want to show them a document, so you "virtually invite" them into your "virtual office". They are then granted access to the document (over a peer-to-peer distribution system). You and their avatar in your "virtual office" begin discussing the document, and you invite your friend to make his own changes...
As the ODF XML-based document is being updated by each user, rsync with an ODF-aware, XML-based VFS plugin is being used to synchronise the changes to each other user...
- Auto-RDF, Semantic Web etc. takes a little explaining, but here we go: using reverse-engineering techniques, accelerated by the automatic generation of parsers (e.g. the hachoir author's reverse.py), it's possible to create "information merge" technology. For example, it would be trivial, using auto-parse technology, to create a web site which merged all of your social networking Web 2.0 identities, friends etc. This technology does exist, but it is usually used to "import" from one Web 2 site into another.
(Note to government agency people who may be wondering what that's all about - look at it this way: all those disparate systems that you have? It's possible to "merge" communications between them, without having to ask for any information from the contractors who charged you an arm and a leg ten years ago to develop those creaking - and non-contractually-compliant - systems. How is this done? By using automated reverse-engineering to analyse the communications, it's possible VERY quickly to have *modern* code written - mostly automatically - that can interoperate with each system. Once you have done that with all the systems you want to connect together, you can communicate via a common framework between all the different systems. It's not difficult: you're just being told that it is, so that you can be charged more of taxpayers' money.)
- Distributed Filesystems for automatic backups are regularly created but are not ubiquitous. Again, this comes down to the "Desktop" metaphor; to the fact that Linux implements filesystems in kernel-space rather than user-space; and, to some extent, to the fact that the Unix userid space is limited (unlike in VMS, where user identification uses approximately 128 bits rather than only 16 or 32).
FUSE - Filesystem in Userspace - for Linux is a "hack" that many "purists" deem unacceptable, particularly because, due to the monolithic and limited design of the Linux kernel, critical filesystem structures are locked while userspace applications make filesystem accesses, potentially resulting in total and unrecoverable deadlock of the Operating System. This kind of design is unacceptable, yet there really aren't any alternatives with as much developer focus. The GNU/Hurd has so much catching up to do that its far superior design is not able to take hold.
Distributed Filesystems - or Global Filesystems - would allow groups of
people - friends - to mirror and back up each others' files,
independent of a centralised server infrastructure. A group of
Developers would be able to share files, help compile applications,
when on the move, when separated by a few miles, or when separated by
continents.
- Peer-to-peer distcc (and dist.net and distjava etc.) - imagine ccache and distcc combined. Now imagine distcc with a peer-to-peer distributed database. The MD5 hashes of the compiler options can easily be used as the DHT "key" into the distributed database (a sketch follows). The result is that even a tiny device or a very slow embedded processor should be able to "compile" a binary in record time, by being able to download object files from all over the world. "make -j200" should be absolutely commonplace.
p2pdistcc has other advantages as well: it makes it possible to do away with binary distributions, yet still have the advantage of "downloads" of "stable" binary distributions. One of the reasons for having "binary distributions" at all is the time it takes to build them: if the object files are regularly precompiled by developers and made available in a peer-to-peer database, then not only does collaborative development go faster, but the distribution maintainer's job is also done more easily and quickly, with no additional load placed on the distribution web servers.
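A minimal sketch of the cache key (helper names mine): hash the compiler, its options and the preprocessed source - much as ccache does - and the 16-byte MD5 digest is the DHT key under which the object file is published.

    import hashlib

    def object_key(compiler: str, options: list, preprocessed_source: bytes) -> bytes:
        """16-byte DHT key identifying one compilation: same inputs, same object."""
        h = hashlib.md5()
        h.update(compiler.encode())
        h.update("\0".join(options).encode())
        h.update(preprocessed_source)
        return h.digest()

    key = object_key("gcc-4.2", ["-O2", "-fPIC"], b"int main(void){return 0;}")
    # dht.put(key, object_file) on the builder; dht.get(key) on the tiny device.
    print(key.hex())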
Why??
The goal is to lower the barrier to entry: to give people the means to uplift themselves. That means that they need to be able to express their needs and desires, and then find someone who can fulfil them.
To achieve that, both hardware and software need to be robust,
resilient and interchangeable. One of the side-effects of that
is that the hardware and software will be free from corporate and
governmental control. If I had said this even one year ago, I doubt
that it would have gone down too well. However, it is becoming
increasingly clear that the status quo is going down the pan, taking
us with it, unless we act. Whilst we still have a chance.
So, by combining peer-to-peer software over peer-to-peer-enabled
hardware, we end up with a self-healing, self-organising network that
is useful to its users, even when part of the network is cut off
from the rest of the world.
If you've read Neal Stephenson's work, you will have immediately recognised "The Librarian". The technology behind that piece of science fiction software - ironically developed by the merging of the CIA and the Library of Congress - really, really exists. It's just ... all over the place. This article points out how that technology can be drawn together to create a World Economy truly, truly worth living in.
Keyboards?, posted 9 Apr 2008 at 01:58 UTC by ncm »
(Master)
What does, "Keyboards are the number one World Health Organisation health hazard" mean?
could it mean RSI?, posted 9 Apr 2008 at 16:47 UTC by sye »
(Journeyer)
Repetitive Strain Injury ?
but the good news is:
http://www.usernomics.com/news/2008/03/keep-typing-wrist-injuries-are-falling.html
Also latest Tech from Japan
virtual keyboard
screen projector
ncm: the number of bacteria on a standard keyboard far exceeds those found on any other household device - by _many_ orders of magnitude.
Really?, posted 10 Apr 2008 at 09:37 UTC by chalst »
(Master)
I've heard of worries about bacteria in keyboards used in intensive care units, but the concern in a household setting sounds overblown. Won't beds & sofas typically have far more bacteria than keyboards?
Googling "site:who.int keyboard bacteria" returns 0 results, FWIW.
folks - one of the reasons for writing up this article is because I've been asked to put an investment proposal together to make the above happen.
i need people to be alerted to the article and the funding opportunity; i need timescales; i need costings; and i need implementation ideas.
i don't want to hear "it can't be done", or "it'll never get done" or "we're just a bunch of loser free software people", i want to hear _how_ this can be made to happen - FAST.
and, of course, if anyone has better ideas, make them known.
lose your egos, negativity, blocks and inhibitions, folks: we (all of us) have work to do. it might even be fun. hooray.
WHO, posted 12 Apr 2008 at 17:17 UTC by lkcl »
(Master)
chalst - thanks for looking, but really - don't worry about it, it's not that important - getting results here is important.
Investors, posted 12 Apr 2008 at 17:22 UTC by lkcl »
(Master)
oh - also, if anyone knows of any other investors who would like to be part of a consortium to make the above happen, please also do let me know.
it's a heck of a lot of work, and it has to happen FAST.
we don't have very long.
ok - something occurred to me which should illustrate matters clearly for you.
intel plans to ramp up production to 100,000 of its 45nm CPUs *per day*. with a CPU cost of only $USD 6, they aim to create a new market of ultra-mobile wireless communications devices, world-wide.
as the cpu speed is adequate, the most likely operating system to be used for such a device is: windows.
imagine the nightmare we in the developed world are in already (at present, only 1% of people in the world have computers) - then imagine the nightmare of windows viruses being spread to, say, 10% of the world's population, over _wireless_ enabled devices.
do you _really_ want that to happen?
so it's your call, folks.
Follow-up Article, posted 14 Apr 2008 at 19:18 UTC by nymia »
(Master)
When is the next article going to be posted? It would be nice to see a progression, though. If this is a part-n-parcel of an advert, you might have to write up a lot of copy just to maintain the movement.
on its way!, posted 15 Apr 2008 at 22:35 UTC by lkcl »
(Master)
i just spotted muhammad yunus' new book Creating a World without Poverty, and chapter 9 describes EXACTLY how he envisions technology being used for good. he also mentions, around page 199, how government control is made irrelevant by IT, and how weak democracies in countries like the United States and Bangladesh are rife with corruption and thus sustain the problems.
(so you don't need _me_ to keep repeating the conclusion "democracy is a weak form of government", because someone who won a nobel peace prize is saying it).
so - yes - it's on its way!
p.s. professor yunus advocates an I.T. Society to End Poverty - ISEP for short. here's an early pre-publication extract from page 184 onwards.
done, posted 16 Apr 2008 at 03:39 UTC by lkcl »
(Master)
i received the following communication, yesterday:
Hello,
I have just read your article on "Singularity of Computing". While I agree very strongly with most of what you say, I totally disagree with your choice of Skype as an example of something that "just works".
There are two fundamental problems with Skype: first, the program itself has grown too complex and fragile, and seems to be basically unmaintainable by those who are responsible for it. There is no other explanation for the ever-increasing number of problems with video, audio, Skype crashes, system crashes, internet connection problems and so on. Second, the entire Skype "SuperNode" system has become unmanageable and completely unreliable, which results in the Skype "presence reporting" being so unreliable as to be useless.
As I said, I agree with your ideas, and your goals, but I think you should choose a different example to use.
my reply was as follows:
thank you very much for your comments.
the designers of skype - the people who actually get results and sort
things out - have moved on from skype, to joost. so yes, i would
kinda expect skype to "decay" somewhat.
it's quite common at around 8 million people for the supernode infrastructure to collapse as you say, and a friend who is behind three layers of NAT'ing finds that he is unable to make calls when the user count goes above this point.
that having been said, i'm inclined to leave things as they are for
now - on the basis that there really is nothing that comes even
remotely close, from free software, to even finding _out_ that
supernodes have to be done properly! :)
... _that_ having been said, i'm inclined to consider posting these
thoughts to the article, if that's ok with you?
(permission kindly granted - thank you!)
so, even though skype works - mostly, for most of us, except when the total number of users is too large - i still hold it as an example to follow, because no free software uncensorable self-healing communications network exists which remotely comes close to having 9 million simultaneous users.
Great Article!, posted 18 Apr 2008 at 03:36 UTC by DeepNorth »
(Journeyer)
lkcl -- This is an excellent article, which I believe was published on relatively short notice. Of course, the ideas have obviously been brewing for a while. I would like to deal with what you have brought up at length, but just do not have the time. However, I would like to make some points:
Ontology Classification -- I so much prefer this term to tags. It is not accessible to most people, but perhaps there is a way to create a term that carries that meaning without resorting to 'Ontology'? I am going to start using your term.
The 128-bit hash should be a minimum of 512, and I strongly lean toward something longer. I expect to be coding hashing based (most likely) on a combination of whirlpool and other nominally strong hashes. Hashes produced will likely be 1024 bits by default.
WRT RPC -- I strongly agree. Could this *get* any more screwed up? I remember an entire project team stopping dead in their tracks because a rebuild of something put Corba signatures out of whack.
FCC and other Spectrum Licensing -- this has gone very, very wrong. The entirety of our available spectrum should be devoted to TCP/IP.
Laws and licenses -- I agree it is a thorny issue. However, it should not be. Every major world body has only served to injure us all. I *do* believe that the world should come under one governing body. However, I think that it should be constituted as the United States was, only this time the constitution should supply a bare minimum of enumerated powers and should remain in force. Meantime, at least the U.N, WIPO, NAFTA, etc all have to go. They can't be saved. They should be dismantled and whatever sovereignty weakening agreements they have caused to happen should be nullified. In my opinion, they were null to begin with.
Reestablish Sovereignty -- yes -- sovereignty, IMO, resides in the body politic. We did not authorize any of that junk. That stuff was all negotiated in shadowy back-room deals, to our detriment. I could go on at length about this. The increasing acceptance that these bodies have any legitimacy is disturbing.
IP -- I made a submission to the EU objecting to software patents. The notion that 'IP' even exists would be laughable, were it not so tragically true. IP conflates chattels, marks in trade, copyrights and patents. So-called 'IP' 'rights' holders wish to take the UNION SET of all rights, entitlements and privileges that accrue to all of them at the price of accepting the INTERSECTION SET of obligations. You know something has gone wrong when a tiny civil infringement of copyright is given the moniker 'Piracy' -- traditionally a capital offence -- punishable by death. Somebody on watch must have fallen asleep.
'M of N' (sic) Crypto -- As it happens, I should be coding 'm of n' encryption in the coming year. It is vital to any type of truly secure environment. Joint custody is needed, and two custodians does not nearly 'cut it'.
These are 'broad stroke', high-level points. I said:
FCC and other Spectrum Licensing -- this has gone very, very wrong. The entirety of our available spectrum should be devoted to TCP/IP.
Naturally, the old CB bands, emergency broadcast bands and updated equivalents for them should be reserved...
thanks, posted 18 Apr 2008 at 18:06 UTC by lkcl »
(Master)
deepnorth, thanks.
yes the article was written in about six hours (and it shows - i've updated it and added missing sections repeatedly since!), and is an amalgamation of pretty much all of the ideas and input i've encountered from dozens of sources and people over several years (i just updated it again *sigh*)
i seem to recall seeing somewhere that Ontology is the classification of "what", Epistemology is the classification of "who", and Methodology is the classification of "how" - and i presume there's a relevance to mentioning "chronology", which would be the classification of "when". "who what when where how why" is all about the link between observer, observed and the process of observation - knowledge, the knower and the process of knowing. it ties in with "Enterprise Management" stuff and all that, and it's both complex and simple at the same time - and _definitely_ misunderstood :)
oh - and definitely a mouthful ha ha :) so you _could_ say "object classification" or... there's another one... can't remember - but ... "tagging", which treats "who" as just another "what", is kinda... good enough for now.
the issue with going to 1024-bit hashes is that, for simple connections, you're overdoing it. a method needs to be devised for context-based "proxying", which identifies a connection or object "globally" by a 1024-bit hash but allocates a much smaller id on each link to save bandwidth. VPNs, NATs and STUN servers could be adapted to this purpose quite easily, making the saving of network traffic somewhat transparent.
i don't necessarily agree with "shutting down" the U.N., WIPO, NAFTA, Patent Offices etc. if they agree to actually do a decent job - to change their roles so as to *reverse* the damage being done in their name - then great. otherwise, we should leave them behind.
regarding patents: a distributed peer-to-peer infrastructure will make working around them easy - especially with source-code-only distributions, compiled up by individuals. it's enshrined in patent law that individuals are allowed to "create" a one-off implementation of any "patented material", to encourage the inventor to make "further inventions and improvements".
source code falls neatly into this category. thus we see the importance of a peer-to-peer-enabled version of ccache and distcc (and their extension to distjava, dist.net etc.), where cached object files can be distributed world-wide, saving compilation time on tiny devices.
just so you know: the E.U. civil servants are well aware of the damage of software patents, and are, as a professional body, entirely and 100% against them. they just need to go one step further and stop patents, full stop.
'M of N' crypto. GOOD MAN. i have a block cipher which is capable of doing 32768-bit encryption. i can't release it in its current form - i need to derive a new version (can't explain why right now). it's sufficiently powerful that it cannot be deployed for individual use - it has to be for "everybody" or for "nobody".
cryptographic algorithms should be used for the purpose for which they were intended, not for anything "less". if an algorithm is capable of securing data for 50 to 70 years, it should be used to communicate the ABSOLUTE MINIMUM of information, in order to protect both the algorithm and the data from plaintext attack. if an algorithm is capable of securing data for only 5-10 years, it is perfectly acceptable to use it for "perishable information" (a military technical term for information that is only "useful" for up to 24 hours - e.g. the time of a meeting the following day).
so i call the algorithm i created "sovereign grade", because it's far more powerful than "military grade": it cannot be deployed even amongst military networks, or for internal use within a country, in case it is used to engineer a coup!
this illustrates why "sovereign grade" encryption has to be for everybody, or for nobody.
notes, posted 15 May 2008 at 10:50 UTC by lkcl »
(Master)
p2psockets
babel
http://www.oreillynet.com/onlamp/blog/2007/11/mesh_networks_on_olpc_its_all_1.html
peerd
notes2, posted 15 May 2008 at 19:57 UTC by lkcl »
(Master)
http://sourceforge.net/projects/bigdata/
notes3, posted 15 May 2008 at 20:03 UTC by lkcl »
(Master)
http://codinginparadise.org/paperairplane/
http://hyperscope.org/
http://openlibrary.org/
http://gearsblog.blogspot.com/
notes4, posted 15 May 2008 at 20:04 UTC by lkcl »
(Master)
http://lists.samba.org/archive/rsync/2005-April/012185.html
radiantdata - even supports mysql and postgresql. needs to be free software: proprietary per-OS, per-kernel binary modules are completely unacceptable.
p2p filesystems, posted 15 May 2008 at 20:23 UTC by lkcl »
(Master)
http://regal.lip6.fr/spip.php?article74
http://ralyx.inria.fr/2007/Raweb/regal/uid28.html
http://offsystem.sourceforge.net/
http://p2p-fs.sourceforge.net/
notes5, posted 16 May 2008 at 14:04 UTC by lkcl »
(Master)
http://www.moblin.org/projects/projects_connman.php
notes6, posted 16 May 2008 at 16:53 UTC by lkcl »
(Master)
http://github.com/jwiegley/git-issues/
notes7, posted 16 May 2008 at 16:54 UTC by lkcl »
(Master)
http://searchengineland.com/080512-000100.php - powerset (knowledge search engine of wikipedia)
notes8, posted 16 May 2008 at 16:59 UTC by lkcl »
(Master)
http://www.organicdesign.co.nz/PeerFS
notes9, posted 16 May 2008 at 18:10 UTC by lkcl »
(Master)
http://www.semanticweb.org/wiki/Semantic_MediaWiki
notes10, posted 23 May 2008 at 19:26 UTC by lkcl »
(Master)
http://www.cs.cornell.edu/~bwong/cubit/
hinternet, posted 27 May 2008 at 13:55 UTC by lkcl »
(Master)
http://en.wikipedia.org/wiki/Hinternet - also see AMPRnet
notes11, posted 30 May 2008 at 11:00 UTC by lkcl »
(Master)
http://www.pelago.com/
http://buglabs.net
notes12, posted 30 May 2008 at 16:50 UTC by lkcl »
(Master)
pixelqi.com - designers of the OLPC 1200x900 7.5in screen.
notes13, posted 30 May 2008 at 17:48 UTC by lkcl »
(Master)
http://www.ideastorm.com/article/show/10089234/Modular_Computer_UMPC_or_phone_or_Laptop_or_PDA_you_get_to_choose
notes14, posted 2 Jun 2008 at 10:44 UTC by lkcl »
(Master)
http://arstechnica.com/news.ars/post/20080314-verizon-embraces-p4p-a-more-efficient-peer-to-peer-tech.html
displaylink.com, posted 2 Jun 2008 at 13:02 UTC by lkcl »
(Master)
http://www.videsignline.com/products/207602760 - set up by ndiyo.org apparently. see http://www.ndiyo.org/systems
http://news.bbc.co.uk/1/hi/technology/7430768.stm
http://news.bbc.co.uk/1/hi/technology/7425099.stm
notes15, posted 2 Jun 2008 at 16:47 UTC by lkcl »
(Master)
opencyc.org
http://www.scienceblog.com/cms/wordlogic-bank-help-build-%E2%80%98thinking%E2%80%99-machines-16567.html