LDAP, NetInfo, ActiveDirectory... The problem has been solved many times,
but why are
we still fighting data management issues? Top-down or bottom-up, we
can't seem to
get it right the first time, and once our directory service gets
inconsistent, its overall use deteriorates quickly...
With the final arrival of Windows 2000, a lot of press has been given to
the hype of ActiveDirectory as well as the good and bad realities of its
practical deployment. Recent articles have shown that the conglomeration
of standard protocols and GUI management alone does not necessarily solve
the large-scale IT management problem. The solutions, which involve many
complex components, can become a bear to deal with in the simple
workgroup scenario. In the end, the solution becomes yet another problem
to manage; ActiveDirectory is a good example.
In simpler times, the solution existed, albeit in a platform-limited and
relatively expensive form: NetInfo. A lot of what became the registry,
NT domains, and finally ActiveDirectory owes much to some reverse
engineering of NetInfo. However, NetInfo as it exists and existed way
back when did not handle the myriad of issues that face system
solutions today, including but not limited to security, greater
complexity of client applications, and a greater quantity of
non-integrated (or even integrated) components that it must manage.
Backwards compatibility further exacerbates the new problem.
At the same time, its simplicity still makes it a sort of holy grail in
the directory services community. It has its faults, but its data
model and its management tools are still ideal for everything from the
extremely small workgroups that seed an organization up to the
large corporate-wide directory service requirements that any
organization will build up to.
LDAP, Kerberos, DNS, and similar component technologies that make up
modern directory services require much forethought as to the eventual
hierarchy of an organization. The tools that enable their management make
the simple case non-trivial, and present problems in reconciling
directory trees and their data that one will someday wish to merge. The
primary reasons for this are the lack of atomic transactions and the
inability to easily reference objects. The requirement to reference by
name, and more importantly the strictness of having every entry contain
the full DN, helps make LDAP a hard technology to manage in many
situations.
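The DN problem above can be illustrated with a toy sketch (not a real LDAP library; the entries and names are invented). Because every entry carries its full DN as a string rather than a reference to its parent object, renaming one container forces a rewrite of every descendant's DN:

```python
# Toy model: each entry stores its full DN as a string, as in LDAP.
entries = {
    "ou=eng,dc=acme,dc=com": {"ou": "eng"},
    "cn=alice,ou=eng,dc=acme,dc=com": {"cn": "alice"},
    "cn=bob,ou=eng,dc=acme,dc=com": {"cn": "bob"},
}

def rename_subtree(entries, old_suffix, new_suffix):
    """Rewrite every DN under old_suffix -- cost grows with subtree size."""
    renamed = {}
    for dn, attrs in entries.items():
        if dn == old_suffix or dn.endswith("," + old_suffix):
            dn = dn[: len(dn) - len(old_suffix)] + new_suffix
        renamed[dn] = attrs
    return renamed

# Renaming one ou touches all three entries, not one.
moved = rename_subtree(entries, "ou=eng,dc=acme,dc=com",
                       "ou=engineering,dc=acme,dc=com")
```

If entries instead held references to parent objects, the same rename (or a merge of two trees) would touch a single node; the string-DN model is exactly what makes merging hierarchies painful.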
This leads us to my proposition: a directory service solution for a NOS
must enable flexible and reliable solutions for the single machine
up through the local LAN, while at the same time enabling top-down
management from the global DS solution. Obviously, the bottom-up and
top-down approaches must marry with some reconciling middleware.
A possible scenario could be the use of NetInfo as the local "system" to
"workgroup" DS solution, tied to a generic LDAP store backend that is
seeded in a TBDed unique way. The initiation, or seeding, of a workgroup
would involve some standard templated solutions, akin to picking m4
components in one's creation of a sendmail.cf file. This enables
both flexibility and consistency of data model. Through these templates,
a workgroup can elect to adhere to top-down policies and other DS
requirements via a corporate-tree LDAP server. In between the two lies a
daemon that regularly reconciles the data view of the local systems with
the top-down design of the corporate-wide directory.
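The m4-style seeding idea can be sketched minimally. This is a hypothetical illustration, not part of any real NetInfo or PADL tooling; the template names and keys are invented. A new workgroup picks named fragments, which are merged in order into its initial directory data, the way one picks m4 features when generating a sendmail.cf:

```python
# Invented template fragments a workgroup could choose at seeding time.
TEMPLATES = {
    "base":        {"policy": {"passwd_min_len": 8}},
    "nfs_homes":   {"mounts": {"/home": "server:/export/home"}},
    "corp_policy": {"policy": {"passwd_min_len": 12,
                               "subscribe": "corp-tree"}},
}

def seed_workgroup(name, fragments):
    """Merge chosen fragments in order; later fragments override earlier."""
    data = {"workgroup": name}
    for frag in fragments:
        for section, values in TEMPLATES[frag].items():
            data.setdefault(section, {}).update(values)
    return data

# A workgroup opting in to top-down corporate policy:
wg = seed_workgroup("eng", ["base", "corp_policy"])
```

Because every workgroup starts from the same fragment library, the data model stays consistent across workgroups even though each one picks its own mix, which is what makes later reconciliation with the corporate tree tractable.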
The key here is the management of the data. On the low end, NetInfo
management would be sufficient, defining local users and enabling
immediate configuration-free participation by new members of the
workgroup. The primary management tools focus on simple day-to-day tasks
with a simple data view. At the other end, LDAP management tools enable
control of the hierarchy and the ability to push management data to all
subscribing workgroups. The data view here is quite different, and it
can again be tailored to maintain simplicity (possibly not drilling down
too far unnecessarily). Advanced options at both ends help modify the
view on demand to help subscribe to the corp directory or reconcile
local data into the larger corporate directory.
I've been told the problem encountered historically is that "the data
representation models tend to require relationships between objects and
such which are hard to maintain." Nicholas Williams further states that a
transaction model is a necessity, or otherwise the system must handle
inconsistencies. Our solution thus has a great requirement in its
capabilities. Finally, Nicholas adds:
"I advocate a separation between the database and the name services. I
don't mind the name services hitting the database (replicas) directly,
in fact I want that, as long as the separation is fairly clear. Granted
that either way you end up coding schema information into your name
services code, but if you modularize that code then you minimize the
impact of future radical schema changes (schema extensions shouldn't
require code changes, unless you wish to implement new name service
functionality)."
Remember, this is primarily for management. There is nothing to keep
clients from referring directly to the corporate DS, exclusively or in
tandem. The key thing is that through both approaches, the directory
service can evolve as your data and your company evolve.
Keep it simple, and prepare simple data views to stave off encroaching
complexity. A management system should always be a part of the solution,
and not add to the problem...
Having two dissimilar systems tied together seems to be the opposite of
simplicity. Doing the necessary access-control semantics to support
NetInfo-style management in an LDAP directory seems like the proper
solution to me. I feel like I'm missing something here...
It seems like the core thing that you need is a way for a newly
appearing machine (possibly the server for a workgroup) to become part
of the directory so that it can be managed without needing to have prior
authorization or configuration in the directory server for that
particular machine. This implies compromising a certain level of
security--but with some protocol design you can mitigate this risk so it
only happens once per host (i.e. let the machine register its key with
the server and from there on out have access to update information about
itself). Such a system could also be reconfigured to a more secure mode
in which top-down authorization is required for any access to the
directory. But again, I think maybe I'm describing a solution to a
different problem...
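The "register a key once, then only that machine may update itself" idea is essentially trust-on-first-use, the same compromise SSH makes with host keys. Here is a minimal sketch under invented names (Registry, the hostname, the attribute set are all illustrative); a real system would use public keys rather than a shared secret:

```python
import hashlib
import hmac
import secrets

class Registry:
    """Trust-on-first-use: an unknown host may register a key exactly
    once; later updates about that host must prove possession of it."""

    def __init__(self):
        self.keys = {}      # hostname -> shared secret
        self.records = {}   # hostname -> attributes

    def register(self, host, key):
        if host in self.keys:
            raise PermissionError("host already registered")  # one shot
        self.keys[host] = key

    def update(self, host, attrs, mac):
        expected = hmac.new(self.keys[host],
                            repr(sorted(attrs.items())).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            raise PermissionError("bad MAC")
        self.records[host] = attrs

reg = Registry()
key = secrets.token_bytes(32)
reg.register("ws1.example.com", key)          # first contact: accepted

attrs = {"os": "linux"}
mac = hmac.new(key, repr(sorted(attrs.items())).encode(),
               hashlib.sha256).hexdigest()
reg.update("ws1.example.com", attrs, mac)     # authenticated update
```

The security exposure is confined to the first contact, as the poster says; flipping to the stricter mode just means refusing register() calls that lack top-down authorization.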
Relying on periodic updates from one data source to another one seems to
me like it is not a good thing. If some applications use the local data
source and others use the master data source, it will only lead to some
applications being built which reuse code and end up getting data from
both sources, which may be inconsistent. In order to do transactions
properly it seems like you would need to not consider the transaction
committed until the data changes were pushed all the way to the top...
and it seems like in order to do that you must lose most of the benefits
of the system described.
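The inconsistency window being described can be shown in a few lines. This is a deliberately toy illustration (the stores and keys are invented): with periodic reconciliation, a write is visible locally right away but not at the master until the next sync, so two applications reading different sources disagree in the interim:

```python
local, master = {}, {}

def write_local(key, value):
    # "Committed" locally immediately -- no wait for the master.
    local[key] = value

def sync():
    # The periodic push a reconciling daemon would perform.
    master.update(local)

write_local("uid=alice", {"shell": "/bin/zsh"})
stale = master.get("uid=alice")   # master has not seen the write yet
sync()
fresh = master.get("uid=alice")   # now consistent
```

Closing the window means not acknowledging the write until sync() completes, which is exactly the trade-off the poster notes: you get consistency back, but you lose the offline, low-latency local operation that motivated the split in the first place.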
One thing that I seem to recall missing from NetInfo is the ability to
do good delegated administration and other related access control
tasks. This is somewhat easier to implement with LDAP... and is often a
need even for smaller groups. Or perhaps NI has these features and I
just don't know enough about it.
Actually, I've heard that PADL (the people who make pam_ldap and
nss_ldap) are working on porting NetInfo to use an LDAP back-end. So,
for the local system, NetInfo is the system management interface,
regardless of whether it stores the configuration in a
network-wide LDAP directory, a local database, or a local LDAP server.
For some time, I've wished someone would create a version of the LDAP
client APIs that could (in a perhaps simplified manner) work directly
with a local database, rather than using the LDAP protocol over
sockets. This way, servers could store configuration information using
the LDAP client APIs, and be ignorant of whether it was going to a
directory on the network, or a local database (without the end-system
having the complexity or potential security holes of a local LDAP
server). Perhaps NetInfo would provide the client API for this.
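That API split could look something like the following sketch (the interface and class names are invented, and a real version would mirror the LDAP search/modify calls rather than a simple get/put). Callers program against one small interface and never learn whether the backend is a local database or a network directory:

```python
import sqlite3
from abc import ABC, abstractmethod

class DirectoryBackend(ABC):
    """What callers see: one tiny interface, backend unknown to them."""
    @abstractmethod
    def get(self, dn): ...
    @abstractmethod
    def put(self, dn, value): ...

class LocalBackend(DirectoryBackend):
    """Local store: no sockets, no listening daemon to compromise."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS entries (dn TEXT PRIMARY KEY, val TEXT)")

    def get(self, dn):
        row = self.db.execute(
            "SELECT val FROM entries WHERE dn = ?", (dn,)).fetchone()
        return row[0] if row else None

    def put(self, dn, value):
        self.db.execute(
            "INSERT OR REPLACE INTO entries VALUES (?, ?)", (dn, value))

# A network backend speaking real LDAP would implement the same two
# methods; the calling server code would not change at all.
store: DirectoryBackend = LocalBackend()
store.put("cn=ntp,ou=services", "server=pool.ntp.org")
```

The point of the abstraction is exactly what the poster wants: the end system gets directory-style configuration without carrying the complexity or attack surface of running a local LDAP server.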
Note too that I'm not advocating servers storing their
configuration information in a binary database of any sort. My plan
was/is to have translator scripts that would translate native
configuration files to directory entries and vice versa. The benefits
of this approach are a) You don't stick people with a potentially
error-prone binary database (think Windows); b) Command-line commandos
can edit configuration files and use them to update the directory; c) One
less network service to potentially crash and cause network disruptions.
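A translator script of the kind described can be very small. As a sketch (the attribute names loosely follow the usual posixAccount conventions, and the round-trip is lossy about the password field since directories would not store it this way), here is /etc/passwd text going to directory-style entries and back:

```python
def passwd_to_entries(text):
    """Native /etc/passwd lines -> directory-style entry dicts."""
    entries = []
    for line in text.strip().splitlines():
        name, pw, uid, gid, gecos, home, shell = line.split(":")
        entries.append({"dn": f"uid={name},ou=people",
                        "uid": name, "uidNumber": uid, "gidNumber": gid,
                        "gecos": gecos, "homeDirectory": home,
                        "loginShell": shell})
    return entries

def entries_to_passwd(entries):
    """Directory entries -> native file, for command-line editing."""
    return "\n".join(
        f"{e['uid']}:x:{e['uidNumber']}:{e['gidNumber']}:"
        f"{e['gecos']}:{e['homeDirectory']}:{e['loginShell']}"
        for e in entries)

sample = "alice:x:1000:1000:Alice:/home/alice:/bin/bash"
entries = passwd_to_entries(sample)
round_trip = entries_to_passwd(entries)
```

Because both directions exist, a command-line commando can edit the flat file and push it into the directory, or regenerate the file from the directory, which is the whole point of avoiding a binary store.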
Some of these ideas were developed in relation to Project Nehalem, a
not-yet-started sub-project of my distro LNXS.
wcooley hit upon it a bit more. I threw these ideas out to get some
feedback. There is a lot more thought behind it, but I'd rather see some
good discussion as to the merits of LDAP, Netinfo, etc.
The ideas I stated came by way of conversations with Luke Howard, who
heads up PADL. His work on both LDAP and NetInfo is pretty well known,
and we are looking at how best to manage everything from the single
system, to the small cluster, up to the large enterprise. The LNXS
project interests
me, since we each have interests in different vehicles to test out our
ideas (Darwin, TurboLinux being examples).
I'd like to see the build-out of what is necessary in LDAP itself.
Sadly, there are issues in its current implementation. Specifically,
DNs are implemented as string objects instead of references to other
objects. That's OK in a single namespace, but not good when you have
to merge multiple hierarchies. In a growing organization, it's key to be
able to use the technology in an initial bootstrap stage and let it
evolve with the organization. How best to achieve that? There are some
ideas that tend to involve messing with open protocols (in an open way,
but against the spirit of their design).
I'd just like to hear what people think is a viable direction to solve
the problem. Again, email@example.com, just in case this forum dies.