So slaughter is definitely getting overhauled.
There have been a few interesting discussions going on in parallel about my slaughter sysadmin tool.
I've now decided there will be a 2.0 release, and that will
change things for the better. At the moment there are two main parts to the system:
- Downloading policies
These are instructions/Perl code that are applied to the local host.
- Downloading files
Policies are allowed to download files, e.g. /etc/ssh/sshd_config.
Both of these occur over HTTP fetches (SSL may be used), and there is a
different root for the two trees. For example you can see the two
public examples I have here:
A fetch of the policy "foo.policy" uses the former prefix, and a fetch
of the file "bar" uses the latter. (In actual live usage I use a
restricted location because I figured I might end up storing sensitive
things, though I suspect I don't.)
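To make the two-tree layout concrete, here is a minimal sketch in Python. The prefix URLs are hypothetical stand-ins, since the real ones are deployment-specific:

```python
# Hypothetical prefixes: in a real deployment these come from the
# slaughter configuration, and each one points at a different tree.
POLICY_PREFIX = "http://example.com/slaughter/policies"
FILE_PREFIX = "http://example.com/slaughter/files"


def policy_url(name):
    """A policy fetch, e.g. "foo.policy", appends to the policy prefix."""
    return POLICY_PREFIX + "/" + name.lstrip("/")


def file_url(name):
    """A file fetch, e.g. "bar", appends to the file prefix instead."""
    return FILE_PREFIX + "/" + name.lstrip("/")
```

So fetching "foo.policy" and fetching "bar" resolve against two entirely separate roots, even though both go over HTTP.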
The plan is to update the configuration file to read something like this:
transport = http
# Valid options will be
# rsync | http | git | mercurial | ftp
# each transport will have a different prefix
prefix = http://static.steve.org.uk/private
# for rsync:
# for ftp:
# for git:
# for mercurial:
I anticipate that the HTTP transport will continue to work the way it
currently does. The other transports will clone/fetch the appropriate
resource recursively to a local directory - say
/var/cache/slaughter. So the complete archive of files/policies will be available locally.
The HTTP transport will continue to work the same way with regard to
file fetching, i.e. fetching them remotely on-demand. For all other transports the "remote" file being copied will be pulled from the local cache.
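As a rough sketch of how the non-HTTP transports might mirror the remote tree into the local cache; the exact command lines below are my assumptions for illustration, not the final implementation:

```python
# Sketch: build the mirroring command for each transport.  HTTP is the
# odd one out -- it fetches on demand rather than cloning a tree.
CACHE = "/var/cache/slaughter"


def fetch_command(transport, prefix, cache=CACHE):
    """Return the argv that would mirror `prefix` into `cache`.

    The commands are illustrative guesses at what each transport
    could run; slaughter 2.0 may differ in detail.
    """
    commands = {
        "rsync": ["rsync", "-az", "--delete", prefix, cache],
        "git": ["git", "clone", prefix, cache],
        "mercurial": ["hg", "clone", prefix, cache],
        "ftp": ["wget", "-m", "-nH", "-P", cache, prefix],
    }
    if transport == "http":
        raise ValueError("http fetches on demand; nothing to mirror")
    return commands[transport]
```

For example, fetch_command("rsync", "rsync.company.com::module/") yields an rsync invocation that one would hand to subprocess; afterwards the complete archive of files/policies sits under /var/cache/slaughter.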
So assuming this:
transport = rsync
prefix = rsync.company.com::module/
Then the following policy will result in the expected action:
if ( UserExists( User => "skx" ) )
{
    FetchFile( Source => "/global-keys",
               Dest   => "/home/skx/.ssh/authorized_keys2",
               Owner  => "skx",
               Group  => "skx",
               Mode   => "600" );
}
The file "/global-keys" will refer to /var/cache/slaughter/global-keys, which will already have been downloaded.
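The resolution rule can be sketched like this; the function and paths are illustrative, not slaughter's actual internals:

```python
CACHE = "/var/cache/slaughter"


def resolve_source(transport, source, prefix):
    """Map a policy's Source argument to where the bytes come from.

    With the http transport the file is still fetched remotely on
    demand; with every other transport it is read from the local
    mirror under /var/cache/slaughter.
    """
    if transport == "http":
        return prefix + "/" + source.lstrip("/")
    return CACHE + "/" + source.lstrip("/")
```

With transport = rsync, a Source of "/global-keys" resolves to /var/cache/slaughter/global-keys; with transport = http it resolves to a remote URL, exactly as today.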
I see zero downside to this approach: it allows the HTTP stuff to continue to work as it did before, and it allows more flexibility. We can benefit from the integrity checking built into git/mercurial, which lets us know the remote policies are untampered with, and from the speed gains of rsync.
There will also be an optional verification stage. So the code will roughly go like this:
- 1. Fetch the policy using the specified transport.
- 2. (Optionally) run some local command to verify the local policies.
- 3. Execute policies.
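The three steps above could be wired together roughly like this; it's a sketch, and the shape of the verification hook is my assumption about how the optional stage might look:

```python
import subprocess


def run(fetch, verify_cmd, execute):
    """Fetch policies, optionally verify them, then execute them.

    `fetch` and `execute` are callables standing in for the chosen
    transport and the policy runner; `verify_cmd` is an optional
    local command (e.g. a syntax or checksum check) run over the
    fetched tree, whose nonzero exit status aborts the run.
    """
    fetch()                                     # 1. fetch via transport
    if verify_cmd is not None:                  # 2. optional verification
        if subprocess.call(verify_cmd) != 0:
            raise RuntimeError("verification failed; not executing")
    execute()                                   # 3. execute policies
```

The key property is that step 3 never runs if the local verification command fails.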
I'm not anticipating additional changes, but I'm open to persuasion.
Syndicated 2012-10-24 07:28:56 from Steve Kemp's Blog