Older blog entries for skvidal (starting at number 425)

mockchain.py

Talking to halfline today and yesterday I decided to spend a little time playing with this for another project.

mockchain

mockchain.py [options] chrootname file.src.rpm [file1.src.rpm] [file2.src.rpm] ...

Builds a series of srpms in mock, one at a time. After each successful build
of a package it adds the resulting packages to a local repo, which
is then available to the next package to satisfy buildreqs.

options:
 -c - continue on package build failure - by default it will exit if
      a package fails to build. Set this if you want it to try to continue
      with the rest of the packages.

 -l path - set the path to put the results/repo in. This path needs to be
    readable by users other than you, since the mock process doesn't run
    as you.

This does not try to sort the packages by build order b/c that is too much 
effort and not obviously doable with the buildreq information we have.

The build process when you use -l is idempotent so a package which has
already been successfully built will not be built again.
If you want to force the rebuild of a package which has been built 
successfully simply remove the 'success' file from the dir with 
the package results in it.
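
For example, a run over a couple of srpms might look like this (the chroot name, output path and srpm names here are all made up for illustration):

mockchain.py -l /srv/mockchain-output fedora-16-x86_64 foo-1.0-1.fc16.src.rpm bar-2.0-1.fc16.src.rpm

The repo under the -l path grows as each package builds, so bar's buildreqs can be satisfied by the foo packages that were just built.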

Syndicated 2012-04-13 20:28:06 from journal/notes

ansible basic operating theory explained

Talking on irc tonight pointed out something lacking in the ansible docs. Specifically, an explanation of the dirt-simple nature of how it works.

0. ansible has modules – modules are just executable code/scripts in any language you want – there are only 2 requirements:
a. that whatever language you want to write them in is available on the remote system(s)

b. that the modules return json as their results.

1. ansible connects to a host (or many hosts) using ssh

2. ansible shoves across the module(s) you want to run

3. ansible shoves across the arguments you want to pass to the module(s)

4. ansible runs the modules with the arguments

5. ansible gets back json from the modules and sends it to the calling script/program to be handled and/or displayed.
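
To make step 5 concrete: what comes back from, say, the ‘command’ module is just a blob of json per host – something roughly like this (the exact keys vary by module and version, this is just illustrative):

{"rc": 0, "stdout": " 20:15:01 up 3 days, ...", "stderr": ""}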

 

Now – for a lot of people the only module they really care about is the ‘command’ or ‘shell’ module – which just lets you run a command directly on the system and returns the results to the calling program. Pretty handy for any number of things. However, you can write a custom module – which is really nothing more than a script that ansible runs remotely. Ansible just handles the communication/execution part to multiple systems at the same time and returns the results back to you, sensibly.
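
To make that concrete, here is a sketch of about the smallest custom module you could write – just a python script that prints json (everything here is hypothetical and for illustration; a real module would also parse whatever arguments ansible hands it):

#!/usr/bin/env python
# hypothetical minimal ansible module - the only hard requirement is that it
# prints json on stdout (a real module would also read the arguments ansible
# passes to it; this toy ignores them)
import json
import platform

result = {"hostname": platform.node(), "changed": False}
print(json.dumps(result))

Point ansible at a script like that and whatever json it prints is what comes back to the calling program.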

So that’s the dead-simple version of what ansible can do.

How do you as an admin wanting to test it out get started?

git clone https://github.com/ansible/ansible.git

cd ansible

echo "somehost-i-have-root-on" > ~/ansible-hosts

. ./hacking/env-setup

If you have a root ssh key set up then you can run:

bin/ansible all -i ~/ansible-hosts "uptime"

If you don’t have a root ssh key set up then run:

bin/ansible all -k -i ~/ansible-hosts "uptime"

 

It will prompt you for the root password.

Add more hosts to ~/ansible-hosts to talk to more at the same time.
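
The hosts file is just one hostname (or ip) per line, so something along these lines (hostnames made up):

host1.example.com
host2.example.com
db1.example.com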

 


Syndicated 2012-04-11 05:29:40 from journal/notes

playing with ansible more

I’ve been working with/on ansible off and on for a couple of weeks now. I’ve got a simple reinstall playbook created and the basics of how we create new builders.

I was also working on using ansible as an api to port over some tools from func. After getting a bit sidetracked getting it to have an optparser function so I didn’t have to duplicate all those options in every script, I ported the func-host-reboot script over and improved it a bit:
https://github.com/ansible/ansible-contrib/tree/master/scripts/host-reboot

The advantage of using the python interface is that I don’t have to write things in the playbook language when I really want to do something more involved, and I like the idea of being able to run multiple commands BETWEEN hosts and interplay their results.
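
For the curious, the python interface boils down to building a runner and calling run() on it. A rough sketch, from memory of the version I’ve been working against (argument names and the host file path are assumptions – check the source):

import ansible.runner

# run 'uptime' on every host in the inventory and collect the json results
runner = ansible.runner.Runner(
    host_list='/home/me/ansible-hosts',  # made-up path
    pattern='*',
    module_name='command',
    module_args='uptime',
    forks=10,
)
results = runner.run()

# results come back split into hosts we reached and hosts we could not
for host, res in results['contacted'].items():
    print("%s: %s" % (host, res.get('stdout', '')))
for host, res in results['dark'].items():
    print("%s: unreachable (%s)" % (host, res))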

I’m thinking that, coupled with kickstart, we should be able to streamline all of our provisioning. Additionally, I’m thinking that the api (esp for the playbooks) should let me play with more ad-hoc builder creation and package build submission.

I’m trying to figure out where this fits.

We have pieces that can do all of these things, mostly, but all of them require some other kind of setup to make work. I have 3 f16 hosts, 1 el6 box and 1 el5 box that I’m testing these against. I run against all of them at the same time and, other than sshd and python, none of them have anything specific installed on them for ansible to be able to run.

Is a lightweight “clientless” mgmt system a good idea? Is it enough of a feature to make it worth pursuing?

It feels like it helps overcome the pain-in-the-ass quality that is setting up most systems-mgmt infrastructure.


Syndicated 2012-04-11 03:53:32 from journal/notes

fedorapeople.org upgrade

The outage today was to move fedorapeople to a new guest on a machine with more disk space and bandwidth. Things seemed to have gone smoothly. The big change is we made a space for projects that is outside of the normal quota’d userspace on fedorapeople. If you need a project space that’s web accessible (or even not web accessible) file a ticket or let someone in fedora infrastructure know and we can get the space set up for you lickety-split.


Syndicated 2012-03-27 00:21:45 from journal/notes

func-host-reboot

A week and a half ago I posted about func-vhost-reboot. Now that that is working and functional I realized I needed a func-host-reboot, too – for the non-vhost-wide reboots.

func-host-reboot

it’s pretty simple – it reboots a set of hosts and checks to make sure they all come back online. It also adds a ‘--one-at-a-time’ option which is pretty handy. For example, if you want to bounce all your nameservers for some reason but you don’t want them ALL bouncing at the same time, you’d pass --one-at-a-time or -o. It will reboot them one at a time, waiting for the one being rebooted to come back before proceeding to the next one.
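
So, for example, bouncing a couple of nameservers one at a time would look something like this (hostnames are made up, and I’m sketching the invocation from memory – check the script’s help output):

func-host-reboot --one-at-a-time ns1.example.org ns2.example.org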


Syndicated 2012-03-19 20:56:06 from journal/notes

func-vhost-reboot

I wrote this a couple of weeks ago but I hadn’t had a chance to test it in real use until yesterday:

func-vhost-reboot

It assumes you’ve created groups for your vhosts using this:
func-groups-by-virthost

Then you run:
func-vhost-reboot fqdn-of-virthost [@virthost-group]

It connects to all of the guests on the virthost, looks to see if there are any users logged in and displays that info if there are. Then it asks you to confirm halting the guests. It waits for them to halt. Then it reboots the virthost and waits for it to come back up. Once it is back up it confirms that the guests have returned in the same state they were in before they were halted. I did a bunch of tests on it yesterday and it worked pretty well.

Take a look.


Syndicated 2012-03-08 14:23:15 from journal/notes

euca thoughts

I set up rhev 3 and eucalyptus 2 last week. I’ll talk about rhev eventually but I have some initial thoughts on euca I want to get down.

1. install instructions are mostly good but need some work to make it clear what ELSE I need to set up
2. the overview docs are good but 10 minutes spent talking to andy grimm cleared things up A WHOLE LOT MORE.

Now let’s go on to my rant:
First I want to thank Greg Dekoenigsberg for getting the images list at eucalyptus set up. That helped point me in roughly the right direction. It was a big help. But let’s be clear: I will never use someone else’s image for my own servers. NEVER. Do you know why? B/c I do not trust other people to either a. not be morons or b. not do something intentionally crappy. I’m happy the list exists b/c in addition to advertising existing images (which are quite helpful) it also promotes the discussion of how making images is stupidly difficult and under-documented.

So: Why is making images that frelling hard?
To make an image you more or less set up an installroot – blow the pkgs in there, then go screw with them some. Then you perform a relatively complicated set of things to make it ‘work’ and then upload it to euca.

This is dumb.

1. installing to an install root is what the INSTALLER is for
2. the installer for rhel/fedora/centos/SL, etc, etc, etc is anaconda
3. the automated installer for these is kickstart
4. for a euca instance you have:
a. processor
b. memory
c. disk (sorta)
d. network

Last I checked that’s all you need to run anaconda (and kickstart).

I’ve been an admin for a long time now and I’ve been mass installing systems since LONG before many people understood why that was important. I’ve been using kickstart (practically the same basic kickstart) for about 12 yrs now. Why would I want a NEW tool for installing instances and setting up images? Especially a new, inferior, incompatible tool with a format that means I have to go screw with how I’ve been installing systems for OVER A DECADE? I would not. There is no reason. There is nothing that makes that make sense.

The anaconda developers have done a STELLAR job maintaining compatibility with the kickstart format, to the point that the whole linux-using world has realized it. To the point that I can almost take my ks.cfg from rhl 7.2 and have it work on rhel 6. Even if I’m going to install an instance, take a snapshot and immediately use that as my clone for all new instances – it is still easier and better if the mechanism I use is the same as I would use on any other server. If only for consistency.

(The first person who says something about consistency and hobgoblins of little minds will:
a. get slapped and b. get reminded that the first line is a ‘foolish consistency’)

Moreover doing things from kickstart as the basis for the images means:
1. you’re not inheriting bizarre little things you forgot you modified in your image
2. you’re starting from known good (and gpg-verifiable) pkgs
3. you don’t need to change your established practices.

Let’s say, in an ideal world, all my instances are in my private cloud running on euca. That’s great, but the cloud controller and nodes that run euca aren’t able to use those images – so I’m going to have to install (and reinstall) those. Which means I’m going to be using kickstart. So, yah, any install tool must use kickstart at its base.

So, with that in mind I had lunch with Greg and Andy on Friday and we discussed this a good bit. Then, after Andy and I talked about the problem space some, he explained what the limiting factor was. He then mentioned someone working on Neuca at Renci and the patches they have to do something related (as in, to modify the xml that is passed to libvirt to generate images), and after he mentioned his first name and that he was at duke I realized that he lives 2 blocks away from me and I’ve known him for over a decade. :) So I called him and we talked about whether or not the patches to neuca will do what I want (which is to let you use kickstart to install an image). It’s not in the bag yet but it sure seems like the bag is open and all the pieces appear to be able to fit inside.

After talking with Victor friday evening I felt a good deal better. I couldn’t imagine why this hadn’t already been addressed, so I thought I was missing something obvious, something that made this trivial and that I had just overlooked. No, in fact, I hadn’t missed anything – building images is stupidly difficult and obtuse, and for no good reason.

There’s a lot more to go but I’m looking forward to tinkering with this when I get back from a little trip on wednesday.


Syndicated 2012-01-29 05:28:40 from journal/notes

love

You know what I love?

When reboots don’t go horribly wrong.


Syndicated 2012-01-27 05:19:43 from journal/notes

fudcon day 2 and 3 thoughts

day 2 -
- running late – took brompton to get coffee and food
- barcamp, yay – gluster, once again, on top of my to-look-at-list
- lunch yay
- more barcamp
- head back to the hotel
- go to put the brompton back in the car – no keys
- retrace steps – no joy
- do it again – no joy
- resolve to not worry about it until sunday.
- fudpub. Nice – food was good, space was open, bowling seems to be a good activity for fedora-y people
- back to the hotel (while it snows) to sit around and talk for a long time. It was pleasant and worthy of note that sitting and talking with friends and coworkers is a nice way to take your mind off of other worries. I appreciated being able to sit and talk with so much of the infrastructure and other teams. It was comfortable and it was relaxing.

I guess that’s one of the F’s of Fedora.

Thanks to the organizers for putting this together.


Syndicated 2012-01-15 20:53:11 from journal/notes

fudcon day 1

a. infrastructure staging will perish:

1. Kill staging branch – identify what we want from staging to be in our ‘master’ branch.
a. copy staging branch to a subdir
b. kill staging branch
c. copy in pieces we care about a bit at a time – as needed.
2. move all .stg. boxes to ‘production’ puppet environment.
3. all boxes we maintain are ‘production’ from a configtest/pkging standpoint
4. app development moves to partial silos and/or openshift to do their code testing, then moves to production like a normal app deployment – rpm + config changes.

b. 2fa will start being implemented for users of sudo

c. if we are very good, smolt will find a new home. Maybe in openshift?

d. happy to meet pknirsch, the guy in charge of the packaging team, and hear of his evil plans to make my (and many other people’s) lives better.


Syndicated 2012-01-14 13:03:42 from journal/notes
