Older blog entries for skvidal (starting at number 448)

coprs status

My call for testers was successful. We now have 21 people with access to coprs, and a number of folks have started testing builds. We've found some bugs and identified some needed feature enhancements. Discussion has also started on the copr devel mailing list, and we've had some patches contributed to help us get a CLI going.

If anyone else wants to take a look at the code and help out, you can see it all here in copr cgit.

Thanks to all the folks who are helping test and develop the copr code.


Syndicated 2013-01-08 18:30:00 from journal/notes

call for serious testers

Right now, we have coprs working. It's not fancy or beautiful, and I am POSITIVE many bugs are lurking there (and some aren't lurking at all, just sitting out in the open). So we need people who will undertake testing seriously. I don't want people who just want to build things and don't care about reporting issues. I also don't want people who are going to be full of destructive criticism.

If you want to test it out and report issues, please email me: skvidal at fedoraproject.org

You must already have an active FAS account and some SRPMs you want to build.

I only need maybe 10 or 20 people at the moment. Not looking to overdo it :)

Thanks


Syndicated 2013-01-02 20:09:02 from journal/notes

jenkins and ansible and ‘the cloud’

Working with Pingou on this, I merged the playbooks into a single playbook. We were setting up jenkins and wanted a way of putting all the systems together quickly and easily. All the playbook does is spin up a master instance, provision it, spin up two workers (one el6, the other f17), provision them, and end.

It could easily be done in ec2, euca, openstack, or pretty much any system. I know there are more ways to skin this cat, but I thought this one was simple enough to follow along with, and others might find it interesting.

This is the playbook:

http://infrastructure.fedoraproject.org/infra/ansible/playbooks/groups/jenkins-cloud.yml
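
For a feel of the shape of it, here's a minimal sketch (the group names, task files, and module arguments below are illustrative, not the actual contents of the linked file):

- name: spin up the jenkins master
  hosts: jenkins-master
  user: root
  gather_facts: False
  tasks:
  - include: tasks/persistent_cloud.yml    # hypothetical include: create the instance, wait for ssh

- name: provision the master
  hosts: jenkins-master
  user: root
  tasks:
  - include: tasks/jenkins_master.yml      # hypothetical task file

- name: spin up and provision the el6 and f17 workers
  hosts: jenkins-workers
  user: root
  tasks:
  - include: tasks/jenkins_worker.yml      # hypothetical task file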

All of our ansible playbooks and inventory files are available here:

http://infrastructure.fedoraproject.org/infra/ansible/


Syndicated 2012-12-19 06:15:18 from journal/notes

copr – testers needed soon

Bohuslav and I have been diligently working on the copr buildsystem code for the last month or so. We originally targeted the middle of November, but that wasn't to be. However, right now we have working, rough-around-the-edges code. It has some completed successful builds, so we know that part works.

A little background:

copr is intended to provide a safe place to build packages that are not, or cannot be, built in fedora. Think of it as a place where you can scratch- and chain-build whatever you want, up to the obvious limits of legality :) . It takes your packages, builds them against themselves, and puts the repo up, available for download.

Why not just do this in koji?

  1. koji is just for fedora builds
  2. koji does not allow external, arbitrary repositories for build deps
  3. koji’s builders are not destroyed and cleaned after each build
  4. koji just isn't designed to do what copr does, and grafting that onto koji would be a dangerous proposition for the safety and security of fedora builds.

How does copr do things?

You submit your build request (the name of the copr, the chroots to build against, the repos to use for build deps, and the list of src.rpms to build) to the frontend. The backend takes this request, creates a builder in our private cloud, builds the pkgs, pulls the results back onto a webserver, provides you with a link to the results, and destroys the builder.
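
To make that concrete, a build request boils down to a handful of fields, something like this (the field names and values are invented for illustration; the real frontend collects them through its web UI):

# illustrative only - not the frontend's actual schema
copr: my-copr
chroots:
  - fedora-17-x86_64
  - epel-6-x86_64
repos:
  - http://example.com/extra-build-deps/
srpms:
  - http://example.com/pkgs/foo-1.0-1.src.rpm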

What’s the status right now?

The backend code works, but there are a lot of rough edges to sand down. The frontend code works, but there is a bunch of UI work needed there (right now Ryan Lerch is creating some mockups for how things should look). The code is available from the link at the top. Right now everything is in two branches: a frontend branch (bkabrda-workspace) and a backend branch (skvidal-backend). We'll be merging them into master in short order.

We’ll be sending out a call for folks to help test in the not-so-distant future. The whole goal is to provide a place for people to build packages they would like to have available but maybe cannot have in fedora for one reason or another.

Examples:

  • alternative configuration/build options for an existing package
  • software that is not very easy to package in fedora (bundling, etc.) but is perfectly good otherwise
  • test builds
  • your idea here
  • builds based on other repos that are not included in fedora

Things which cannot be in a copr:

  • software which is not legal to include in fedora

The fedora buildsys mailing list (buildsys@lists.fedoraproject.org) is where we will discuss things going forward, but I just wanted to let people know what we've been working on.


Syndicated 2012-12-10 15:45:15 from journal/notes

FAD: Two-Factor Auth setup

This week we held a Fedora Activity Day in Raleigh at RH HQ to get two-factor auth set up for Fedora Infrastructure. We had a pretty good plan and we knew the pieces we needed to put together – it was just a matter of doing it. We accomplished our goals, but that's not what I wanted to talk about.

We ended up with 9 people there. I was originally a bit concerned that was too many people, and that we would end up talking more than accomplishing things – which would suck, especially if we failed to get things implemented. I was keeping track of what everyone was doing and how they were helping, and what I noticed was that everyone contributed in some way. There wasn't anyone on the sidelines.

At one point we had two people working on the package reviews of the pkgs we needed to get into fedora (totpcgi and pam_url). We had a person writing the cgi to let us use both yubikeys and google auth, a person working on the provisioning interface to get people set up using google auth, a person working on the puppet config, and a person setting up the certs/pki needed to let pam_url connect securely. We had a person setting up cloud instances for us to use to test/blow things up, and we had a couple of people writing/rewriting their yubikeys and auth secrets in order to test and retest and re-retest.

The FAD just removed all friction (as someone else put it to me yesterday). It meant that instead of waiting a few days or more to solve a problem, we only waited 20 minutes. As is often said about mediation – sometimes good facilities are all that's required to get things done.

It was great having everyone there and able to work, and it was great being able to focus ONLY on this one thing. I think we will do this again in the future to help accomplish tasks which are just too involved to bite off a little at a time, or would take years to get done at that rate.


Syndicated 2012-11-30 14:38:38 from journal/notes

ansible presentation from @jp_mens

A great presentation slide deck introducing ansible:

https://speakerdeck.com/jpmens/ansible-an-introduction

Syndicated 2012-11-08 16:29:44 from journal/notes

euca-terminate-instances

As I find the need, I write the functionality I need into the existing euca2ools, using their lovely CLI python API.

I hate trying to remember an instance id. I know the ip of the host, or I know its dns name. I shouldn't need to go find the instance id just to say which one I want to kill.

But euca-terminate-instances is silly and won’t let me pass in an ip or a hostname. Nor will it let me specify globs :(

So I wrote this:

http://fedorapeople.org/cgit/skvidal/public_git/scripts.git/tree/euca/my-terminate-instances.py

It takes public or private ips, public or private dns names (the ones euca or ec2 knows about), or instance ids.

It also lets you pass file-style globs. So you can do things like:

my-terminate-instances i-\*

and kill everything you’re running. Isn’t that fun!

enjoy


Syndicated 2012-11-02 22:10:32 from journal/notes

ansible and cloud instances

A few days ago I posted about using ansible to provision persistent-ip instances in a public or private cloud.

Today I finished up the next piece I needed to make this work for copr builders with transient ips/instances.

I needed a way to create a new instance, provision it as if it were part of the normal inventory of our systems, and automatically get back the ip that eucalyptus (or ec2) gave it. And when I was done with it, I didn't want any record of it preserved in our inventory system. I wanted it gone.

The problem is that ansible wants to act only on the hosts it knows about in its inventory. This is quite understandable. But since I'm not specifying an ip for this host, and I don't know it in advance, I wanted a way to do all of this in a single playbook.

So I wrote add_host, an action plugin for ansible. Action plugins let you interact with the internal state of the inventory/playbook/etc. while a playbook is executing.

All it does is add the host to the in-memory inventory the ansible playbook object has, and add that host to a group you specify. That way, in the second play in the playbook, you can say 'operate on the hosts in this special group' and the new host will be the only host in that group.

I've sent in a pull request for it, but it hasn't been accepted quite yet. :) However, if you want to try it out, you can just toss it into your action_plugins dir in ansible and call it.

Here's an example playbook, sketched below. It's very similar to the one for creating a persistent instance – in the fedora public ansible repo we are, in fact, importing the same set of tasks from another file to set them up the same way.
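
The gist looks something like this (a sketch only – the instance-creation arguments and the task file names are illustrative):

- name: create the transient builder
  hosts: localhost
  user: root
  gather_facts: False
  tasks:
  - name: spin up a new instance and capture the ip it was given
    local_action: ec2_create ...                  # your image/keypair/type args here
    register: inst

  - name: put the new host into the in-memory inventory
    local_action: add_host hostname=${inst.ip} groupname=new_builder

- name: provision the new builder
  hosts: new_builder
  user: root
  tasks:
  - include: tasks/copr_builder.yml               # hypothetical task file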

It just means that if anyone wants an instance in our private cloud running f17 or el6, it is incredibly trivial to make one available.


Syndicated 2012-10-31 05:52:17 from journal/notes

handy ansible action for adding root keys to cloud instances

You’ve just spun up a new instance and you need to give additional people access to the system as root. You have a common IDMS that houses ssh pub keys for your users. You want to trivially specify a list of those users and have their keys show up in root’s .ssh/authorized_keys file.

Here’s what you do:

- name: add root keys for other allowed users
  action: authorized_key user=root key='$PIPE(/path/to/script/for/keys ${root_auth_users})'
  only_if: is_set('${root_auth_users}')

In our infrastructure, FAS houses all the pubkeys, so Toshio wrote this script: http://infrastructure.fedoraproject.org/infra/ansible/scripts/auth-keys-from-fas

So if you define a root_auth_users hostvar in your ansible inventory for the host, the task above will automatically populate root's authorized_keys with the right pub keys.
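
For example, the inventory entry might look like this (hostname and user list invented for illustration):

newbox.example.com root_auth_users='toshio skvidal'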

Kinda awesome, I think.


Syndicated 2012-10-26 17:42:33 from journal/notes

creating and provisioning euca/ec2/cloud* instances using ansible in one step

Since we've been working more and more with our private cloud setups on eucalyptus and openstack, I've been trying to come up with a semi-sane way of having persistent hosts in the cloudlets. The idea is that if we shut things down, or an instance dies, a host can be brought back up automatically without someone having to explicitly say 'go start this up'.

I've also been working on setting up ansible so that we can run it via cron on our central command host to maintain our systems.

So, of course, I’ve combined those two things into one task :)

We want certain cloud instances to:

  1. always exist
  2. have the same ip each time they come up.

These are for things like the build master for COPRs/PPAs, or the jenkins-master server, or any number of random services that need a persistent ip across recreations. For the ips, we're using euca-associate-address after allocating a number of addresses to our account.

So I started on a mechanism to create those instances and give me results I could use in ansible for other actions. I came up with ec2_create, which lets you do that from a local_action call in ansible.

Once you have that module in your ansible library path, you just need to:

  • add the host to your ansible inventory file
  • take this playbook example
  • modify it for your host
  • run it like: ansible-playbook thatplaybook.yml

You'll need euca2ools installed, along with the deps for ansible.

Here's what the playbook does (see the sketch below):

  1. it looks to see if the host is running (by a simple nc connect to port 22)
  2. if it is not running then it will run ec2_create with your args
  3. then associate the ip
  4. wait for the host to get the ip and become available
  5. then it goes on to the next play which is normal ansible provisioning

Which is perfect: the next time you run it, steps 2-4 won't happen – it goes directly to provisioning. Since ansible playbooks are idempotent, you can run them as many times as you want without mangling your server.
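
Here's a rough sketch of that two-play shape (the module arguments, condition syntax, and task file names are illustrative, not the real playbook):

- name: check for the host and create it if needed
  hosts: mypersistenthost.example.com          # illustrative hostname
  user: root
  gather_facts: False
  tasks:
  - name: see if ssh is already answering (prints 'down' if not)
    local_action: shell nc -w 5 ${inventory_hostname} 22 < /dev/null || echo "down"
    register: host_up

  - name: create the instance if it was down
    local_action: ec2_create ...               # your image/keypair/type args here
    only_if: "'${host_up.stdout}' == 'down'"   # hypothetical condition syntax

  - name: associate the persistent ip with the new instance
    local_action: shell euca-associate-address -i $instance_id ${inventory_hostname}
    only_if: "'${host_up.stdout}' == 'down'"   # $instance_id would come from the create step

  - name: wait for ssh to come up on that ip
    local_action: wait_for host=${inventory_hostname} port=22 delay=10

- name: provision the host normally
  hosts: mypersistenthost.example.com
  user: root
  tasks:
  - include: tasks/standard_provision.yml      # hypothetical task file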

Now, the example playbook is all merged together – but if you use host_vars in your ansible inventory and you set up the check/create tasks as an includable task file, then the playbook for creating and provisioning any new cloud instance you want becomes VERY short.

More to come as I finish it.


Syndicated 2012-10-25 22:33:37 from journal/notes
