Recent blog entries for mikal

Configuring docker to use rexray and Ceph for persistent storage

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    
    rexray has been installed to /usr/bin/rexray
    
    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST
    
    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST
    


Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that
     allows local visibility and management from cloud
     and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)
      


If I were building anything more than a test environment I'd want to do a better job of installing rexray than this, so you've been warned.

Next, configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on Ceph. First off, we need to install the Ceph tools, and copy the Ceph authentication information from the Ceph cluster we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                       100%   92     0.1KB/s   00:00
    ceph.conf                    100%  681     0.7KB/s   00:00
    ceph.client.admin.keyring    100%   63     0.1KB/s   00:00
    ceph.client.glance.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd
        


You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph
          


And the rexray output sure made it look like it worked...

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f

    May 29 10:14:07 labosa systemd[1]: Started rexray.
            


Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting rex-ray" error.driver=ceph time=1496016848216
              


That's because the service is called rbd, it seems. So the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd

    rbd:
      defaultPool: rbd
                


Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce
                  


Now let's make a rexray volume. Note that a size of 1 here means 1GB.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1
                    


Let's start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
                      


And now to prove that persistence works and that there's nothing up my sleeve...
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)

    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)

    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)

    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
                        


Now let's re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
                          
So there you go.

Tags for this post: docker ceph rbd rexray
Related posts: So you want to setup a Ceph dev environment using OSA; Juno nova mid-cycle meetup summary: containers


Syndicated 2017-05-28 18:45:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

So you want to setup a Ceph dev environment using OSA

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seemed logical to build it as an openstack-ansible Ocata AIO. There were a few gotchas, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    


Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
     
     
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
      


That of course needs to be done after the Ceph role has been fetched, but before it is executed; in other words, after the AIO bootstrap but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again, which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000
        


Tags for this post: openstack osa ceph openstack-ansible


Syndicated 2017-05-27 18:30:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

The Collapsing Empire




ISBN: 076538888X
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation



Syndicated 2017-05-17 21:46:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best as I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:

Things I read today: the best description I've seen of metadata routing in neutron

I happened upon a thread about OVN's proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I'm just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Tags for this post: openstack nova neutron metadata ovn
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Nova vendordata deployment, an excessively detailed guide; One week of Nova Kilo specifications; Specs for Kilo; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler


Syndicated 2017-05-07 17:52:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Light to Light, Day Three

The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 kms) easier walk. A nice way to finish the journey.



Interactive map for this route.

Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Syndicated 2017-04-04 17:42:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Light to Light, Day Two

Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 kms and was on easier terrain than the first day. That said, probably a little less scenic than the first day too.



Interactive map for this route.

Tags for this post: events pictures 20170312 photo scouts bushwalk
Related posts: Light to Light, Day Three; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Syndicated 2017-04-04 16:59:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Light to Light, Day One

Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 kms through lovely undulating terrain. This was the hardest day of the walk, but very rewarding and I think we all had fun.



Interactive map for this route.

Tags for this post: events pictures 20170311 photo scouts bushwalk
Related posts: Light to Light, Day Three; Light to Light, Day Two; Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Potato Point


Syndicated 2017-04-04 16:01:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Nova vendordata deployment, an excessively detailed guide

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user's behalf.

Nova supports a mechanism to add "vendordata" to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.


Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.


The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>


Where name is a short string not including the '@' character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125
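
To make the split rule concrete, here's a tiny sketch of parsing such an entry; parse_target is a hypothetical helper name for illustration, not something nova provides:

```python
def parse_target(entry):
    """Split a vendordata_dynamic_targets entry into (name, url)."""
    # The name may not contain '@', so the first '@' is the separator;
    # everything after it (scheme, host, port, path) is the URL.
    name, sep, url = entry.partition("@")
    if not sep or not name or not url:
        raise ValueError("malformed vendordata target: %r" % entry)
    return name, url

print(parse_target("testing@http://127.0.0.1:125"))
# prints ('testing', 'http://127.0.0.1:125')
```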


Metadata fetched from this target will appear in the metadata service in a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json


For each dynamic target, there will be an entry in the JSON file named after that target. For example:

        {
            "testing": {
                "value1": 1,
                "value2": 2,
                "value3": "three"
            }
        }


Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.
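
That first-name-wins behavior is easy to sketch; merge_vendordata is a hypothetical helper illustrating the rule described above, not nova's actual code:

```python
def merge_vendordata(responses):
    """Fold (name, blob) pairs into one vendordata document.

    Subsequent uses of a previously used name are ignored, matching
    the documented behavior.
    """
    merged = {}
    for name, blob in responses:
        if name in merged:
            continue  # duplicate name: the first entry wins
        merged[name] = blob
    return merged
```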

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time


Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request -- you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behavior is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt


We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn't what you're using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne


Per the README file in the vendordata sample repository, you can test the vendordata server standalone by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`


We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/


Configuring nova to use the external metadata service

Now we're ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888


Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo


We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}


Tags for this post: openstack nova metadata vendordata configdrive cloud-init
Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


Syndicated 2017-02-02 19:49:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

Giving serial devices meaningful names

This is a hack I've been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices, one of the challenges is having them show up in predictable places, so that the scripts which know how to drive each device are talking to the right place.

For the trivial case, this is pretty easy with udev:

$  cat /etc/udev/rules.d/60-local.rules 
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"


This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID, and serial number match the relevant values, to symlink the device to "/dev/radish".

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


You can play with inserting and removing the device to determine which of these entries is the device you care about.

So that's great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more... difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"


This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is -- in my case either a currentcost or a solar panel inverter.
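
A probe script like that can be quite small. This is a hypothetical sketch rather than my actual usbtest: the device chatter it looks for (currentcost monitors stream XML with <msg> tags; the '#' prompt for the inverter is invented) and all the names are assumptions:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a udev PROGRAM helper: read a little from the
# serial device named on the command line, classify it, and print a
# symlink name on stdout for udev to use.

import sys

def classify(sample):
    """Guess a device name from a short read off the serial port."""
    if b"<msg>" in sample:
        return "currentcost"   # currentcost monitors stream XML
    if sample.startswith(b"#"):
        return "inverter"      # invented prompt for the solar inverter
    return "unknown-serial"

def main(path):
    with open(path, "rb", buffering=0) as dev:
        sample = dev.read(256)  # one short read is enough to classify
    print(classify(sample))

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```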

Tags for this post: linux udev serial usb usbserial
Related posts: SMART and USB storage; Video4Linux, ov511, and RGB24 palettes; ov511 hackery; Ubuntu, Dapper Drake, and that difficult Dell e310; Roomba serial cables; Via M10000, video, and a Belkin wireless USB thing


Syndicated 2017-01-31 12:04:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)
