We needed more space for cinder and had no nice way to expand it on our existing cinder server. After banging my head a bit, I got assistance from Giulio Fidente, who showed me a working config that let me figure out what I was missing. Below I document it so others might be able to find it, too.

NOTE: this works under Folsom on RHEL 6.4. I cannot vouch for anything else, but Giulio had it running on Grizzly, I think, so…

Usage:

You have an existing cinder server set up and running, which includes
a volume service, an api service, and a scheduler service. You need to
add more space, and you have a system where another volume service can run.

Here’s all you need to do:

1. install openstack-cinder on the server you want to be a new volume server
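
On RHEL 6 with the Folsom packages that is just a yum install (assuming you already have the OpenStack package repo configured):

yum install openstack-cinder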

2. make sure your new system can access the mysql server on your primary
controller system
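
A quick sanity check, using the same credentials and host that go into the cinder.conf example below:

mysql -h mysqlhost -u cinder_user -pcinder_pass cinder -e 'select 1'

If that fails you likely need a GRANT for the new host on the mysql server.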

3. make sure tgtd knows to include the files under /etc/cinder/volumes

add:

include /etc/cinder/volumes/*

to:

/etc/tgt/targets.conf

4. make sure your compute nodes can access the iscsi-target port (3260/tcp) on the system you are adding as a cinder-volume server
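
On RHEL 6 that usually means opening the port in iptables on the new volume server; something like the following, adjusted to however you manage your firewall:

iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
service iptables save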

5. set up your /etc/cinder/cinder.conf
example:

[DEFAULT]
sql_connection = mysql://cinder_user:cinder_pass@mysqlhost/cinder
api_paste_config = /etc/cinder/api-paste.ini
auth_strategy = keystone
rootwrap_config = /etc/cinder/rootwrap.conf
rpc_backend = cinder.openstack.common.rpc.impl_qpid
qpid_hostname = qpid_hostname_ip_here
volume_group = cinder-volumes
iscsi_helper = tgtadm
iscsi_ip_address = my_volume_ip
logdir = /var/log/cinder
state_path = /var/lib/cinder
lock_path = /var/lib/cinder/tmp
volumes_dir = /etc/cinder/volumes

6. start tgtd and openstack-cinder-volume

service tgtd start
service openstack-cinder-volume start
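
You will probably also want both of them to come back after a reboot (plain RHEL 6 chkconfig, nothing cinder-specific):

chkconfig tgtd on
chkconfig openstack-cinder-volume on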

7. check out /var/log/cinder/volume.log
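
Watching the log while the service starts makes mysql or qpid connection problems obvious:

tail -f /var/log/cinder/volume.log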

8. verify it worked: on your cloud controller run

cinder-manage host list

and you should see all of your volume servers there.

9. create a volume: just make a volume as usual; the scheduler
should default to the volume server with the most space available
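
For example, with the cinder client (assuming your usual keystone credentials are in the environment) a 1GB test volume is plenty:

cinder create 1
cinder list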

10. on your new cinder-volume server run lvs to look for the new volume.
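
Since the config above puts volumes in the cinder-volumes volume group, the new logical volume should show up in:

lvs cinder-volumes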

Great presentation slidedeck introducing ansible:

https://speakerdeck.com/jpmens/ansible-an-introduction

euca-terminate-instances

November 2, 2012

As I find the need, I write the functionality I want on top of the existing euca2ools, using their lovely cli python api.

I hate trying to remember an instance id. I know the ip of the host or I know the dns name of the host. I shouldn’t have to go find the instance id to know which one I want to kill.

But euca-terminate-instances is silly and won’t let me pass in an ip or a hostname. Nor will it let me specify globs. 😦

So I wrote this:

http://fedorapeople.org/cgit/skvidal/public_git/scripts.git/tree/euca/my-terminate-instances.py

It takes public or private ips, public or private dns names (the ones euca or ec2 knows about), or instance ids.

It also lets you pass file-globs for any of those. So you can do things like:

my-terminate-instances i-\*

and kill everything you’re running. Isn’t that fun!
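
The heart of the script is just resolving whatever strings you pass into instances via the EC2 API. Here is a minimal sketch of that matching logic using boto; it is illustrative, not the actual script linked above:

import sys
from fnmatch import fnmatch

import boto

# credentials/endpoint come from ~/.boto or the environment
conn = boto.connect_ec2()

patterns = sys.argv[1:]
for res in conn.get_all_instances():
    for inst in res.instances:
        # all the identifiers we might know this instance by
        names = (inst.id, inst.ip_address, inst.private_ip_address,
                 inst.public_dns_name, inst.private_dns_name)
        # terminate on any glob match against any identifier
        if any(n and fnmatch(n, p) for n in names for p in patterns):
            print 'terminating %s (%s)' % (inst.id, inst.public_dns_name)
            inst.terminate()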

enjoy

ansible and cloud instances

October 31, 2012

A few days ago I posted about using ansible to provision persistent ip instances in a public or private cloud.

Today I finished up the next piece I needed to make this work for copr builders with transient ips/instances.

I needed a way to create a new instance, provision it as if it were one of the normal inventory of our systems, and automatically get back the ip that eucalyptus (or ec2) assigned to it. And when I was done with it, I didn’t want any record of it preserved in our inventory system. I wanted it gone.

The problem is that ansible wants to act only on the hosts it knows about in its inventory. This is quite understandable. But since I don’t know this host’s ip in advance, I wanted a way to do all of this in a single playbook.

So I wrote add_host, an action_plugin for ansible. Action plugins let you interact with the internal state of the inventory/playbook/etc while a playbook is executing.

All it does is add the host to the in-memory inventory that the ansible playbook object holds, and put that host into a group you specify. That way the second play in the playbook can say ‘operate on the hosts in this special group’, and the new host will be the only host in that group.

I’ve sent in a pull request for it but it’s not been accepted quite yet. 🙂 However, if you want to try it out you can just toss it into your action_plugins dir in ansible and call it.

Here’s an example playbook. It’s very similar to the one for creating a persistent instance; in the fedora public ansible repo we are, in fact, importing the same set of tasks from another file so both get set up the same way.
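
A minimal sketch of the shape of it (the ec2 parameters, group name, and included tasks file here are made up for illustration):

---
- name: launch a transient instance
  hosts: localhost
  connection: local
  tasks:
  - name: create the instance in euca/ec2
    local_action: ec2 keypair=mykeypair image=ami-00000001 instance_type=m1.small wait=true
    register: ec2

  - name: add it to the in-memory inventory, in a group of its own
    local_action: add_host hostname=${item.public_ip} groupname=new_builders
    with_items: ${ec2.instances}

- name: now provision it like one of our normal systems
  hosts: new_builders
  user: root
  tasks:
  - include: tasks/builder_setup.yml

The second play only ever sees whatever the first play put into new_builders, which is exactly the trick described above.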

It just means that if anyone wants an instance in our private cloud running f17 or el6, it is incredibly trivial to make one available for them.