mailman archiver failure

April 22, 2013

If you see this traceback in your /var/log/mailman/error file:

 

 

  File "/usr/lib/mailman/Mailman/Queue/Runner.py", line 120, in _oneloop
    self._onefile(msg, msgdata)
  File "/usr/lib/mailman/Mailman/Queue/Runner.py", line 191, in _onefile
    keepqueued = self._dispose(mlist, msg, msgdata)
  File "/usr/lib/mailman/Mailman/Queue/ArchRunner.py", line 73, in _dispose
    mlist.ArchiveMail(msg)
  File "/usr/lib/mailman/Mailman/Archiver/Archiver.py", line 216, in ArchiveMail
    h.processUnixMailbox(f)
  File "/usr/lib/mailman/Mailman/Archiver/pipermail.py", line 583, in processUnixMailbox
    self.add_article(a)
  File "/usr/lib/mailman/Mailman/Archiver/pipermail.py", line 635, in add_article
    article.parentID = parentID = self.get_parent_info(arch, article)
  File "/usr/lib/mailman/Mailman/Archiver/pipermail.py", line 669, in get_parent_info
    if parentID and not self.database.hasArticle(archive, parentID):
  File "/usr/lib/mailman/Mailman/Archiver/HyperDatabase.py", line 273, in hasArticle
    self.__openIndices(archive)
  File "/usr/lib/mailman/Mailman/Archiver/HyperDatabase.py", line 251, in __openIndices
    t = DumbBTree(os.path.join(arcdir, archive + '-' + i))
  File "/usr/lib/mailman/Mailman/Archiver/HyperDatabase.py", line 65, in __init__
    self.load()
  File "/usr/lib/mailman/Mailman/Archiver/HyperDatabase.py", line 170, in load
    self.dict = marshal.load(fp)
ValueError: bad marshal data

It is due to a corrupted archive database. Those live in /var/lib/mailman/archives/private/$list/database/*

In order to figure out which one it is, you have to run this:

 

#!/usr/bin/python

import os, sys
sys.path.insert(0, '/usr/lib/mailman')

import Mailman.Archiver
import marshal

for fn in sys.argv[1:]:
    if os.path.exists(fn):
        c = marshal.load(open(fn))

 

against the files in the dir I mentioned above.

like this:

python thatscript /var/lib/mailman/archives/private/$list/database/2013-April*

That will tell you if a file is busted (it will print out an exception), but it won’t fix it.

You will probably need to run it against all of the current files for all the lists you have :(
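If you don’t want to do that by hand, something like this will walk every list’s database dir and report the files marshal can’t load. It is only a rough sketch: it assumes the stock Red Hat/Fedora paths and the pipermail index file naming (archive-article, archive-date, etc).

#!/usr/bin/python
# rough sketch: walk every list's archive database dir and report the
# pipermail index files that marshal refuses to load.
# assumes the stock /var/lib/mailman paths.
import glob
import marshal
import os

archive_base = '/var/lib/mailman/archives/private'

for dbdir in glob.glob(os.path.join(archive_base, '*', 'database')):
    for suffix in ('article', 'date', 'thread', 'author', 'subject'):
        for fn in glob.glob(os.path.join(dbdir, '*-%s' % suffix)):
            try:
                marshal.load(open(fn))
            except (ValueError, EOFError, TypeError):
                print 'BROKEN: %s' % fn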

Once you figure out which lists are broken, you SHOULD be able to run

bin/arch --wipe $list /var/lib/mailman/archives/private/$list.mbox/$list.mbox

and have it recreate the whole thing.
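If more than one list is busted, a loop like this saves some typing. Again, just a sketch: the list names are placeholders and the paths assume a stock Red Hat/Fedora mailman install.

#!/usr/bin/python
# sketch: re-run the archiver for a set of known-broken lists.
# list names are placeholders; paths assume a stock mailman install.
import subprocess

broken_lists = ['list-one', 'list-two']  # fill in the lists you found

for name in broken_lists:
    mbox = '/var/lib/mailman/archives/private/%s.mbox/%s.mbox' % (name, name)
    subprocess.call(['/usr/lib/mailman/bin/arch', '--wipe', name, mbox])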

 

I’m trying to produce a simple list of instances on the fedora openstack instance. I want to produce a list every 10m or so and diff it against the last copy of that list and output the changes.

Here’s what I came up with:
http://fedorapeople.org/cgit/skvidal/public_git/scripts.git/tree/openstack/gather-diff-instances.py

It is based originally on nova-manage. It runs as root on the head system in our cloud and just dumps out json, then diffs the json.
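The overall shape of it, if you want to roll your own, is roughly this. It is only a sketch: get_instances() is a stand-in for however you query nova, and the state file path is made up.

#!/usr/bin/python
# sketch of the dump-and-diff approach: grab the current instance list,
# compare it to the copy the previous run saved, report what changed.
# get_instances() is a stand-in for your nova/nova-manage query and the
# state file path is made up.
import json
import os

state_file = '/root/instances.json'

def get_instances():
    # return something like {instance_id: {'name': ..., 'ips': [...], 'user': ...}}
    return {}

current = get_instances()
previous = {}
if os.path.exists(state_file):
    previous = json.load(open(state_file))

for i in sorted(set(current) - set(previous)):
    print 'NEW: %s %s' % (i, current[i])
for i in sorted(set(previous) - set(current)):
    print 'GONE: %s %s' % (i, previous[i])
for i in sorted(set(current) & set(previous)):
    if current[i] != previous[i]:
        print 'CHANGED: %s %s -> %s' % (i, previous[i], current[i])

json.dump(current, open(state_file, 'w'))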

Everything works, but I’m trying to figure out if this is the ‘right’ way of going about this. I thought about doing it via nova instead of using the nova-manage direct-to-db api, but I had 2 problems:

1. I would need to save the plaintext admin pw somewhere on disk to poll for that info

2. or get a token which I would have to renew every 24 hours

We’re using the above script as a simple cron job that lets us know what things are changing in our cloud (who is bringing up new instances, how many, what ips they are attaching to them, etc).

Additionally, is there a way in the db api to easily query the tenant and user info from keystone? I’d like to expand out the user uuid into username/project name.

 

 

gitproxy

March 22, 2013

gitproxy:
http://skvidal.fedorapeople.org/misc/gitproxy

While dealing with a potential problem, we were trying to figure out a way to proxy/redirect git:// calls from one server to another. This is a fairly ridiculous script I hacked up in the wee small hours of Thursday morning after talking to Sitaram Chamarty on #git for a while.

I fully expect this won’t work well under load but it does seem to function in my small tests here.
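The core idea is just a TCP forwarder sitting on the git port (9418) that shovels bytes back and forth to the real server. A bare-bones sketch of that idea (not the script linked above; the backend host is a placeholder) looks like this:

#!/usr/bin/python
# bare-bones git:// proxy sketch: accept on the git port and copy bytes
# in both directions to a backend server; the backend host is a placeholder.
import socket
import threading

listen_port = 9418
backend = ('git.example.org', 9418)  # the real git server

def pump(src, dst):
    # copy until one side closes, then half-close the other side
    while True:
        data = src.recv(8192)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except socket.error:
        pass

def handle(client):
    server = socket.create_connection(backend)
    threading.Thread(target=pump, args=(client, server)).start()
    threading.Thread(target=pump, args=(server, client)).start()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', listen_port))
sock.listen(50)
while True:
    conn, addr = sock.accept()
    handle(conn)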

 

 

diffing two ini files

March 22, 2013

Ever need to diff 2 ini files whose sections and options aren’t in the same order?

Well, I do. I googled but I couldn’t find anything trivially available that did this.

I swear I wrote this once before but I couldn’t find it when I looked through my dirs of misc scripts so:

http://fedorapeople.org/cgit/skvidal/public_git/scripts.git/tree/inidiff

hope it is useful to someone.
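The basic approach, if you just want the idea without reading the script, is to load both files with ConfigParser and compare sections/options as sets so ordering stops mattering. A rough sketch (not the inidiff script itself):

#!/usr/bin/python
# rough sketch of an order-insensitive ini diff (not the inidiff script
# itself): load both files, then compare sections and options as sets.
import sys
from ConfigParser import RawConfigParser

def load(fn):
    cp = RawConfigParser()
    cp.read(fn)
    return cp

a, b = load(sys.argv[1]), load(sys.argv[2])

for sect in sorted(set(a.sections()) | set(b.sections())):
    if not a.has_section(sect):
        print '[%s] only in %s' % (sect, sys.argv[2])
        continue
    if not b.has_section(sect):
        print '[%s] only in %s' % (sect, sys.argv[1])
        continue
    for opt in sorted(set(a.options(sect)) | set(b.options(sect))):
        if not a.has_option(sect, opt):
            print '[%s] %s only in %s' % (sect, opt, sys.argv[2])
        elif not b.has_option(sect, opt):
            print '[%s] %s only in %s' % (sect, opt, sys.argv[1])
        elif a.get(sect, opt) != b.get(sect, opt):
            print '[%s] %s: %r != %r' % (sect, opt, a.get(sect, opt), b.get(sect, opt))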

A number of people have been surprised by this feature, even though it is documented, so I thought I’d mention it.

Ansible can run actions async. This means it connects to the client system, starts the process and disconnects.

In general you would want all your plays to be synchronous (do thing X, wait for it to be done/watch it, do thing Y).

However, there are times when what you want to do will take a VERY long time or could kill your ssh connection off.

An example is a yum update:

tasks:
  - name: yum update
    action: command yum -y update

That can take a long time, depending on what’s going on. You want to monitor what it does, but you don’t want a timeout or a reset ssh session/network to kill off that process.

So what do you do? You make it async:

tasks:
  - name: yum update
    action: command yum -y update
    async: 7200
    poll: 15

That means: run yum -y update, wait for up to 7200s, and poll every 15s to check on the status of the action.

Here’s where we’re using it in fedora:

http://infrastructure.fedoraproject.org/cgit/ansible.git/tree/playbooks/package-update.yml#n11

This means that even if your ssh or network were to die, the yum update process would still run to completion.

But if your connection does die and you cannot check on the status of the job, what do you do?

Well, you can connect to any system as the user who was running the job and look in ~/.ansible_async.

There will be a file in there for each job that was being run. It may just be an empty placeholder (if the job is still running), or it may be filled with the results if the job is finished.
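Something like this, run as that user on the remote host, will show you which async jobs have finished and what they returned. It is just a sketch: it assumes the default ~/.ansible_async location and that finished job files contain JSON results.

#!/usr/bin/python
# sketch: look at ~/.ansible_async and print the status of each job file
# (an empty file means the job is still running)
import glob
import json
import os

for fn in sorted(glob.glob(os.path.expanduser('~/.ansible_async/*'))):
    name = os.path.basename(fn)
    data = open(fn).read().strip()
    if not data:
        print '%s: still running' % name
        continue
    try:
        result = json.loads(data)
    except ValueError:
        print '%s: unreadable (job still writing?)' % name
        continue
    print '%s: finished, rc=%s' % (name, result.get('rc'))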

Pretty handy for a variety of tasks.

 

So, the trick with resizing qcow-based instances is this:

1. Either you resize the partitions in the initramfs of the instance (which is not yet something we can easily do, but we’re working on it :)

2. Or you resize the partition on the live instance and then reboot the instance.

Since 1 is going to take more time/testing, I went ahead and made 2 as painless as possible.

Using cloud-utils and ansible, I came up with this:

http://skvidal.fedorapeople.org/misc/openstack-qcow-disk-resize.yml

Put in the hosts you want to run it against. It installs cloud-utils, resizes the partitions using growpart, reboots, waits for the instance to come back alive, then does the fs resizing.

I timed it: it took a total of 1m2s, and that includes installing cloud-utils, waiting at minimum 10s for the instance to reboot, and then resizing the actual fs.

The example I gave checks the values from growpart, so it won’t run more than once (and it won’t run if you cannot resize). That means you can run this play over and over without rebooting the instance every time. I’m thinking I’ll probably include this as a tasklist for a quick instance provisioning playbook.
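For reference, the commands the play wraps boil down to roughly this sequence, run on the instance itself. This is only a sketch: it assumes an ext filesystem on the first partition of /dev/vda, so adjust for your image layout.

#!/usr/bin/python
# sketch of the steps the playbook wraps, run on the instance itself;
# assumes an ext root fs on the first partition of /dev/vda.
import subprocess
import sys

def run(cmd):
    print '+ %s' % ' '.join(cmd)
    return subprocess.call(cmd)

if sys.argv[1:] == ['grow-partition']:
    # before the reboot: grow partition 1 to fill the disk (cloud-utils)
    run(['growpart', '/dev/vda', '1'])
elif sys.argv[1:] == ['grow-fs']:
    # after the reboot: grow the filesystem into the new space
    run(['resize2fs', '/dev/vda1'])
else:
    print 'usage: %s grow-partition|grow-fs' % sys.argv[0]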

 

Dealing with image creation in all clouds is completely full of suck. I’d more or less come to terms with it on euca, but now I’m trying to do the same thing with openstack and encountering some super-duper happy fun times.

I have a rhel6 img which works and boots. It is qcow2-based, so it handles kernel updates, etc. properly, yay. However, it handles resizing the / filesystem exactly not at all.

If I make the rhel6 img an ami and upload a kernel/ramdisk, it’ll resize (well, dump it out) into a large filesystem, no problem, but it handles kernel updates not at all.

I would like to have both, I think I deserve to have both, and I’ll be damned if any of my testing comes up with both. I’ve done a fair amount of googling, and LOTS of things I’ve found say that the qcow2 or raw img should just work in either openstack essex or folsom, but I am not having that experience.

Anyone have a suggestion?
