Systems and Automation

stuff I like

Resize Qcow2 Images Without Virt-resize

I’m in the process of switching from kickstart builds for new VMs to image-based clones. I needed a simple way to add space to a qcow2 image file, along with one of the partitions in it and the volumes it contained. Most of the guides I found used virt-resize to resize the image. While virt-resize makes resizing qcow2 images really easy, it has the side effect of turning sparse images into non-sparse images, which can end up using a considerable amount of space depending on how much you are adding. The rest of the options I found have you converting qcow2 to raw and back again, which is slow and can also consume large amounts of space.

Libguestfs to the rescue!

Now there are a few requirements: this only works if the partition you want to modify is either the last partition or the only partition, and you need to do all of this while the VM is shut down. In my case it’s the last partition I need to expand; the disk layout has boot at partition 1, and partition 2 contains two LVM volumes. It’s both of these volumes that I want to expand via libguestfs.

Full gist available here, with some error checking and a few more options.

Let’s go!

First, let’s increase the qcow2 image size using the standard qemu-img tool; let’s add 40GB.

qemu-img resize vmimage.img +40G

Now, using whatever language you like (libguestfs has bindings for lots of languages), fire up the guestfs back end. Note that you can do this in the shell/Bash if you like, but calculating the proper partition start sector is a little harder there (not impossible, just a bit more work). Everything below is in Python; copy it into a script or just type along at the Python command line.

import guestfs

# For return values we want a dict object
g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts('vmimage.img')
g.launch()

Let’s assume your device name is /dev/sda, but you can use the list_devices function to find it if you are not sure. Now let’s get the partition list:

partitions = g.part_list('/dev/sda')

We need the start location of partition 2:

part_start = partitions[1]['part_start']

Grab the block size of the image:

blk_size = g.blockdev_getss('/dev/sda')

Calculate the starting sector by dividing our partition start location by the block size:

start_sector = part_start // blk_size  # integer division; part_add wants an int
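As a quick sanity check of the arithmetic, here is the same calculation with hypothetical values (a partition starting on the common 1 MiB boundary and the usual 512-byte sectors; these numbers are made up, not read from a real image):

```python
# Hypothetical example values, not read from a real image:
part_start = 1048576                    # partition 2 start offset in bytes (1 MiB)
blk_size = 512                          # sector size as reported by blockdev_getss
start_sector = part_start // blk_size   # integer division
print(start_sector)                     # 2048
```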

Now the scary part, we need to delete partition 2:

g.part_del('/dev/sda', 2)

OK, great, the partition is gone; let’s create a new one now with the info we gathered above. The ‘-1’ below tells libguestfs that you want the last sector as your end sector. This is a nice shortcut so you don’t need to calculate it yourself:

try:
  g.part_add('/dev/sda', 'p', start_sector, -1)
except RuntimeError:
  print("part_add failed due to re-read of partition table; this is normal and expected, ignoring..")

As noted above, because the partition table can’t be re-read by the kernel, you will get an error message. This is OK; to resolve it, let’s just stop and restart the libguestfs system to force it to reload the partition table:

g.shutdown()
g.add_drive_opts('vmimage.img')
g.launch()

OK, now we can resize the PV inside the partition we just expanded:

g.pvresize('/dev/sda2')

Now, in this case I know the exact volume names, but if you didn’t, you could use the lvs function in libguestfs to list them. Since I know them already, I’m just going to use them directly in the command below, which takes my swap volume from whatever it was (2GB previously) to 6GB:

g.lvresize('/dev/vgroot/swap.vol', 6144)

OK, great, the swap volume is now 6GB! Let’s expand the root volume with any remaining space in the PV. This uses a slightly different guestfs command that fills free space based on a percentage. For my particular use case I just want to fill all remaining space, i.e. 100% of it, as shown below:

g.lvresize_free('/dev/vgroot/root.fs', 100)

Last steps: let’s resize the actual filesystem on the root volume and then check it to make sure everything is OK:

g.resize2fs('/dev/vgroot/root.fs')
g.e2fsck('/dev/vgroot/root.fs', correct=True)

Now shut down the guestfs process since we are all done:

g.shutdown()
g.close()

That’s it! It looks hard but it’s pretty simple; the hardest part is calculating your proper start sector. Now go boot up your VM and you will find that swap has increased to 6GB and your root volume has the remaining ~36GB of additional space in it!

This all happens in less than a minute if you script/automate it, and it doesn’t have any of the conversion or sparse-to-non-sparse issues that virt-resize has.

Enjoy!

Vundle for Vim

I have a new favorite Vim plugin manager called Vundle.

It’s simple and doesn’t have all the overhead of some of the other Vim plugin managers/pref bundles.

I’m only using a small handful of plugins (shown below):

Bundle 'SuperTab'

" Repos from GitHub
Bundle 'bling/vim-airline'
Bundle 'tpope/vim-fugitive'
Bundle 'davidhalter/jedi-vim'
Bundle 'rodjek/vim-puppet'
Bundle 'elzr/vim-json'

" installed bundles go here
Bundle 'Syntastic'

Vundle has been going through some upgrades lately which is making it even easier to use.

Take 10 minutes and check it out.

Audio Fingerprinting on Raspberry Pi (Part 1)

A short guide on how to do some basic audio fingerprinting on a Raspberry Pi, also known as “my quest to notify myself when the dryer is done”. Let’s start with the excellent Chromaprint library.

Build Chromaprint on the Pi

Install pre-reqs:

sudo apt-get install libboost1.50-all-dev ffmpeg libtag1-dev zlib1g-dev resample libresample1-dev cmake libffms2-dev  

Download the package from the Chromaprint source:

mkdir /opt/src
cd /opt/src
wget https://bitbucket.org/acoustid/chromaprint/downloads/chromaprint-1.1.tar.gz
tar -zxvf chromaprint-1.1.tar.gz
cd chromaprint*
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_EXAMPLES=OFF -DCMAKE_CXX_FLAGS="-Ofast -mfpu=vfp -mfloat-abi=hard -march=armv6zk -mtune=arm1176jzf-s" -DBUILD_TOOLS=ON .
make
sudo make install

Python stuff

If you don’t already have PIP installed go grab it:

sudo easy_install pip

Build/install Python packages now:

sudo pip install pyacoustid audioread

Create a test WAV file, 10 seconds in length, of whatever sound/noise you want to fingerprint. In this case I’m using my USB microphone plugged into the Pi, since there is no onboard microphone or audio input (and no analog inputs either). You could also use a package like PyAudio to record the WAV file.

arecord -D plughw:1,0 -d 10 -r 44100 --channels=2 --format=cd /tmp/test.wav

Fingerprint your WAV file (in Python). Note: you can just as easily use ‘fpcalc’ in the shell to get a compressed or raw fingerprint from the Chromaprint libs.

import chromaprint
import acoustid

duration, fingerprint = acoustid.fingerprint_file('/tmp/test.wav')
fp_raw = chromaprint.decode_fingerprint(fingerprint)[0]
print("Compressed fingerprint: %s" % fingerprint)
print("Raw fingerprint: %s" % fp_raw)

If all went well you should not have seen any errors, and your compressed and raw fingerprints should have been printed to the screen. If your fingerprint only had “AQAA” in it then something is wrong with your Chromaprint library. The available libchromaprint packages along with the python-pyaudio package did not work for me at all and consistently resulted in no fingerprints. Instead, building the custom Chromaprint libs and then installing pyaudio via pip resulted in a working setup.

If you do have fingerprint errors, make sure your WAV file is not empty; if it’s not empty, try a clean build of the Chromaprint libs and re-install.
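If you want to automate that sanity check, a minimal sketch might look like this (the “AQAA” value comes from the symptom described above; the function name and example fingerprints are made up for illustration):

```python
def fingerprint_looks_valid(fingerprint, fp_raw):
    """Heuristic check: a broken chromaprint build tends to return the
    near-empty compressed fingerprint 'AQAA' and an empty raw list."""
    return fingerprint not in (b'AQAA', 'AQAA') and len(fp_raw) > 0

# Made-up values just to exercise the check:
print(fingerprint_looks_valid(b'AQAA', []))                 # False
print(fingerprint_looks_valid(b'AQABz0qU...', [123, 456]))  # True
```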

Note: The example ‘fpcalc’ won’t actually build from the 1.1 chromaprint libs. It throws some strange compiler errors that I couldn’t resolve.

I’m going to be recording two 10-second files in a loop, and that can be a bit hard on the SD card in the Raspberry Pi, so instead I’m using the built-in TMPFS location in /run so I don’t do a lot of unnecessary writes to the SD card.
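To sketch what that loop will look like (the file names and tmpfs paths here are assumptions on my part; the arecord flags match the command shown earlier), the recorder just alternates between two capture files so one can be analyzed while the other records:

```python
# Build the arecord command for a given target file; flags match the
# earlier arecord invocation (the device name is an assumption).
def arecord_cmd(path, device='plughw:1,0', seconds=10):
    return ['arecord', '-D', device, '-d', str(seconds),
            '-r', '44100', '--channels=2', '--format=cd', path]

# Two capture files in tmpfs: record into one while analyzing the other,
# so repeated recordings never touch the SD card.
captures = ['/run/shm/capture_a.wav', '/run/shm/capture_b.wav']

# In the real loop you would subprocess.call(arecord_cmd(...)) here;
# printing the command is enough for this sketch.
print(arecord_cmd(captures[0]))
```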

That’s all for now, come back for part 2 where I wrap it all up into a single script that records in a thread while analyzing the previous recording and sending notifications when matches are found.

Multi-master Puppet Setup With DNS SRV Records

I recently switched from a single Puppet master to a multi-master setup, which consists of a single CA server and 2 new masters. During this change I also took the time to upgrade my clients (and server) from 2.7.25 to 3.4.2 and Ruby from 1.8.7 to 2.0.0p353, and I also switched over to the new DNS SRV records setup.

While I’m not going to rehash the entire setup and all the steps taken to get there, I do want to include some high-level steps that were not exactly clear after reading the Puppet Labs docs on multi-master setups. Hopefully this helps others trying to accomplish the same thing.

For my setup I used a standalone CA and 2 masters. Follow the Puppet Labs guides/docs to build out your CA and masters with whatever software you like. I used the blessed Apache + Passenger setup. For multi-masters there is no special setup required on the Apache/Passenger side of things; just set them up as usual, with the exception of your config.ru file (see below).

If you are upgrading from a single master 2.X setup you also need to remove any $servername references from your manifests. Most likely this will be in manifests/site.pp file.

Don’t just copy the config.ru from your old setup if it ran an older version of Puppet. Use the new config.ru.passenger.3 from the Puppet Labs GitHub repo. If you skip this step you will have a series of odd problems that you won’t be able to resolve any other way. Make sure you chown the config.ru file to puppet:puppet, since Passenger uses the owner of the file as the user to run as.

Before you start your new CA or master servers you have to generate the SSL certs properly. On the CA, make sure your /etc/puppet/puppet.conf contains the lines below (adjust the config as needed to suit your setup):

 [main]
  pluginsource = puppet:///plugins
  pluginsync = true
  use_srv_records = true
  srv_domain = mydomain.com

 [master]
  ca = true
  dns_alt_names=myca1.mydomain.com,myca1
  # Bits for Passenger/Apache
  certname=puppetca.mydomain.com
  ssl_client_header=SSL_CLIENT_S_DN
  ssl_client_verify_header=SSL_CLIENT_VERIFY

Now run the command below on your CA to generate your CA certs with the proper dns_alt_names. puppetca is an alias pointing at my host’s real name; the DNS alt names should contain your host’s real name.

puppet cert generate puppetca.mydomain.com --dns_alt_names=myca1.mydomain.com,myca1

Verify that your cert looks correct with the command below; it should list your puppetca plus the alternate DNS names you specified.

puppet cert list puppetca.mydomain.com

Your CA is now ready to run; fire up the web server and double-check your web logs for any errors. Assuming all is good, you can now switch over to one of your masters and make sure its config contains the bits below. The really important line is ca = false for any server that is not your CA server.

[main]
   pluginsource = puppet:///plugins
   pluginsync = true
   use_srv_records = true
   srv_domain = mydomain.com

[master]
   ca = false
   # Bits for Passenger/Apache
   certname=master1.mydomain.com
   ssl_client_header=SSL_CLIENT_S_DN
   ssl_client_verify_header=SSL_CLIENT_VERIFY
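For completeness, with use_srv_records enabled the agents locate the masters and the CA via DNS SRV records under srv_domain. A sketch of the matching zone file entries (host names follow the configs above; master2 as the second master, and the priorities/weights, are assumptions on my part):

```
; Agents find masters via _x-puppet, and the CA via _x-puppet-ca
_x-puppet._tcp.mydomain.com.    IN SRV 0 5 8140 master1.mydomain.com.
_x-puppet._tcp.mydomain.com.    IN SRV 0 5 8140 master2.mydomain.com.
_x-puppet-ca._tcp.mydomain.com. IN SRV 0 5 8140 puppetca.mydomain.com.
```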

Run your master by hand the first time:

puppet master --no-daemonize --verbose

The master will generate its cert and send it over to the CA server to get signed. If you are using autosigning, just wait for the cert to be signed; if not, go sign it on the CA server.

Once that cert is signed you can hit CTRL-C to stop your master, then start it back up using the real web service. Once again, check the web logs for any errors. Try running the puppet agent by hand on this master now and see how it goes. You should get a clean run.

Now head over to your 2nd or 3rd master and repeat the steps above for the masters.

With your masters and your CA server working you can now tackle the clients. Using your existing puppet master (if you have one) add all the lines in the [main] section above to your clients. You can safely do this ahead of time because the 2.X clients don’t support those features and will just ignore them.

Now upgrade your packages via whatever tools you use for package upgrades. For my setup I have a custom build of Ruby 2.0 packaged as an RPM using a fairly standard SPEC file. I then used the FPM utility to package up Puppet, Facter and all dependencies (don’t forget about Augeas if you use it).

Now on my hosts I can do a ‘yum install ruby20-puppet’ and everything gets upgraded. Make sure your puppet.conf file has those srv_domain bits above and then delete your client’s ‘ssl’ directory. Run the agent; it should automatically switch over to the new CA and masters and generate a cert. Go sign it (or turn on autosigning), and once signed, the client should finish its run as usual.

One final note: currently Puppet pluginsync is broken with 3.4.2 (and below) when using DNS SRV records. This should be fixed in a later version, but the simple workaround for now is to remove the implied $servername portion in pluginsync and instead let it use the server that the client connected to, by putting this line in each and EVERY puppet.conf file, for both agents and masters, in the [main] section.

 [main]
  pluginsource = puppet:///plugins

Vim in All Its Awesomeness

I have tried many text editors (and various IDEs) over the years, and I always come back to Vim for just about everything, especially anything programming related. Vim on its own is great, but with some plugins and a few simple changes it’s awesome: everything from syntax highlighting (using Syntastic) to autocompletion of functions and variable names. I write a lot of code in Python and Puppet (Syntastic has checkers for both and much more), and it has saved me an incredible amount of time since I started using it.

Do yourself a favor and take 5 minutes to go get the incredible Janus: Vim Distribution. It’s a quick and painless install. From the Janus README:

$ brew install macvim    (optional - requires [Homebrew](http://brew.sh/))
$ curl -Lo- https://bit.ly/janus-bootstrap | bash

The above two commands are all you need; the second command will also back up your existing Vim files in your home directory so you don’t lose anything you may already have set up.

Customization is fairly simple, using ~/.vimrc.before and ~/.vimrc.after files in your home dir. The only changes I make to the base Janus setup are shown below; put these in ~/.vimrc.after if you like.

" Clear searches easily with ,/ after
nmap <silent> ,/ :nohlsearch<CR>

" Give a shortcut key to NERD Tree
map <F2> :NERDTreeToggle<CR>

" Disable F1 help crap, map to ESC instead
map <F1> <Esc>
imap <F1> <Esc>

One of my favorite commands once you have Janus installed is to reformat an entire file with a simple keystroke:

  • \<leader>fef formats the entire file

The default \<leader> character is \ so \fef reformats your current file. Stop living in the dark ages; go install Janus right now!

Git - Move Subdirectory to New Repo

I had a need to move/detach a subdirectory that was inside a larger Git repository into its own smaller, standalone repository. After a few Google searches it turns out this is a fairly common and relatively easy thing to do.

Not required but I like to start with a fresh clone of the repo I’m working with into a temp directory.

mkdir tmp
cd tmp
git clone my_original_repo_url

Now clone the repo again (this time it’s a local only clone of the repo above):

git clone --no-hardlinks my_original_repo new_repo_name
cd new_repo_name

Extract just the subdirectory you want:

git filter-branch --subdirectory-filter mysubdir HEAD

Now let’s remove the old remote, any unneeded history, and repack the repo:

git remote rm origin
git update-ref -d refs/original/refs/heads/master
git reflog expire --expire=now --all
git repack -ad

Now you can add your new remote(s) in and push your changes up to the server:

git remote add origin my_new_repo_url
git push origin master

Note: if you are using Gitorious it may at this point complain about an ‘invalid ref’ when you push to the server. As far as I can tell this does not cause any problems and only occurs on the first push.

So that covers making your new repo from a subdirectory; now let’s remove the old subdirectory from the original repo so we don’t commit to it by accident. I’m using a simplified removal process; you could remove all references and commit info for the subdirectory if you like, but for my case that was overkill.

cd ../my_original_repo
rm -rf mysubdir
git rm -r mysubdir
git commit -m "Removing subdir, it has been moved to its own repo now"
git push origin master

All done! Your subdirectory has now been moved from the original repository into a new repository with all your history and commits intact.

Gitorious/Redis Installer Updated

In regards to my last post, gitorious-and-redis-service, I have updated my clone of the ce-installer to include the changes up to v2.4.12 and rolled all my changes from that post into the latest version.

My CE-Installer clone: https://www.gitorious.org/~kholloway/gitorious/kholloways-ce-installer

I have requested a merge with the mainline ce-installer which is viewable at: https://www.gitorious.org/gitorious/ce-installer/merge_requests/2

Enjoy!

Gitorious and the Redis Service

Gitorious has a very nice status command via /usr/bin/gitorious_status that quickly shows you if all your Gitorious services are up and running (see screenshot).

It’s missing one very important service though, Redis!

Redis is now the default messaging service when you do a fresh install via the Gitorious CE-Installer, but it’s not included in the status check, it’s missing from the /admin/diagnostics page, and it’s missing a Monit check file to restart it if it dies. Keeping it running is pretty important for a working Gitorious install, because without it many of the web page operations like creating a new project or team will fail, and it won’t be very clear from the logs why.

Let’s fix some of those problems.

First off, let’s patch the gitorious_status script with the patch below, which should work on any modern Linux variant.

Save the lines below as: /tmp/my.patch

*** gitorious_status 2013-03-25 16:11:04.475121039 -0500
--- gitorious_status_redis 2013-03-25 16:16:35.380135982 -0500
*************** unicorn_status() {
*** 26,31 ****
--- 26,35 ----
      check_process_and_report "ps -p $PID" "Unicorn"
  }

+ redis_status() {
+     check_process_and_report "/etc/init.d/redis status" "Redis"
+ }
+
  # Upstart's exit codes are a beast of its own
  resque_status() {
      check_process_and_report "/sbin/initctl status resque-worker" "Resque"
*************** sphinx_status
*** 80,82 ****
--- 84,87 ----
  memcached_status
  sshd_status
  mysqld_status
+ redis_status

Apply your patch to the status command:

 patch /usr/bin/gitorious_status < /tmp/my.patch

Your status command should now show the details of the Redis service as shown below.

Next, let’s create a Monit config file for Redis which will watch the process and restart it if needed. I use Puppet and a custom Monit module I wrote for this, but that’s not required; for now let’s just manually create the file, and you can integrate it into your configuration management tool later if you like. Note that the Monit file below is specific to Red Hat; change the pidfile and start/stop lines as needed to match your OS.

Copy the contents below into: /etc/monit.d/redis.monit

check process redis-server with pidfile /var/run/redis/redis.pid
  start program = "/sbin/service redis start"
  stop program = "/sbin/service redis stop"
  if does not exist for 1 cycles then restart
  if 5 restarts within 5 cycles then alert

Now let’s restart Monit so it picks up the change. On Red Hat that’s done like so:

service monit restart

Check that it’s setup correctly in Monit:

monit summary

OR

monit status

So now you have Redis monitored by Monit, and the gitorious_status command shows you if it’s up or down, but your /admin/diagnostics page is still missing any status about it. That last bit is not too hard to fix, but for now I’m leaving that up to the Gitorious folks to patch, along with the incorrect status about the gitorious-poller service, which is not in use any longer.

My New RSS Setup After the Death of Google Reader

My setup for now, so far working very well.

FeedaFever on my server (I love owning my own data!) and the Reeder iOS app on my iPhone. I’m hoping the Reeder dev adds FeedaFever support to Reeder on iPad and Mac soon.

Fever has a usable web interface on both iPad and iPhone if you don’t have a native app but Reeder is the gold standard on iOS for an RSS reader.

Alternative iPhone app choice for FeedaFever: Sunstroke

Some alternatives I looked at, and might still use down the road, are NewsBlur and Feedly. NewsBlur is hosted with free or premium accounts, and if you want, it’s open source, so you can download and run your own server.

Feedly looks nice also but I’m not used to the layout/setup yet.